This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2018-01-12
Channels
- # arachne (1)
- # aws (2)
- # beginners (123)
- # boot (22)
- # boot-dev (8)
- # chestnut (3)
- # cider (38)
- # clara (36)
- # cljs-dev (148)
- # clojars (2)
- # clojure (76)
- # clojure-austin (2)
- # clojure-greece (1)
- # clojure-italy (6)
- # clojure-russia (5)
- # clojure-spec (8)
- # clojure-uk (65)
- # clojurescript (45)
- # core-async (38)
- # cursive (9)
- # data-science (5)
- # datomic (28)
- # docs (1)
- # emacs (2)
- # fulcro (34)
- # hoplon (18)
- # jobs-discuss (7)
- # keechma (8)
- # lumo (5)
- # om (3)
- # onyx (31)
- # parinfer (1)
- # pedestal (1)
- # re-frame (20)
- # reagent (5)
- # ring-swagger (16)
- # shadow-cljs (56)
- # spacemacs (11)
- # specter (8)
- # sql (5)
- # unrepl (29)
- # yada (6)
I’m writing an event emitter to simulate website traffic using core.async, and since I’m aiming at ~100k concurrent sessions I’ve been running into https://groups.google.com/forum/#!topic/clojure/OSwKOKEupGc
at the moment I’m considering implementing some sort of barrier around the timeout chan, or just forking core.async and modifying the limit
I also want to point out that this is something I’m writing for fun, so it’s not like it’s blocking some sort of critical production system somewhere, but I’m still interested in finding a solution 🙂
core.async is limited, by default, to running something like 8 go blocks concurrently
it’s configurable
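(Editor's note, not part of the original thread: the go-block dispatch pool is sized by the `clojure.core.async.pool-size` system property, which defaults to 8 — matching the "something like 8" above. It has to be set before core.async is loaded; a sketch:)

```clojure
;; The go-block dispatch pool defaults to 8 threads. To resize it,
;; set the system property before core.async is loaded, either on
;; the JVM command line (-Dclojure.core.async.pool-size=64) or:
(System/setProperty "clojure.core.async.pool-size" "64")
(require '[clojure.core.async :as a])
```

Note this only widens the dispatch pool; it does not change the pending-puts limit discussed below.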
it is, but if you want a thread per go block, it seems like, maybe just use threads?
(I am just assuming the reason you would do that doseq is because you want to spin up a bunch of threads to test concurrent connections)
you misunderstand me, I’m not testing a web server and concurrent connections. I’m writing a library to simulate events, such as pageviews, through state machines
right - I'd say core.async might be a bad fit here because core.async isn't actually good at massive parallelization (which is a pre-req here I think) - it's good at async, coordinating things that are not synchronous, but isn't really a mass scale parallelization lib
though if you use something else for parallelizing on a mass scale then realize you are having trouble coordinating between those threads - then core.async can help again
that’s unfortunate… seemed like a great fit to me since I want to have a lot of sessions but most of them will be idle waiting to emit an event at any given time
seems like every time I find a problem where I think “oh, I can use core.async for this!“, the answer is “don’t use core.async for that” 😕
seems like with a prio-queue you could avoid needing so many state machines by having a queue of "next connection plus its callback"
and, with some fiddling you could make that work with a channel
but directly using that many events in flight is where the problems happen
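(Editor's note: the priority-queue idea above could be sketched like this — a hypothetical single consumer loop over a sorted map of fire-time to sessions, replacing one parked go block per session. All names here are invented for illustration.)

```clojure
(require '[clojure.core.async :as a])

(defn run-sessions
  "schedule: sorted map of fire-time-ms -> vector of session ids.
  Returns a channel that emits one event per session at (roughly)
  its scheduled time, using a single worker thread instead of one
  parked go block per session."
  [schedule]
  (let [out (a/chan 1024)]
    (a/thread
      (loop [pending schedule]
        (if-let [[t sessions] (first pending)]
          (let [wait (- t (System/currentTimeMillis))]
            (when (pos? wait) (Thread/sleep wait))
            (doseq [s sessions]
              (a/>!! out {:session s :at t}))
            (recur (dissoc pending t)))
          (a/close! out))))
    out))
```

Consumers just drain the returned channel; times already in the past fire immediately, so a burst of due sessions costs one loop pass rather than 100k wakeups.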
@schmee core.async can handle that many events, but it has that error because with intended usage having that many things "in flight" indicates a design problem
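(Editor's note: a minimal repro of the error from the linked thread — a single channel allows at most 1024 pending puts, and the next one throws:)

```clojure
(require '[clojure.core.async :as a])

(def c (a/chan)) ; unbuffered channel

(dotimes [_ 1024]
  (a/put! c :event)) ; all 1024 park as pending puts

;; one more put crosses the limit and throws:
;; (a/put! c :event)
;; => AssertionError: No more than 1024 pending puts are allowed
;;    on a single channel ...
```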
meh, I don’t like when libraries impose arbitrary restrictions like that. I’m a grown man, I can take responsibility for my own stupidity 😄
it is not obvious to me that core.async wouldn’t work for this
formulated in core.async terms, my current idea is to have ~100k go blocks, where at any moment most of them are parked on a timeout channel, which each outputs events into a main event channel
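(Editor's note: the design described reads something like this sketch, scaled down to 10 sessions — at that size it works fine; at ~100k the limits above start to bite. Names are invented for illustration.)

```clojure
(require '[clojure.core.async :as a])

(defn start-session!
  "One go block per session: park on a timeout, then emit an event
  into the shared events channel."
  [events id delay-ms]
  (a/go
    (a/<! (a/timeout delay-ms))
    (a/>! events {:session id :event :pageview})))

(def events (a/chan 1024))

(dotimes [id 10] ; fine at 10; the thread linked above is what
  (start-session! events id (rand-int 50))) ; happens near 100k
```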
but I have not read all the backchannel so might be missing something
another option may be agents, which were designed for simulation
yeah, I think agents would avoid some of these issues - the specific reason I think this isn't good for core.async is wanting that many events waiting on timeouts at once (which might be a bug that is fixed in a future core.async but might be just not how core.async is meant to be used?)
they are just a bucket of state that you can enqueue functions to apply to it
but they are aware of and able to send functions to other agents
and the application is backed by a thread pool
if you send-off, that pool is unlimited in size, but you can monkey with it
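(Editor's note: a minimal sketch of the agent approach just described — one agent per simulated session, with step functions enqueued via `send-off`, which uses the unbounded pool and so tolerates actions that block or sleep:)

```clojure
;; Each session is an agent holding its own state; step functions
;; are enqueued and applied asynchronously by the agent pool.
(def session (agent {:pageviews 0}))

(defn view-page [state]
  (update state :pageviews inc))

(send-off session view-page)
(await session)          ; block until queued actions complete
@session                 ; => {:pageviews 1}
```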
I thought send was limited and send-off was resizing
yes it will resize without limit :)
but you can change these with set-agent-send-off-executor!
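(Editor's note: swapping in a bounded send-off pool looks like this; the size 32 is an arbitrary example, not a recommendation:)

```clojure
(import '(java.util.concurrent Executors))

;; Replace the default (unbounded) send-off pool with a fixed-size
;; one; 32 is a placeholder, tune to your workload.
(set-agent-send-off-executor!
  (Executors/newFixedThreadPool 32))
```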
etc