I'm thinking that core.async Timeout for cljs should probably be using goog.Timer.callOnce instead of js/setTimeout
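For reference, a minimal sketch of the swap being suggested (namespace and callback are hypothetical; this is not the actual core.async source). `goog.Timer.callOnce` has the same `(callback, delay)` shape as `js/setTimeout`, but routes through Closure's `Timer` abstraction:

```clojure
(ns example.timer
  (:import [goog Timer]))

;; Current approach: call js/setTimeout directly.
(js/setTimeout #(println "fired") 0)

;; Suggested alternative: Closure's one-shot timer wrapper.
;; Same (callback, delay) shape, routed through goog.Timer.
(Timer/callOnce #(println "fired") 0)
```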
Was thinking I would increment the version number and do a local lein install so that I'm running against my modified version of re-frame.
I'm at the point where I think it would be useful to have a discussion about some design issues.
Right now events are handled in a loop that is independent of RAF, and therefore independent of the precise timing of what gets rendered in the browser.
For example, imagine a browser view represented as Va that after an event takes place will look like Vb.
It will look like Vb because the event handler will modify the global state, which will fire off ratoms and reactions and all that good stuff.
Now imagine a view Vc that has a button that gets clicked, and that button clicking results in the dispatching of two events that represent the logical consequence of that action.
But because the current router-loop runs independently, it is possible that only the first event gets handled, then rendering takes place, producing Vd, a briefly incorrect view of the world in which only part of the "transaction" has been applied; only then is the second event handled, yielding the complete view Ve.
This may or may not be an issue. If it is understood that events dispatched in re-frame must be atomic then this problem goes away.
Another way to look at this: right now the actual rendering is taking place on one Signal graph, and re-frame event handling is taking place on another, parallel Signal graph. When everything happens really fast this isn't a noticeable issue, but when there is back-pressure from lots of events, or from CPU hogs, things aren't so pretty.
@meow: events are, indeed, meant to be atomic. They are meant to represent some logical external "thing" which happened to the system, which will put the system in a "new state" (think FSM).
So re-frame doesn't anticipate the idea of a "logical event" being split into multiple dispatches. Or to put that another way, re-frame expects the system to be in a valid state after handling each event, not in some intermediate, temporary, unsound state (waiting for a further event to happen to put things right again).
One way is to have a main go-loop like there is now and, for each event taken off, use goog.async.nextTick to handle the event ASAP, while still playing nicely with the js gui handling. The other way would be to peg some process to RAF such that, for each frame, every pending event got handled.
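The first approach might look roughly like this (a hedged sketch, not re-frame's actual code; `event-chan` and `handle` are hypothetical stand-ins for re-frame's dispatch channel and handler call):

```clojure
(ns example.router
  (:require [cljs.core.async :refer [chan <!]]
            [goog.async.nextTick])
  (:require-macros [cljs.core.async.macros :refer [go-loop]]))

(def event-chan (chan))   ;; hypothetical dispatch channel

(defn handle [event-v]    ;; stand-in for the real handler lookup + call
  (println "handling" event-v))

;; Approach 1: take each event off the channel, then hand the actual
;; handling to goog.async.nextTick, which yields to the browser without
;; js/setTimeout's ~4ms minimum delay. nextTick runs callbacks in FIFO
;; order, so events are still handled in dispatch order.
(go-loop []
  (let [event-v (<! event-chan)]
    (goog.async.nextTick #(handle event-v)))
  (recur))
```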
So you are saying use goog.async.nextTick (which, from memory, uses the postMessage hack to get a fast turnaround) instead of (timeout 0), which takes 4ms?
From what I can tell, yes, goog.async.nextTick seems to solve the problems that js/setTimeout has.
goog.async.nextTick is used under the covers by core.async, and in my own use of it, it works very fast and doesn't block the UI
I was thinking/hoping that goog.async.nextTick would let you get rid of both uses of timeout channels in your router-loop and perhaps the need to call flush as well.
I'm definitely open to changes which: 1. allow the event processing loop to yield to the browser (to paint, etc) 2. allow greater throughput
From what I have seen, goog.async.nextTick does yield to the browser while a simple go-loop does not
So I'd just like to get agreement on the problem first, before looking at changes
(At this point, goog.async.nextTick looks interesting, but that's more about a solution)
Problem Example #1:
- my app is in the background, and is throttled by the browser, so animation frames are sloooow
- at the same time, I have a websocket which is producing lots of events, and those events are getting queued up
- but the handling of these events is slower than their generation
I put the problem down to the leading (<! (timeout 0)) (it happens ahead of any event processing)
That timeout hands control back to the browser ... except the browser is throttling, and it doesn't hand control back to the go-loop to do the actual event processing until that peaky websocket (which is a squeaky wheel, and gets more oil from the browser) has done some more
I think what I'd like is for that go-loop to drain the channel before giving back control.
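One way to sketch that drain-before-yielding idea (hypothetical names, not re-frame's actual code) is `alts!` with a `:default`, so the loop only parks on `(timeout 0)` once the channel is momentarily empty:

```clojure
(ns example.drain
  (:require [cljs.core.async :refer [chan alts! timeout <!]])
  (:require-macros [cljs.core.async.macros :refer [go-loop]]))

(def event-chan (chan 100))  ;; hypothetical dispatch channel

(defn handle [event-v]       ;; stand-in for the real handler call
  (println "handling" event-v))

;; Drain everything already queued before yielding control back to the
;; browser. alts! with :default returns ::empty immediately when no
;; event is pending, and only then do we pause on (timeout 0).
(go-loop []
  (let [[event-v _] (alts! [event-chan] :default ::empty)]
    (if (= event-v ::empty)
      (<! (timeout 0))   ;; nothing pending: yield to the browser
      (handle event-v))  ;; pending event: handle it without pausing
    (recur)))
```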
Talking of solutions, I was considering allowing someone to dispatch with metadata which says don't pause:
(dispatch ^:dont-pause [:event-id])
Then the code in the go-loop kinda looks like this:
(if-not (:dont-pause (meta event-v)) (<! (timeout 0)))
Those are the two use cases I have in mind when I look at this. Can you describe your version of the problem?
(Sorry I'm going to have to disappear. But would like to continue with this later. Please dump any thoughts in here)
I need to go take care of something for a bit, but in general I've just been exploring these issues and kind of got sucked into looking at possibly improving the router-loop, but I'm not currently doing much with re-frame so I don't really have a problem right now. Just interested in large streams of data.
@meow: bumping version number or changing groupid then lein installing would work. If you use lein snapshots that should make experimenting easier as you won’t need to keep lein installing. (NB, cljs autobuild doesn’t work with checkouts yet https://github.com/emezeske/lein-cljsbuild/pull/374#discussion_r36128569)
Problem Example #2:
- my app is in the foreground
- I’m receiving a large number of events from a websocket. Each message may be creating more dispatch side effects
- With a (timeout 0) on every event, we are adding a ~4ms delay to the processing of each message
- core.async channel fills up
If we’re paying a 4ms delay every time then we can sustain at maximum 250 dispatches/second. When we’re actually doing real work as well, this number would drop.
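As a back-of-envelope check (the 250/second figure above, plus the drop once each event also does real work), the sustained rate is 1000 / (pause + work) events per second:

```clojure
;; Back-of-envelope throughput, assuming a fixed pause per event plus
;; work-ms milliseconds of real handler work per event.
(defn max-events-per-sec [pause-ms work-ms]
  (/ 1000 (+ pause-ms work-ms)))

(max-events-per-sec 4 0)  ;; => 250  (the ~4ms setTimeout clamp alone)
(max-events-per-sec 4 1)  ;; => 200  (plus 1ms of real work per event)
(max-events-per-sec 0 1)  ;; => 1000 (pause removed, work dominates)
```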
Using goog.async.nextTick seems like it could reduce the delay to ~0ms. As an interesting side note, this is related to Amdahl’s law: https://en.wikipedia.org/wiki/Amdahl%27s_law
Question: when use of goog.async.nextTick has been proposed, I've assumed that means "stop using core.async". So when a dispatch happens, goog.async.nextTick would be used to schedule the handler for execution "soon" (rather than the current process of putting the event onto a channel). That's the proposal, correct? There's no ingenious combination of goog.async.nextTick and core.async which I'm missing?
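As I understand that channel-free reading of the proposal, dispatch would schedule the handler directly (a hedged sketch; `handle` and `dispatch` here are hypothetical stand-ins, not re-frame's API):

```clojure
(ns example.dispatch
  (:require [goog.async.nextTick]))

(defn handle [event-v]
  ;; hypothetical stand-in for re-frame's handler lookup + call
  (println "handling" event-v))

;; Channel-free dispatch: schedule the handler for the next "tick"
;; rather than putting the event onto a core.async channel.
;; goog.async.nextTick runs callbacks in FIFO order, so events are
;; still handled in the order they were dispatched.
(defn dispatch [event-v]
  (goog.async.nextTick #(handle event-v)))

(dispatch [:event-id 42])
```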