#re-frame
2015-08-03
meow13:08:35

I'm thinking that core.async Timeout for cljs should probably be using goog.Timer.callOnce instead of js/setTimeout
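
For reference, a minimal sketch of the two calls being compared (illustrative only, not the actual core.async internals):

```clojure
(ns example.timers
  (:import [goog Timer]))

;; Two ways to schedule a one-shot callback: core.async's cljs timeout currently
;; goes through js/setTimeout; goog.Timer.callOnce is Closure's one-shot wrapper.
(defn schedule-with-set-timeout [f ms]
  (js/setTimeout f ms))

(defn schedule-with-goog-timer [f ms]
  (.callOnce Timer f ms))
```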

meow13:08:42

@danielcompton: I'm going to experiment with the re-frame router-loop

meow13:08:09

What's the best way to do this? I forked re-frame and switched to the develop branch.

meow13:08:58

Was thinking I would increment the version number and do a local lein install so that I'm running against my modified version of re-frame.

meow13:08:24

But if there is a better way please let me know.

meow13:08:51

Like, should I instead create a new branch for this experiment?

meow17:08:41

I'm at the point where I think it would be useful to have a discussion about some design issues.

meow17:08:42

I'm wondering about the handling of events that have been dispatched.

meow17:08:22

Right now events are handled in a loop that is independent of RAF, and therefore independent of the precise timing of what gets rendered in the browser.

meow17:08:38

For example, imagine a browser view represented as Va that after an event takes place will look like Vb.

meow17:08:40

It will look like Vb because the event handler will modify the global state, which will fire off ratoms and reactions and all that good stuff.

meow17:08:02

Now imagine a view Vc that has a button that gets clicked, and that button clicking results in the dispatching of two events that represent the logical consequence of that action.

meow17:08:05

Ideally view Vd will be the result of all the state mutations of both events.

meow17:08:12

But because the current router-loop runs independently, it's possible that only the first event gets handled, then rendering takes place and produces a briefly incorrect view of the world where only part of the "transaction" has been applied, and only once the second event has been handled does the complete Vd get rendered.

meow18:08:00

This may or may not be an issue. If it is understood that events dispatched in re-frame must be atomic then this problem goes away.

meow18:08:49

Another way to look at this is that right now the actual rendering takes place on one Signal graph and re-frame event handling takes place on another, parallel Signal graph. When everything happens really fast this isn't a noticeable issue, but when there is back-pressure from lots of events or CPU hogs then things aren't so pretty.

mikethompson18:08:53

@meow: events are, indeed, meant to be atomic. They are meant to represent some logical external "thing" which happened to the system, which will put the system in a "new state" (think FSM).

mikethompson18:08:23

So re-frame doesn't anticipate the idea of a "logical event" being split across multiple dispatches. Or to put that another way, re-frame expects the system to be in a valid state after handling each event, not in some intermediate, temporary, unsound state (waiting for a further event to happen to put things right again).

meow19:08:23

I see two ways to approach router-loop. Would love some more feedback.

meow19:08:29

One way is to keep a main go-loop like there is now and, for each event taken off the channel, use goog.async.nextTick to handle the event ASAP while still playing nice with the JS GUI handling. The other way would be to peg some process to RAF such that every pending event gets handled on each frame.
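
A rough sketch of the first option, just to make it concrete (the namespace and function names here are made up, and this is not the actual re-frame router):

```clojure
(ns example.router
  (:require [cljs.core.async :refer [chan <! close!]]
            [goog.async.nextTick])
  (:require-macros [cljs.core.async.macros :refer [go-loop]]))

(defn next-tick-ch
  "Returns a channel that closes on the next tick, so a go block can park on it
   and yield to the browser without js/setTimeout's ~4ms minimum delay."
  []
  (let [ch (chan)]
    (goog.async.nextTick #(close! ch))
    ch))

(defn run-router-loop
  "Takes events off event-ch one at a time, yielding to the browser before each."
  [event-ch handle-event]
  (go-loop []
    (<! (next-tick-ch))
    (when-some [event-v (<! event-ch)]
      (handle-event event-v)
      (recur))))
```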

mikethompson19:08:41

So you are saying use goog.async.nextTick (which, from memory, uses the postMessage hack to get a fast turnaround) instead of (timeout 0), which takes 4ms?

meow19:08:35

From what I can tell, yes, goog.async.nextTick seems to solve the problems that js/setTimeout has.

meow19:08:15

goog.async.nextTick is used under the covers by core.async, and in my own use of it, it works very fast and doesn't block the UI.

meow19:08:08

unfortunately, core.async timeout channels use js/setTimeout

mikethompson19:08:33

Yes, I remember looking into its use once (`goog.async.nextTick`).

meow19:08:41

I was thinking/hoping that goog.async.nextTick would let you get rid of both uses of timeout channels in your router-loop and perhaps the need to call flush as well.

mikethompson19:08:51

I'm definitely open to changes which:
1. allow the event processing loop to yield to the browser (to paint, etc.)
2. allow greater throughput

meow19:08:08

From what I have seen, goog.async.nextTick does yield to the browser while a simple go-loop does not

meow19:08:44

That's why you tend to see timeout channels inside go-loop examples.

mikethompson19:08:15

As with all changes, the key thing here is to be really clear on the problem

mikethompson19:08:47

So I'd just like to get agreement on the problem first, before looking at changes

mikethompson19:08:33

(At this point, goog.async.nextTick looks interesting, but it is more about a solution)

mikethompson19:08:58

Problem Example #1:
- my app is in the background, and is throttled by the browser, so animation frames are sloooow
- at the same time, I have a websocket which is producing lots of events; those events are getting dispatched
- but the handling of these events is slower than their generation

mikethompson19:08:33

And then my core.async channel fills up.

mikethompson19:08:15

I put the problem down to the leading (<! (timeout 0)) (it happens ahead of any event processing)
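
Paraphrasing the shape being described, a sketch (not the actual re-frame source; the names are made up):

```clojure
(ns example.current-loop
  (:require [cljs.core.async :refer [<! timeout]])
  (:require-macros [cljs.core.async.macros :refer [go-loop]]))

(defn run-current-style-loop
  "One (timeout 0) per event: control goes back to the browser between every
   event, so a fast producer can out-dispatch the handler and fill the channel."
  [event-ch handle-event]
  (go-loop []
    (<! (timeout 0))        ;; the leading pause, ahead of any event processing
    (when-some [event-v (<! event-ch)]
      (handle-event event-v)
      (recur))))
```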

mikethompson19:08:36

That timeout hands back control to the browser ... except the browser is throttling, and it doesn't hand control back to the go-loop to do the actual event processing until that peaky websocket (which is a squeaky wheel and gets more oil from the browser) has done some more dispatching.

mikethompson19:08:26

I think what I'd like is for that go-loop to drain the channel before giving back control.
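
A minimal sketch of what such a draining loop might look like (names are made up, and this assumes a core.async version that provides poll!):

```clojure
(ns example.drain
  (:require [cljs.core.async :refer [<! poll!]])
  (:require-macros [cljs.core.async.macros :refer [go-loop]]))

(defn run-draining-loop
  "Parks until at least one event arrives, then keeps handling any events
   already queued before yielding control again."
  [event-ch handle-event]
  (go-loop []
    (when-some [event-v (<! event-ch)]
      (handle-event event-v)
      (loop []
        (when-some [more (poll! event-ch)]
          (handle-event more)
          (recur)))
      (recur))))
```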

meow19:08:19

That's where my thinking was going.

mikethompson19:08:30

Talking of solutions, I was considering allowing someone to dispatch with metadata which says don't pause: `(dispatch ^:dont-pause [:event-id])` Then the code in the go-loop kinda looks like this:

(if-not (:dont-pause (meta event-v))
  (<! (timeout 0)))
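
For anyone reading along, `^:dont-pause` is just reader metadata attached to the event vector, which is what the `(meta event-v)` lookup above relies on:

```clojure
(def event-v ^:dont-pause [:event-id])

(meta event-v)                ;; => {:dont-pause true}
(:dont-pause (meta event-v))  ;; => true
```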

mikethompson19:08:13

Those are the two use cases I have in mind when I look at this. Can you describe your version of the problem?

mikethompson19:08:02

(Sorry I'm going to have to disappear. But would like to continue with this later. Please dump any thoughts in here)

meow19:08:26

I need to go take care of something for a bit, but in general I've just been exploring these issues and kind of got sucked into looking at possibly improving the router-loop. I'm not currently doing much with re-frame, so I don't really have a problem right now; I'm just interested in large streams of data.

danielcompton22:08:23

@meow: bumping the version number or changing the group id and then lein installing would work. If you use lein SNAPSHOT versions, that should make experimenting easier as you won’t need to keep running lein install. (NB: cljs autobuild doesn’t work with checkouts yet https://github.com/emezeske/lein-cljsbuild/pull/374#discussion_r36128569)
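
A hypothetical project.clj for such a fork, purely to illustrate (the group id, version, and dependency versions below are invented):

```clojure
(defproject my-fork/re-frame "0.4.2-SNAPSHOT"
  :description "Local experimental fork of re-frame"
  :dependencies [[org.clojure/clojure "1.7.0"]
                 [org.clojure/clojurescript "1.7.48"]
                 [reagent "0.5.0"]])

;; after `lein install`, the consuming app's project.clj would depend on it with:
;; :dependencies [[my-fork/re-frame "0.4.2-SNAPSHOT"] ...]
```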

danielcompton22:08:47

Problem Example #2:
- my app is in the foreground
- I’m receiving a large number of events from a websocket; each message may be creating more dispatch side effects
- with a (timeout 0) on every event, we are adding a ~4ms delay between processing each message
- the core.async channel fills up

If we’re paying a 4ms delay every time then we can sustain at most 250 dispatches/second. When we’re actually doing real work as well, this number would drop. Using goog.async.nextTick seems like it could reduce the delay to ~0ms. As an interesting side note, this is related to Amdahl’s law https://en.wikipedia.org/wiki/Amdahl%27s_law
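
The 250/second figure is just 1000ms divided by the 4ms clamp; a quick back-of-envelope (the 2ms of handler work per event is an invented figure):

```clojure
(def timeout-clamp-ms 4)   ;; minimum delay js/setTimeout imposes on each (timeout 0)
(def handler-work-ms 2)    ;; invented figure for real work done per event

(/ 1000 timeout-clamp-ms)                      ;; => 250 events/sec with zero-cost handlers
(/ 1000 (+ timeout-clamp-ms handler-work-ms))  ;; => ~167 events/sec once handlers do work
```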

mikethompson23:08:12

Question: when use of goog.async.nexttick has been proposed, I've assumed that means "stop using core.async". So when a dispatch happens, goog.async.nexttick would be used to schedule the handler for execution "soon" (rather than the current process of putting the event onto a channel). That's the proposal, correct? There's no ingenious combination of core.async and goog.async.nexttick which I'm missing?