#re-frame
2018-07-05
zalky12:07:38

Hi all: I'm trying to batch a CPU-heavy task by re-dispatching, similar to the approach presented in this doc: https://github.com/Day8/re-frame/blob/master/docs/Solve-the-CPU-hog-problem.md I have a simple event:

(reg-event-fx :map/progressive-load
  (fn [{:keys [db]} [_ config data]]
    (let [[now later] (split-at batch-n data)]
      {:db (add-points db now config)
       :dispatch-n [(when (seq later)
                      [:map/progressive-load config later])]
       :log [:info "Loading points"]})))
While this successfully re-dispatches to separate events, and terminates, it does not prevent the UI from freezing. What I'm seeing is that about 200 separate events are processed between updates, and with each event taking about 15ms, the UI hangs for several seconds between updates.

p-himik12:07:49

@zalky For such intensive computations you might want to use web workers.

zalky13:07:30

@p-himik: thanks for the suggestion, I'll definitely look into that if I can't get this working. However, the re-dispatching documentation looked really simple. This doc string also suggests that it should work: https://github.com/Day8/re-frame/blob/master/src/re_frame/router.cljc#L8-L61 According to all the documentation, any events dispatched while the event queue is being processed should be put on hold and control should be given back to the browser. But that is clearly not what is happening.

p-himik13:07:46

@zalky That's probably because you use :dispatch-n instead of :dispatch. If you really want to execute them in batches, try smaller batch sizes.

zalky13:07:44

@p-himik: so I've tried :dispatch and it didn't change anything (and :dispatch-n is just :dispatch under the hood). I've tried a number of different batch sizes, and oddly, it doesn't change the number of events that are processed between updates, always ~200.

zalky13:07:35

So I can double or halve the time it takes to do each single event, and it will still do ~200 events between browser updates

p-himik13:07:40

:dispatch-n is not the same as :dispatch - it fills the queue right away.

p-himik13:07:20

Not sure what you mean. First you write "tried a number of different batch sizes" and then "I can double or halve the time it takes". These are independent values. You tried to change batch-n, right?

zalky13:07:27

Ah, I see what you mean: you're right if you actually dispatch n distinct events, but I believe the way I've used it I only dispatch one. I'm only using :dispatch-n because it makes conditional logic easy by removing nils, which :dispatch does not.

p-himik13:07:50

Ah, sorry - I automatically assumed that you passed a large list into :dispatch-n. Yeah, with a single event it's the same as :dispatch. A better approach would be to conditionally assoc :dispatch into the map of effects.

p-himik13:07:13

(cond-> {:db ...}
  (seq later) (assoc :dispatch [:map/...]))
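
Applied to the handler from the top of the thread, that suggestion might look roughly like this (a sketch; batch-n and add-points come from the original snippet):

(reg-event-fx :map/progressive-load
  (fn [{:keys [db]} [_ config data]]
    (let [[now later] (split-at batch-n data)]
      (cond-> {:db  (add-points db now config)
               :log [:info "Loading points"]}
        (seq later) (assoc :dispatch [:map/progressive-load config later])))))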

p-himik13:07:31

But it still won't solve your problem with UI hangs, of course.

zalky13:07:45

Yeah, I tried to change batch-n. So to clarify: with batch-n set to 10, a single event takes about 15ms. With batch-n set to 50, each event takes about 60ms. In both cases, about 200 events are processed between renders, which seems odd to me.

zalky13:07:31

It seems something is preventing the event routing from giving control back to the browser.

p-himik13:07:31

How do you get the number 200?

zalky13:07:47

I'm logging to the console each event and render cycle.

p-himik13:07:22

Can you provide a minimal reproducible example?

zalky13:07:48

Sure, I'll try to whip something simple up.

p-himik13:07:56

And a complete one, preferably. Thanks.

zalky13:07:04

K, like a working repo?

zalky13:07:07

or just a gist

p-himik13:07:29

A repo would definitely be better. 🙂

zalky13:07:39

😛 I'll see what I can do

zalky15:07:37

@p-himik: here's a minimal working repo that reproduces the issue: https://github.com/zalky/re-demo

p-himik15:07:19

Thanks, I'll give it a try today.

p-himik16:07:33

@zalky Interesting. The issue appears to be with how Google Chrome treats window.requestAnimationFrame(). The issue is not reproducible in Firefox.

p-himik17:07:04

@zalky Here's a minimal example:

(ns re-demo.core
  (:require [reagent.core :as r]
            [reagent.ratom :as r*]
            [re-frame.interop]
            [reagent.impl.batching]))

(def result (r*/atom 0))

(def use-bad-next-tick? true)
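;; true  => schedule via re-frame's next-tick (goog.async.nextTick)
;; false => schedule via reagent's next-tick (requestAnimationFrame)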

(defn work [tasks]
  (js/console.log "Doing work")
  ;; Just do some busy waiting
  (reduce + (range 100000))
  (reset! result (first tasks))
  (when-some [tasks (next tasks)]
    (let [f #(work tasks)]
      (if use-bad-next-tick?
        (re-frame.interop/next-tick f)
        (reagent.impl.batching/next-tick f)))))

(defn root []
  (fn []
    (js/console.log "Rendering" @result)
    [:div "Result: " @result]))

(defn init! []
  (r/render [root] (.getElementById js/document "container")
            #(work (range 1000000))))

p-himik17:07:35

re-frame.interop/next-tick uses goog.async.nextTick, which uses MessageChannel if it's available. reagent.impl.batching/next-tick uses requestAnimationFrame. Apparently, on Google Chrome MessageChannel works much faster than requestAnimationFrame and probably has a higher priority - not sure how else to explain it. That's why animation frame requests wait there for multiple seconds each time.
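
For illustration, a minimal sketch (the function name is made up) of scheduling a callback via MessageChannel, which is roughly what goog.async.nextTick does when MessageChannel is available:

(defn message-channel-next-tick [f]
  ;; Posting on port2 queues a task; port1's onmessage runs f on the next turn
  ;; of the event loop - sooner than a clamped setTimeout and, as observed
  ;; above, ahead of pending requestAnimationFrame callbacks in Chrome.
  (let [channel (js/MessageChannel.)]
    (set! (.-onmessage (.-port1 channel)) (fn [_] (f)))
    (.postMessage (.-port2 channel) nil)))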

p-himik17:07:03

@mikethompson Does it make sense? Should it be fixed in re-frame?

pesterhazy18:07:06

wow, this stuff is confusing. There's requestAnimationFrame, setTimeout(f,0), setImmediate and MessageChannel, which all do roughly the same thing?

pesterhazy18:07:52

plus node's process.nextTick

p-himik18:07:10

If by "roughly" you mean just "schedule at some later time", then yeah. 🙂

pesterhazy18:07:54

they're all scheduling in 4ms or less AFAIK

pesterhazy18:07:15

do you have a pointer to an explanation of how these are different?

p-himik18:07:21

requestAnimationFrame at 16.7ms.

pesterhazy18:07:31

oh yeah? see I'm truly confused

p-himik18:07:45

Just their corresponding documentation. I prefer MDN.

p-himik18:07:30

MessageChannel is not even for scheduling per se. And setImmediate AFAIK is deprecated.

pesterhazy18:07:59

is "tick", "frame" and "execution context" the same?

pesterhazy18:07:16

I guess a frame is more than one tick typically so no...

p-himik18:07:16

Yeah, and that's the only thing I could answer you. 🙂 I'm far from being an expert here - I've just read some re-frame/reagent/Google Closure sources and some MDN documentation on the matter.

p-himik18:07:00

By the way, goog/async/nexttick.js may give some additional insights.

p-himik18:07:08

Maybe I should start an issue on GH?

p-himik18:07:28

At least the discussion will have a permanent place.

pesterhazy18:07:58

👍 for googlability

kennytilton18:07:30

@zalky I had this same problem (I think) with my progress bar. digging

zalky18:07:21

There's this issue that is almost certainly related

pesterhazy18:07:58

nextTick really is much faster (times in ms). Chrome:

cljs.user=> (let [start (js/Date.)] ((fn step [n] (if (zero? n) (println "setTimeout" (- (js/Date.) start)) (js/setTimeout #(step (dec n))))) 1000))
setTimeout 4998
vs
(let [start (js/Date.)] ((fn step [n] (if (zero? n) (println "nextTick" (- (js/Date.) start)) (goog.async.nextTick #(step (dec n))))) 1000))
nextTick 28

kennytilton18:07:14

One trick is :dispatch-later; another is adding the ^:flush-dom metadata to each event.
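
In the handler from the top of the thread, that means attaching the metadata to the re-dispatched event vector, e.g. (a sketch):

{:db       (add-points db now config)
 :dispatch ^:flush-dom [:map/progressive-load config later]}

With ^:flush-dom on the event, re-frame waits for reagent to flush the DOM (i.e. render) before processing that event.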

pesterhazy18:07:47

Safari: setTimeout 4589 and nextTick 344

pesterhazy18:07:34

Firefox: setTimeout 4700 and nextTick 75

zalky18:07:08

@hiskennyness: I thought about just using ^:flush-dom on each event, but wouldn't that be rather inefficient? It is my (imperfect) understanding that after going through the current event queue, if we have not yet reached another animation frame, we can optionally process more events before giving control back to the browser.

zalky18:07:40

At least that's the idea in theory.

p-himik18:07:34

I think that understanding is correct.

p-himik18:07:57

Flushing DOM would make sense when it actually can display something useful - so, each 16ms or so, depending on the hardware.

kennytilton18:07:30

What I did was make the processing chunks big enough (during a single event) that I was not overtaxing the dom flush.

kennytilton18:07:07

But then, yeah, I needed logic to make the beast work in chunks. I think you said you are doing that, tho.

p-himik18:07:51

But that's hardly scalable. I don't think one can write a robust algorithm for splitting work into chunks so that each chunk is always processed within a frame.

pesterhazy18:07:48

typically you'd know, roughly, how long a chunk will take, no?

p-himik18:07:01

That should probably be done at the re-frame level, if that's possible at all. Not deciding on the chunking, but deciding on when to yield so that e.g. reagent can render.

p-himik18:07:21

E.g. one of my applications parses XML in the browser.

p-himik18:07:38

The files can vary greatly in size.

p-himik18:07:37

And even if you know, that's just shifting a one-time job (again, if possible) from the library to multiple user implementations.

pesterhazy18:07:52

granted that's a difficult job to estimate

kennytilton18:07:53

Wait, what difference does it make if you over-/under-estimate? You just do not want the UI to freeze, right?

p-himik18:07:09

I do not want it to change FPS.

p-himik18:07:19

And I do not want my processing to be slow.

pesterhazy18:07:31

I'd agree, as long as the chunks are less than, say, half a frame I don't see the problem

p-himik18:07:40

Even if the UI is not freezing, it can still be sluggish.

kennytilton18:07:42

The goal posts just moved! 🙂

p-himik18:07:16

The goal post?

kennytilton18:07:56

US football jargon: the requirements just changed from not freezing the UI to never missing a frame.

p-himik18:07:44

Ah. 🙂 Shannon told us to think bravely. 🙂

p-himik18:07:56

Why fix something small when you can fix something great.

kennytilton18:07:34

Anyway, just keep the chunks under 16ms. If you goof, the UI will feel sluggish for another 16ms.

p-himik18:07:19

Oops, not Shannon - Hamming. How could I.

p-himik18:07:07

If you goof for each frame, the user won't be able to comfortably drag a map, zoom a page, rotate a figure,...

p-himik18:07:56

But yeah, as a jugaad solution chunking + ^:flush-dom works. And maybe it's the best solution possible.

p-himik18:07:00

And there's always a place for web workers, as I mentioned initially. They're not that complicated.
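
For reference, a minimal main-thread sketch of that idea (the "js/worker.js" path, the :map/points-loaded event, and load-points! are all hypothetical; the worker itself is a separately compiled script that does the heavy computation and posts the result back):

(ns my-app.worker-bridge
  (:require [re-frame.core :as rf]))

(defonce worker (js/Worker. "js/worker.js"))

(set! (.-onmessage worker)
      (fn [e]
        ;; Store the worker's result in app-db via an ordinary event.
        (rf/dispatch [:map/points-loaded (js->clj (.-data e) :keywordize-keys true)])))

(defn load-points! [data]
  ;; Hand the heavy work to the worker so the UI thread never blocks.
  (.postMessage worker (clj->js data)))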

zalky19:07:11

So let's say I have two similar components on the page at once, does that change how I have to chunk? Like would that compound how many events are processed between flush events?

kennytilton19:07:52

What does Shannon want us to do? Keep an eye on the clock and jump out of a chunk when we get to 13ms? That could take a good twenty minutes to code! 🙂
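
For what it's worth, the clock-watching version is short. A sketch, reusing batch-n, add-points, and config from the original handler, with an arbitrary ~13ms budget:

(reg-event-fx :map/progressive-load
  (fn [{:keys [db]} [_ config data]]
    (let [deadline (+ (.now js/performance) 13)]   ;; leave a few ms of a 16ms frame for rendering
      (loop [db db, data data]
        (if (and (seq data) (< (.now js/performance) deadline))
          ;; Budget left in this frame: process another batch.
          (let [[now later] (split-at batch-n data)]
            (recur (add-points db now config) later))
          ;; Out of budget (or done): commit db and re-dispatch the remainder,
          ;; with ^:flush-dom so the browser renders before the next chunk.
          (cond-> {:db db}
            (seq data) (assoc :dispatch ^:flush-dom [:map/progressive-load config data])))))))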

kennytilton19:07:14

btw, what is this, OpenGL gaming or sth? 🙂

zalky19:07:34

😛 It's just a map with a lot of points

kennytilton19:07:30

Wait. Multiple components? I knew we should not have put the goal posts on wheels!

kennytilton19:07:50

Not sure how multiple components change this. We have one busy task… well, if the components themselves do a lot of work to prepare the next frame, yeah, you have to consider that. But a handler strictly watching the clock just has to worry about the clock.

eoliphant22:07:32

Hi, i’m having a bit of a chicken/egg problem. I’m using the aws-amplify library that comes with a HOC for authentication. You just wrap your root component with it and off to the races, it passes the auth status, etc into it’s child’s props. The issue i’m having is trying to bridge that into re-frame. In my root component, I’d check for the flag that indicates the user is logged in, then fire off the event. This is currently not working, as it seems that the firing of the event is happening and updating the DB, but I guess after everything’s been rendered, so say my menu, that’s looking at a logged-in sub is still showing the login option as the sub is returning the earlier value. If do a figwheel refresh, everything looks fine. So I’m not sure how to fix this, as far as I can tell, the issue is due to the fact that the event is being fired by the parent component, in the midst of that render cycle or whatever, such that when a child like my menu is rendered, it’s seeing the earlier state. I’m thinking that I may have to skip the HOC and just use their API, but was wondering if anyone had any ideas.