How would you handle an event source that streams data at a really high rate? We tried doing it as you'd do with ajax, using a handler and dispatch, but it fills the channel that re-frame uses under the hood and raises a lot of errors. We thought about updating app-db directly, which worked properly before we started using re-frame, but we're not sure that's the right way of doing it. We could of course do some operations to slow things down, but I'm still curious how we'd do it if we were to display these events in our logview, where you'd typically want to see everything
For some context, we're building a drone control panel, which typically has things like a map showing the current position, various flight data, etc. The drone itself sends us loads of flight data, some of which is required, for example, to graph things or reproduce flight controls.
(if anyone is interested, we already open-sourced our code that transforms the drone's data into a websocket that streams JSON: https://github.com/openskybot/skybot-router )
btw, the error we’re getting as it is now
Uncaught Error: Assert failed: No more than 1024 pending puts are allowed on a single channel. Consider using a windowed buffer.
(< (.-length puts) impl/MAX-QUEUE-SIZE)
cljs.core.async.impl.channels.ManyToManyChannel.cljs$core$async$impl$protocols$WritePort$put_BANG_$arity$3 @ channels.cljs?rel=1438379373829:81
cljs$core$async$impl$protocols$put_BANG_ @ protocols.cljs?rel=1438379373727:17
cljs.core.async.put_BANG_.cljs$core$IFn$_invoke$arity$2 @ async.cljs?rel=1438379373509:107
cljs$core$async$put_BANG_ @ async.cljs?rel=1438379373509:101
re_frame$router$dispatch @ router.cljs?rel=1438379372614:79
(anonymous function) @ handlers.cljs?rel=1438512081521:17
callHandlers @ Client.js:134
handleMessage @ Client.js:307
bound
and our current code
(register-handler
 :subscribe-uav
 (fn [db [_ c uav-name]]
   (bot/on-uav-update c uav-name #(dispatch [:store-uav uav-name %1]))
   db))

(register-handler
 :store-uav
 (fn [db [_ uav-name response]]
   (let [data (u/uav->map response)]
     (assoc-in db [:uavs uav-name] data))))
hum, I wrote a wall of text, if this isn’t the right place to post this, please tell me
I would throttle dispatching to your preferred UI update frequency, say 10 times per second
I assume the data is only sampled state of some measurements and dropping samples does not affect other parts of your app
and what about updating app-db directly? not saying it's the right solution, throttling clearly is, but I'm curious about the consequences of such an approach
updating app-db directly will still trigger all watchers on the ratom, which might be slow if your app is more complex
look at this for inspiration: https://github.com/darwin/plastic/blob/master/cljs/src/main/plastic/util/helpers.cljs#L47-L58
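In case it's useful, here is a minimal sketch of that throttling idea (the latest-uav / throttle-timer names and the 100 ms interval are made up; dispatch and :store-uav come from the snippet above, and the linked helper is more sophisticated):

;; remember only the most recent sample from the websocket callback...
(defonce latest-uav (atom nil))

(defn on-uav-update-throttled [uav-name response]
  (reset! latest-uav [uav-name response]))

;; ...and dispatch it at most ~10 times per second
(defonce throttle-timer
  (js/setInterval
    (fn []
      (when-let [[uav-name response] @latest-uav]
        (reset! latest-uav nil)
        (dispatch [:store-uav uav-name response])))
    100))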
so if I were to add some logviews, the best would be to buffer our events and then only update app-db in chunks, right?
I wasn't clear, when I said a hypothetical logview, I meant something where you can see everything that has been received
but if I update app-db only at an acceptable rate, like ten times per second, but this time updating it with a big chunk, it'd be acceptable, right?
if we're going to have some views that hold huge logs, they have nothing to do with app-db
you are welcome, and yes, I think updating app-db with bigger chunks of data would be acceptable, you trigger the reaction-watching machinery just once for each chunk
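A minimal sketch of that chunked approach (the :store-uav-chunk event, the pending-samples atom and the 100 ms flush interval are made up; register-handler, dispatch and u/uav->map are from the earlier snippet):

;; the websocket callback just accumulates samples
(defonce pending-samples (atom []))

(defn on-uav-update-buffered [uav-name response]
  (swap! pending-samples conj [uav-name response]))

;; flush everything accumulated so far into app-db ~10 times per second
(defonce flush-timer
  (js/setInterval
    (fn []
      (let [chunk @pending-samples]
        (when (seq chunk)
          (reset! pending-samples [])
          (dispatch [:store-uav-chunk chunk]))))
    100))

(register-handler
 :store-uav-chunk
 (fn [db [_ chunk]]
   (reduce (fn [db [uav-name response]]
             (assoc-in db [:uavs uav-name] (u/uav->map response)))
           db chunk)))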
btw, your project felt like gold for us, we intend to build an Atom plugin to edit code that is sent to a drone
cool, but it still has a long way to go, it is not yet suitable even for experimental use
so having some code that shows us the ropes of writing an atom plugin in cljs is really great
just a tip, if you are not using it already: https://github.com/binaryage/cljs-devtools
for now we're just floating in pure bliss thanks to figwheel / reagent / reframe / clojure / repl
@jhchabran: I'm late to the party here, but I'll try to provide a new piece of info ...
> Uncaught Error: Assert failed: No more than 1024 pending puts are allowed on a single channel. Consider using a windowed buffer.
means that a huge volume of dispatches have filled up the core.async channel before the events could be handled (by event handlers).
So possible solutions:
1. dispatch less often, by either dropping data or, better, doing fewer dispatches but making each one carry more data.
2. Increase the size of that core.async buffer. Well, I'd have to give an API for that. So that's not an immediate solution.
@mikethompson: thanks for the insights
that's exactly what we did, basically wiring our callbacks to dispatch, wasn't a good idea 😄
route 2, while interesting, would in our case just push the problem a bit further; we'd hit the same wall once we start handling more things on the drone
We have run into the same issue with websockets WHEN the app is in the background. Tons of updates flying down the websocket, and because the app is in the background, the javascript is throttled. You aren't getting 60 fps. Animation frames are much loooooonggggger
Neither were we.
Harsh lessons!! Is there no end to the pain??
if you could have seen the number of times my friend who started cljs a week ago was like "OH BOY THIS IS EXACTLY WHAT I NEEDED" while he was reading your re-frame readme 😄
Hey, just read it. Thanks for the kind words!!
I've been exploring these issues of channels as streams and dealing with large volumes of inputs and while I can't give you a definitive answer I think I can share a few things that are worth considering.
First, it's good to understand the inner workings of core.async channels a bit. Every channel has two queues: one for putters and one for takers, each limited to 1024 entries. This is a given and is independent of buffering.
If you want to deal with back pressure then that's where the buffer comes in. You can size the buffer and pick a type that, when full, drops either the newest values or the oldest.
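For reference, both flavours come straight out of core.async (a sketch assuming [cljs.core.async :as async]; the 100-element size is just illustrative):

;; dropping-buffer discards new puts once full (keeps the oldest values);
;; sliding-buffer discards the oldest values (keeps the newest)
(def drop-newest (async/chan (async/dropping-buffer 100)))
(def keep-newest (async/chan (async/sliding-buffer 100)))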
@mikethompson: for the re-frame main channel I don't think it makes sense to add a buffer - I think what you have is fine as is. When someone runs into the 1024 puts limit then they need to rethink their app.
@meow: music to my ears!! Thanks.
If the events in question aren't terribly important and can be dropped in response to back pressure then adding a buffered channel between the event source and the process of feeding those events on to re-frame is the way to go. That way the buffering and dropping is handled by core.async and is independent of timing and other browser issues.
If you want to deal with all the source data then you need to control that in a loop that is independent of re-frame or any other frame based mechanisms because those typically use a js requestAnimationFrame mechanism and browsers will slow down or stop the RAF loop for tabs that don't have the active focus.
If timing is really important, say to make a game appear smooth and at an appropriate speed given that the browser won't always be able to operate at 60fps or even necessarily at a fixed, slower rate, like 30fps, well now you are in a whole other world and will have to know quite a bit more about how timings really take place in the browser.
That's where I'm still getting up-to-speed. Google Closure has some tools for this in their async module.
I read through https://github.com/Day8/re-frame and there's no mention of URLs and how to change them and dispatch and so on. Does re-frame only work with a single URL?
@mikethompson: I hope that all makes sense. This is something I'm still learning but thought it might be useful in general.
This wiki page might be useful in terms of lessons learned, such as how awesome transducers are for channel data transformations: https://github.com/clojure/core.async/wiki/Sieve-of-Eratosthenes
@jhchabran: I hope that helps some. Your kind of situation where you have a firehose of data streaming in is the kind of thing I have in the back of my mind as I've been exploring this issue. Would love to see how your app shapes up.
In short, the browser event loop is never faster than 60 frames per second so you can't feed it more events than it can handle in that timeframe of 16.667 ms for each frame. That applies to the event loop in re-frame as well. So the main event loop needs to be fast and primarily focused on what you want to display in the next frame update.
darwin: well, many SPA frameworks update the URL so that if you pass the URL to someone else it'll load the same part of the application. For example, EmberJS makes that straightforward and easy. Without that, apps behave like Flash apps. And yes, that makes SPA a misnomer, but there are very few real SPAs out there. Look at the one that started it all, Google Maps, it constantly updates the URL to represent your current view as best as possible.
call it whatever you like, you could use something like secretary lib, but that is not routing to different urls, same url with subset of app’s state encoded in url hash
I know about secretary and silk and bidi.
But it looks like re-frame is not doing anything with the URL, so all that needs to be manually wired, and given the way re-frame handles state, it looks like it would be a horrible hack. This is my question.
@pupeno: I'm not sure if any of the react-based libraries in clojurescript do any kind of automatic url updating: om, reagent, re-frame, quiescent, freactive, etc.
why hack? keep routing state in your app-db. when the url changes by user action, call re-frame's dispatch to change that state. from the opposite direction: subscribe to that routing info in app-db, and when it changes, update the hash without triggering dispatch
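A minimal sketch of that two-way wiring, assuming the re-frame 0.4-era register-handler / register-sub API and reagent's reaction / run! macros; the :set-route and :route names are made up for illustration:

;; assumed requires:
;;   [re-frame.core :refer [dispatch register-handler register-sub subscribe]]
;;   (:require-macros [reagent.ratom :as ra])   ;; for ra/reaction and ra/run!

;; world -> app-db: the user changes the hash, we dispatch
(.addEventListener js/window "hashchange"
  (fn [_] (dispatch [:set-route (.-hash js/location)])))

(register-handler
 :set-route
 (fn [db [_ hash]] (assoc db :route hash)))

;; app-db -> world: mirror the routing info back into the hash
(register-sub
 :route
 (fn [db _] (ra/reaction (:route @db))))

(defonce sync-hash
  (let [route (subscribe [:route])]
    (ra/run!
      (when (and @route (not= @route (.-hash js/location)))
        ;; setting the hash re-fires hashchange, but the value is equal
        ;; on the second pass so it settles without looping
        (set! (.-hash js/location) @route)))))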
@meow: yeah, I’m curious too. I wish there was an emberjs-like URL handling built on top of react.
darwin: well, I never used re-frame before, so I don't have a good mental model of what a finished app looks like. And in all the documentation I've seen there's not a single mention of URLs, which makes me think I might be going against the grain of the library by using URLs.
IMO re-frame does not want to care about the "outer world"; it is your job to map app state changes to changes in the world around it and to dispatch proper commands to modify app state in reaction to world events. re-frame doesn't even care about react/reagent/whatever-you-use to present your app state to the user
@jhchabran: You might find this useful:
(defn listen-next-tick!
  [callback]
  (letfn [(step [] (if (callback) (goog.async.nextTick step)))]
    (goog.async.nextTick step)))
It's part of a library I'm working on: https://github.com/decomplect/ion/blob/master/src/ion/poly/core.cljs
mm, I'm having some trouble understanding what it does, could you provide an example?
I believe that listen-next-tick! is the fastest looping mechanism for js that plays nicely with the js event loop
Think of it like an event listener - you pass in your callback, which gets called every time there is an opening in the js processing loop. (Sorry for my sloppy language, the details are in the Google Closure library).
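A hypothetical usage sketch, purely for illustration (the tick counter is made up):

(defonce ticks (atom 0))

;; the callback runs on every opening in the js event loop;
;; a truthy return keeps the loop going, nil/false stops it
(listen-next-tick!
  (fn []
    (swap! ticks inc)
    (< @ticks 1000)))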
I see what you mean. Well, in our case the drone is just sending stuff permanently, we can't read from it
Ok, well, you're going to read from some data source that's being populated by the drone, yes?
(I’ve just finished reading what you wrote earlier, thanks for the clear explanation ! )
And I'm just focusing on being able to do everything in the browser but that's just my thing and obviously this kind of problem could be split into a client/server solution.
actually, since we’re also the ones building the software that connects to the drone, we could do some handling/buffering directly in there
but for now, we tried to not drift too far away from how it’s internally done by the drone’s controller (Taulabs)
typically, we receive objects that represent some drone subsystem, like current altitude, position, etc.
so, since our SPA is just a dashboard where you can see what’s happening and adjust some settings, dropping most of these objects coming from the WS doesn’t create any issue on our side
presently, we're just wiring the callbacks, but as things progress, it clearly hints we should add core.async in there
populating some array in low level fashion, then periodically reading from it could be an even simpler solution
if it were me (and keep in mind you know far more about your app than I do) I would wire up a callback for handling the json stream and put that data onto a core.async channel, using a transducer, if necessary, to transform the data into something easier or more appropriate for the rest of your app to use, and I would give that channel a buffer size and type to deal with the back pressure
then you'd have some other loop that takes the data off that channel to do whatever you want with it
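A minimal sketch of that wiring (the buffer size, the 100 ms pacing and the uav-chan / on-uav-update names are assumptions; u/uav->map, dispatch and :store-uav come from the earlier snippets):

;; assumed requires: [cljs.core.async :as async]
;; plus the go-loop macro from cljs.core.async.macros

;; buffered channel with a transducer reshaping each raw sample;
;; the sliding buffer keeps only the newest 100 samples under back pressure
(def uav-chan
  (async/chan (async/sliding-buffer 100)
              (map (fn [[uav-name raw]] [uav-name (u/uav->map raw)]))))

;; the websocket callback just puts raw data onto the channel
(defn on-uav-update [uav-name raw]
  (async/put! uav-chan [uav-name raw]))

;; an independent consumer loop decides the pace of app-db updates
(go-loop []
  (when-let [[uav-name data] (async/<! uav-chan)]
    (dispatch [:store-uav uav-name data])
    (async/<! (async/timeout 100))   ;; illustrative pacing, ~10 updates/s
    (recur)))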
yeah I thought about doing things that way (though not as clearly expressed as you just wrote it) and I feel like it's the direction we should take
yes indeed just discovered it thanks to @jhchabran very cool so far
yeah re-frame made things much clearer for us, with solely reagent, we would have been a bit lost
yes, actually using some kind of channel might be a good idea in our case. For example, the thing that broke is the update of the quaternion describing the actual attitude of the drone, which means that it'll certainly feed a 3D representation of the drone
to follow up on my looping suggestion, if you look at the source code for core.async you'll see that they use goog.async.nextTick to schedule the work done by the go macro, so if you use core.async you're already using goog.async.nextTick whether you knew it or not.
And since you folks are new to clojure you might not be that familiar with Google Closure yet, but it is the compiler used by ClojureScript so it becomes readily available for use in cljs apps, and by "it" I mean a large library that is part of the Google Closure compiler+library tool set.
My poly library is an attempt to pull in bits and pieces of Google Closure that can be enhanced or reshaped for easier use in cljs, but Google Closure can also just be used as is since it's just js code. But you might see some interesting things in poly where I'm taking js events and turning them into clojure maps and supplying them via channels.
yeah the more I read about cljs, the more it feels I should get familiar with google closure
Yes, there are definitely good things in goog and dnolen is always telling devs to look there.
poly is definitely alpha code and an exploration for me so keep that in mind, but I would definitely recommend reading it to get an idea of what is available in goog and how one person is using it, especially the async stuff.
If you want a readme, @mikethompson has got a readme
we're gonna get back to our code and take some time to digest the huge amount of feedback we got here
Just following up on the discussion about 1024 pending puts
1024 pending puts on the wall, 1024 pending puts, <! one down, pass it around, 1023 pending puts on the wall...
You run into the 1024 pending puts issue because dispatch uses put! to put events on the channel. put! runs asynchronously so it has to go into the pending queue
If you had a reference to the re-frame channel then you could “dispatch” inside a go block. This would park the “dispatcher” until there was space in the channel to put the event
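Purely hypothetical sketch of that, since re-frame doesn't expose its channel (event-chan is a stand-in for it, and >! / go come from cljs.core.async):

(go
  ;; parks instead of erroring when the channel is full
  (>! event-chan [:store-uav uav-name data]))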
I also ran into this recently working with web sockets and streaming events
I have a feeling there is a performance gain to be had by removing https://github.com/Day8/re-frame/blob/v0.4.1/src/re_frame/router.cljs#L43
but it needs to be replaced with a call which is smart enough to know when to yield to the next animation frame but continue processing events until then
@jhchabran or @stant, are you able to share a Chrome profile of your code running? A screenshot should be enough for a first pass
The problem with (async/timeout 0) is that it isn't really a 0 second timeout. It's browser specific, but usually takes between 2-15 ms to run
What @meow said about using dropping channels is a good idea. Another option is to have the server debounce events and merge them every 1/10th of a second so you're not processing too much data
It really depends on your data model though.
(defn request-animation-frame!
  "A delayed callback that pegs to the next animation frame."
  [callback]
  (.start (goog.async.AnimationDelay. callback)))

(defn listen-animation-frame!
  [callback]
  (letfn [(step [timestamp]
            (if (callback timestamp) (request-animation-frame! step)))]
    (request-animation-frame! step)))
@danielcompton: if you removed that line of code then the go loop would never yield to the js process
@meow: exactly, you want to run as much as you can before the next animation frame needs to run
the browser is a mess, but I feel like I'm getting closer to understanding how to make it behave
Totally agree. We’re lucky in our project to only need to target Chrome which makes life a lot simpler
@danielcompton: can you explain what you mean about (async/timeout 0) behavior?
I'm reading the source now, but I'm tired: https://github.com/clojure/core.async/blob/master/src/main/clojure/cljs/core/async/impl/timers.cljs#L152-L166
that calls js/setTimeout
setTimeout 0 won’t actually execute in 0 msecs. Test it on your browser at http://javascript.info/tutorial/events-and-timing-depth
On the section "Measure your setTimeout(.., 0) resolution"
I got around 5 ms on Safari, Chrome, and Firefox on OS X
So every time we call setTimeout 0 in the router loop we have to wait at least 5 ms before we process the next event
What would be nice is to alt! over two channels, the re-frame dispatch channel, and a channel which told you when to yield to next animation frame
I think the problem is even worse than you describe - gory details inside goog.async source.
http://google.github.io/closure-library/api/source/closure/goog/async/nexttick.js.src.html#l68
The plumbing is there to support it with setImmediate but it looks like it’s mocked out when used in actual browsers
As you can see from the code I've posted for listen-next-tick! and listen-animation-frame!, I'm heading in the same direction of having multiple channels independent of rAF
I didn’t quite understand what the purpose of that code was?
What will that allow you to do?
They treat these like registering event listeners, like you would do for, say, a mouse-move event.
I just took Google's improved handling of animation-frame and "do this right away", which expect callback handlers, and wrapped them in ClojureScript and then added a simple looping option in the same form as listening to events. Then it's trivial to register a callback that would turn these into event channels.
I think I follow mostly, enough to get the general gist though. Thanks
the reason I had to add my own looping mechanism is that goog treats these like listenOnce things if that makes sense
the final result looks like this, which is my own rendering where I'm not using re-frame:
(defn render! [timestamp state]
  ...)

(defn render-cycle! [timestamp]
  (render! timestamp @state)
  (get-in @state [:app :rendering?]))

(defn start-rendering! []
  (console/info "start rendering")
  (swap! state assoc-in [:app :rendering?] true)
  (poly/listen-animation-frame! render-cycle!))

(defn stop-rendering! []
  (console/info "stop rendering")
  (swap! state assoc-in [:app :rendering?] false))
So, to put this back into the context of re-frame. I created what I did so I could understand how to optimize cljs rendering from the ground up, using goog tools to fix browser issues and avoid unnecessary delays in the browser or in core.async. Looking at the code for router-loop, I would be concerned about the timing issues, as @danielcompton has mentioned.
@meow: since you’ve been thinking about this a lot, how else could you structure the router-loop?
Is it correct to say that these events have been dispatched by handlers that expect them to take place before the next frame?
Or rather dispatched by something that expects handlers to be called before the next frame...
If so I would look at doing something like my listen-animation-frame! loop, since that relies on Google Closure's clever code to peg it to the next frame request - it is Google providing the ultimate RAF polyfill.
Then inside that loop I would have a go-loop with a when-let / recur that took every event off and called (try (handle event ...
I'm sure I've got some of that wrong because I don't know re-frame that well. I just don't think async timeout channels provide the kind of control that you want for an event loop like this.
I do agree with you, @danielcompton, that time is being lost that could be used to process more re-frame events in each frame.
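To make meow's sketch above a bit more concrete, here is a rough illustration only, not re-frame's actual router: drain whatever events are already queued, once per animation frame, using the listen-animation-frame! helper posted earlier (event-chan and handle stand in for re-frame internals; poll! needs a reasonably recent core.async, required as [cljs.core.async :as async]):

(listen-animation-frame!
  (fn [timestamp]
    (loop []
      (when-some [event (async/poll! event-chan)]
        (try
          (handle event)
          (catch :default e
            (js/console.error "event handler failed:" e)))
        (recur)))
    true))   ;; truthy return keeps the animation-frame loop running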