
How would you handle an event source that streams data at a really high rate? We tried doing it as you'd do with ajax, using a handler and dispatch, but it fills the channel that re-frame uses under the hood and raises a lot of errors. We thought about updating app-db directly, as that worked properly before we started using re-frame, but we're not sure it's the right way of doing it. We could of course do some operations to slow things down, but I'm still curious how we'd do it if we were to display these events in our logview, where you'd typically want to see everything 🙂 For some context, we're building a drone control panel, which typically has things like a map showing the current position, various flight data, etc. The drone itself sends us loads of flight data, some of which is required, for example, to graph things or to reproduce flight controls.


(if anyone is interested, we already open-sourced our code that transforms the drone's data into a websocket that streams JSON)


btw, here's the error we're getting as it is now:

Uncaught Error: Assert failed: No more than 1024 pending puts are allowed on a single channel. Consider using a windowed buffer.
(< (.-length puts) impl/MAX-QUEUE-SIZE)
cljs.core.async.impl.channels.ManyToManyChannel.cljs$core$async$impl$protocols$WritePort$put_BANG_$arity$3	@	channels.cljs?rel=1438379373829:81
cljs$core$async$impl$protocols$put_BANG_	@	protocols.cljs?rel=1438379373727:17
cljs.core.async.put_BANG_.cljs$core$IFn$_invoke$arity$2	@	async.cljs?rel=1438379373509:107
cljs$core$async$put_BANG_	@	async.cljs?rel=1438379373509:101
re_frame$router$dispatch	@	router.cljs?rel=1438379372614:79
(anonymous function)	@	handlers.cljs?rel=1438512081521:17
callHandlers	@	Client.js:134
handleMessage	@	Client.js:307


and here's our current code:

  (fn [db [_ c uav-name]]
    (bot/on-uav-update c uav-name #(dispatch [:store-uav uav-name %1]))
    db)  ;; return db unchanged

  (fn [db [_ uav-name response]]
    (let [data (u/uav->map response)]
      (assoc-in db [:uavs uav-name] data))))


hum, I wrote a wall of text; if this isn't the right place to post this, please tell me 🙂


I would throttle dispatching to your preferred UI update frequency, say 10 times per second


I assume the data is only sampled state of some measurements and dropping samples does not affect other parts of your app


clearly, we can drop things and that won't cause any issues


and what about updating app-db directly? not saying it's the right solution, throttling clearly is, but I'm curious about the consequences of such an approach


updating app-db directly will still trigger all watchers on the ratom, which might be slow if your app is more complex


all reactions and subscriptions add watchers on app-db


so if I were to add some logviews, the best thing would be to buffer our events and then only update app-db in chunks, right?


not buffer, just drop them


if you want 10 times per second, drop all but last message in each 100ms time window


so you call dispatch max 10 times per second
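a minimal sketch of that drop-all-but-last approach (reusing `dispatch` and the `:store-uav` event from the earlier snippet; `latest-sample` and `on-uav-update` are hypothetical names):

```clojure
(ns example.throttle
  (:require [re-frame.core :refer [dispatch]]))

;; Keep only the most recent message; each websocket callback is cheap.
(def latest-sample (atom nil))

(defn on-uav-update [uav-name data]
  (reset! latest-sample [uav-name data]))

;; Every 100ms, dispatch the last sample seen (if any):
;; at most 10 dispatches per second, regardless of the input rate.
(js/setInterval
  (fn []
    (when-let [[uav-name data] @latest-sample]
      (reset! latest-sample nil)
      (dispatch [:store-uav uav-name data])))
  100)
```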


I wasn't clear: when I said a hypothetical logview, I meant something where you can see everything that has been received


ah, ok, then you would have to keep this data outside app-db


but if I update app-db only at an acceptable rate, like ten times per second, this time updating it with a big chunk, it'd be acceptable, right?


mm, in fact you're right


if we're going to have some views that hold huge logs, they have nothing to do with app-db


thanks for your help 🙂


you are welcome, and yes, I think updating app-db with bigger chunks of data would be acceptable; you trigger the reaction watching machinery just once for each chunk
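a sketch of that chunked approach for a logview (the `:append-log` handler and `pending` atom are made-up names):

```clojure
(ns example.chunks
  (:require [re-frame.core :refer [dispatch]]))

;; Accumulate everything; nothing is dropped.
(def pending (atom []))

(defn on-log-line [line]
  (swap! pending conj line))

;; Flush at 10 Hz: one dispatch carries the whole chunk, so the
;; reaction/watcher machinery runs once per chunk, not once per line.
(js/setInterval
  (fn []
    (let [chunk @pending]
      (when (seq chunk)
        (reset! pending [])
        (dispatch [:append-log chunk]))))
  100)
```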


btw, your project felt like gold for us; we intend to build an atom plugin to edit code that is sent to a drone


🙂 cool, but there is still a long way to go; it is not yet suitable even for experimental use


so having some code that shows us the ropes of writing an atom plugin in cljs is really great


we're still beginners with clojure


just a tip, if you are not using it already:


yeah, I stumbled on it yesterday while browsing some channel on clojurians


looks awesome


for now we're just floating in pure bliss thanks to figwheel / reagent / re-frame / clojure / repl


@jhchabran: I'm late to the party here, but I'll try to provide a new piece of info ... > Uncaught Error: Assert failed: No more than 1024 pending puts are allowed on a single channel. Consider using a windowed buffer. means that a huge volume of dispatches has filled up the core.async channel before the events could be handled (by event handlers). So possible solutions: 1. dispatch less often, by either dropping data or, better, doing fewer dispatches but making each one carry more data. 2. Increase the size of that core.async buffer. Well, I'd have to provide an API for that. So that's not an immediate solution.


@mikethompson: thanks for the insights


that's exactly what we did, basically wiring our callbacks to dispatch, wasn't a good idea 😄


we'll go for route 1


route 2, while interesting, would in our case just push the problem a bit further; we'd hit the same wall once we start handling more things on the drone


We have run into the same issue with websockets WHEN the app is in the background. Tons of updates flying down the websocket, and because the app is in the background, the javascript is throttled. You aren't getting 60 fps. Animation frames are much loooooonggggger


ok, duly noted 😄


I'm not very familiar with what happens when app is in the background


Neither were we. 🙂


Harsh lessons!! Is there no end to the pain?? 🙂


well for us, we felt relieved when we started doing clojurescript 😄


if you could have seen the number of times my friend, who started cljs a week ago, was like "OH BOY THIS IS EXACTLY WHAT I NEEDED" while reading your re-frame readme 😄


sent you a PM on reddit before noticing you're here on clojurians btw 🙂


Hey, just read it. Thanks for the kind words!!


I've been exploring these issues of channels as streams and dealing with large volumes of inputs and while I can't give you a definitive answer I think I can share a few things that are worth considering.


First, it's good to understand the inner workings of core.async channels a bit. Every channel has two queues: one for putters and one for takers, each 1024 in size. This is a given and is independent of buffering.


If you want to deal with back pressure then that's where the buffer comes in. You can size the buffer and pick a type that drops either the oldest values (sliding-buffer) or the newest (dropping-buffer).
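for reference, a sketch of the two dropping buffer types in core.async (the channel names are made up):

```clojure
(ns example.buffers
  (:require [cljs.core.async :refer [chan put! sliding-buffer dropping-buffer]]))

;; sliding-buffer: when full, the OLDEST value is dropped, so you
;; always keep the most recent samples. Good for telemetry like position.
(def telemetry-ch (chan (sliding-buffer 16)))

;; dropping-buffer: when full, the NEW value is discarded, so you
;; keep the earliest values.
(def burst-ch (chan (dropping-buffer 16)))

;; Against either buffer, put! completes immediately even when the
;; buffer is full (the value is just dropped), so the 1024-pending-puts
;; queue never grows.
(put! telemetry-ch {:altitude 120.5})
```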


@mikethompson: for the re-frame main channel I don't think it makes sense to add a buffer - I think what you have is fine as is. When someone runs into the 1024 puts limit then they need to rethink their app.


@meow: music to my ears!! Thanks.


If the events in question aren't terribly important and can be dropped in response to back pressure then adding a buffered channel between the event source and the process of feeding those events on to re-frame is the way to go. That way the buffering and dropping is handled by core.async and is independent of timing and other browser issues.


That's the easy case.


If you want to deal with all the source data then you need to control that in a loop that is independent of re-frame or any other frame based mechanisms because those typically use a js requestAnimationFrame mechanism and browsers will slow down or stop the RAF loop for tabs that don't have the active focus.


Leading to increased back pressure on your channel.
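one way to react to that, sketched with the Page Visibility API (the `:set-update-mode` event is hypothetical):

```clojure
(ns example.visibility
  (:require [re-frame.core :refer [dispatch]]))

;; Background tabs throttle requestAnimationFrame and timers, so a
;; rAF-driven loop drains channels much more slowly. Detect the switch
;; and fall back to a coarser strategy, e.g. drop more aggressively.
(.addEventListener js/document "visibilitychange"
  (fn [_]
    (dispatch [:set-update-mode
               (if (.-hidden js/document) :background :foreground)])))
```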


If timing is really important, say to make a game appear smooth and run at an appropriate speed given that the browser won't always be able to operate at 60fps, or even at a fixed slower rate like 30fps, well, now you are in a whole other world and will have to know quite a bit more about how timing really works in the browser.


That's where I'm still getting up-to-speed. Google Closure has some tools for this in their async module.


I read through and there's no mention of URLs and how to change them and dispatch and so on. Does re-frame work on a single URL?


@mikethompson: I hope that all makes sense. This is something I'm still learning but thought it might be useful in general.


This wiki page might be useful in terms of lessons learned, such as how awesome transducers are for channel data transformations:


@jhchabran: I hope that helps some. Your kind of situation where you have a firehose of data streaming in is the kind of thing I have in the back of my mind as I've been exploring this issue. Would love to see how your app shapes up.


In short, the browser event loop is never faster than 60 frames per second so you can't feed it more events than it can handle in that timeframe of 16.667 ms for each frame. That applies to the event loop in re-frame as well. So the main event loop needs to be fast and primarily focused on what you want to display in the next frame update.


@pupeno: SPAs == Single Page Applications, they have no routing to different URLs


darwin: well, many SPA frameworks update the URL so that if you pass the URL to someone else it'll load the same part of the application. For example, EmberJS makes that straightforward and easy. Without that, apps behave like flash apps. And yes, that makes SPA a misnomer, but there are very few real SPAs out there. Look at the one that started it all, Google Maps: it constantly updates the URL to represent your current view as best as possible.


call it whatever you like, you could use something like secretary lib, but that is not routing to different urls, same url with subset of app’s state encoded in url hash


I know about secretary and silk and bidi.


But it looks like re-frame is not doing anything with the URL, so all that needs to be manually wired, and with the way re-frame handles state, it looks like it would be a horrible hack. This is my question.


@pupeno: I'm not sure if any of the react-based libraries in clojurescript do any kind of automatic url updating: om, reagent, re-frame, quiescent, freactive, etc.


I could be wrong, and am curious to know.


why hack? keep routing state in your app-db; when the url changes by user action, call re-frame's dispatch to change that state. From the opposite direction: subscribe to that routing info in app-db and, when it changes, update the hash without triggering dispatch
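a sketch of that two-way wiring using goog.History (the `:set-route` handler and `:route` subscription are made-up names, and the handlers/subscriptions themselves would still need registering):

```clojure
(ns example.routing
  (:require [re-frame.core :refer [dispatch subscribe]]
            [goog.events :as events])
  (:require-macros [reagent.ratom :refer [run!]])
  (:import [goog History]
           [goog.history EventType]))

(def history (History.))

;; URL -> app-db: user navigation dispatches an event.
(events/listen history EventType.NAVIGATE
  (fn [e] (dispatch [:set-route (.-token e)])))
(.setEnabled history true)

;; app-db -> URL: subscribe to the route and mirror it into the hash.
;; The token comparison avoids a loop when we set the hash ourselves.
(let [route (subscribe [:route])]
  (run!
    (when (not= @route (.getToken history))
      (.setToken history @route))))
```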


@meow: yeah, I’m curious too. I wish there was an emberjs-like URL handling built on top of react.


darwin: well, I've never used re-frame before, so I don't have a good mental model of what a finished app looks like. And in all the documentation I've seen there's not a single mention of URLs, which makes me think I might be going against the grain of the library by using URLs.


IMO re-frame does not want to care about the "outer world": it is your job to map app state changes to changes in the world around it, and to dispatch the proper commands to modify app state in reaction to world events. re-frame doesn't even care about react/reagent/whatever you use to present your app state to the user


@jhchabran: You might find this useful:

(defn listen-next-tick!
  "Calls `callback` on each tick; stops looping when it returns falsey."
  [callback]
  (letfn [(step [] (if (callback) (goog.async.nextTick step)))]
    (goog.async.nextTick step)))


mm, I'm having some trouble understanding what it does, could you provide an example?


I believe that listen-next-tick! is the fastest looping mechanism for js that plays nicely with the js event loop


Think of it like an event listener - you pass in your callback, which gets called every time there is an opening in the js processing loop. (Sorry for my sloppy language, the details are in the Google Closure library).


Your callback needs to return true to keep the loop going.


Your callback could then, for example, read data off the drone and log it.


I see what you mean 🙂 well, in our case the drone is just sending stuff permanently, we can't read from it


Ok, well, you're going to read from some data source that's being populated by the drone, yes?


Or the drone is going to supply data in some fashion.


(I’ve just finished reading what you wrote earlier, thanks for the clear explanation ! )


And I'm just focusing on being able to do everything in the browser but that's just my thing and obviously this kind of problem could be split into a client/server solution.


actually, since we’re also the ones building the software that connects to the drone, we could do some handling/buffering directly in there


but for now, we tried to not drift too far away from how it’s internally done by the drone’s controller (Taulabs)


we’re reading from a ws that streams json


productivity aside, yeah, doing everything in the browser is a damn fun challenge 🙂


typically, we receive objects that represent some drone subsystem, like current altitude, position, etc.


so, since our SPA is just a dashboard where you can see what’s happening and adjust some settings, dropping most of these objects coming from the WS doesn’t create any issue on our side


presently, we're just wiring the callbacks, but as things progress, it clearly hints we should bring core.async in there


populating some array in low level fashion, then periodically reading from it could be an even simpler solution


it’s just we’re still beginners in clojure so that’s a lot to digest 😄


if it were me (and keep in mind you know far more about your app than I do) I would wire up a callback for handling the json stream and put that data onto a core.async channel, using a transducer, if necessary, to transform the data into something easier or more appropriate for the rest of your app to use, and I would give that channel a buffer size and type to deal with the back pressure


then you'd have some other loop that takes the data off that channel to do whatever you want with it
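a sketch of that wiring (reusing `dispatch` and `:store-uav` from the earlier snippet; the channel name, buffer size, and the JSON-parsing transducer are my own assumptions):

```clojure
(ns example.firehose
  (:require [cljs.core.async :refer [chan put! sliding-buffer <!]]
            [re-frame.core :refer [dispatch]])
  (:require-macros [cljs.core.async.macros :refer [go-loop]]))

;; Buffered channel with a transducer: JSON is parsed as values enter
;; the channel, and the sliding buffer sheds old samples under pressure.
(def uav-ch
  (chan (sliding-buffer 32)
        (map #(js->clj (.parse js/JSON %) :keywordize-keys true))))

;; The websocket callback just puts the raw message on the channel.
(defn on-ws-message [event]
  (put! uav-ch (.-data event)))

;; A consumer loop takes parsed data off at its own pace.
(go-loop []
  (when-let [data (<! uav-ch)]
    (dispatch [:store-uav (:name data) data])
    (recur)))
```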


yeah, I thought about doing things that way (not as clearly expressed as you just wrote it) and I feel like it's the direction we should take


you can think of a channel as just a queue


ah, there he is: @stant is the other developer on this project 🙂


but you get back-pressure logic and async concurrency


clojurescript has great tools for this sort of thing


hi got a lot to read here oO


yes indeed just discovered it thanks to @jhchabran very cool so far


and re-frame is a very good choice for the UI


yeah re-frame made things much clearer for us, with solely reagent, we would have been a bit lost


yes, actually using some kind of channel might be a good idea in our case. For example, the thing that broke is the update of the quaternion describing the actual attitude of the drone, which means that it'll certainly feed a 3d representation of the drone


to follow up on my looping suggestion, if you look at the source code for core.async you'll see that they use goog.async.nextTick to schedule the work done by the go macro, so if you use core.async you're already using goog.async.nextTick whether you knew it or not. 🙂


and in any case, data that arrives at such a rate will be ending up in the DOM


And since you folks are new to clojure you might not be that familiar with Google Closure yet, but it is the compiler used by ClojureScript so it becomes readily available for use in cljs apps, and by "it" I mean a large library that is part of the Google Closure compiler+library tool set.


Google Closure has many modules and goog.async is one of them.


My poly library is an attempt to pull in bits and pieces of Google Closure that can be enhanced or reshaped for easier use in cljs, but Google Closure can also just be used as is since it's just js code. But you might see some interesting things in poly where I'm taking js events and turning them into clojure maps and supplying them via channels.


yeah, the more I read about cljs, the more it feels like I should get familiar with google closure 🙂


Yes, there are definitely good things in goog and dnolen is always telling devs to look there.


gonna spend some time reading poly tonight then 😄


damn, the whole clojure/script world is so refreshing


no fancy homepages, but awesome readme and great code everywhere


poly is definitely alpha code and an exploration for me so keep that in mind, but I would definitely recommend reading it to get an idea of what is available in goog and how one person is using it, especially the async stuff.


yeah, no worries, you clearly stated it in the readme 🙂


not sure I even have a readme for poly...


If you want a readme, @mikethompson has got a readme 🙂


hum, you’re right


I messed up projects in my head 😄


we’re gonna get back at our code and take some time to digest the huge amount of feedback we got here


thanks guys, really 🙂


Just following up on the discussion about 1024 pending puts


1024 pending puts on the wall, 1024 pending puts, <! one down, pass it around, 1023 pending puts on the wall...


You run into the 1024 pending puts issue because dispatch uses put! to put events on the channel. put! runs asynchronously so it has to go into the pending queue


If you had a reference to the re-frame channel then you could “dispatch” inside a go block. This would park the “dispatcher” until there was space in the channel to put the event
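purely hypothetical, since re-frame doesn't expose its event channel, but the idea looks like this (`event-chan` stands in for that internal channel):

```clojure
(ns example.backpressure
  (:require [cljs.core.async :refer [>!]])
  (:require-macros [cljs.core.async.macros :refer [go]]))

(defn dispatch-blocking [event-chan event]
  (go
    ;; >! parks this go block until the channel has room, instead of
    ;; queueing an async put! (which is capped at 1024 pending puts).
    (>! event-chan event)))
```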


I also ran into this recently working with web sockets and streaming events


I have a feeling there is a performance gain to be had by removing


but it needs to be replaced with a call which is smart enough to know when to yield to the next animation frame but continue processing events until then


@jhchabran or @stant, are you able to share a Chrome profile of your code running? A screenshot should be enough for a first pass


The problem with (async/timeout 0) is that it isn’t really a 0 second timeout. It’s browser specific, but usually takes between 2-15 ms to run


What @meow said about using dropping channels is a good idea. Another option is to have the server debounce events and merge them every 1/10th of a second so you're not processing too much data


It really depends on your data model though.


(defn request-animation-frame!
  "A delayed callback that pegs to the next animation frame."
  [callback]
  (.start (goog.async.AnimationDelay. callback)))

(defn listen-animation-frame!
  "Calls `callback` on every animation frame; stops when it returns falsey."
  [callback]
  (letfn [(step [timestamp]
            (if (callback timestamp) (request-animation-frame! step)))]
    (request-animation-frame! step)))


@danielcompton: if you removed that line of code then the go loop would never yield to the js process


@meow: exactly, you want to run as much as you can before the next animation frame needs to run


the browser is a mess, but I feel like I'm getting closer to understanding how to make it behave


Totally agree. We’re lucky in our project to only need to target Chrome which makes life a lot simpler


@danielcompton: can you explain what you mean about (async/timeout 0) behavior?


or were you thinking of js timeout being flaky


that calls js/setTimeout


setTimeout 0 won't actually execute in 0 msecs. Test it in your browser at


On the section "Measure your setTimeout(.., 0) resolution"


I got around 5 ms on Safari, Chrome, and Firefox on OS X


So every time we call setTimeout 0 in the router loop we have to wait at least 5 ms before we process the next event
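you can measure it yourself with something like:

```clojure
;; Rough measurement of setTimeout(fn, 0) resolution in the browser.
(let [t0 (.now js/performance)]
  (js/setTimeout
    #(js/console.log "setTimeout 0 took" (- (.now js/performance) t0) "ms")
    0))
```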


What would be nice is to alt! over two channels, the re-frame dispatch channel, and a channel which told you when to yield to next animation frame
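sketched with hypothetical channels `event-ch` (standing in for re-frame's dispatch queue) and `frame-ch` (assumed to deliver a value on each animation frame):

```clojure
(ns example.altloop
  (:require [cljs.core.async :refer [<!]])
  (:require-macros [cljs.core.async.macros :refer [go-loop alt!]]))

(defn run-router-loop! [event-ch frame-ch handle]
  (go-loop []
    (alt!
      ;; a frame boundary arrived: loop again (parking here lets the
      ;; browser render before we resume)
      frame-ch ([_] (recur))
      ;; otherwise keep draining events within the current frame budget
      event-ch ([event] (handle event) (recur))
      :priority true)))
```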


Ah, right - hadn't seen that.


I think goog.async has a fix for that as well.


I think the problem is even worse than you describe - gory details inside goog.async source.


The plumbing is there to support it with setImmediate but it looks like it’s mocked out when used in actual browsers


As you can see from the code I've posted for listen-next-tick! and listen-animation-frame! I'm heading in the same direction of having multiple channels independent of rAF


I didn’t quite understand what the purpose of that code was?


What will that allow you to do?


They treat these like registering event listeners like you would do for say a mouse-move event.


listen-animation-frame will call its callback for each frame


listen-next-tick will call whenever there is a break in the js processing context


goog.async.nextTick does what we'd like (async/timeout 0) to do


I just took Google's improved handling of animation-frame and "do this right away", which expect callback handlers, wrapped them in ClojureScript, and then added a simple looping option in the same form as listening to events. Then it's trivial to register a callback that would turn these into event channels.


I already have code that takes mouse and keyboard events and turns them into channels.


I'm sure that's all clear as mud. Sorry.


I think I follow mostly


enough to get the general gist though. Thanks 🙂


the reason I had to add my own looping mechanism is that goog treats these like listenOnce things if that makes sense


the final result looks like this, which is my own rendering where I'm not using re-frame:

(defn render! [timestamp state]
  ;; actual drawing elided
  )

(defn render-cycle! [timestamp]
  (render! timestamp @state)
  (get-in @state [:app :rendering?]))

(defn start-rendering! []
  (console/info "start rendering")
  (swap! state assoc-in [:app :rendering?] true)
  (poly/listen-animation-frame! render-cycle!))

(defn stop-rendering! []
  (console/info "stop rendering")
  (swap! state assoc-in [:app :rendering?] false))


So, to put this back into the context of re-frame. I created what I did so I could understand how to optimize cljs rendering from the ground up, using goog tools to fix browser issues and avoid unnecessary delays in the browser or in core.async. Looking at the code for router-loop I would be concerned about the timing issues, as @danielcompton has mentioned.


@meow: since you’ve been thinking about this a lot, how else could you structure the router-loop?


The events being dispatched, these are re-frame events, right?


These aren't like mousemove events and such.


Keep in mind that I'm not actively using re-frame and have only toyed with it.


Is it correct to say that these events have been dispatched by handlers that expect them to take place before the next frame?


Or rather dispatched by something that expects handlers to be called before the next frame...


If so I would look at doing something like my listen-animation-frame! loop, since that relies on Google Closure's clever code to peg it to the next frame request; it is Google providing the ultimate rAF polyfill.


Then inside that loop I would have a go-loop with a when-let / recur that took every event off and called (try (handle event ...


I'm sure I've got some of that wrong because I don't know re-frame that well. I just don't think async timeout channels provide the kind of control that you want for an event loop like this.


I might take a closer look at this tomorrow.


I do agree with you, @danielcompton, that time is being lost that could be used to process more re-frame events in each frame.