
Been meaning to update the GH issue about this (saw your post the other day).


I checked a bit earlier, and I thought this was already possible, but after some poking around, I'm realizing it may not be. If you're using the reagent component directly (vs repl usage), then any options you pass through to the reagent component will get passed on to the embed api call.


I thought there was a callback option which would take the value of the created view object once it had been created, but I couldn't find it just now. If you're able to find that, then you should have everything you need (and I'm happy to help you get the usage right on that). If not, we can still add this functionality quite simply, but I probably won't have time to look at this for another week or so.


However, if you were intrepid and inspired enough, I can point you to where in the code base you would need to make a simple change, and as long as you're using shadow/deps.edn, you should be able to point to a gh branch until I get the PR merged in and deployed.


If you can't find an embed opt, then you just need to add a (.then (fn [thing] (run-callback (:view-callback opts) thing))), where thing is either an embed result object carrying a view, or the view object itself (I think the former, but check the embed api). Then you pass a :view-callback which takes a view object and does whatever you like with it, and run-callback is either just #(%1 %2) or #(%1 (.-view %2)).
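A rough sketch of what that change might look like (hypothetical; `embed-vega` and the option wiring are illustrative, but vega-embed's promise does resolve with a result object whose view property is the created Vega view):

```clojure
;; Hypothetical sketch: threading a :view-callback option through to the
;; vega-embed promise. `embed-vega` and `opts` are illustrative names.
(defn embed-vega [elem spec opts]
  (-> (js/vegaEmbed elem (clj->js spec) (clj->js opts))
      ;; vega-embed resolves with a result object whose .view is the
      ;; created Vega view; hand it to the user's callback if present.
      (.then (fn [result]
               (when-let [cb (:view-callback opts)]
                 (cb (.-view result)))))
      (.catch (fn [err] (js/console.error err)))))
```

From the consuming side you'd then pass something like {:view-callback (fn [view] ...)} through the reagent component's options.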


Thanks for dusting this issue off and coaxing me to finish up on it!


PS If what you want is to be able to shuttle data from the repl, don't worry, I want that too, and have some ideas about that, but it might have to wait a bit longer.


New release: RC2! I've added the high-level chapter on CNNs, which uses Fashion-MNIST training as the example. The book is almost complete. Next: a chapter on the cuDNN internals of convolution and pooling operations, a chapter on more complex architectures, and the Getting Started chapter. Note: it is much more than a Deep Learning book; it's a book that shows every single step of building real-world high-performance machine learning software, 100% in Clojure! Even if you are not that interested in DL, you'll find tons of advice applicable to many other data-crunching tasks.


@metasoarous Thanks, that’s all helpful starting points. I haven’t gotten into the cljs side at all - so far, I’ve been running everything from the REPL.


I tried doing (while true (oz/view! ... ) (Thread/sleep 100)), which worked surprisingly well other than using 700% CPU 😂


LOL; Yes, that would be a bit frequent unfortunately.


I do have some ideas more generally for how live updates could potentially happen through atoms or channels


But I think more hammock time will be needed on that one


Do you have any sense of roughly what refresh rate will ultimately be possible?


For atoms, it will have to re-render completely


But if you are really streaming data into the viz, that should be pretty quick thanks to vega


If you're gonna have a lot of data, you might want to set up a window on the viz so you don't end up with too many points
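One way to "window" the viz is in the spec itself: rank points by recency with a window transform and filter out everything but the newest N. A sketch in Vega-Lite EDN (field names like t and value are made up for illustration):

```clojure
;; Illustrative Vega-Lite spec: rank points by recency (row_number over a
;; descending sort on t) and keep only the newest 500, so the viz never
;; accumulates unbounded data.
(def line-spec
  {:data {:name "stream"}
   :transform [{:window [{:op "row_number" :as "rn"}]
                :sort [{:field "t" :order "descending"}]}
               {:filter "datum.rn <= 500"}]
   :mark "line"
   :encoding {:x {:field "t" :type "quantitative"}
              :y {:field "value" :type "quantitative"}}})
```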


My use case has ~500 to 1,000 data points, of which maybe half will update every 100 ms or so

paul.legato 21:02:39: looks cool, but a bit overkill - they say 25 Hz updates on 5 million points 🙂


What do you mean by ‘set up a window on the viz’?


Ah; I see. Then the atom approach might be fine for you, since most of the viz will have to update anyway


My guess is that using the vega api to update just the data (but not the viz) will still be a bit more performant.


For one, there will be less react rendering


Second, the viz compilation structure will still be in place (uncertain how much of the time that takes though).


So I do think it'll be better than what you're doing now
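For the record, here's roughly what "updating just the data" looks like against Vega's view API, assuming you have a handle on the created view object (a sketch; the "stream" data-source name and `push-points!` are made up):

```clojure
;; Hypothetical sketch: streaming new tuples into an existing Vega view
;; without recompiling the spec, via Vega's changeset API. Inserts new
;; points into the named data source and removes tuples older than a
;; cutoff, so only fresh points are retained.
(defn push-points! [view new-points cutoff-t]
  (-> view
      (.change "stream"
               (-> (js/vega.changeset)
                   (.insert (clj->js new-points))
                   (.remove (fn [tuple] (< (.-t tuple) cutoff-t)))))
      (.run)))
```

Since only the data source changes, Vega re-evaluates the dataflow without rebuilding the whole viz, which is where the performance win over a full re-render should come from.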


The webgl renderer may be helpful for you though


I do have a mind to add an option to render using it, but I'm not sure how well maintained it is vs being a proof of concept (I don't think it supports all of the available marks, for instance). That said, I've seen some impressive things done with it