you're not supposed to need to fool around at the db format layer...so I'd like to know if we're missing necessary generality.
I've got large blobs of data (for charts and tables), and it's coming in as nested maps. The merge function takes a long time to merge these blobs into app state, probably because it's recursive, looking for idents and whatnot.
But these blobs are not something I query inside. A shallow merge would work just fine.
Changing the same data to nested vectors instead of nested maps resulted in a 2-3x performance gain in the merge function.
ok, so this is a harder problem than you might recognize...I agree that your solution of being able to mark things as shallow would help; but you cannot pass metadata over the wire.
The query could be used to help...e.g. shallow merge anything that is at a raw (non-join) prop
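That query-guided idea can be sketched roughly like this. This is a minimal, hypothetical sketch, not Untangled's actual merge; `merge-by-query` and the sample data are invented, and it assumes Om-style queries where joins are maps and raw props are plain keywords:

```clojure
(ns example.query-merge)

(defn merge-by-query
  "Deep-merge `new` into `old`, but only recurse where the query has a
   join; raw (non-join) props are replaced wholesale, so big opaque
   blobs are never walked."
  [query old new]
  ;; one level of an om-style query: joins are the map entries
  (let [joins (into {} (filter map? query))]
    (reduce-kv
      (fn [acc k v]
        (if-let [subq (get joins k)]
          (assoc acc k (merge-by-query subq (get acc k) v))
          (assoc acc k v)))            ; raw prop: shallow replace
      old
      new)))

;; :report/blob is a raw prop here, so it is replaced, not deep-merged:
(merge-by-query [:report/id {:report/rows [:row/id]}]
                {:report/rows {:row/id 1} :report/blob {:a {:b 1}}}
                {:report/blob {:c 2}})
;; => {:report/rows {:row/id 1}, :report/blob {:c 2}}
```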
Also, some of those bits (like the prewalk) could be changed to use a compiled specter transform to make it faster (for tempid replacement)
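As a rough illustration of the Specter idea, assuming Specter's `walker` navigator with `comp-paths`/`compiled-transform` (the keyword-based `tempid?` predicate is a stand-in; real Om tempids are their own type):

```clojure
(ns example.tempid-rewrite
  (:require [com.rpl.specter :as s]))

;; Stand-in predicate: pretend tempids are keywords in the "tempid"
;; namespace, purely for illustration.
(defn tempid? [x]
  (and (keyword? x) (= "tempid" (namespace x))))

;; Compile the path once, instead of re-walking the whole state with a
;; fresh clojure.walk/prewalk on every merge.
(def tempid-path (s/comp-paths (s/walker tempid?)))

(defn rewrite-tempids
  "Replace every tempid in `state` using `remaps` (tempid -> real id)."
  [state remaps]
  (s/compiled-transform tempid-path #(get remaps % %) state))
```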
Anything you might do to optimize this would be useful to any general user of Untangled...so I'd say it'd be better to optimize what we have
I guess your state is large enough that the update with structural sharing is costly
So, another question to ask: Why not place an animated GIF loading symbol on the screen so that when the work happens, the user feels better. Sometimes large things just take time.
Shallow merge wherever queries end + better walker seem like good ideas. I can also try to send less data.
The more advanced merge is a really hard thing to write....ask @adambros about the mark/sweep stuff in pre/post merge.
@tony.kay: worth pointing out that this is not actually that large of a data set, we'd definitely see larger in the real world. Is there a way we could maybe store it outside the global app state where it's less expensive? (These are essentially blobs of immutable data for rendering tables - the reports never change)
You could use the same basic algorithm to hang metadata on the data structure, then use a simple deep-merge that switches to shallow when the items are marked...not really an online algorithm, though, so that might not speed it up as much as just doing the merge based on the query...but then you're needing a parser
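A minimal sketch of that metadata-driven merge (hypothetical names; as noted above, the metadata would have to be hung on the data after the response arrives, since it can't cross the wire):

```clojure
(defn marked-deep-merge
  "Deep-merge maps, but when the incoming value carries ^:shallow
   metadata, replace it wholesale instead of recursing into it."
  [old new]
  (if (and (map? old) (map? new)
           (not (:shallow (meta new))))
    (merge-with marked-deep-merge old new)
    new))

(marked-deep-merge {:report {:a 1 :b 2}}
                   {:report ^:shallow {:a 9}})
;; => {:report {:a 9}}  ; :report replaced, not merged
```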
@therabidbanana: hm. So, at the moment the only way to do that would be to hack into the parser...and you'd lose app state tracking for that data.
We've not done anything optimized in merge, and know it needs it...it is a planned thing.
If it turns out what we want is a way to point to external data, then we can add that as a feature as well.
On the animated gif point - this is stop-the-world work by the browser, dropping the framerate to seconds per frame - so it's not likely to help
for that matter, if you pre-process the response to know what paths are shallow, you might even be able to use that to make tempid replacement even faster
We were going to try the nested vectors approach to limit how much the merge function has to inspect, but it seems like that still relies on the merge function being reasonably fast
As I said: we know it needs optimizing. It is a first draft that is known correct, but not fast
so rather than hacking, let's work the central problem (optimization), then consider alternatives when/if that hits a wall
Sounds reasonable - specter sounds like an interesting approach, I've only briefly looked at it before.
I could hack around this next week. @tony.kay I'll raise an issue describing the problem and proposed solutions.
i did mark-and-sweep missing, but i don't think that's what your profiler said was taking a long time
i'm not 100% sure, but that sounds like you apply another migration that changes the cardinality
@adambros: on mark/sweep: It uses the query, which has some symmetry with the merge optimization
@kenbier: I don't think it's possible to change the cardinality without a re-import
@kenbier: Right...if you can do it in datomic, you can do it, since a migration is nothing more than transactional data (since schema is just transactional data) that gets executed and recorded as "done" by our library...really the tracking is all we added. The schema functions come from Datomic-schema
@tony.kay: I just forked the untangled-client, any suggestions for how to develop? Is there a dev mode to execute it? Test runner perhaps?
@currentoor: there is only a test build for untangled-client. if you want to see how it is performing, I’d make a cookbook recipe and make changes to untangled-client in checkouts
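For reference, the checkouts setup mentioned here looks roughly like this (paths are hypothetical; it assumes your untangled-client fork sits beside the recipe project):

```shell
# Inside the cookbook recipe project (hypothetical layout): a symlink
# in checkouts/ makes Leiningen use your local untangled-client fork
# instead of the jar from the repository.
mkdir -p checkouts
ln -sfn ../../untangled-client checkouts/untangled-client
ls -l checkouts
```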