This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2021-09-15
Channels
- # announcements (51)
- # beginners (65)
- # calva (44)
- # cider (6)
- # clara (3)
- # clj-kondo (30)
- # cljsrn (5)
- # clojure (63)
- # clojure-australia (7)
- # clojure-dev (7)
- # clojure-europe (43)
- # clojure-gamedev (1)
- # clojure-nl (6)
- # clojure-uk (7)
- # clojurescript (51)
- # conjure (1)
- # cursive (9)
- # datascript (16)
- # datomic (14)
- # depstar (20)
- # events (1)
- # exercism (17)
- # figwheel-main (6)
- # fulcro (9)
- # graphql (3)
- # gratitude (2)
- # honeysql (4)
- # jobs (7)
- # leiningen (3)
- # lsp (107)
- # meander (7)
- # minecraft (3)
- # off-topic (16)
- # other-languages (4)
- # pathom (4)
- # pedestal (26)
- # practicalli (4)
- # re-frame (3)
- # reitit (7)
- # remote-jobs (1)
- # shadow-cljs (26)
- # tools-deps (67)
- # vim (19)
- # vscode (1)
so still looking at performance: I am transacting 216334 entities with an "Elapsed time: 16979.000000 msecs". Is it me, or is that quite high? 200000 does not seem like many. I am starting to consider some of the alternatives, but it seems a lot of them don't support schemas or do not run in clojurescript currently
cljs is one thread I guess, as it's javascript. I am curious if others have experienced the same or if it could be something else. I've been switching to building the db manually using datoms, but this complicates things. 200000 seems like a small number to me for that length of time on a transact
entities in hash map format. I have changed to using datoms, which has helped, but it's messed up the references, as I need to adjust them all to be linked I believe
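A rough sketch of the trade-off being described here, with made-up attribute names: transacting entity maps lets DataScript resolve references for you via tempids, while building a db directly from raw datoms with d/init-db skips the transaction machinery but leaves entity ids, and therefore the references between entities, to be assigned by hand:

(require '[datascript.core :as d])

;; illustrative schema: :node/parent is a reference attribute
(def schema {:node/parent {:db/valueType :db.type/ref}})

;; route 1: entity maps through d/transact! -- the negative :db/id values
;; are tempids, and DataScript resolves the :node/parent reference itself
(def conn-a (d/create-conn schema))
(d/transact! conn-a
             [{:db/id -1 :node/name "parent"}
              {:db/id -2 :node/name "child" :node/parent -1}])

;; route 2: raw datoms through d/init-db -- no transaction processing,
;; but the entity ids (1 and 2) and the reference value must be set by hand
(def db-b
  (d/init-db [(d/datom 1 :node/name "parent")
              (d/datom 2 :node/name "child")
              (d/datom 2 :node/parent 1)]
             schema))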
yeah, I imagine that doing all the decomposition and referencing is taking up a lot of time, but that's also most of the value of datascript 🙂
Transact performance may not be significantly better with datahike (probably worse if writing to disk), but you'd only need to incur that cost once, since thereafter you'd be dynamically reading/querying from indexeddb.
Transacting 200,000 entities in ~17 seconds? Yeah, that’s probably about right… Everything with DataScript takes a minimum of 0.1ms - 1ms, in my experience. Queries can take even longer, and the execution time is roughly proportional to the size of the result set. 17000ms / 200,000 transactions = ~0.1ms per transaction. That was a big surprise for me initially, because I (wrongly) expected it to be almost as fast as a CLJS hash map.
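For anyone wanting to reproduce the per-entity figure above, a sketch along these lines should do it (the entity shape and count are made up, and the elapsed time will vary with schema, data, and machine):

(require '[datascript.core :as d])

(def conn (d/create-conn {}))

;; transact 200000 small entity maps in a single call and time it;
;; dividing the reported elapsed time by 200000 gives ms per entity
(time
  (d/transact! conn
               (mapv (fn [n] {:entry/id n :entry/value (str "value-" n)})
                     (range 200000))))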
Naively swap!-ing small values into a ClojureScript atom is usually a lot faster:
(def a-1 (atom {}))
(time
  (run! (fn [n] (swap! a-1 (fn [m] (assoc m (random-uuid) {:small-entry n}))))
        (range 200000)))
;=> Elapsed time: ~1500 ms
… And can probably be made even faster with the use of transients, etc.
okay, thanks for confirming at least that my speed is about right. I have got around it for now by maintaining 2 databases: one populated using datoms and no references, the other using references. I believe you can query across multiple databases, which might help mitigate the separation.
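A minimal sketch of the transients suggestion above: build the whole map with a transient and write it into the atom once at the end (same illustrative 200000-entry workload; timings not measured here):

(def a-2 (atom {}))
(time
  (reset! a-2
          (persistent!
            (reduce (fn [m n] (assoc! m (random-uuid) {:small-entry n}))
                    (transient {})
                    (range 200000)))))

And a sketch of querying across two DataScript databases at once by passing both as sources to d/q; the attribute names, schema, and data below are made up for illustration:

(require '[datascript.core :as d])

;; a db built directly from raw datoms, no references
(def plain-db
  (d/init-db [(d/datom 1 :item/sku "A-1")
              (d/datom 1 :item/name "widget")]
             {}))

;; a db built from transaction data, with a reference attribute
(def refs-db
  (-> (d/empty-db {:order/item {:db/valueType :db.type/ref}})
      (d/db-with [{:db/id -1 :item/sku "A-1"}
                  {:order/item -1 :order/qty 3}])))

;; one query reading from both databases, joined on the shared :item/sku value
(d/q '[:find ?name ?qty
       :in $ $2
       :where
       [$  ?i  :item/sku   ?sku]
       [$  ?i  :item/name  ?name]
       [$2 ?i2 :item/sku   ?sku]
       [$2 ?o  :order/item ?i2]
       [$2 ?o  :order/qty  ?qty]]
     plain-db refs-db)
;; expected result: #{["widget" 3]}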
with regards to datahike, that looks like it's clojure only for now, but clojurescript is on the roadmap. if I used the backend model I would lose a lot of the benefit I was after, where I can load the data into the browser and query without repeatedly hitting the server
@UU67HFS2X It's officially clj only, but they have a cljs branch in beta that you can test, and I believe they expect to have that officially released in the next couple of months. You can check the #datahike channel to ask more about this, but I'd actually recommend checking out their https://discord.com/invite/kEBzMvb, since that's where most of the action seems to be.
okay, good to know. perhaps I will check back in a month or two and see if it's released and try it out
Sure thing; it actually looks like a few folks are discussing this now over in #datahike if you want to peek in.
If you're not working on something production critical, you could probably get started with it; I doubt the api will change at all. Probably just bug fixes and the like.