#datascript
2021-09-15
oly 08:09:28

so, still looking at performance: I am transacting 216,334 entities with an "Elapsed time: 16979.000000 msecs". Is it just me, or is that quite high? 200,000 does not seem like many. I am starting to consider some of the alternatives, but it seems a lot of them don't support schemas or do not run in ClojureScript currently.
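
For context, a minimal sketch of the kind of benchmark being described (the schema and entity shape here are assumptions, not from the thread):

(require '[datascript.core :as d])

;; hypothetical schema and entity shape, just to reproduce the timing test
(def conn (d/create-conn {:item/id {:db/unique :db.unique/identity}}))

(def entities
  (mapv (fn [n] {:item/id n, :item/value (str "value-" n)}) (range 200000)))

;; transacting ~200,000 entity maps in one call
(time (d/transact! conn entities))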

schmee 11:09:20

clj or cljs? are you doing the tx in one thread or multiple?

oly 11:09:28

cljs, one thread I guess, as it's JavaScript. I am curious if others have experienced the same or if it could be something else. I have been switching to building the db manually using datoms, but this complicates things. 200,000 seems like a small number to me for that length of time on a transact.

lilactown 14:09:49

200,000 what? datoms or entities?

oly 16:09:41

entities, in hash-map format. I have changed to using datoms, which has helped, but it's messed up the references, as I need to adjust them all to be linked, I believe.

oly 16:09:54

whereas before it was being handled by transact
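
A sketch of the datom-based approach being described, assuming DataScript's low-level d/datom and d/init-db; init-db skips transaction processing entirely, which is why entity ids and references must then be managed by hand:

(require '[datascript.core :as d])

;; entity ids are assigned manually (positive ints); any ref attribute
;; would have to point at these hand-picked ids, hence the extra bookkeeping
(def datoms
  (mapcat (fn [n]
            [(d/datom n :item/id n)
             (d/datom n :item/value (str "value-" n))])
          (range 1 200001)))

(time (def db (d/init-db datoms {:item/id {:db/unique :db.unique/identity}})))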

lilactown 16:09:19

yeah, I imagine that doing all the decomposition and referencing is taking up a lot of time, but it's also most of the value of datascript 🙂
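
To illustrate the decomposition: a single nested map on a ref attribute expands into several datoms, with tempid resolution and linking handled by transact (entity ids below are illustrative):

(require '[datascript.core :as d])

(def conn (d/create-conn {:person/friend {:db/valueType :db.type/ref}}))

;; one nested map becomes three datoms: the nested entity gets its own id
;; and the ref is wired up automatically
(d/transact! conn [{:person/name "a"
                    :person/friend {:person/name "b"}}])

(d/datoms @conn :eavt)
;=> e.g. ([1 :person/friend 2] [1 :person/name "a"] [2 :person/name "b"])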

metasoarous 21:09:59

Transact performance may not be significantly better with Datahike (probably worse if writing to disk), but you'd only need to incur that cost once, since thereafter you'd be dynamically reading/querying from IndexedDB.

raspasov 22:09:00

Transacting 200,000 entities in ~17 seconds? Yeah, that’s probably about right… Everything with DataScript takes a minimum of 0.1ms - 1ms, in my experience. Queries can take even longer, and the execution time is roughly proportional to the size of the result set. 17000ms / 200,000 transactions = ~0.1ms per transaction. That was a big surprise for me initially, because I (wrongly) expected it to be almost as fast as a CLJS hash map.

raspasov 22:09:02

Naively swap!-ing small values into a ClojureScript atom is usually a lot faster:

;; baseline for comparison: 200,000 plain assocs into an atom,
;; with none of DataScript's indexing or ref handling
(def a-1 (atom {}))
(time
 (run! (fn [n] (swap! a-1 (fn [m] (assoc m (random-uuid) {:small-entry n})))) (range 200000)))
;=> Elapsed time: ~1500 ms
…and it can probably be made even faster with transients, etc.
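
A rough sketch of the transient variant hinted at above (numbers vary by runtime; the resulting map has the same shape as a-1):

(def a-2 (atom {}))
(time
 (reset! a-2
         (persistent!
          (reduce (fn [m n] (assoc! m (random-uuid) {:small-entry n}))
                  (transient {})
                  (range 200000)))))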

oly 07:09:29

okay, thanks for confirming at least that my speed is about right. I have got around it for now by maintaining two databases: one populated using datoms and no references, the other using references. I believe you can query across multiple databases, which might help mitigate the separation.
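
A sketch of what such a cross-database query could look like (database and attribute names here are hypothetical); DataScript's q accepts multiple database inputs, addressed as $1, $2, … in the :in clause:

(d/q '[:find ?name ?value
       :in $1 $2
       :where
       [$1 ?e  :item/id ?id]
       [$1 ?e  :item/name ?name]
       [$2 ?e2 :item/id ?id]
       [$2 ?e2 :item/value ?value]]
     db-without-refs db-with-refs)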

oly 07:09:34

with regards to datahike, it looks like it's Clojure-only for now, but ClojureScript is on the roadmap. If I used the backend model I would lose a lot of the benefit I was after, where I can load the data into the browser and query it without repeatedly hitting the server.

metasoarous 17:09:05

@UU67HFS2X It's officially clj-only, but they have a cljs branch in beta that you can test, and I believe they expect to have that officially released in the next couple of months. You can check the #datahike channel to ask more about this, but I'd actually recommend checking out their Discord (https://discord.com/invite/kEBzMvb), since that's where most of the action seems to be.

oly 12:09:36

okay, good to know. Perhaps I will check back in a month or two, see if it's released, and try it out.

metasoarous 18:09:18

Sure thing; it actually looks like a few folks are discussing this now over in #datahike if you want to peek in.

metasoarous 18:09:49

If you're not working on something production-critical, you could probably get started with it; I doubt the API will change at all. Probably just bug fixes and the like.