2015-12-18
# datomic
Is there a best-practice size limit for a single tx? The main scenario I'm thinking of is creating a bunch of entities/datoms in a batch import, which I'd like to keep together. (Sagas would work OK; I'm just looking to understand where the practical line sits.)
Relatedly, is there a best practice for nested entities? I have a use case where the structure is 3 deep, with a fanout of ~1:10:30.
It would obviously be easier to just build the nested structure, let Datomic handle the tempids, and send it as one big ("big"?) tx.
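(As an illustrative sketch of that "one big tx" approach with the peer API: the attribute names `:order/lines`, `:line/items`, etc. are hypothetical stand-ins for the 3-deep structure, and `conn` is an assumed, already-connected `datomic.api` connection.)

```clojure
(require '[datomic.api :as d])

;; A sketch of the 3-deep / ~1:10:30 structure as a single tx.
;; Nested maps like this resolve automatically when the ref attrs
;; (:order/lines, :line/items) are :db/isComponent true, or when
;; the nested maps carry a unique attribute.
(def tx-data
  [{:db/id       (d/tempid :db.part/user)
    :order/name  "import-1"
    :order/lines (vec (for [i (range 10)]
                        {:line/number i
                         :line/items  (vec (for [j (range 30)]
                                             {:item/sku (str i "-" j)}))}))}])

;; Datomic assigns tempids for the nested maps and wires up the refs.
@(d/transact conn tx-data)
```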
Datomic is optimized for transactions of ~40k or lower; it can usually survive some transactions in the 100Ks, but I would definitely stay out of 1MB+ territory, which is where you'll run into problems from exceeding practical limits.
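(A minimal sketch of staying under a practical per-tx limit during an import; the batch size, `conn`, and `entity-maps` are assumptions, not anything prescribed above.)

```clojure
(require '[datomic.api :as d])

;; Partition a large import into bounded transactions. The batch
;; size is arbitrary; pick it so datoms-per-batch stays well under
;; the practical limits discussed above.
(defn import-in-batches!
  [conn entity-maps batch-size]
  (doseq [batch (partition-all batch-size entity-maps)]
    ;; deref the transact future so batches apply in order and the
    ;; loop doesn't race ahead of the transactor
    @(d/transact conn (vec batch))))

;; e.g. (import-in-batches! conn entity-maps 1000)
```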
so it's mostly raw size, not number of datoms? (which are still roughly correlated, yeah...)
I think most of my current use cases would stay under 10k. I'm just not used to building 10k strings to send to the db. 😉
Yeah, I would say that while datom counts are correlated with size for most data in Datomic, for anything involving blobs, document strings, etc., size concerns dominate datom-count concerns (but hopefully your values aren't large enough to break the correlation too much anyway).
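(For what it's worth, a rough, hedged way to sanity-check whether string-heavy tx data is drifting toward the raw-size territory mentioned above; this walks the tx data and sums string bytes, nothing more.)

```clojure
;; Rough estimate of the string payload in a tx. Ignores keyword,
;; number, and structural overhead, so treat it as a lower bound.
(defn approx-string-bytes
  [tx-data]
  (->> (tree-seq coll? seq tx-data)
       (filter string?)
       (map #(count (.getBytes ^String % "UTF-8")))
       (reduce + 0)))
```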