
is there a best-practice size limit for a single tx? The main scenario I'm thinking of is creating a bunch of entities/datoms in a batch import, which I'd like to keep together. (Sagas would work ok, I'm just looking to understand where the practical line sits.)


Relatedly, is there a best practice for nested entities? I have a use case where the structure is 3 deep, with a fanout of ~1:10:30.


It would obviously be easier to just build the nested structure, let Datomic handle the tempids, and send it as one big ("big"?) tx.
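For illustration, a nested tx of the shape described above might look like the following. The attribute names (`:order/lines`, `:line/adjustments`, etc.) are hypothetical; note that Datomic only auto-assigns tempids to nested maps when the parent ref attribute is `:db/isComponent true`, or when the nested map includes a unique attribute.

```clojure
;; One parent entity with nested children and grandchildren —
;; the 1:10:30 fan-out in miniature, sent as a single transaction.
[{:db/id       "order-1"            ; string tempid for the root
  :order/name  "Import batch"
  :order/lines [{:line/sku "SKU-1"  ; nested maps get tempids automatically
                 :line/qty 2
                 :line/adjustments [{:adj/type   :discount
                                     :adj/amount 5.0}]}
                {:line/sku "SKU-2"
                 :line/qty 1}]}]
```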

Ben Kamphaus 15:12:14

Datomic is optimized for transactions of roughly 40k datoms or fewer. It can usually survive transactions in the 100Ks, but I would definitely stay out of 1MB+ territory, which is where you'll run into problems from exceeding practical limits.
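Given that guidance, a batch import can be chopped into transactions that each stay under the comfort zone. This is a minimal sketch, assuming a live connection `conn` and a flat seq of entity maps `tx-data`; the chunk size should be tuned to your datoms-per-entity ratio.

```clojure
(require '[datomic.api :as d])

(defn transact-in-chunks!
  "Submit tx-data in chunks of chunk-size entities, serially.
  Dereferencing each transact future applies backpressure so the
  transactor isn't flooded with pending transactions."
  [conn tx-data chunk-size]
  (doseq [chunk (partition-all chunk-size tx-data)]
    @(d/transact conn chunk)))

;; e.g. ~10k entities per tx, comfortably under ~40k datoms
;; if each entity expands to a handful of datoms:
;; (transact-in-chunks! conn tx-data 10000)
```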


so it's mostly raw size, not number of datoms? (which are still roughly correlated, yeah...)


I think most of my current use cases would stay under 10k. I'm just not used to building 10k strings to send to the db. 😉

Ben Kamphaus 15:12:02

Yeah — while datom counts are correlated with size for most data in Datomic, for anything involving blobs, document strings, etc., size concerns dominate datom-count concerns (but hopefully your values aren't large enough to break the correlation too badly anyway) 🙂


silly question: for the free version of Datomic, does the 2-peer limit on the transactor mean only 2 app servers can connect and synchronize with each other?