#datomic
2015-12-18
curtosis15:12:46

is there a best-practice size limit for a single tx? The main scenario I'm thinking of is creating a bunch of entities/datoms in a batch import, which I'd like to keep together. (Sagas would work ok, I'm just looking to understand where the practical line sits.)

curtosis15:12:12

Relatedly, is there a best practice for nested entities? I have a use case where the structure is 3 deep, with a fanout of ~1:10:30.

curtosis15:12:52

It would obviously be easier to just build the nested structure, let Datomic handle the tempids, and send it as one big ("big"?) tx.
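A minimal sketch of what building that 3-deep nested structure might look like, written here as plain Python dicts mirroring Datomic tx-data maps (the attribute names and the `build_order` helper are invented for illustration). Datomic assigns tempids to nested maps automatically, provided the enclosing attribute is a component attribute or the nested map contains a unique attribute:

```python
# Hypothetical example: build one top-level entity with a ~1:10:30 fanout
# of nested maps, leaving tempid assignment to Datomic.
def build_order(order_id, lines):
    """lines: list of (line_number, [sku, ...]) pairs.

    Returns a dict shaped like a Datomic entity map with nested
    component entities (attribute names are hypothetical).
    """
    return {
        ":order/id": order_id,
        ":order/lines": [
            {":line/number": n,
             ":line/items": [{":item/sku": sku} for sku in skus]}
            for n, skus in lines
        ],
    }
```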

bkamphaus15:12:14

Datomic is most optimized for transactions of ~40k or lower; it can usually survive some transactions in the 100Ks, but I would definitely stay out of 1MB+ territory, which is where you’ll run into problems from exceeding practical limits.
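For a batch import, the advice above suggests splitting the tx-data into transactions that stay under a budget. A rough sketch (the helper name, the budget constant, and the one-datom-per-attribute estimate are all assumptions for illustration, not Datomic API):

```python
# Hypothetical sketch: greedily pack entity maps into transactions that
# stay under a rough per-transaction datom budget.
MAX_DATOMS = 40_000  # assumed budget based on the advice above

def chunk_tx_data(entities, max_datoms=MAX_DATOMS):
    """Split a list of entity maps into transaction-sized batches.

    Estimates each entity's cost as one datom per attribute; nested
    maps would need a recursive count in a real import.
    """
    batches, current, count = [], [], 0
    for entity in entities:
        n = len(entity)  # rough datom count for a flat entity map
        if current and count + n > max_datoms:
            batches.append(current)
            current, count = [], 0
        current.append(entity)
        count += n
    if current:
        batches.append(current)
    return batches
```

Each resulting batch would then be submitted as its own transaction, in order.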

curtosis15:12:13

so it's mostly raw size, not number of datoms? (which are still roughly correlated, yeah...)

curtosis15:12:14

I think most of my current use cases would stay under 10k. I'm just not used to building 10k strings to send to the db. 😉

bkamphaus15:12:02

Yeah, I would say that while datom counts are correlated with size for most data in Datomic, for anything involving blobs, document strings, etc., size concerns dominate datom-count concerns (but hopefully you don’t have values large enough to break the correlation too much anyway) 🙂

naomarik19:12:12

silly question: for the free version of Datomic, does the 2-peer limit on the transactor mean only 2 app servers can synchronize with each other?