#datomic
2022-02-11
kenny 16:02:27

If I receive an anomaly back from the client api of category fault and a :datomic.client-spi/exception key attached to the anomaly map, should I expect to find an exception in the CW logs? e.g., for the below anomaly map, should I expect to find a CW log line with a NullPointerException stacktrace?

{:datomic.client-spi/context-id "dee3c6db-b037-4056-a3b2-059ad6e0a7a6",
 :cognitect.anomalies/category :cognitect.anomalies/fault,
 :datomic.client-spi/exception java.lang.NullPointerException,
 :datomic.client-spi/root-exception java.lang.NullPointerException,
 :cognitect.anomalies/message "java.lang.NullPointerException with empty message",
 :dbs [{:database-id "07b79939-5cf0-4074-808c-79b735fd2660", :t 134265434, :next-t 134265435, :history false}]}
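The sync client API surfaces such anomalies as the ex-data of an ExceptionInfo, so the :datomic.client-spi/context-id can be pulled out and searched for in the CloudWatch logs. A minimal sketch, assuming a wrapper like this (the function name and println logging are illustrative, not Datomic API):

```clojure
(require '[datomic.client.api :as d])

;; Sketch: wrap a transact call; on a fault anomaly, print the
;; :datomic.client-spi/context-id so it can be correlated with the
;; CloudWatch log stream, then rethrow.
(defn transact-logging-faults
  [conn tx-data]
  (try
    (d/transact conn {:tx-data tx-data})
    (catch clojure.lang.ExceptionInfo e
      (let [anomaly (ex-data e)]
        (when (= :cognitect.anomalies/fault
                 (:cognitect.anomalies/category anomaly))
          (println "fault; search CW logs for context-id:"
                   (:datomic.client-spi/context-id anomaly)))
        (throw e)))))
```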

Benjamin 17:02:02

At what rate of writes does Datomic stop being suitable? A few per second?

Joe Lane 18:02:37

@benjamin.schwerdtner much, much more than that.

ghadi 18:02:03

It's not bitcoin

💅 1
😄 1
😂 3
Adam Lewis 18:02:37

There are so many specifics that matter in terms of performance, but for a single point of reference: when we do bulk-load jobs sourced from an "enterprise line-of-business RDBMS", we see about 2000 transactions per second (txn datom counts are all over the place; we pack one relational row per Datomic transaction)

Adam Lewis 18:02:42

this is with the transactor running on something like an m5.xlarge and storage in DynamoDB (on-demand)
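The load pattern described above (one relational row per transaction) might look roughly like the following; `rows` and `row->tx-data` are hypothetical names, not from the thread:

```clojure
(require '[datomic.client.api :as d])

;; Sketch of a serial bulk-load loop: each relational row becomes
;; one Datomic transaction. Real throughput depends on transactor
;; sizing, storage, and indexing load, per the discussion.
(defn load-rows!
  [conn row->tx-data rows]
  (doseq [row rows]
    (d/transact conn {:tx-data (row->tx-data row)})))
```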

JohnJ 18:02:55

is on-demand mode instant?

Adam Lewis 18:02:44

sort of. DDB has to re-shard to scale up; I believe on-demand mode can instantly handle write volumes up to twice as high as it has previously seen on that table

JohnJ 19:02:24

looks like indexing can cause trouble there. is it normal for indexing to double your writes? or maybe 3x-10x?

Adam Lewis 20:02:55

I'm looking at some metrics here; it looks like index jobs correlate with a 3x increase in write-capacity-unit consumption. That factor (3) is suspiciously identical to our transactor's write concurrency, so a beefy high-concurrency transactor instance might produce different results. But from a DDB on-demand standpoint it doesn't matter, since its instantaneous capacity is 2x the max ever seen. I guess it means you should wait 30 minutes between write-concurrency doublings in a transactor scale-up scenario.

JohnJ 20:02:04

thx. OTOH it looks like Datomic would have a very hard time trying to wear out a Postgres table (on a VM with a fast SSD); it would likely require dozens of peers and high write volumes

Adam Lewis 21:02:47

Yes, and it may also be more cost-effective for predictably intense write volumes (while making durability your responsibility). The transactor process itself is almost always the limiting factor. Anecdotally, the Cognitect folks have mentioned to me that they have not found an upper limit on what DDB can handle in terms of read/write throughput.

JohnJ 22:02:18

good point about cost, DDB can become very expensive. About durability: even with DDB, wouldn't it be safer to also keep Datomic backups?

Joe Lane 22:02:35

Just a heads up, DDB on-demand doesn't instantly scale, neither for reads nor for writes.

👍 1
Joe Lane 22:02:01

You will still get throttling exceptions.

Adam Lewis 18:02:42

I should say, that is "average" performance over many hour-long load jobs. Actual performance starts out much higher and then degrades as indexing jobs start to dominate

jacob.maine 18:02:45

I'm having unexpected dependency conflicts after a recent upgrade to 939-9127. I wrote up the problem at https://ask.datomic.com/index.php/702/mismatch-between-expected-dependencies-dependency-conflicts. Has anyone noticed similar problems?

jaret 20:02:09

Hi @U07M2C8TT, I updated the ask with an answer. We are aware of this problem. Suffice it to say that we understand this issue, and the deps conflict reported on an ion push is not accurate: the cloud-deps.edn in ion-dev does not match the cloud-deps.edn that is actually running in your version of Datomic Cloud. You should be on the correct and expected dep you saw in your classpath. Please let me know if you see otherwise. Hope you are well!

jacob.maine 21:02:47

Hey @U1QJACBUM! Thanks for the update. I can confirm: within a running Ion, my classpath contains the more recent versions of these deps. I'll ignore the dependency conflicts for now. Hope you're well too!

steveb8n 21:02:50

Q: I'm thinking about using an attribute of type "ref" and cardinality "many". It doesn't need a sort order. Somehow it feels wrong to have a many-valued foreign key; maybe this is my RDBMS habits echoing. What's good/bad about this? I just want to sanity-check my thinking

favila 22:02:20

In general you should strive to keep cardinality low. So if it's cardinality-one in the opposite direction in your domain model, I'd say prefer that, unless you want isComponent semantics, because Datomic will keep the invariant for you. But card-many refs in themselves are common and not alarming.
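For illustration, a plain card-many ref next to one with component semantics (the attribute names are made up):

```clojure
(def example-schema
  [;; plain cardinality-many ref: just a set of references
   {:db/ident       :project/tags
    :db/valueType   :db.type/ref
    :db/cardinality :db.cardinality/many}
   ;; component ref: Datomic treats the referenced entities as parts
   ;; of the parent (e.g. retracting the parent retracts them too)
   {:db/ident       :project/tasks
    :db/valueType   :db.type/ref
    :db/cardinality :db.cardinality/many
    :db/isComponent true}])
```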

steveb8n 23:02:42

thanks. this one will be low cardinality, generally fewer than 5. you make a good point about using a component attribute instead. I'll think on that

JohnJ 22:02:41

sounds pretty standard for datomic