This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2022-02-11
Channels
- # architecture (1)
- # babashka (61)
- # babashka-sci-dev (1)
- # beginners (85)
- # calva (112)
- # clj-kondo (279)
- # cljdoc (16)
- # cljs-dev (15)
- # cljsrn (7)
- # clojure (168)
- # clojure-europe (36)
- # clojure-nl (10)
- # clojure-spec (6)
- # clojure-uk (5)
- # clojured (1)
- # clojurescript (20)
- # core-async (16)
- # crypto (2)
- # cursive (13)
- # datomic (25)
- # events (7)
- # fulcro (21)
- # google-cloud (3)
- # graalvm (2)
- # graalvm-mobile (2)
- # gratitude (3)
- # helix (20)
- # honeysql (4)
- # hugsql (15)
- # introduce-yourself (15)
- # leiningen (2)
- # lsp (24)
- # luminus (22)
- # malli (21)
- # meander (11)
- # midje (1)
- # other-languages (1)
- # pathom (8)
- # re-frame (5)
- # reagent (5)
- # releases (2)
- # reveal (1)
- # shadow-cljs (18)
- # spacemacs (17)
- # sql (9)
- # tools-build (12)
- # tools-deps (4)
- # vim (12)
If I receive an anomaly back from the client API with category fault and a :datomic.client-spi/exception key attached to the anomaly map, should I expect to find an exception in the CW logs? E.g., for the anomaly map below, should I expect to find a CW log line with a NullPointerException stacktrace?
{:datomic.client-spi/context-id "dee3c6db-b037-4056-a3b2-059ad6e0a7a6",
:cognitect.anomalies/category :cognitect.anomalies/fault,
:datomic.client-spi/exception java.lang.NullPointerException,
:datomic.client-spi/root-exception java.lang.NullPointerException,
:cognitect.anomalies/message "java.lang.NullPointerException with empty message",
:dbs [{:database-id "07b79939-5cf0-4074-808c-79b735fd2660", :t 134265434, :next-t 134265435, :history false}]}
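(Aside, not from the thread: the Datomic Cloud client API generally reports anomalies via the ex-data of the exception it throws, so one way to decide whether a CloudWatch search is worthwhile is to branch on the anomaly category and grab the context id, which is the value you would search for in the logs. A minimal Clojure sketch under that assumption; conn and the tx-data are placeholders, not from the original question:)
(require '[datomic.client.api :as d])
;; conn is assumed to be an existing connection, e.g. from (d/connect client {:db-name "..."})
(try
  (d/transact conn {:tx-data [{:db/doc "example"}]})
  (catch Exception e
    (let [anomaly (ex-data e)]
      (when (= :cognitect.anomalies/fault (:cognitect.anomalies/category anomaly))
        ;; the context id is what you would grep for in the CloudWatch logs
        (println "fault, context id:" (:datomic.client-spi/context-id anomaly))))))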
@benjamin.schwerdtner much, much more than that.
There are so many specifics that matter in terms of performance, but for a single point of reference, when we do bulk load jobs sourced from "enterprise line-of-business RDBMS" we see about 2000 transactions per second (txn datom counts are all over the place, we pack one relational row per datomic transaction)
this is with transactor running on something like an m5.xlarge and storage in DynamoDB (on-demand)
sort of. ddb has to re-shard to scale up, I believe on-demand mode can instantly handle write volumes twice as high as it has previously seen on that table
The DDB docs cover it in more detail: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html#HowItWorks.OnDemand
looks like indexing can cause trouble there, is it normal for indexing to double your writes? or maybe 3x-10x?
I'm looking at some metrics here; it looks like index jobs correlate with a 3x increase in write capacity unit consumption. That factor (3) is suspiciously identical to our transactor's write concurrency, so a beefy high-concurrency transactor instance might produce different results. But from a DDB on-demand standpoint it doesn't matter, since its instantaneous capacity is 2x the max ever seen. I guess it means you should wait 30 minutes between write-concurrency doublings in a transactor scale-up scenario.
thx, OTOH it looks like Datomic would have a very hard time trying to wear out a Postgres table (on a VM with a fast SSD); it would likely require dozens of peers and heavy writes
Yes, and it may also be more cost effective for predictable, intense write volumes (while making durability your responsibility). The transactor process itself is almost always the limiting factor. Anecdotally, the Cognitect folks have mentioned to me that they have not found an upper limit of what DDB can handle in terms of read/write throughput.
good point about cost, ddb can become very expensive. About durability, even with ddb wouldn't it be safer to also have datomic backups?
Just a heads up, DDB on-demand doesn't instantly scale, not for reads nor writes.
I should say, that is "average" performance over load jobs lasting many hours. Actual performance starts out much higher and then goes down as indexing jobs start to dominate.
I'm having unexpected dependency conflicts after a recent upgrade to 939-9127. I wrote up the problem here: https://ask.datomic.com/index.php/702/mismatch-between-expected-dependencies-dependency-conflicts. Has anyone noticed similar problems?
Hi @U07M2C8TT I updated with an answer on the ask. We are aware of this problem. Suffice it to say, we understand this issue: the deps-conflict reported on an ion push is not accurate, as the cloud-deps.edn in ion-dev does not match the cloud-deps.edn that is actually running in your version of Datomic Cloud. You should be on the correct and expected dep you saw in your spath. Please let me know if you see otherwise. Hope you are well!
Hey @U1QJACBUM! Thanks for the update. I can confirm that within a running Ion my classpath contains the more recent versions of these deps. I'll ignore the dependency conflicts for now. Hope you're well too!
Q: I'm thinking about using an attribute of type "ref" and cardinality "many". It doesn't need a sort order. Somehow it feels wrong to have a "many" foreign key; maybe this is my RDBMS habits echoing. What's good/bad about this? I just want to sanity-check my thinking.
In general you should strive to keep cardinality low. So if it's cardinality-one in the opposite direction in your domain model, I'd say prefer that unless you want isComponent semantics, because Datomic will keep the invariant for you. But card-many refs in themselves are common and not alarming.
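(To make the two modelings concrete, a sketch with made-up attribute names, not from the thread: a cardinality-many ref from the "one" side versus a cardinality-one ref from the "many" side, written as Datomic schema.)
;; card-many ref: an order points at many items; isComponent is optional,
;; but with it Datomic retracts the items when the order is retracted
(def order-items-schema
  [{:db/ident       :order/items
    :db/valueType   :db.type/ref
    :db/cardinality :db.cardinality/many
    :db/isComponent true}])

;; the card-one alternative: each item points at its single order;
;; from the order side you read it back with the reverse ref :item/_order,
;; e.g. (d/pull db [{:item/_order [:item/sku]}] order-eid)
(def item-order-schema
  [{:db/ident       :item/order
    :db/valueType   :db.type/ref
    :db/cardinality :db.cardinality/one}])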