This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2022-01-26
mostly I'm sad that transaction functions can't be aware of their surroundings.. so if you implement a :db/inc function and somehow end up with [[:db/inc e :stock/qty] [:db/inc e :stock/qty]], it will increment by only 1 instead of 2, and there's simply no way around it.
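A sketch of why this happens, assuming the Datomic on-prem API: a hypothetical `:db/inc` tx fn (not a built-in) is expanded against the db value as of the start of the transaction, so every expansion in the same tx reads the same starting value.

```clojure
;; Hypothetical :db/inc transaction function (not a Datomic built-in).
;; Every tx fn in a transaction is expanded against the same db value
;; (the value as of the start of the tx), so repeated expansions read
;; the same starting quantity.
(def db-inc
  #db/fn {:lang   :clojure
          :params [db e attr]
          :code   (let [qty (or (get (d/entity db e) attr) 0)]
                    [[:db/add e attr (inc qty)]])})

;; With :stock/qty currently 5:
;; [[:db/inc e :stock/qty] [:db/inc e :stock/qty]]
;; both expand to [:db/add e :stock/qty 6] -- a net increment of 1.
```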
This matters when you're composing many different movements into a single transaction, so if I move money from account A to B and from A to C (maybe payment + bank fees)..
the link above introduces a nonce that protects against this happening (by throwing an exception) - but that doesn't answer the actual need.. to do it reliably, you'd have to collate it outside the transaction..
If you have a model where transactions are commands applied atomically, I’m not sure what alternative is possible without pre-awareness of the possibility of composition
You could have a transact wrapper which inspects tx data for tx fns it knows how to coalesce. You could wrap the separate txs that you want to combine in a tx fn that applies each sequentially with d/with, extracts a combined result, and transacts that. If modifying datomic is possible, this could be an inbuilt feature—similar to db/ensure, there could be a special tx fn that is only executed with the result db after all other txs are applied and is allowed to emit more commands, and the combined result is applied. This would be handy for keeping aggregates up to date. Although figuring out the compositional semantics of this and avoiding infinite recursion would be a challenge. All of these that simulate multiple txs in the transactor would probably be a significant performance penalty
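One way that coalescing wrapper could look, as a sketch run before `d/transact` (the op shape `[:db/inc e attr amount]` and all names here are hypothetical):

```clojure
;; Pre-transact pass that merges repeated [:db/inc e attr amount] ops
;; on the same entity+attr into a single op, so only one tx-fn
;; expansion happens per slot. Amount defaults to 1 when omitted.
(defn coalesce-incs [tx-data]
  (let [inc-op? (fn [op] (and (sequential? op) (= :db/inc (first op))))
        incs    (filter inc-op? tx-data)
        others  (remove inc-op? tx-data)
        merged  (for [[[e attr] ops] (group-by (fn [[_ e attr]] [e attr]) incs)]
                  [:db/inc e attr (reduce + (map #(nth % 3 1) ops))])]
    (concat others merged)))

;; (coalesce-incs [[:db/inc 42 :stock/qty] [:db/inc 42 :stock/qty]])
;; => ([:db/inc 42 :stock/qty 2])
```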
The reason you can do this in sql (assuming your isolation level is configured correctly) is because the database is mutable and there are implicit (often virtual) locks being acquired as you work. Transaction fns are just expanding commands locklessly, there is literally no new db value to read until the entire transaction is applied atomically
ye, I was optimistically wishing for it to reduce over the db value and tx function (kinda like a `d/with-tx` vibe) as it goes through the transaction... that probably opens up a whole other can of worms...
I'll probably have to do a pre-tx scan for the functions and run a combination function like you mention.... still - painful xD
> I was optimistically wishing for it to reduce over the db value and tx function (kinda like a `d/with-tx` vibe) as it goes through the transaction

it's probably worth noting that this is how DataScript (and others) behave, i.e. ordering of the ops within the tx is important
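For contrast, a DataScript sketch, assuming its documented sequential tx semantics (a fn invoked via `:db.fn/call` sees the db as accumulated by the preceding ops in the same tx; names here are made up):

```clojure
(require '[datascript.core :as d])

;; Hypothetical increment fn; DataScript passes it the current db.
(defn inc-qty [db e]
  (let [qty (or (:stock/qty (d/entity db e)) 0)]
    [[:db/add e :stock/qty (inc qty)]]))

(def conn (d/create-conn {}))
(d/transact! conn [[:db/add 1 :stock/qty 0]])
(d/transact! conn [[:db.fn/call inc-qty 1]
                   [:db.fn/call inc-qty 1]])
;; because ops apply in order, the second call should see qty = 1,
;; so the two increments accumulate instead of collapsing.
```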
@U899JBRPF that's interesting to note, thanks!
@U01KZDMJ411 from my experience, something like mysql is even more broken, since it only gives you a consistent view of the table/dataset as of query time.. so datomic is already worth the extra thinking work
Yes, but it only protects after you've queried already.. if you query table A, something changes table B, then you query table B, you see the changes even within a read tx.. this caught me out - for a long time i thought mvcc would do the same as datomic's stable read value
@U050CLJ53 I’m really confused. Wouldn’t a [:db/add e :stock/qty 2] work?
That gist is getting around a fundamental design decision of datomic: Datomic is designed for read-heavy workloads by making heavy use of caching. That gist was trying to turn datomic into a read-once thing.
Yes, it would, but that's a contrived example - in practice, the two callsites adding to the transaction are not connected
(d/transact conn (concat (tx-data-fn1) (tx-data-fn2))), where tx-data-fn1 and tx-data-fn2 independently do their own CAS, for example.
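A sketch of those two independent callsites (hypothetical helpers, on-prem API assumed): each builder reads the db on its own and emits its own `:db/cas`, so both compare against the same stale value.

```clojure
;; Two independent tx-data builders, each doing its own CAS from the
;; same db snapshot (names and signatures are hypothetical):
(defn tx-data-fn1 [db e]
  (let [qty (:stock/qty (d/entity db e))]
    [[:db/cas e :stock/qty qty (inc qty)]]))

(defn tx-data-fn2 [db e]
  (let [qty (:stock/qty (d/entity db e))]
    [[:db/cas e :stock/qty qty (inc qty)]]))

;; (d/transact conn (concat (tx-data-fn1 db e) (tx-data-fn2 db e)))
;; both :db/cas ops compare against the same old value as of tx
;; start, so the combined tx does not accumulate the two increments.
```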
Nice concise example @U09R86PA4 😀
At any rate, it still seems like a trivial code-design thing. Accumulate that number that you want to inc before making your tuples.
So the function is a bit more complex than that, but the limitation fundamentally boils down to me expecting transactor functions to work differently than they do
Ye - i can do whatever just before the transact in middleware if i need to, but it's unfortunate
I’ve had to make pretty involved tx fns before to do those sorts of operations atomically. You end up with a tx fn for basically every domain operation.
And it's messy to convey to other developers that they need to implement this kind of thing.. thankfully it's few and far between
that’s the part that can be surprising. you can put all this work into making :mv-stock atomic and then have it silently do the wrong thing
the nonce is a belt-and-suspenders technique to avoid that. if you can cheaply express the valid end state in an unparameterized way with :db/ensure, that would be another
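A sketch of that `:db/ensure` route, using Datomic's entity specs (the spec ident and predicate names here are made up):

```clojure
;; Entity spec, installed once; :db.entity/preds names an entity
;; predicate that validates the *end* state of the transaction.
{:db/ident        :stock/valid
 :db.entity/preds 'myapp.preds/non-negative-qty}

;; The predicate takes db and eid and returns a boolean:
(defn non-negative-qty [db eid]
  (<= 0 (or (:stock/qty (d/pull db [:stock/qty] eid)) 0)))

;; Any tx touching stock then also asserts the spec:
;;   [:db/add stock-eid :db/ensure :stock/valid]
;; and the tx aborts if the resulting db violates the predicate.
```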
@U050CLJ53 @U09R86PA4 Thanks for walking me through that! Lord knows you didn’t have to explain it, but you helped me fully understand what the problem was.
please don't use the word "nonce"
I see. it's slang in different parts of the world. https://en.wikipedia.org/wiki/Nonce_word is the specific usage here.
hello, when using Datomic Cloud, how do you run local tests? my idea was to use datomic.api for testing (in-memory) and datomic.client.api for prod, but doing so adds the overhead of having two different APIs to deal with (one from each namespace). is there a way to use a single API against both Cloud and on-prem? or is that something I have to create myself? or is there another approach to handle this?
I believe the client api is intended to be a single api for cloud and on-prem. The on-prem datomic.api has very different behavioral characteristics that I do not think should be abstracted over.
thanks, and I just found the answer for the local dev thing: https://docs.datomic.com/cloud/dev-local.html
Maybe relevant: https://github.com/ComputeSoftware/dev-local-tu we use it for unit tests along with dev-local. Very useful!
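For reference, a minimal dev-local setup sketch against the client API (the system and db names are placeholders):

```clojure
(require '[datomic.client.api :as d])

;; :server-type :dev-local runs an embedded system through the same
;; client API used against Cloud, which keeps test and prod code on
;; one API surface.
(def client (d/client {:server-type :dev-local
                       :system      "dev"}))

(d/create-database client {:db-name "test"})
(def conn (d/connect client {:db-name "test"}))
```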