
I don't understand what you are asking here.


mostly I'm sad that transaction functions can't be aware of their surroundings.. so if you implement a :db/inc function and you somehow transact [[:db/inc e :stock/qty] [:db/inc e :stock/qty]], it will increment by only 1 instead of 2, and there's simply no way around it. This matters when you're composing many different movements into a single transaction, e.g. if I move money from account A to B and from A to C (maybe payment + bank fees)..
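For concreteness, the body of such a `:db/inc` transaction function might look roughly like this (a hypothetical sketch; in on-prem Datomic it would be installed via `d/function`, and `d` is assumed to be `datomic.api`):

```clojure
;; Hypothetical body of a :db/inc transaction function. Every tx fn in
;; a transaction is expanded against the SAME db value (the db as of
;; the start of the tx), so two [:db/inc e :stock/qty] commands both
;; read the old qty, both emit [:db/add e :stock/qty (inc old)], and
;; the duplicate datoms collapse into a single increment.
(defn db-inc [db e attr]
  (let [current (or (get (d/entity db e) attr) 0)]
    [[:db/add e attr (inc current)]]))
```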


the link above introduces a nonce that will protect against it happening (by throwing an exception) - but that doesn't answer the actual need.. to do it reliably, you'd have to collate the increments outside the transaction..


If you have a model where transactions are commands applied atomically, I'm not sure what alternative is possible without pre-awareness of the possibility of composition.


You could have a transact wrapper which inspects tx data for tx fns it knows how to coalesce. You could wrap the separate txs that you want to combine in a tx fn that applies each sequentially with d/with, extracts a combined result, and transacts that. If modifying datomic is possible, this could be an inbuilt feature: similar to db/ensure, there could be a special tx fn that is only executed with the result db after all other txs are applied, is allowed to emit more commands, and whose combined result is applied. This would be handy for keeping aggregates up to date, although figuring out the compositional semantics of this and avoiding infinite recursion would be a challenge. All of the approaches that simulate multiple txs in the transactor would probably carry a significant performance penalty.
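The first option (a coalescing transact wrapper) could be sketched roughly like this in pure Clojure. Everything here is illustrative: `:db/inc` and `:db/inc-by` are hypothetical tx fns, and the wrapper only knows how to coalesce that one command shape:

```clojure
;; Sketch of a transact wrapper that merges duplicate :db/inc commands
;; before they reach the transactor, so two increments of the same
;; entity+attribute become one [:db/inc-by e attr 2] command
;; (assuming a :db/inc-by tx fn exists to consume it).
(defn coalesce-incs [tx-data]
  (let [{incs true, other false} (group-by #(= :db/inc (first %)) tx-data)
        counts (frequencies (map rest incs))]
    (into (vec other)
          (map (fn [[[e attr] n]] [:db/inc-by e attr n]) counts))))

(coalesce-incs [[:db/inc 1 :stock/qty]
                [:db/inc 1 :stock/qty]
                [:db/add 2 :account/name "A"]])
;; => [[:db/add 2 :account/name "A"] [:db/inc-by 1 :stock/qty 2]]
```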


The reason you can do this in SQL (assuming your isolation level is configured correctly) is that the database is mutable and there are implicit (often virtual) locks being acquired as you work. Transaction fns just expand commands locklessly; there is literally no new db value to read until the entire transaction is applied atomically.


ye, I was optimistically wishing for it to reduce over the db value and tx function (kinda like a d/with-tx vibe) as it goes through the transaction... that probably opens up a whole other can of worms...


I'll probably have to do a pre-tx scan for the functions and run a combination function like you mention.... still - painful xD


> I was optimistically wishing for it to reduce over the db value and tx function (kinda like a `d/with-tx` vibe) as it goes through the transaction

it's probably worth noting that this is how DataScript (and others) behave, i.e. ordering of the ops within the tx is important
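A tiny sketch of that DataScript behavior, assuming `datascript.core`'s `:db.fn/call` form (names like `inc-qty` are illustrative): the tx is reduced op by op, so the second call sees the first call's result.

```clojure
(require '[datascript.core :as ds])

;; Tx fn: read the current qty from the db it is handed and emit an
;; increment. In DataScript this db reflects earlier ops in the same tx.
(defn inc-qty [db e]
  (let [qty (get (ds/entity db e) :stock/qty 0)]
    [[:db/add e :stock/qty (inc qty)]]))

(let [conn (ds/create-conn {})]
  (ds/transact! conn [{:db/id 1 :stock/qty 0}])
  (ds/transact! conn [[:db.fn/call inc-qty 1]
                      [:db.fn/call inc-qty 1]])
  ;; expected to yield 2, since the ops are applied in order
  (:stock/qty (ds/entity @conn 1)))
```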


And this is why many prefer to just stay in classic RDBMS land 😉


You have to implement too much data integrity in your application code with datomic


@U899JBRPF that's interesting to note, thanks!


@U01KZDMJ411 from my experience, something like mysql is even more broken, since they only give you a consistent view of the table/dataset as of query time.. so datomic is already worth the extra thinking work


doesn't mysql have read transactions?


Yes, but it only protects what you've already queried.. if you query table A, something changes table B, then you query table B, you see the changes even inside a read tx.. this caught me out; for a long time I thought MVCC would do the same as datomic's stable read value


@U050CLJ53 I’m really confused. Wouldn’t a [:db/add e :stock/qty 2] work?


That gist is getting around a fundamental design decision of datomic: Datomic is designed for read-heavy workloads by making heavy use of caching. That gist was trying to turn datomic into a read-once thing.


Yes, it would, but that's a contrived example - in practice, the two callsites adding to the transaction are not connected


`(d/transact conn (concat (tx-data-fn1) (tx-data-fn2)))`, where `tx-data-fn1` and `tx-data-fn2` independently do their own CAS, for example.


Nice concise example @U09R86PA4 😀


Yeah, I mean. I get the ask. It fundamentally violates tuple logic, right?


idk maybe an overstatement on my part


At any rate, it still seems like a trivial code-design thing. Accumulate that number that you want to inc before making your tuples.


Doesn’t seem particularly onerous to me.


I know, I know… old code, existing codebase… 😄


My use case involves a stock management system


Moving stock from all over the place to other places, with qty's et al..


So the function is a bit more complex than that, but the limitation fundamentally boils down to me expecting transactor functions to work differently than they do


Right. It’s not that you can’t work around it. It’s that you just didn’t expect it.


Ye - i can do whatever just before the transact in middleware if i need to, but it's unfortunate


I’ve had to make pretty involved tx fns before to do those sorts of operations atomically. You end up with a tx fn for basically every domain operation.


And it's messy to convey to other developers that they need to implement this kind of thing.. thankfully it's few and far between


e.g. [:mv-stock from to amount]


and that emits a whole pile of tuples^


it adds an additional constraint: :mv-stock cannot appear twice in the same tx


that’s the part that can be surprising. you can put all this work into making :mv-stock atomic and then have it silently do the wrong thing


Right, that’s a real gotcha. That’s fair.


the nonce is a belt-and-suspenders technique to avoid that. if you can cheaply express the valid end state in an unparameterized way with :db/ensure, that would be another
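For illustration, the nonce guard could look roughly like this (all names hypothetical; `:tx/mv-stock-nonce` is assumed to be a `:db.cardinality/one` attribute, and `d` is `datomic.api`):

```clojure
;; Sketch of the nonce guard on a :mv-stock tx fn. Each expansion also
;; asserts a random value on a cardinality-one attribute of the tx
;; entity. If :mv-stock appears twice in the same tx, the two
;; expansions assert two DIFFERENT values for the same
;; entity+attribute, so the transactor rejects the tx with a
;; datoms-conflict error instead of silently double-counting.
(defn mv-stock [db from to amount]
  (let [from-qty (or (:stock/qty (d/entity db from)) 0)
        to-qty   (or (:stock/qty (d/entity db to)) 0)]
    [[:db/add from :stock/qty (- from-qty amount)]
     [:db/add to   :stock/qty (+ to-qty amount)]
     ;; the guard datom: conflicts if :mv-stock runs twice in this tx
     [:db/add "datomic.tx" :tx/mv-stock-nonce
      (str (java.util.UUID/randomUUID))]]))
```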

✔️ 1

@U050CLJ53 @U09R86PA4 Thanks for walking me through that! Lord knows you didn’t have to explain it, but you helped me fully understand what the problem was.


please don't use the word "nonce"


I see - it's slang in different parts of the world. here it's the specific technical usage.


transactor functions have suddenly lost 99% of their usefulness to me


hello, when using Datomic Cloud, how do you run local tests? my idea was to use datomic.api for testing (in-memory) and datomic.client.api for prod, but doing so adds the overhead of dealing with two different APIs (one per namespace). is there a way to use a single API against both Cloud and on-prem? or is that something I have to create myself? or is there another approach to handle this?


I believe the client api is intended to be a single api for cloud and on-prem. The on-prem datomic.api has very different behavioral characteristics that I do not think should be abstracted over.


thanks, and I just found the answer for the local dev thing:

👆 1
César Olea 18:01:07

Maybe relevant: we use it for unit tests along with dev-local. Very useful!

César Olea 18:01:16

Thanks for dev-local-tu @U083D6HK9!