This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2021-08-20
Channels
- # admin-announcements (1)
- # announcements (1)
- # beginners (115)
- # calva (31)
- # cider (25)
- # clj-kondo (47)
- # cljdoc (23)
- # cljs-dev (5)
- # clojars (1)
- # clojure (60)
- # clojure-australia (1)
- # clojure-europe (23)
- # clojure-nl (3)
- # clojure-norway (2)
- # clojure-spec (3)
- # clojure-uk (18)
- # clojurescript (49)
- # community-development (1)
- # cursive (4)
- # datahike (2)
- # datascript (3)
- # datomic (36)
- # deps-new (2)
- # emacs (2)
- # events (9)
- # fulcro (6)
- # graphql (2)
- # gratitude (13)
- # holy-lambda (1)
- # introduce-yourself (10)
- # macro (2)
- # malli (5)
- # meander (9)
- # news-and-articles (5)
- # nextjournal (1)
- # off-topic (32)
- # pathom (17)
- # pedestal (13)
- # polylith (4)
- # protojure (4)
- # reagent (4)
- # sci (27)
- # shadow-cljs (2)
- # show-and-tell (2)
- # specter (3)
- # tools-deps (7)
- # xtdb (16)
Is there a more efficient way to get the base time of the DB after a transaction whose ID I have than (:t (d/as-of (d/db conn) 13194139533321))? Or is this operation cheap and I don't need to worry about it? 🙏
(I am pondering how to implement optimistic locking of a whole data entity, where the attribute-level :db/cas is not sufficient. My idea was to:
1) store a ref to the transaction upon any transact changing the entity, via .. [:db/add <entity-id> :some-entity/last-tx "datomic.tx"] ..
2) when reading an entity from the DB (e.g. (d/pull db ['*] <id>)), also add the base time (which is readily available) to it: (assoc entity :baseT (:t db))
3) when the client submits a change, check whether the entity has changed since the :baseT the client has.
Perhaps there is a better way to achieve this than that?)
Isn't this what datomic.api/tx->t is for?
Regarding locking the whole entity, I would suggest a transaction function like [:entity-not-changed-since entity-id tx-id] that throws an exception if the entity in the db given to the transaction function has changed after tx-id. You will not be happy having to keep track of changed-since data yourself. That said, if you want to record that a transaction tried to change the entity without really changing it (like setting a single-cardinality value to its current value), you do need to keep track of it, for instance via "meta data" on the transaction entity.
Converting between a T and a TX is simple bit masking/adding. Use t->tx or tx->t to do it
I second a “check and abort” txfn as the way to approach this. Consider parameterizing it by the attributes to check, because “lock the whole entity” kinda goes against the semantics of entities—they are not rows in a table with fixed columns, and can support overlapping attr sets from unrelated applications. Consider also making it check by value instead of time: supply a pull expr and an expected value, and revalidate that the value is equal or abort.
If you still want a time check, you can implement that efficiently-ish with (d/datoms db :eavt e) where db is filtered by (d/since (d/history db) read-tx). If you get any datoms then something happened to the entity. This impl also removes the need to convert t/tx—since accepts both, or even insts.
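The check-and-abort idea above could be sketched roughly like this (hypothetical function name, peer-API style; a real implementation would be installed as a transaction function):

```clojure
(require '[datomic.api :as d])

;; Sketch of the "abort if changed" transaction function described above.
;; It scans the entity's datoms in a history db filtered to everything
;; after read-t; any hit means the entity was touched since it was read.
(defn entity-not-changed-since
  "Throws if any datom for entity e was asserted or retracted after read-t;
   otherwise contributes no tx-data."
  [db e read-t]
  (let [touched (d/datoms (d/since (d/history db) read-t) :eavt e)]
    (when (seq touched)
      (throw (ex-info "Entity changed since it was read"
                      {:e e :read-t read-t})))
    []))
```

It would appear in tx-data as [:entity-not-changed-since <eid> <read-t>] alongside the actual assertions; since d/since accepts a t, a tx id, or an inst, no t/tx conversion is needed.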
Thanks a lot for your ideas! I see why I did not find tx->t: it is not in datomic.client.api. Does that mean it only works on Peers?
Client doesn’t have it, but you can reimplement by masking out the 20 bits above the bottom 42 bits
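The masking described above can be sketched as follows (a sketch, assuming the standard Datomic entity-id layout: the low 42 bits hold t, and the bits above encode the partition, with :db.part/tx being partition 3):

```clojure
;; Reimplementation sketch of tx->t / t->tx for the Client API,
;; which lacks datomic.api/tx->t and t->tx.
(def ^:private t-mask 0x3FFFFFFFFFF) ; lowest 42 bits

(defn tx->t
  "Strip the partition bits from a transaction entity id, leaving t."
  [tx]
  (bit-and tx t-mask))

(defn t->tx
  "Place t into the :db.part/tx partition (partition id 3)."
  [t]
  (bit-or t (bit-shift-left 3 42)))
```

Conveniently, the tx id from the original question works as a check: (tx->t 13194139533321) yields 9, and (t->tx 9) yields 13194139533321 back.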
I did not say whether it was client or peer, so you could not have known 🙂
I do not really care about "time", just about "version". Fully agree with preferring not to lock; I want to use :db/cas wherever possible. But it is possible that in some places that is not enough.
@UQY3M3F6D What approach did you have in mind for implementing the "the entity in the db given to the transaction function is changed after tx-id"? The same as @U09R86PA4 proposes above with history, since, datoms?
The canonical way is to implement your transactional logic in an ion and use it in a (d/transact ...) call.
I suppose that by
> it is possible that in some places that is not enough
you meant a remote transaction.
Maybe you can create an optimistic lock using an attribute?
for example
{:db/ident :system/revision
:db/valueType :db.type/long
:db/cardinality :db.cardinality/one}
when reading for a transaction, always read the :system/revision attribute along with the other attributes.
When committing the changes, always transact with a [:db/cas <eid> :system/revision revision (inc revision)] operation.
When multiple clients concurrently perform the read-and-commit operation, only the first one will succeed and the others must be prepared to retry.
Thank you. By "not possible at all places" I meant that in some cases the business logic may disallow partial changes to the entity and require that the whole entity has not changed since the user read it. "May" because this is just an assumption; I do not know the code base and business rules well yet.
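The :system/revision read-and-commit cycle described above could look roughly like this (hypothetical helper, datomic.client.api style; a failed :db/cas surfaces as an exception/anomaly):

```clojure
(require '[datomic.client.api :as d])

;; Sketch of the read-and-commit cycle with an entity-level revision counter.
(defn commit-with-revision!
  "Reads the entity's current :system/revision, then transacts tx-data
   guarded by a :db/cas bump of that revision. Returns the tx result,
   or ::conflict if another writer committed first (caller should
   re-read the entity and retry)."
  [conn eid tx-data]
  (let [rev (:system/revision (d/pull (d/db conn) [:system/revision] eid))]
    (try
      (d/transact conn
                  {:tx-data (conj (vec tx-data)
                                  [:db/cas eid :system/revision rev (inc rev)])})
      (catch Exception _cas-failed
        ::conflict))))
```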
Q2: I want to allow users to explore what-if scenarios. I can trivially do that with Datomic using d/as-of (for the starting point) and applying a list of the changes they make via d/with. But this is only in memory; I would like to be able to persist these scenarios until they are no longer needed. What is a good way to do that? Store them outside of Datomic? Or store them as strings (e.g. having [{:db/ident :whatif/start-point, :db/valueType :db.type/ref, :db/doc "ref the transaction when we branch off", ...}, {:db/ident :whatif/changes, :db/valueType :db.type/string, :db/cardinality :db.cardinality/many}])? 🙏
One way (which I think requires some carefulness in the data modelling) is outlined by Tim Ewald here: https://docs.datomic.com/on-prem/learning/videos.html#reified-transactions This talk is mind-boggling, IMHO. The basic idea is to make queries aware of which transactions they actually use. I haven't fully grasped how things would work if several sagas (multi-transaction collections) work with the same unique entities. Probably not very well. Another way is to store the d/with transaction results as edn data in transactions (or anywhere, really). These need to refer either to a realized database version or to another stored transaction result. If these what-ifs are limited in size, it should be quite a quick operation to calculate the databases (very much event sourcing).
Thanks a lot, will check the talk. I hope I will not end up 🤯 🙂
> store the d/with transaction results as edn-data in transactions
so, essentially, as a string, right?
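Storing each change's tx-data as an edn string and replaying it could look roughly like this (a sketch, assuming the hypothetical :whatif/* attributes from the question and the peer-style d/with):

```clojure
(require '[datomic.api :as d]
         '[clojure.edn :as edn])

;; Sketch: rebuild a persisted what-if scenario as a speculative db.
;; Start from the as-of basis the scenario branched from, then apply
;; each stored change with d/with, threading :db-after through.
(defn scenario-db
  [conn {:whatif/keys [start-point changes]}]
  (reduce (fn [db tx-str]
            (:db-after (d/with db (edn/read-string tx-str))))
          (d/as-of (d/db conn) start-point)
          changes))
```

One caveat with the proposed schema: :db.cardinality/many does not preserve order, and the order of the changes matters here, so the stored strings would need their own ordering (e.g. an index embedded in each value, or one string holding the whole ordered list).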
@U0522TWDA How long do you want to persist these scenarios? Are these scenarios the dominating entity in your data-model? Are the changes/transactions against the speculative scenarios (alternate-timelines) generated by humans or machines?
Few days to weeks, I guess. No, not dominating. Generated by humans - analysts thinking about the future and modelling different future scenarios for discussion.
Can you model this entity in a more first-class way rather than speculating on d/with dbs? Seems like this might be a better fit than keeping track of the datoms added to a speculative db as strings, etc.
What this is all about is people starting from a graph, making various modifications to it, and then comparing it to the original one or to other such scenarios. To make it first-class, I would perhaps need to copy all the graph entities to new ones (and link them to their originals); then I could work with these freely. And this is what we do now with Mongo. I just thought that leveraging d/as-of and a list of the transactions (i.e. the changes) would be a very simple, low-effort way to reimplement it on top of Datomic.
Have you considered modeling each version of the graph as a new entity with an adjacency list pointing to nodes as a card-many ref? That way, you can "copy" every existing node relationship (edge) from a prior version of the graph, determine which edges need to point to new nodes, and then transact the new graph entity with a new version and adjacency list (and possibly the new nodes) all in a single transaction?
(Not sure if you need a transaction function, depends on the semantics your biz requirements have)
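The versioned-graph modeling suggested above might be sketched as schema like this (hypothetical idents, not from the thread):

```clojure
;; Sketch: each graph version is its own entity owning an adjacency
;; list of node refs, plus a link back to the version it was copied from.
[{:db/ident       :graph/version
  :db/valueType   :db.type/long
  :db/cardinality :db.cardinality/one}
 {:db/ident       :graph/nodes            ; card-many refs to node entities
  :db/valueType   :db.type/ref
  :db/cardinality :db.cardinality/many}
 {:db/ident       :graph/based-on         ; the prior version this one branched from
  :db/valueType   :db.type/ref
  :db/cardinality :db.cardinality/one}]
```

A new scenario then transacts a new :graph/version entity whose :graph/nodes mostly point at existing nodes, plus any replacement nodes, in a single transaction.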
> Generated by humans - analysts thinking about the future and modelling different future scenarios for discussion.
AFAIK, d/as-of is designed to give a snapshot of the database. It was never intended to be used as a fork. I once designed the schema for an optimization engine that needed to concurrently explore multiple possible futures. What worked for me was to implement a graph-like data structure, much like the structure-sharing pattern seen in the persistent maps of Clojure core.
Thanks for sharing!
Hey! Can you tell me, please, if I can use datsync (https://github.com/metasoarous/datsync) with datomic cloud? I see this code block:
(ns your-app
(:require [dat.sync.client]
[dat.remote]
[dat.remote.impl.sente :as sente-remote]
[datascript.core :as d]
[dat.reactor.dispatcher :as dispatcher]))
(def conn (d/create-conn dat.sync.client/base-schema))
But I don't understand how to link this with Datomic Cloud. Can someone help?
It seems the snippet is from the client side and thus has nothing to do with Datomic. The server side, connected to the other end of the sente websocket channel, is what needs to talk to the DB
under https://github.com/metasoarous/datsync#on-the-server - Receiving transactions - there are example calls to (d/q ...) - these would use the Datomic Cloud client API (assuming the backend runs e.g. on Ions).
@U0522TWDA I understand it should be http on ions? If I want to have transaction access for all my functions in ion, do I still have to do the routing? I mean, I thought it would allow me to make one entry point for all kinds of transactions. I will be very grateful if you help me to understand this.
Sorry, I know nothing about datsync so cannot really assist. I have no idea what the sentence "If I want to have transaction access for all my functions in ion, do I still have to do the routing?" means. I believe ions also support websockets (not just https) nowadays.
@U0522TWDA Sorry. I meant that I am interested in having access to all functions through the Ion API: using one endpoint to recognize what I want to perform, for example adding to the database, and another endpoint for, say, getting data from the database, so that the client can work comfortably with it. That is what I meant when I mentioned routing.
You can create as many or few endpoints and ion functions as you want.
I'm not sure what you mean; it's been a while since I read how to connect Ions via websockets to the outside. Good luck!