
For :xt/id, how can I create monotonically increasing values?

👀 2

A transaction function could be used for this, but there will be non-zero latency & throughput costs. Have you already discounted the idea of using UUIDs?
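To make the UUID option concrete, here's a minimal sketch (editor's addition; `new-id` is a hypothetical helper name). Random UUIDs need no coordination between nodes, at the cost of losing any ordering:

```clojure
;; A random (v4) UUID is a collision-resistant :xt/id that requires no
;; coordination between writers -- but the ids carry no ordering.
(defn new-id []
  (java.util.UUID/randomUUID))

;; usage (assuming a running node):
;; (xt/submit-tx node [[::xt/put {:xt/id (new-id) :foo :x}]])
```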


Depending on how "big" your requirements might get in the future, these kinds of solutions are pretty interesting


anyone know of an implementation of snowflake or sonyflake in Clojure?
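Editor's note: the snowflake scheme itself is small enough to hand-roll. A hypothetical generator packing a 41-bit millisecond timestamp, a 10-bit node id, and a 12-bit per-millisecond sequence — the epoch, field widths, and the lack of overflow/clock-skew handling here are all assumptions of this sketch, not a vetted library:

```clojure
;; Snowflake-style id sketch: 41 bits of millis since a custom epoch,
;; 10 bits of node id, 12 bits of per-millisecond sequence. Ids are
;; monotonically increasing per node (assuming the clock never goes
;; backwards; sequence overflow within one millisecond is not handled).
(def snowflake-epoch 1609459200000) ; 2021-01-01T00:00:00Z, arbitrary choice

(def ^:private id-state (atom {:last-ms 0 :sq 0}))

(defn next-snowflake-id [node-id]
  (let [{:keys [last-ms sq]}
        (swap! id-state
               (fn [{:keys [last-ms sq]}]
                 (let [now (System/currentTimeMillis)]
                   (if (= now last-ms)
                     {:last-ms now :sq (inc sq)} ; same millisecond: bump seq
                     {:last-ms now :sq 0}))))]   ; new millisecond: reset seq
    (bit-or (bit-shift-left (- last-ms snowflake-epoch) 22)
            (bit-shift-left (long node-id) 12)
            sq)))
```

Because the timestamp occupies the high bits, ids sort by creation time; the sequence field only disambiguates ids minted in the same millisecond.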


If you have a simple doc like {:xt/id 1 :foo :x}, what is the way to add a single key to that document in a subsequent transaction? Do you have to first retrieve the document with a query, assoc the new kv into the doc map, and then match it back in?


I'll be interested to hear the official answer from the XT folks but I'd be tempted to register a transaction function that accepts the entity id followed by a map of additional key values to add to the doc...


user=> (xt/submit-tx node [[::xt/put {:xt/id :merge-tx
                                      :xt/fn '(fn [ctx eid kvs]
                                                (let [db (xtdb.api/db ctx)
                                                      entity (xtdb.api/entity db eid)]
                                                  [[::xt/put (merge entity kvs)]]))}]])
#:xtdb.api{:tx-id 0, :tx-time #inst "2021-09-19T21:11:08.692-00:00"}
user=> (xt/submit-tx node [[::xt/put {:xt/id 1 :foo :x}]])
#:xtdb.api{:tx-id 1, :tx-time #inst "2021-09-19T21:12:17.758-00:00"}
user=> (xt/submit-tx node [[::xt/fn :merge-tx 1 {:bar 13 :quux "WAT?"}]])
#:xtdb.api{:tx-id 2, :tx-time #inst "2021-09-19T21:13:58.275-00:00"}
user=> (xt/entity (xt/db node) 1)
{:foo :x, :bar 13, :quux "WAT?", :xt/id 1}

✔️ 2

That’s also the approach that I would be inclined to take


the fact that this api is so much "better" and that it isn't the api offered makes me think that there is some fundamental performance reason it's not done


a transaction function call is quite expensive to write, since it has to write a doc just for the args. Though I think it may be faster to re-index than `match`, since the result is cached?

✔️ 2

> the fact that this api is so much "better" and that it isn't the api offered makes me think that there is some fundamental performance reason its not done

I think what is "better" strongly depends on the problem at hand. e.g. if it is critical to only ever perform operations that respect `match`/`cas` at the atomicity level of the entity, then an API that deals with individual EAVs would conversely seem verbose and be less than optimally efficient.

In terms of the APIs offered, we have typically tried to make as few, minimal decisions as possible. Therefore the APIs represent the internal mechanics more than they aim for some kind of ideal information representation / manipulation interface. The idea is to let users discover the best patterns over time with the primitives available, but this kind of discussion & feedback certainly feeds into our thinking 🙂

In regard to this specific case, transaction functions are a relatively new addition to the code base and for the reasons Kevin stated (and more) we wouldn't want to blindly guide users towards them being the default when it's not strictly necessary. Also note that Sean's code is good for the basic case, but it doesn't handle inserting into the past/future, since (xtdb.api/db ctx) will always see the currently-valid version of the entity.
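To make the match-guarded alternative concrete, here's an editor's sketch (`merge-ops` is a hypothetical helper, not part of the XT API): the caller reads the doc, builds the ops purely, and submits them, so the transaction is a no-op if the doc changed in between:

```clojure
;; Hypothetical helper: build tx ops that merge kvs into a previously-read
;; doc, guarded by ::xt/match so the whole tx is a no-op if the doc has
;; changed (or been deleted) since it was read. Not part of the XT API.
(defn merge-ops [old-doc kvs]
  [[:xtdb.api/match (:xt/id old-doc) old-doc]
   [:xtdb.api/put (merge old-doc kvs)]])

;; usage (assuming a running node):
;; (let [old (xt/entity (xt/db node) 1)]
;;   (xt/submit-tx node (merge-ops old {:bar 13})))
```

Unlike the transaction-function version, this costs no extra args doc, but the read-then-write can fail under contention and would need a retry loop.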


Ah, good point on the temporal aspect of it. In my head, I am of course comparing how we might interact with XT against how we interact with MySQL today and we do a lot of UPDATE operations where we update just a few columns in some fairly "large" tables (in terms of columns per record). So the "natural" mapping would seem to be some sort of :assoc-tx or :merge-tx but I suspect rethinking how we organize the data might lead to "better" patterns in a doc-oriented store like XT with the whole temporal aspect (although our current MySQL usage maps to "always the currently-valid version" in XT).

👍 2

Thanks for the snippet and discussion. I too wonder if structuring data differently would be better than using a transaction function or a similar approach to update the doc. If the original doc got an additional key that could be used to query for all such documents, like {:xt/id 1 :key :my-entity :foo :x}, updates could instead be introduced as new documents which use the same {:key :my-entity ...}. This also makes me wonder if, when using such an approach, it would be sensible to record all attributes of :my-entity as individual documents instead of trying to get it right the first time and then having to decide on the best mechanics for updating existing docs.
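One hedged sketch of what that per-attribute shape could look like (editor's addition; `attr-doc` and the `:key`/`:attr`/`:value` names are hypothetical, just to make the idea concrete):

```clojure
;; Hypothetical shape: one document per attribute, all sharing :key so a
;; query can join them back into one logical entity. "Updating" a single
;; attribute is then just a put of that attribute's doc.
(defn attr-doc [entity-key attr value]
  {:xt/id (keyword (name entity-key) (name attr))
   :key   entity-key
   :attr  attr
   :value value})

;; e.g. (attr-doc :my-entity :foo :x)
;; => {:xt/id :my-entity/foo, :key :my-entity, :attr :foo, :value :x}

;; reassembling the entity would be a query along the lines of:
;; '{:find [?a ?v]
;;   :where [[?d :key :my-entity] [?d :attr ?a] [?d :value ?v]]}
```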


Modelling is definitely very open-ended 🙂

> record all attributes of `:my-entity` as individual documents

I've not seen a real system fully embrace this approach universally before, but in principle I think the only downside is ~slightly slower queries (due to all the extra joins) and some extra storage costs. I'd be very curious to hear how you find it if you give it a go