2023-03-21
# xtdb
If I have multiple xtdb nodes using the same postgres database, how is it determined which node writes the changes to the doc(s)? I.e. if I have five nodes, will each of them process the transaction and update the document (5 times)?
Hey @U0K1KAJTB it's all deterministic, so work is repeated by each node
> if I have five nodes, will each of them process the transaction and update the document (5 times)?
Essentially yes, except transaction functions will eventually stop being evaluated, as the resulting tx-ops 'replace' the underlying argument document.
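For context, here is a minimal sketch of the shared-Postgres setup being described, assuming XTDB 1.x with the xtdb-jdbc module; the host, database name, and credentials are placeholders. Every node started against the same connection pool consumes the same tx-log and indexes it locally:
```
(require '[xtdb.api :as xt])

;; Hypothetical Postgres details; each node in the cluster uses the same config.
(def shared-pg-config
  {:xtdb.jdbc/connection-pool {:dialect {:xtdb/module 'xtdb.jdbc.psql/->dialect}
                               :db-spec {:host "db.example.com"
                                         :dbname "xtdb"
                                         :user "xtdb"
                                         :password "secret"}}
   :xtdb/tx-log {:xtdb/module 'xtdb.jdbc/->tx-log
                 :connection-pool :xtdb.jdbc/connection-pool}
   :xtdb/document-store {:xtdb/module 'xtdb.jdbc/->document-store
                         :connection-pool :xtdb.jdbc/connection-pool}})

;; Starting five of these gives five nodes that each replay and index
;; the same transaction log independently.
(def node (xt/start-node shared-pg-config))
```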
I am asking this because I was running two nodes, one of which had a buggy tx fn that crashed. When I submitted a tx to the bug-free node, it looked like it was successful. The buggy node did not have that change, obviously. Then I stopped the bug-free node, deleted its index, and started it again; the change that was once successful was gone. Is this intended behaviour?
The tx fn was calling out to a function in a local namespace. I deliberately threw an exception inside that function to see how it would behave. I was using 1.23.1
> Is this intended behaviour?
On reflection, yes, I believe so. After the first successful evaluation of the transaction function, the node(s) will write the results back to the document store so that any subsequent nodes don't have to re-evaluate the function (this is important for eviction). At that point, whether or not they have the buggy code is irrelevant... until someone submits another transaction with an invocation, and then the cycle will repeat (assuming the buggy node keeps being buggy).
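To make that evaluation cycle concrete, here is a sketch of defining and invoking a transaction function, assuming XTDB 1.x; the :set-email id and arguments are made up. Each node evaluates the quoted body until the resulting tx-ops are written back to the document store, after which nodes replay those ops directly:
```
;; Install the function as a document; the body is quoted data, stored like any doc.
(xt/submit-tx node
  [[::xt/put {:xt/id :set-email
              :xt/fn '(fn [ctx eid email]
                        (let [db (xtdb.api/db ctx)
                              entity (xtdb.api/entity db eid)]
                          [[::xt/put (assoc entity :email email)]]))}]])

;; Invoke it; if the body throws (the "buggy" case above), the transaction
;; fails on that node and nothing is written back.
(xt/submit-tx node [[::xt/fn :set-email :user-1 "user@example.com"]])
```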
Using (say) the JDBC storage model, what would the runtime requirements be on the xtdb node? Thinking here about running in a sort-of-serverless mode where it gets spun up as needed, rather than a full-up always-on instance. Especially for read-only uses; we could constrain updates to a more controlled process gated through a single job-specific instance. Anyone using this deployment model?
Hey @U050N4M9Z spinning-up-as-needed won't be instant, and it undoubtedly gets slower as your database grows. https://docs.xtdb.com/administration/checkpointing/ definitely helps, but there is ultimately still a limit on how much that can reduce the cold-start time (i.e. GBs = minutes, even with S3 in the same region). Are you using AWS? The choice of JDBC vs Kafka doesn't really affect the practicalities of node startup time.
For this client, yes, AWS is the preferred environment. Is that minutes with checkpointing? That probably pushes it out too far, though it might be interesting to explore something like SnapStart.
That’d be fastER, for sure. Might even be close to SnapStart, since that still has to restore a snapshot from (hidden) EBS.
Cool, well, I'd be happy to help somehow and certainly very curious to hear how that goes if you do try it 🙂
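For reference, the checkpointing mentioned above is configured on the index store; a rough sketch assuming XTDB 1.x with RocksDB and a filesystem checkpoint store (see the linked docs for the exact modules; the paths and frequency here are placeholders):
```
(require '[clojure.java.io :as io])

(def node-config
  {;; ...tx-log and document-store config as above...
   :xtdb/index-store
   {:kv-store {:xtdb/module 'xtdb.rocksdb/->kv-store
               :db-dir (io/file "/var/lib/xtdb/indexes")
               ;; Periodically snapshot the local indexes so a fresh node can
               ;; restore from the latest checkpoint instead of replaying the
               ;; whole tx-log on cold start.
               :checkpointer {:xtdb/module 'xtdb.checkpoint/->checkpointer
                              :store {:xtdb/module 'xtdb.checkpoint/->filesystem-checkpoint-store
                                      :path "/var/lib/xtdb/checkpoints"}
                              :approx-frequency (java.time.Duration/ofHours 6)}}}})
```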
Hi folks, I'll be running a virtual meetup in just under an hour's time if anyone's free and curious - see details: https://discuss.xtdb.com/t/todays-virtual-meetup-transaction-functions-constraints/161
Updated with the recording: https://discuss.xtdb.com/t/video-virtual-meetup-4-recording-transaction-functions-constraints/161
Hey @UGNMGFJG3 you could use a transaction function for this (see the 'counter' example in the docs for inspiration), but before blindly recommending that...how come you want to use an auto-incrementing ID and not a UUID?
That seems like a fair use case. I don't know what the ideal tradeoff would be or what the state-of-the-art is for helping end users work with UUIDs :thinking_face:
Do you need something like a “tracking reference” that people need to type/paste in? Or why does readability matter? Sequences always require coordination, which slows things down, but you can always create fresh UUIDs without needing coordination.
You could also have a UUID doc id and another attribute that holds a successively generated id, but perhaps that's just kicking the can down the road.
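A sketch of that combination (a counter-style transaction function plus a UUID doc id), assuming XTDB 1.x; the :invoice-counter and :invoice/* names are made up:
```
(xt/submit-tx node
  [[::xt/put {:xt/id :invoice-counter :value 0}]
   [::xt/put {:xt/id :next-invoice
              :xt/fn '(fn [ctx doc]
                        (let [db (xtdb.api/db ctx)
                              {:keys [value]} (xtdb.api/entity db :invoice-counter)
                              n (inc value)]
                          [[::xt/put {:xt/id :invoice-counter :value n}]
                           [::xt/put (assoc doc :invoice/number n)]]))}]])

;; Every allocation serialises through the tx-log via the single counter doc,
;; which is exactly the coordination cost mentioned above.
(xt/submit-tx node [[::xt/fn :next-invoice {:xt/id (java.util.UUID/randomUUID)
                                            :invoice/customer "ACME"}]])
```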
I have a few identifiers that I wanted to be human-readable, so I've been generating a random sequence of alphanumeric characters with a uniqueness check; I just did some rough calculations to make sure my random string is long enough to make collisions very unlikely with the amount of traffic I expect. Might work for you?
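Something along these lines, sketched with a ::xt/match as the uniqueness check; the alphabet, code length, and :code/* attributes are made up, and the caller retries with a fresh code on the rare collision:
```
(def alphabet "ABCDEFGHJKLMNPQRSTUVWXYZ23456789") ; no look-alike characters

(defn short-code [n]
  (apply str (repeatedly n #(rand-nth alphabet))))

(defn put-with-code [node doc]
  (let [code (short-code 8)
        code-id (keyword "code" code)
        ;; ::xt/match against nil asserts the code entity doesn't exist yet;
        ;; if it does, the whole transaction is skipped and we try again.
        tx (xt/submit-tx node
             [[::xt/match code-id nil]
              [::xt/put {:xt/id code-id :code/owner (:xt/id doc)}]
              [::xt/put (assoc doc :public/code code)]])]
    (xt/await-tx node tx)
    (if (xt/tx-committed? node tx)
      code
      (recur node doc))))
```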