2021-09-21
Channel: # xtdb (39 messages)
So as I'm migrating to 1.18, I was curious - why is :eid set to a keyword on put, but to a crux/id on evict?
If I've understood you correctly: once a document's been evicted, we no longer have access to the original eid - the only thing we can give you is its hash
Sorry, that was indeed somewhat ambiguous. I am referring to tx-ops for a secondary index. On put it is the value of the eid, while on evict it is the hash. While it's not the end of the world, it means that secondary indices are then responsible for maintaining an index keyed by the hash (which maybe I should have been doing all along)
Hmm, so looking at these lines: https://github.com/xtdb/xtdb/blob/e2f51ed99fc2716faa8ad254c0b18166c937b134/core/src/xtdb/tx/conform.clj#L41-L44 we see that the put op transformation returns :eid (:xt/id doc) and :doc-id (c/hash-doc doc), whereas the evict transform (further down that ns) sets the hashed eid on :eid ...so I agree it seems a little confusing. I'm not sure of the history there or the implications.
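Concretely, the two conformed op shapes end up looking roughly like this (a simplified sketch, not the exact code - see the linked ns; using c/new-id as the hashing entry point for evict is an assumption here):
```
;; put: the conformed op keeps the original entity id;
;; the content hash only appears under :doc-id
{:op :put
 :eid (:xt/id doc)            ; original eid, e.g. a keyword
 :doc-id (c/hash-doc doc)}

;; evict: the original eid is unrecoverable after eviction,
;; so the hashed id is what lands on :eid
{:op :evict
 :eid (c/new-id eid)}
```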
That's correct, you must .stop the node and delete the directories manually/programmatically. There is no provided API to do this. In tests we use this with-tmp-dir macro extensively: https://github.com/xtdb/xtdb/blob/20a2632a18a36dae2e915ab032307ce0c5b3e150/test/src/xtdb/fixtures.clj#L16-L26
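The same pattern outside the test fixtures might look like this (a minimal sketch assuming the 1.x xtdb.api names and a RocksDB index store; the path and config map are illustrative):
```
(require '[xtdb.api :as xt]
         '[clojure.java.io :as io])

;; delete a directory tree, children before parents
(defn delete-dir! [dir]
  (doseq [f (reverse (file-seq (io/file dir)))]
    (io/delete-file f)))

(let [dir "/tmp/xtdb-scratch"]
  ;; with-open calls .close, which stops the node
  (with-open [node (xt/start-node
                     {:xtdb/index-store
                      {:kv-store {:xtdb/module 'xtdb.rocksdb/->kv-store
                                  :db-dir (io/file (str dir "/indexes"))}}})]
    ;; ... use the node ...
    )
  ;; only after the node is stopped is it safe to wipe the directories
  (delete-dir! dir))
```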
Ah, stopping the node is what I was missing - I was just dropping the tables, duh!
I am rewriting a small project. Currently I am using postgres, but I already have some temporality in place. Temporality aside (it's not much, I think I can handle it within postgres), how do you see your day-to-day with xtdb? Do you prefer it over sql? I am talking general purpose stuff - how convenient is datalog vs sql, and is maintenance/setup a big hurdle? Generally, is xtdb someone's default db, and are they happier with it than with postgres?
Also, could I just flip the database and use postgres for persistence? (I know xtdb stores documents, so I am not getting relational tables out of this, but I have the infra in place and don't really want to add/deal with rocksdb/kafka.) So, to rephrase a bit: I know I can use postgres as the transaction/doc storage, but how performant/trouble-free is it?
Hi 🙂
> Temporality aside (it's not much, I think I can handle it within postgres)
Is the temporality in your use-case something you expect to grow much more complex in future? How many tables are you adding valid-from/to columns to? Are all the tables treated as append-only already?

For the most general cases I expect the average developer will (still) have an easier time working with Postgres than XT, for the simple reason that there is a huge wealth of features, libraries, & resources available when using Postgres, which are battle-hardened and ready to help you out in nearly all possible scenarios. The main exceptions to this are when you have a problem or use-case that really demands the Datalog/graph & bitemporal features on offer with XT, as Postgres will struggle to help you there at any significant scale. That said, if you are comfortable with the idea of navigating the tradeoffs as they come, then I'd argue that you can achieve almost anything in XT with a little extra code and modelling forethought. Which is particularly true when used with Clojure.

> I know I can use postgres as transaction/doc storage, but how performant/trouble-free is it
I'll leave it to others to chime in with anecdata here, but it certainly shouldn't be much slower/worse. Incidentally, an issue was raised very recently to reduce the transaction latency for JDBC backends: https://github.com/xtdb/xtdb/issues/1623
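For reference, that all-Postgres topology in a 1.x node config looks roughly like this (module names per the xtdb-jdbc docs; connection details are placeholders):
```
;; Sketch: Postgres as both golden stores (tx-log + document store)
{:xtdb.jdbc/connection-pool
 {:dialect {:xtdb/module 'xtdb.jdbc.psql/->dialect}
  :db-spec {:host "localhost"
            :dbname "xtdb"
            :user "xtdb"
            :password "..."}}
 :xtdb/tx-log
 {:xtdb/module 'xtdb.jdbc/->tx-log
  :connection-pool :xtdb.jdbc/connection-pool}
 :xtdb/document-store
 {:xtdb/module 'xtdb.jdbc/->document-store
  :connection-pool :xtdb.jdbc/connection-pool}}
```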
For me datalog is a big deal when writing queries; I find modeling in tables a bit awkward. Datalog + pull + pathom is all I need for my back-end to resolve complicated and flexible queries from a quite dynamic front-end. Very happy with xtdb.
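As a flavour of that workflow, a datalog query with pull (attribute names entirely hypothetical) returns a nested shape a pathom resolver can hand straight back to the front-end:
```
;; find active users and pull a nested view of each
(xt/q (xt/db node)
      '{:find [(pull ?user [:user/name
                            {:user/friends [:user/name]}])]
        :where [[?user :user/active? true]]})
```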
I'm working on a small application that needs a replicated database. e.g.: it's for personal information that needs to be up-to-date on my mobile, on my work desktop, at home.
All of the examples I've seen other than Replicativ (which hasn't seen commits in a long time) seem to assume that if you want replication you're a corp deploying to the cloud. I'm on the other end of that spectrum. But the "unbundled" nature of Crux appeals to me and fits my use-case well. e.g.: I want the database embedded inside the app. Does anyone have advice about how to set up replication when Crux nodes may come online or drop offline without notice? When there won't always be 3 (or more) nodes available? When data needs to be reconciled on an "as-available" basis?
(You can assume I've worked out firewall traversal / overlay networks / etc. for purposes of this discussion.)
This line confuses me. Do you want the xtdb index to communicate with the golden stores over the network or locally?
So, you'd have your tx log/doc store available centrally, then your device-local indexes which you query will all be full replicas. Is that what you want? It's how xtdb always works :)
e.g., the application always writes to its local database because the network isn't always available.
Xtdb would leave those details to the doc store/tx logs themselves, and I don't think there is one written that supports that use case well. You'd probably have to write an implementation for e.g. couchdb
I’ve been doing something similar using a Yjs document for application state. Works well for real-time collaboration and for merging in remote changes
Thanks @UE72GJS7J. I'm on the JVM for now. Sadly, it seems all the cool kids doing p2p these days use Javascript, Go, or Rust (in about that order).
I’m using ClojureScript on the server for that reason, and just to get something usable out the door until I can think better about different approaches to replicated data
I did experiment briefly with encoding a Hybrid Logical Clock into XT's valid-time as a zero-conflict / non-destructive last-writer-wins synchronisation mechanism, but didn't get as far as a working prototype (though I see no reason why it wouldn't work great) https://github.com/xtdb/xtdb/issues/895#issuecomment-636041476
I was quite inspired to attempt it after watching this talk about building an offline-first app https://www.youtube.com/watch?v=DEcwa68f-jY
@U899JBRPF Super cool you're thinking about this! This idea came up in a discussion recently, and at some point I came around to the idea that you wouldn't want to do it with valid-time, but rather the transaction time. This seems to make more sense semantically (valid-time seems more domain focused IIUC; like when in the real world did this fact become true), whereas transaction time is about when that fact was asserted, and it would seem to be the latter that you would want to do this with. However, I could also imagine based on the architecture that doing this with transaction time might be more challenging. Curious about your thoughts on this.
Hey @U05100J3V nice to see you here 🙂 (not sure if you remember, but we did swap emails like 5 years ago!)
> based on the architecture that doing this with transaction time might be more challenging
Yes indeed, it would be problematic. XT's transaction time is assumed to be immutable & monotonic, and is what underpins all the potential for horizontal read scaling. With valid time though, XT really doesn't care how you apply its semantics to your use-case. The only "baked-in" assumption is a total ordering of 64-bit Longs.
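As a sketch of the LWW idea under that constraint (hlc->long is a hypothetical encoding of a Hybrid Logical Clock into a Long; assuming (require '[xtdb.api :as xt])):
```
;; each device writes at a valid-time derived from its HLC, so the
;; highest clock value wins non-destructively whenever replicas sync
;; (hlc->long and device-hlc are hypothetical; any strictly
;;  increasing Long encoding would do)
(xt/submit-tx node
  [[::xt/put
    {:xt/id :contact/alice, :phone "555-0101"}
    (java.util.Date. ^long (hlc->long @device-hlc))]])
```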
I can kind of imagine "vector clock time" being some middling 3rd time dimension in between the other two, so that valid time is still available for more traditional usage, but it's out of scope of our current roadmap for now. In principle though, the upcoming temporal indexes could scale to additional dimensions (currently it's only working with 6)