This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2023-07-31
Channels
- # architecture (1)
- # babashka (17)
- # calva (18)
- # cider (5)
- # clj-kondo (5)
- # cljdoc (44)
- # cljs-dev (2)
- # clojure (49)
- # clojure-europe (11)
- # clojure-norway (16)
- # clojure-uk (3)
- # clojurescript (89)
- # clr (8)
- # conjure (7)
- # cursive (26)
- # data-science (2)
- # datomic (15)
- # emacs (11)
- # events (1)
- # fulcro (8)
- # gratitude (3)
- # hyperfiddle (68)
- # introduce-yourself (1)
- # london-clojurians (1)
- # lsp (3)
- # nbb (8)
- # pathom (44)
- # pedestal (14)
- # polylith (2)
- # random (1)
- # shadow-cljs (8)
- # spacemacs (13)
- # squint (36)
- # tools-deps (9)
- # xtdb (17)
Is XTDB 1.x intended to be run in small-scale applications as well? Where “small-scale” means something like a desktop application using a database to persist state, user preferences, etc.
I am really enjoying learning about xtdb. Worked through the learn-xtdb-datalog-today on Nextjournal and the concepts really clicked. I am starting work on an application that could really benefit from bitemporal record storage. The application would have a small user base (50 or so), but would be sensitive to data loss. Would it be premature to start development with xtdb?
Hey @U055FN2ES82 thanks for the feedback! 🙂 data loss shouldn't be a concern so long as you are using a suitably durable tx-log or document-store backend such as Postgres (and also avoid writing non-pure transaction functions that could create inconsistencies)
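For anyone wondering what a "non-pure transaction function" means here, the following is a rough sketch from memory of the XTDB 1.x transaction-function API (entity ids and attribute names are invented for illustration; check the official docs before relying on exact call shapes):

```clojure
;; A transaction function is stored as a document and later invoked by id,
;; e.g. via (xt/submit-tx node [[::xt/fn :increment-age :user/1]]).

;; PURE: the result depends only on the function's arguments and the db
;; state at the transaction, so replaying the tx log on a fresh node
;; rebuilds identical indexes.
[::xt/put {:xt/id :increment-age
           :xt/fn '(fn [ctx eid]
                     (let [db (xtdb.api/db ctx)
                           e  (xtdb.api/entity db eid)]
                       [[:xtdb.api/put (update e :age inc)]]))}]

;; NON-PURE (avoid): reads the wall clock, so replaying the tx log later
;; can produce a different document than the original run - an
;; inconsistency between nodes.
[::xt/put {:xt/id :stamp-now
           :xt/fn '(fn [ctx eid]
                     [[:xtdb.api/put {:xt/id eid
                                      :stamp (System/currentTimeMillis)}]])}]
```

The rule of thumb: anything non-deterministic (time, randomness, I/O, network calls) belongs outside the transaction function, passed in as an argument if needed.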
Do you expect the upgrade from 1.x to 2.x to be challenging?
Or should we start developing with 2.x now
> Do you expect the upgrade from 1.x to 2.x to be challenging?

It won't be completely trivial, as there are changes to the underlying structure of things (particularly tables and data types), but we are aiming for a broadly smooth and well-documented upgrade experience. The main caveat is whether you end up building an application that is tightly coupled to the exact performance characteristics of 1.x - that could make the cost-benefit justification of the upgrade trickier to evaluate.

> should we start developing with 2.x now

2.x is currently unstable, meaning you shouldn't attempt to deploy it for serious use cases ahead of the first formal alpha release, because we won't have the bandwidth to support and troubleshoot - but depending on your timelines (and ours!) it might be worth considering. When does the application need to be ready?

One option is to build your application on 1.x in such a way that the 2.x transition becomes easier, by avoiding particular APIs or patterns that are changing. If that is of interest we can try to enumerate these. Let me know if you'd like to discuss more on a call - happy to continue here too, though!
Thanks for the offer of a call. I will investigate the 2.x API and reach out with any questions we have.
2.x seems to introduce the concept of tables? Does this mean a "table" value has been added into the "flat set of triples"?
kind of, you can think of it a bit like "https://en.wikipedia.org/wiki/Named_graph", i.e. just a bucket of triples
internally there is less emphasis on triples in general though, as 2.x supports arbitrarily sized/shaped relations
you can still model everything as triples in a single global table if desired, but the query engine isn't going to optimize for that in the near future
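To make the "single global table of triples" idea concrete, here is a hypothetical EDN sketch (table and attribute names invented) contrasting the two modeling styles:

```clojure
;; Hypothetical: everything as triples in one global :triples table,
;; one row per fact - possible, but not what the query engine optimizes for.
{:xt/id :fact/1 :e :user/1 :a :name  :v "Ada"}
{:xt/id :fact/2 :e :user/1 :a :email :v "ada@example.com"}

;; Idiomatic 2.x: one arbitrarily-shaped row per entity, in a :users table.
{:xt/id :user/1 :name "Ada" :email "ada@example.com"}
```

The second form lets a relation carry many attributes per row, which is what "arbitrarily sized/shaped relations" refers to above.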
So, in https://www.w3.org/TR/n-quads/, the table is the context? Would an entity id be unique within the table or globally unique?
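For reference, an N-Quads statement is just a triple plus a fourth "graph label" term, which is the analogy being drawn with tables here (the URIs below are invented examples):

```
<http://example.org/user/1> <http://example.org/name> "Ada" <http://example.org/tables/users> .
```

Subject, predicate, object, then the graph label - so "the table is the context" maps the graph label to the table name.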
cool thanks