This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2023-02-02
Channels
- # announcements (3)
- # asami (29)
- # babashka (62)
- # beginners (131)
- # biff (7)
- # calva (31)
- # cider (5)
- # clerk (14)
- # clj-kondo (3)
- # cljsrn (12)
- # clojars (18)
- # clojure (72)
- # clojure-austin (17)
- # clojure-dev (6)
- # clojure-europe (31)
- # clojure-indonesia (1)
- # clojure-nl (1)
- # clojure-norway (18)
- # clojure-sweden (11)
- # clojure-uk (6)
- # clr (47)
- # conjure (42)
- # cursive (88)
- # datalevin (2)
- # datomic (25)
- # emacs (42)
- # exercism (1)
- # fulcro (10)
- # funcool (8)
- # gratitude (2)
- # honeysql (16)
- # introduce-yourself (5)
- # jobs-discuss (26)
- # leiningen (5)
- # lsp (31)
- # malli (21)
- # matcher-combinators (14)
- # missionary (2)
- # nbb (1)
- # off-topic (40)
- # pathom (38)
- # portal (2)
- # re-frame (7)
- # reagent (18)
- # reitit (1)
- # releases (5)
- # shadow-cljs (62)
- # sql (12)
- # testing (4)
- # xtdb (37)
Has anyone had to dig into an unknown XTDB database, and how did you do it? I don't have the need, but I wonder what the approach would be compared to SQL where I would first study the schema.
I actually made some code that queries the database and generates pretty graphviz diagrams from the “document types”
it works for our db, because we model the “entity type” in the :xt/id (e.g. {:person #uuid "…"}), so it can build a model by looking at the docs
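A minimal sketch of that type-inference idea (helper names are hypothetical; it assumes every :xt/id is a single-entry map whose key names the entity type, as in the message above):

```clojure
(defn doc-type
  "Returns the entity-type keyword encoded in the :xt/id,
   e.g. {:person #uuid \"...\"} -> :person, or nil if the id
   doesn't follow that convention."
  [doc]
  (let [id (:xt/id doc)]
    (when (and (map? id) (= 1 (count id)))
      (key (first id)))))

(defn model
  "Groups the attribute sets of each doc under its inferred type,
   giving a rough per-type schema to feed into a graphviz diagram."
  [docs]
  (reduce (fn [m doc]
            (if-let [t (doc-type doc)]
              (update m t (fnil into #{}) (keys (dissoc doc :xt/id)))
              m))
          {}
          docs))
```

For example, `(model [{:xt/id {:person 1} :name "A"}])` yields `{:person #{:name}}`; rendering each type and its attributes as a graphviz node is then straightforward.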
the attribute-stats API may help, but I don't think you can really avoid a full scan of the data to understand the range of values and figure out what may or may not be ~FKs (foreign keys)
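For reference, in the XTDB 1.x Clojure API that call is `xtdb.api/attribute-stats`, which returns a map of attribute to approximate document count — a cheap first look before any full scan. A sketch (requires the xtdb-core dependency; the in-memory node config is just for illustration):

```clojure
(require '[xtdb.api :as xt])

(with-open [node (xt/start-node {})]          ; in-memory node
  (xt/submit-tx node [[::xt/put {:xt/id :a :person/name "Ada"}]])
  (xt/sync node)                              ; wait for indexing
  ;; map of attribute -> approximate frequency across indexed docs
  (xt/attribute-stats node))
```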
Have you considered exposing RocksDB checkpointing via the API? I'm thinking of doing something like starting a new node on demand, and creating a checkpoint for that node by calling a function, instead of just configuring constant, time-based snapshots. If I had a handle to the RocksKV, I could call save-checkpoint easily. Or perhaps even have a direct handle to the RocksDB db instance. Though is that safe while XTDB is also using the handle? :thinking_face:
another use case, migrating lots of data from another system… I want to make a snapshot immediately after the migration is done, and only start new nodes after that
also, would be good to trigger a manual checkpoint just before doing an upgrade so new nodes don’t have to work so much
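For context, the time-based checkpointing that exists today (which an on-demand trigger would complement) is configured on the index store roughly like this in XTDB 1.x (paths and frequency are placeholders):

```clojure
;; Node config sketch: RocksDB index store with a filesystem
;; checkpoint store, snapshotting roughly every 6 hours.
{:xtdb/index-store
 {:kv-store
  {:xtdb/module 'xtdb.rocksdb/->kv-store
   :db-dir "/var/lib/xtdb/indexes"           ; placeholder path
   :checkpointer
   {:xtdb/module 'xtdb.checkpoint/->checkpointer
    :store {:xtdb/module 'xtdb.checkpoint/->filesystem-checkpoint-store
            :path "/var/lib/xtdb/checkpoints"} ; placeholder path
    :approx-frequency (java.time.Duration/ofHours 6)}}}}
```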
There's this open issue https://github.com/xtdb/xtdb/issues/1813
However, there has been reluctance to blindly implement and offer something like this without having a more holistic understanding of the use-case(s), as it is likely that a more comprehensive high-level solution is the better route.
Feels like I've seen this discussed, but at least I couldn't find it… (I need to move it into Discuss if I get an answer!). Is there a performance difference between having a document with a vector of values vs. splitting them into many documents that refer to the original document? I suppose that's assuming it's a flat list, because if the values aren't flat, then splitting into many documents will fill the indexes a lot more. I'm wondering if there's a trade-off between the write amplification caused by changing the elements (or just appending) vs. possibly slower queries (when querying for all elements from the parent).
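To make the two shapes concrete (illustrative documents only; the attribute names are made up):

```clojure
;; Shape A: one document holding a vector of values.
;; Changing or appending any element rewrites (and re-indexes)
;; the whole document.
{:xt/id :order-1
 :order/items ["apple" "pear" "plum"]}

;; Shape B: each element as its own small document referring back
;; to the parent. Appending touches only one new doc, but reading
;; all elements needs a datalog join like:
;;   {:find [?item]
;;    :where [[?e :item/order :order-1]
;;            [?e :item/value ?item]]}
{:xt/id :item-1 :item/order :order-1 :item/value "apple"}
{:xt/id :item-2 :item/order :order-1 :item/value "pear"}
{:xt/id :item-3 :item/order :order-1 :item/value "plum"}
```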
I can't think of more specific guidance right now (sorry!), but I suppose running some experiments could make for an interesting blog post :thinking_face:
Heh, yeah. If we had time for a technical blog, XTDB has a lot of new ground to cover 😉
Hey, a really quick question that should be easy for anyone here to answer: does XTDB have the same way of bringing the client's computational power to bear on query scalability/performance as Datomic peers?
Hey @U28A9C90Q you may want to give this a read https://docs.xtdb.com/resources/faq/#comparisons but the short answer is that XT uses a full replica of all the data locally, which is subtly different from the "peer" model that I believe dynamically manages a cache of the working set. So the scaling is different.
Yes, but in general it's better to use it through the JVM, so that we get all the power provided when the data is local through this replica, right?
Correct, yep. The main complication is how closely you really want to couple the "application" to the "database" as the line can become very blurred. Sometimes this is good, but frequent redeploying/rebuilding of the database has operational costs that can add up in the long run.
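The embedded ("in-process") setup being discussed looks roughly like this in Clojure — the application JVM starts the node itself, so queries run against the full local index replica with no network hop (in-memory config shown purely for illustration):

```clojure
(require '[xtdb.api :as xt])

(with-open [node (xt/start-node {})]   ; in-process, in-memory node
  (xt/submit-tx node [[::xt/put {:xt/id :hello :greeting "world"}]])
  (xt/sync node)                       ; wait for the doc to be indexed
  (xt/q (xt/db node)
        '{:find [?g]
          :where [[:hello :greeting ?g]]}))
;; => #{["world"]}
```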
I was thinking about how Rails is still a thing and how usable XTDB would be from a JRuby app with Java bindings
It would be great to see XT become a de facto choice for another ecosystem. Have you been looking at other interesting alternative Rails backends for comparison?
I was thinking briefly about how/why Django is still so prominent the other day. That would be another target in this vein.
Hey @U899JBRPF for sure, but I have seen that Rails still has great reach in the webdev space among startups that need to bootstrap a product quickly
And I don't know about Python, but Ruby has a great history on the JVM, even more so with what is coming with TruffleRuby, which has direct development support from Shopify
Elixir is another great platform, but since it's based on Erlang we can only use HTTP wrappers. There are some in the wild; a friend of mine @U884GE2FJ made this one: https://github.com/naomijub/translixir
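Under the hood, a wrapper like that just talks to the node's HTTP module. A bare-JDK sketch of the same call from Clojure (the port and `/_xtdb/query` endpoint are assumptions based on the XTDB 1.x HTTP module docs, and it requires a running xtdb-http-server):

```clojure
(import '(java.net URI)
        '(java.net.http HttpClient HttpRequest
                        HttpRequest$BodyPublishers HttpResponse$BodyHandlers))

;; POST an EDN datalog query to a node's HTTP endpoint — the kind of
;; request an Elixir/Ruby wrapper would issue on the caller's behalf.
(let [client (HttpClient/newHttpClient)
      req    (-> (HttpRequest/newBuilder
                  (URI/create "http://localhost:3000/_xtdb/query"))
                 (.header "Content-Type" "application/edn")
                 (.POST (HttpRequest$BodyPublishers/ofString
                         "{:query {:find [e] :where [[e :xt/id]]}}"))
                 (.build))]
  (.body (.send client req (HttpResponse$BodyHandlers/ofString))))
```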
Did you see this https://dev.solita.fi/2023/01/04/xtdb-phoenix.html