This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2021-03-31
Channels
- # announcements (3)
- # babashka (75)
- # beginners (16)
- # calva (124)
- # cider (10)
- # clara (2)
- # clj-kondo (1)
- # cljdoc (4)
- # cljs-dev (14)
- # clojure (104)
- # clojure-australia (4)
- # clojure-czech (5)
- # clojure-europe (14)
- # clojure-germany (48)
- # clojure-nl (4)
- # clojure-serbia (4)
- # clojure-uk (34)
- # clojurescript (60)
- # community-development (16)
- # conjure (12)
- # core-async (34)
- # cursive (42)
- # data-science (7)
- # deps-new (9)
- # depstar (1)
- # emacs (11)
- # events (2)
- # fulcro (15)
- # graalvm (1)
- # inf-clojure (1)
- # jobs (3)
- # jobs-discuss (1)
- # joker (7)
- # juxt (8)
- # lsp (20)
- # malli (42)
- # meander (4)
- # off-topic (5)
- # pathom (2)
- # polylith (13)
- # re-frame (39)
- # reagent (9)
- # reitit (31)
- # releases (2)
- # rewrite-clj (23)
- # shadow-cljs (39)
- # spacemacs (11)
- # specter (6)
- # tools-deps (8)
- # xtdb (12)
Suppose I start with RocksDB for doc and log storage and later want to switch to Kafka or something else. How hard would the migration be? Apologies if this is covered somewhere and I missed it.
Hi @UA9399DFZ there's an item on our roadmap to provide some tooling to help with this, see https://github.com/juxt/crux/issues/1386
You can get most of the way today by transforming and streaming the output from open-tx-log of one Crux instance into submit-tx of another instance. However that approach alone doesn't account for :crux.tx/match or :crux.tx/evict operations, and we'd prefer it to be even simpler.
In summary it's definitely possible to migrate between backends, but we'd probably have to assist you as things stand today. We'll try to give this area some attention over the coming weeks.
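The open-tx-log / submit-tx approach described above might look roughly like the following. This is only a sketch, not official migration tooling: `src-node` and `dest-node` are hypothetical handles to two running Crux nodes, and, as noted above, it does not account for :crux.tx/match or :crux.tx/evict operations, and transaction times and IDs are re-assigned by the destination node.

```clojure
(require '[crux.api :as crux])

(defn copy-tx-log!
  "Replays every transaction from src-node into dest-node.
  Caveats: :crux.tx/match and :crux.tx/evict operations are not
  handled, and tx times/IDs are re-assigned by dest-node."
  [src-node dest-node]
  ;; open-tx-log with a nil after-tx-id starts from the beginning;
  ;; the final `true` asks for the tx-ops to be included in each entry.
  (with-open [log (crux/open-tx-log src-node nil true)]
    (doseq [{:crux.tx/keys [tx-ops]} (iterator-seq log)]
      (crux/submit-tx dest-node tx-ops))))
```

You'd likely also want to await the last submitted transaction on the destination node before cutting over, and filter or transform entries in the `doseq` body if the two backends need different documents.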
Do you think it would be acceptable if transaction times & transaction IDs are not preserved in the migration? Whilst preserving them should also be possible in most (all?) cases, it would be significantly more complex.
Great to hear that tooling for migrating backends is on the roadmap. I don't need it with any urgency, but knowing that it's coming takes away some of the fear of getting my initial setup "perfect". Preserving transaction times and IDs wouldn't be important for my use case, which is just basic CRUD apps.
Hi there, I have been playing with Crux a little bit recently. I noticed that for single node use cases, SQLite is an option for the doc, log (and index?) store. I'm wondering if anyone tried https://dqlite.io in a multi-node setting.
Hi 👋 SQLite isn't supported as a back-end for the KV indexes today. I can't think of a reason why it wouldn't be possible to make it work but the performance won't be nearly as good as with Rocks or LMDB. These ~distributed SQLite stores do look interesting though. I've not had the chance to play with any of them yet - count me interested also!
The KV protocol is fairly small if you fancied the challenge though, here's a Redis-based prototype I wrote: https://github.com/crux-labs/crux-redis/blob/master/src/crux/redis.clj
maybe I'm dreaming, but if crux were to integrate arrow into the indexes, would it then be possible to use something like https://github.com/techascent/tech.ml.dataset in situ, i.e. on the local node without any ETL?