This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2023-10-25
Channels
- # announcements (22)
- # babashka (9)
- # beginners (33)
- # biff (12)
- # calva (17)
- # cider (64)
- # cljdoc (3)
- # cljfx (16)
- # clojure (125)
- # clojure-bay-area (14)
- # clojure-europe (15)
- # clojure-norway (64)
- # clojure-uk (2)
- # clojurescript (7)
- # conjure (1)
- # core-async (4)
- # cursive (6)
- # data-science (14)
- # datahike (8)
- # datomic (6)
- # defnpodcast (4)
- # emacs (5)
- # events (1)
- # hyperfiddle (15)
- # leiningen (17)
- # lsp (8)
- # membrane (27)
- # off-topic (25)
- # podcasts-discuss (4)
- # polylith (6)
- # portal (21)
- # reagent (11)
- # releases (1)
- # shadow-cljs (36)
- # slack-help (2)
- # sql (1)
- # squint (131)
- # testing (12)
- # xtdb (7)
Hello! Unsure where to post this, but datahike is a replikativ project, so this seemed OK. I noticed the https://replikativ.io/ cert expired on September 1st.
I’m also curious about what happened to replikativ itself (i.e. the P2P CRDT and replication bits, https://github.com/replikativ/replikativ). Attention seems to have moved to datahike, although I realize many replikativ support libraries are shared between the two projects (konserve, kabel, etc.).
I think @U1C36HC6N is still very much into this, and with the new distributed capabilities Datahike has taken a step in this direction again.
@U09R86PA4 Yes, in fact there is an underlying unified lattice-based perspective on Datalog and replikativ. On the practical side, kabel and replikativ need dependency updates and then need to be integrated with the way Datahike stores data. I have also worked on a variant of https://speakerdeck.com/ept/data-structures-as-queries-expressing-crdts-using-datalog?slide=22 for CRDTs. What in particular are you interested in about replikativ?
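(For context on the linked talk: its core idea is that a CRDT's read semantics can be phrased as a Datalog query over a monotonically growing set of facts. A minimal sketch of that idea in Datahike/DataScript-style query syntax; the :add/* and :remove/* attribute names are made up for illustration and are not part of replikativ or Datahike.)
```clojure
;; Sketch only: reading an observed-remove set via a Datalog query.
;; Facts record additions (with a unique tag) and removals of tags.
;; An element is in the set if at least one of its add-tags has not been removed.
(d/q '[:find ?elem
       :where
       [?a :add/elem ?elem]
       [?a :add/tag  ?tag]
       (not [_ :remove/tag ?tag])]
     @conn)
```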
My interest was in using it as a framework for local-first, tenanted data sources (a web app platform with cloud replicas) with peer synchronization and replication. I stopped following the project about 4 years ago because my job and the problems I was working on changed. I recently had reason to be reminded of replikativ, and then serendipitously the next day there was a big Datahike release, so I reached out.
Nice. Yes, Datahike is in a sense a bit more pragmatic, since it lets you cover distributed read scaling with a well-behaved data retrieval model through Datalog queries. replikativ is more general, but requires more upfront investment in building a CRDT data model for your domain. Being able to share read-only data in a unified query-language environment (a "distributed index space", as I coined it) is a good intermediate step toward better distributed programming, I think.
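(For anyone landing here without Datahike experience, a minimal sketch of the Datalog read model being referred to. This is not code from the thread; the config keys and datahike.api calls follow the project's documentation, but check the current docs before relying on them.)
```clojure
(require '[datahike.api :as d])

;; In-memory store config; :schema-flexibility :read lets us transact
;; attributes without declaring a schema first.
(def cfg {:store {:backend :mem :id "example"}
          :schema-flexibility :read})

(d/create-database cfg)
(def conn (d/connect cfg))

(d/transact conn [{:user/name "Ada" :user/language "Clojure"}])

;; Datalog query against an immutable snapshot of the database.
(d/q '[:find ?name
       :where [?e :user/language "Clojure"]
              [?e :user/name ?name]]
     @conn)
;; => #{["Ada"]}
```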