This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
- # aleph (1)
- # aws (4)
- # aws-lambda (5)
- # beginners (85)
- # cider (39)
- # cljs-dev (3)
- # cljsrn (1)
- # clojars (1)
- # clojure (129)
- # clojure-italy (14)
- # clojure-nl (5)
- # clojure-nlp (1)
- # clojure-uk (61)
- # clojurescript (52)
- # cursive (3)
- # datomic (42)
- # duct (3)
- # emacs (9)
- # fulcro (60)
- # graphql (2)
- # juxt (2)
- # keechma (1)
- # leiningen (4)
- # midje (2)
- # off-topic (8)
- # onyx (3)
- # overtone (1)
- # re-frame (22)
- # reagent (51)
- # reitit (3)
- # remote-jobs (3)
- # ring (4)
- # ring-swagger (1)
- # rum (4)
- # shadow-cljs (14)
- # specter (28)
- # tools-deps (85)
- # vim (9)
From the reading I’ve done, Datomic seems not to be the right tool for the job, but I figure I’ll ask here in case someone wants to prove me wrong:
We’re looking to implement n append-only logs as stacks (LIFO), where the data is unrelated and the stacks should not inhibit each other’s performance. It’s important that we can read a log in reverse order and stop on a predicate, think
take-while. In case it matters, we’re using Datomic Cloud.
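The access pattern above can be sketched in plain Clojure (not Datomic) — each stack is an independent append-only vector, read newest-first until a predicate fails:

```clojure
;; Minimal sketch of the requirement: one independent log per stack,
;; appended at the end, read newest-first with take-while.
(def log (atom []))

(defn append! [entry]
  (swap! log conj entry))

(defn read-until
  "Newest-first entries while pred holds — take-while in reverse order."
  [pred]
  (take-while pred (rseq @log)))

(append! {:id 1 :v 10})
(append! {:id 2 :v 20})
(append! {:id 3 :v 30})

(read-until #(> (:v %) 15))   ; => ({:id 3, :v 30} {:id 2, :v 20})
```

`rseq` on a vector is O(1) to obtain and lazy to consume, so stopping early on the predicate never touches older entries.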
I'm thinking of using a Datomic-like system (maybe Datascript) to implement a system where multiple offline clients are editing the same document. So essentially there are multiple replicas of the db (or document), and clients can write to their replica optimistically even without connectivity to the "leader" node (in the cloud). When the clients come back online, the server receives the writes from the clients and needs to return the authoritative tx log, determining the order of the concurrent writes.
Has anyone used a Datomic-like system for collaborative editing (a kind of multi-leader distributed db)?
I'm especially curious what happens when node1 and node2 have concurrent writes (tx1 and tx2). The central node (node0) needs to either apply tx1 first and then tx2, or the other way round. In each case the tx-log would be different from what one of the clients expected. Would this lead to problems?
but if you keep “prev” state around on client 2 you can rearrange his txs so that the order would become tx1→tx2 on client2 too
so you would have the server reject tx2 because tx1 has already been applied, and leave rearranging tx2 to node2?
I'm not too worried about conflicts for now. If node1 and node2 both transact [:db/add 1234 :person/name "Joe"] and [:db/add 1234 :person/name "Jeff"] respectively, Last Write Wins would be an acceptable resolution
I'm more concerned about getting the database into an inconsistent state by transacting in the wrong order, or not being able to apply the second transaction for some reason.
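For what it's worth, last-write-wins falls out naturally in DataScript for a cardinality-one attribute: once the server fixes a total order, the later assertion simply replaces the earlier one. A minimal sketch, assuming DataScript and the `:person/name` attribute from the example above:

```clojure
;; Sketch, assuming DataScript: with a cardinality-one attribute,
;; replaying the server's canonical tx order gives last-write-wins.
(require '[datascript.core :as d])

(def conn (d/create-conn {}))  ; attributes default to cardinality-one

;; Server decided node1's tx comes first, node2's second:
(d/transact! conn [[:db/add 1234 :person/name "Joe"]])
(d/transact! conn [[:db/add 1234 :person/name "Jeff"]])

(d/q '[:find ?n . :where [1234 :person/name ?n]] @conn)
;; => "Jeff" — the last write in the canonical order wins
```

Note this only resolves value-level conflicts; if the ordering itself matters semantically (e.g. refs between entities created concurrently), a plain replay may still need application-level checks.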
so that tx2 is first applied locally, but when server confirmation comes it’s “undone” and tx1 + tx2 applied
yes, my primary goal would be to have identical dbs (identical lists of datoms) on every node...
so "eventual consistency", i.e. identical dbs after reestablishing connectivity with the central server
in effect, the nodes would fork the db for optimistic updates, then discard the fork when the server returns the canonical tx log
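The fork-and-discard idea above can be sketched with DataScript's immutable db values: keep the last server-confirmed db, layer pending local txs on top with `d/db-with`, and on reconnect throw the fork away and replay the server's canonical log. All names here are hypothetical:

```clojure
;; Sketch, assuming DataScript. The db value is immutable, so the
;; "fork" is just the confirmed db plus unconfirmed local txs.
(require '[datascript.core :as d])

(defonce confirmed-db (atom (d/empty-db)))  ; last server-confirmed state
(defonce local-txs    (atom []))            ; optimistic, unconfirmed txs

(defn optimistic-db
  "Current local view: confirmed state plus pending local txs."
  []
  (reduce d/db-with @confirmed-db @local-txs))

(defn server-sync!
  "Discard the fork; rebuild from the server's canonical tx log."
  [canonical-tx-log]
  (reset! confirmed-db (reduce d/db-with (d/empty-db) canonical-tx-log))
  (reset! local-txs []))

;; node2 writes tx2 locally; the server later orders tx1 before tx2:
(swap! local-txs conj [[:db/add 1 :note/text "from node2"]])
(server-sync! [[[:db/add 2 :note/text "from node1"]]
               [[:db/add 1 :note/text "from node2"]]])
```

Because every node rebuilds from the same canonical log, all replicas converge on an identical list of datoms after sync.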
I was thinking of rolling my own system for this, but Datascript already solves so many of these problems it seems like a waste to ignore it
In Datascript, is it possible (or necessary?) to serialize the eavt/aevt indexes? I see that
(-> @conn pr-str cljs.reader/read-string) only serializes the datoms, so it may take some time to rebuild the indexes for a longer document when reading from, e.g., JSON.
all indexes store the same datoms, just sorted in a different order. Yes it takes time to sort them on DB import, whether it’s important or not is up to you
so really in an in-mem db, a "covering index" is just an index, because there's no need to store the same datom twice
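A concrete round-trip along these lines, assuming DataScript: export only the flat datom list, then rebuild the db (and all its sorted indexes) with `d/init-db`, which sorts the datoms on import instead of replaying transactions:

```clojure
;; Sketch, assuming DataScript: serialize just the datoms, rebuild
;; the eavt/aevt/avet indexes on import with init-db (a sort, not
;; a transaction replay).
(require '[datascript.core :as d])

(def db (d/db-with (d/empty-db)
                   [[:db/add 1 :doc/text "hello"]
                    [:db/add 2 :doc/text "world"]]))

;; "Serialize": flat [e a v] vectors, suitable for edn or JSON
(def flat (mapv (fn [dm] [(:e dm) (:a dm) (:v dm)])
                (d/datoms db :eavt)))

;; "Deserialize": reconstruct datoms and let init-db sort them
(def db2 (d/init-db (map (fn [[e a v]] (d/datom e a v)) flat)))

(= (mapv (juxt :e :a :v) (d/datoms db  :eavt))
   (mapv (juxt :e :a :v) (d/datoms db2 :eavt)))   ; => true
```

The sort in `init-db` is the unavoidable cost mentioned above; for a long document it is still typically much cheaper than transacting the datoms one tx at a time.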
@pesterhazy i’d check out couchdb too as i believe it was designed with your use case in mind
@spieden right, especially given that there already is pouchdb, which runs in the browser
@pesterhazy "multiple offline clients are editing the same document" sounds like the JSON CRDT-ish automerge: https://github.com/automerge/automerge
@pesterhazy I have experience with operational transformations. I'm investigating the json crdt things. A lot depends on what kind of conflicts are possible or need to be supported
@thegeez agreed. I think in my particular case, conflicts can be resolved with a simple last-write-wins strategy
That's why I'm investigating a Datomic-like data structure as a basis for real-time collaborative editing
I might be totally missing this in the reference material, but how do I tear down what I've deployed using Datomic ions? Or should I actually remove all functions from my ion config and deploy "nothing"?
I think this would make a great forum topic. I might copy it over there tomorrow, but essentially Ions are immutable, there is no tear down 🙂. If you’re concerned with the added noise you can deploy an empty ion to clean up.