#datomic
2018-06-11
denik12:06:09

from the reading I’ve done, Datomic seems not to be the right tool for the job, but I figure I’ll ask here in case someone wants to prove me wrong: we’re looking to implement n append-only logs as stacks (LIFO), where the data is unrelated and the stacks should not inhibit each other’s performance. It’s important that we can read a log in reverse order and stop on a predicate, think take-while. In case it matters, we’re using Datomic Cloud.
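The read pattern described here (newest-first, stopping on a predicate) can be sketched in plain Clojure; the vector-backed log and the `read-stack` name are illustrative assumptions, not part of any Datomic API:

```clojure
;; Hypothetical sketch: each stack is an append-only vector,
;; read newest-first, stopping as soon as the predicate fails.
(defn read-stack
  "Return entries from the top of the stack (most recent first)
   while `pred` holds — i.e. (take-while pred ...) over the reversed log."
  [log pred]
  (take-while pred (rseq log)))

(read-stack [1 2 3 9 10] #(> % 5)) ;; => (10 9)
```

Since `rseq` on a vector is O(1) and lazy, this stops scanning as soon as `pred` returns false, which matches the take-while requirement.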

chrisblom13:06:10

@denik sounds like you want something like kafka really

chrisblom13:06:44

oh wait, you want stacks not queues

denik13:06:42

also never care to delete data

pesterhazy13:06:11

I'm thinking of using a Datomic-like system (maybe Datascript) to implement a system where multiple offline clients are editing the same document. So essentially there are multiple replicas of the db (or document), and clients can write to their replica optimistically even without connectivity to the "leader" node (in the cloud). When the clients come back online, the server receives the writes from the clients and needs to return the authoritative tx log, determining the order of the concurrent writes.

pesterhazy13:06:27

Has anyone used a Datomic-like system for collaborative editing (a kind of multi-leader distributed db)?

pesterhazy13:06:01

I'm especially curious what happens when node1 and node2 have concurrent writes (tx1 and tx2). The central node (node0) needs to either apply tx1 first and then tx2, or the other way round. In each case the tx-log would be different from what one of the clients expected. Would this lead to problems?

Niki13:06:05

it will lead to logical conflicts, sure

Niki13:06:47

but if you keep “prev” state around on client 2 you can rearrange his txs so that the order would become tx1→tx2 on client2 too

Niki13:06:55

still doesn’t solve any conflicts though

pesterhazy14:06:55

so you would have the server reject tx2 because tx1 has already been applied, and leave rearranging tx2 to node2?

pesterhazy14:06:24

I'm not too worried about conflicts for now. If node1 and node2 both transact [:db/add 1234 :person/name "Joe"] and [:db/add 1234 :person/name "Jeff"] respectively, Last Write Wins would be an acceptable resolution
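A last-write-wins merge over [e a v t] tuples can be sketched in plain Clojure; the `lww` name and the map-shaped result are illustrative assumptions, not DataScript or Datomic API:

```clojure
;; Hypothetical sketch: given concurrent [e a v t] assertions,
;; keep only the value with the highest t for each [e a] pair.
(defn lww
  "Reduce datoms to a map of [e a] -> v, letting the latest t win."
  [datoms]
  (->> (sort-by #(nth % 3) datoms)                   ; apply in t order
       (reduce (fn [m [e a v _t]] (assoc m [e a] v)) {})))

(lww [[1234 :person/name "Joe" 100]
      [1234 :person/name "Jeff" 101]])
;; => {[1234 :person/name] "Jeff"}
```

Whichever tx the server orders last simply overwrites the earlier value, which is the resolution described above.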

pesterhazy14:06:22

I'm more concerned about getting the database into an inconsistent state by transacting in the wrong order, or not being able to apply the second transaction for some reason.

Niki14:06:02

Datomic/DataScript transactions are simple enough

Niki14:06:36

if you can’t apply a tx just throw it away. It’s kind of a conflict too

pesterhazy14:06:55

I guess that's true

Niki14:06:01

but I recommend building a system that maintains same order at all instances

Niki14:06:39

so that tx2 is first applied locally, but when server confirmation comes it’s “undone” and tx1 + tx2 applied
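The undo-and-replay scheme described here can be sketched abstractly; below, the db is just a value and `apply-tx` is any pure function of db and tx — all names are illustrative assumptions, not Datomic/DataScript API:

```clojure
;; Hypothetical sketch: a client keeps the last server-confirmed db value,
;; applies local txs optimistically, and rebases when the server's
;; canonical tx order arrives.
(defn rebase
  "Discard the optimistic fork: start from the confirmed db value,
   replay the server's txs in canonical order, then any still-pending
   local txs."
  [confirmed-db server-txs pending-txs apply-tx]
  (reduce apply-tx confirmed-db (concat server-txs pending-txs)))

;; Toy example where a "db" is a vector and applying a tx is conj:
(rebase [] [:tx1] [:tx2] conj) ;; => [:tx1 :tx2]
```

The local tx2 is effectively "undone" by throwing the forked value away and replaying from the confirmed state, so every node converges on the tx1→tx2 order.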

pesterhazy14:06:45

yes, my primary goal would be to have identical dbs (identical lists of datoms) on every node...

pesterhazy14:06:17

so "eventual consistency", i.e. identical dbs after reestablishing connectivity with the central server

pesterhazy14:06:33

in effect, the nodes would fork the db for optimistic updates, then discard the fork when the server returns the canonical tx log

pesterhazy14:06:08

I was thinking of rolling my own system for this, but Datascript already solves so many of these problems it seems like a waste to ignore it

pesterhazy14:06:19

In Datascript, is it possible (or necessary?) to serialize the eavt/aevt indexes? I see that (-> @conn pr-str cljs.reader/read-string) only serializes the datoms, so it may take some time to rebuild the index for a longer document when reading from, e.g., JSON.

pesterhazy14:06:25

Or am I overthinking this?

Niki14:06:41

all indexes store the same datoms, just sorted in a different order. Yes it takes time to sort them on DB import, whether it’s important or not is up to you
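The point that all indexes hold the same datoms, just in different sort orders, can be illustrated in plain Clojure; the tuples and sort keys below are illustrative, not DataScript internals:

```clojure
;; Hypothetical sketch: the same [e a v t] datoms, sorted two ways.
(def datoms [[2 :person/age 30 101]
             [1 :person/name "Joe" 100]
             [1 :person/age 40 102]])

(def eavt (sort-by (juxt first second) datoms))  ; entity-first order
(def aevt (sort-by (juxt second first) datoms))  ; attribute-first order

;; Different orderings, identical contents:
(= (set eavt) (set aevt)) ;; => true
```

This is why importing a serialized db only needs the flat list of datoms: each index can be rebuilt by re-sorting, at the cost of the sort time Niki mentions.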

pesterhazy14:06:09

so really in an in-mem db, a "covering index" is just an index, because there's no need to store the same datom twice

spieden17:06:00

@pesterhazy i’d check out couchdb too as i believe it was designed with your use case in mind

pesterhazy17:06:13

@spieden right, especially given that there already is pouchdb, which runs in the browser

thegeez18:06:17

@pesterhazy "multiple offline clients are editing the same document" sounds like the JSON CRDT-ish automerge: https://github.com/automerge/automerge

pesterhazy18:06:12

@thegeez I saw that, the paper that describes automerge is on my reading list

pesterhazy18:06:41

Do you have experience in this field?

thegeez18:06:07

@pesterhazy I have experience with operational transformations. I'm investigating the json crdt things. A lot depends on what kind of conflicts are possible or need to be supported

pesterhazy18:06:41

@thegeez agreed. I think in my particular case, conflicts can be resolved with a simple last-write-wins strategy

pesterhazy18:06:00

The simplicity of storing data as [e a v t] tuples is appealing

pesterhazy18:06:49

That's why I'm investigating a Datomic-like data structure as a basis for real-time collaborative editing

pesterhazy18:06:13

I guess EAV tuples are familiar from RDF, not just from Datomic

cjsauer21:06:44

I might be totally missing this in the reference material, but how do I tear down what I've deployed using Datomic ions? Or should I actually remove all functions from my ion config and deploy "nothing"?

jaret02:06:00

I think this would make a great forum topic. I might copy it over there tomorrow, but essentially Ions are immutable; there is no tear down 🙂. If you’re concerned with the added noise you can deploy an empty ion to clean up.

cjsauer14:06:44

I'll be on the lookout for the post. Thanks Jaret 🍻