Does anyone here use Datomic with multiple Couchbase clusters and cross-datacenter replication (XDCR)?
@ljosa: Datomic is, in general, not designed to support cross-datacenter operation with one Transactor pair.
Cross-datacenter replication strategies usually allow data to diverge between the two datacenters, with some kind of arbitrary rule for conflict resolution. This is not a strong enough guarantee to preserve Datomic's consistency model.
For example, for Couchbase, http://docs.couchbase.com/admin/admin/XDCR/xdcr-architecture.html 'XDCR … provides eventual consistency across clusters. If a conflict occurs, the document with the most updates will be considered the “winner.” '
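The problem with that conflict rule can be made concrete. Below is a hypothetical sketch (not Couchbase's actual implementation) of "most updates wins" resolution, showing how it can silently discard a committed write — exactly the guarantee Datomic cannot give up:

```python
# Toy "most updates wins" XDCR-style conflict resolution.
# All names here are illustrative, not the Couchbase API.
def resolve_xdcr_conflict(doc_a, doc_b):
    """Pick the replica whose copy of the document has seen more updates."""
    return doc_a if doc_a["revisions"] >= doc_b["revisions"] else doc_b

# The same key is updated independently in two datacenters.
dc_east = {"value": "balance=100", "revisions": 3}
dc_west = {"value": "balance=250", "revisions": 5}

winner = resolve_xdcr_conflict(dc_east, dc_west)
# dc_east's write is discarded entirely; under Datomic's model a
# committed transaction must never be lost this way.
print(winner["value"])  # balance=250
```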
Yes, as @tcrayford says, there is one important piece that is not immutable: the pointer to the "root" of each database value.
Also, the immutable segments are nodes in a tree structure… if the tree has a new root but not all the leaves have been replicated across the datacenters, you would see inconsistent results. Datomic doesn't allow this, so it would appear as unavailability.
Basically, you can't get Datomic's strong consistency guarantees and cross-datacenter (or cross-region) replication at the same time.
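A toy model of the structure described above (immutable segments plus one mutable root pointer — the names are illustrative, not Datomic's internals) shows why a root arriving before its segments shows up as unavailability rather than wrong answers:

```python
# Hypothetical model: immutable segments addressed by key, plus one
# mutable "root" cell. If the root pointer replicates before the
# segments it references, a reader sees a dangling pointer and must
# fail rather than return partial data.
class Storage:
    def __init__(self):
        self.segments = {}   # immutable once written
        self.root = None     # the only mutable cell

def read_db(storage):
    seg = storage.segments.get(storage.root)
    if seg is None:
        raise RuntimeError("segment %r not yet replicated" % storage.root)
    return seg

primary = Storage()
primary.segments["seg-42"] = {"datoms": ["[e a v tx]"]}
primary.root = "seg-42"

replica = Storage()
replica.root = primary.root      # the root pointer arrived first;
                                 # replica.segments is still empty
try:
    read_db(replica)
except RuntimeError as e:
    print(e)  # surfaces as an error (unavailability), never as stale data
```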
I believe conflicts cannot happen in this case because the replication is one-way from the cluster that Datomic writes to. But I see the point that Datomic will be confused if the mutable documents are updated in the wrong order or if the leaves of the tree are delayed. Do you know how Datomic would react in such cases? Would it throw an exception? (That might be OK: from playing with Datomic and XDCR, it seems that replication delays are usually masked because recent datoms are cached in the memory index, which is transferred directly from the transactor to the peers.)
@ljosa: In general Datomic will always prefer an error to returning inconsistent results. But you should be aware that cross-datacenter replication is not a supported use case so anything it does is, by definition, undefined behavior.
I suppose we’ll have to get by with a single Couchbase cluster in a single AZ and hope that caching in the peers together with the memory index is enough to smooth over AWS glitches.
I suppose Datomic must be using strongly consistent reads when it’s running on DynamoDB?
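For the root pointer, a stale read is the failure mode, which is presumably why a strongly consistent read (DynamoDB's `ConsistentRead=True`) would be needed there. A toy replicated store (illustrative names only, not the DynamoDB API) shows the difference between the two read modes:

```python
# Toy leader/replica store: eventually consistent reads hit a lagging
# replica, strongly consistent reads always see the latest write.
class ToyStore:
    def __init__(self):
        self.leader = {}
        self.replica = {}  # updated asynchronously, so it can lag

    def put(self, key, value):
        self.leader[key] = value  # replication happens later

    def replicate(self):
        self.replica.update(self.leader)

    def get(self, key, consistent=False):
        src = self.leader if consistent else self.replica
        return src.get(key)

db = ToyStore()
db.put("root", "seg-42")
print(db.get("root"))                   # None: replica hasn't caught up
print(db.get("root", consistent=True))  # seg-42: always the latest write
```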
some systems (not sure if Dynamo is one) only have problems with consistent reads when (ab)using mutability