2019-10-18
Channels
- # aws (10)
- # beginners (43)
- # calva (1)
- # cider (7)
- # cljs-dev (83)
- # clojure (132)
- # clojure-dev (20)
- # clojure-europe (6)
- # clojure-greece (4)
- # clojure-italy (2)
- # clojure-nl (6)
- # clojure-spec (21)
- # clojure-sweden (16)
- # clojure-uk (21)
- # clojuredesign-podcast (16)
- # clojurescript (74)
- # cursive (41)
- # datomic (7)
- # emacs (3)
- # fulcro (30)
- # graalvm (3)
- # graphql (2)
- # instaparse (1)
- # jobs (1)
- # joker (13)
- # kaocha (14)
- # off-topic (118)
- # pathom (13)
- # re-frame (5)
- # reagent (22)
- # shadow-cljs (67)
- # spacemacs (7)
- # sydney (1)
- # testing (1)
- # tools-deps (82)
- # vim (4)
- # xtdb (1)
I've only been using Datomic Cloud for a few weeks now and I'm really enjoying working with it so far. However, I've unearthed enough limitations of Datomic Cloud that I think we might have to abandon it. I'm hoping someone here might be able to shoot down my observations enough for us to reconsider... Here are the limitations as I see them:
1. No data excision
- This is a real issue if we get a GDPR right-to-erasure request.
- Of course, we could just avoid storing any data that could possibly come under such a request, but that's not ideal.
2. No backups
- I get the whole argument about the robustness of the AWS storage that backs Datomic Cloud, but we still want a disaster recovery plan if something goes wrong.
- (Or if we just want to restore to a point in time.)
3. No transaction context
- Other databases have the concept of a transaction "context" which one can wrap multiple commands in - if one command fails, they all fail.
- E.g. if we want to store some data AND publish it externally, we've no way of ensuring that the storage operation and the publishing operation succeed or fail together (see the sketch just below).
Of course, if we were using Datomic on-prem, points 1 and 2 would be resolved. And I'm hoping that I'm just missing something obvious with point 3.
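For context on point 3: a single Datomic transaction is itself atomic, so multiple assertions submitted in one `d/transact` call all succeed or fail together; the gap described above is that this atomicity can't extend to an external system. A minimal sketch using the Datomic Cloud client API, where `conn` and the `:order/*` / `:audit/*` attributes are assumptions for illustration:

```clojure
(require '[datomic.client.api :as d])

;; Everything in a single :tx-data vector is written atomically *within
;; Datomic*: either both the order entity and the transaction annotation
;; below are asserted, or neither is. The :order/* and :audit/* attributes
;; are hypothetical and would need schema.
(defn place-order!
  [conn order-id]
  (d/transact conn
              {:tx-data [{:order/id     order-id
                          :order/status :order.status/placed}
                         {:db/id        "datomic.tx"   ; annotate the reified tx entity
                          :audit/reason "customer checkout"}]}))
```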
When you say "publish it externally", unless you and the external system are doing some kind of two-phase commit, I'm not sure you can make both atomic. If the external publishing succeeds but the response times out, your transaction will fail, right?
Yes, that's true, although it's not actually an issue in my scenario. I'll be a bit more specific. What I'm thinking of is a DB write + Kafka produce scenario. So the typical approach would be 1) write to the DB; 2) produce to Kafka. If (2) fails, roll the whole transaction back. If (2) times out for some reason, then retrying the whole transaction is fine, as it'll produce a duplicate Kafka message (which downstream consumers will recognise as a duplicate and ignore).
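One common way around this (not something from the thread, just a sketch of the transactional-outbox pattern): write the business data and an outbox entry in the same Datomic transaction, which is atomic, then have a separate relay produce unpublished entries to Kafka, relying on the downstream de-duplication already described. The `:outbox/*` attributes and the `publish-to-kafka!` function are hypothetical.

```clojure
(require '[datomic.client.api :as d])

(defn write-with-outbox!
  "Transacts the business tx-data together with an outbox entry in ONE
  Datomic transaction, so both are stored atomically or not at all."
  [conn business-tx-data payload]
  (d/transact conn
              {:tx-data (conj business-tx-data
                              {:outbox/id        (str (java.util.UUID/randomUUID))
                               :outbox/payload   payload
                               :outbox/published false})}))

(defn relay-outbox!
  "Finds unpublished outbox entries, produces each to Kafka, then marks it
  published. A crash between produce and mark just means a duplicate message,
  which the downstream consumers already recognise and ignore."
  [conn publish-to-kafka!]
  (doseq [[eid payload] (d/q '[:find ?e ?payload
                               :where
                               [?e :outbox/published false]
                               [?e :outbox/payload ?payload]]
                             (d/db conn))]
    (publish-to-kafka! payload)
    (d/transact conn {:tx-data [[:db/add eid :outbox/published true]]})))
```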
So, I guess what I really see as the limitation is that there's no way to roll back a Datomic transaction.
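There's no built-in rollback, but since Datomic is accumulate-only, a common community workaround (a sketch, not an official API) is to "undo" a committed transaction by transacting the inverse of each datom it asserted or retracted. This assumes `conn` is a Datomic Cloud client connection and that datoms can be destructured positionally as `[e a v tx added]`.

```clojure
(require '[datomic.client.api :as d])

(defn rollback-tx-data
  "Builds tx-data that reverses the datoms of the transaction at basis `t`,
  skipping the datoms on the transaction entity itself (e.g. :db/txInstant)."
  [conn t]
  (let [{:keys [data]} (first (d/tx-range conn {:start t :end (inc t)}))
        tx-eid         (nth (first data) 3)]       ; every datom carries the tx id in slot 3
    (into []
          (comp (remove (fn [[e]] (= e tx-eid)))   ; leave the reified tx entity alone
                (map (fn [[e a v _tx added?]]
                       (if added?
                         [:db/retract e a v]       ; undo an assertion
                         [:db/add e a v]))))       ; undo a retraction
          data)))

;; Usage: (d/transact conn {:tx-data (rollback-tx-data conn some-t)})
```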