#datomic
2019-10-18
kelveden14:10:49

I've only been using Datomic Cloud for a few weeks now and I'm really enjoying working with it so far. However, I've unearthed enough limitations of Datomic Cloud that I now think we might have to abandon it. I'm hoping someone here might be able to shoot down my observations enough for us to reconsider... Here are the limitations as I see them:
1. No data excision. This is a real issue if we get a GDPR right-to-erasure request. Of course, we could just avoid storing any data that could possibly come under such a request, but that's not ideal.
2. No backups. I get the whole argument about the robustness of the AWS storage that backs Datomic Cloud, but we still want a disaster recovery plan if something goes wrong. (Or if we just want to restore to a point in time.)
3. No transaction context. Other databases have the concept of a transaction "context" which one can wrap multiple commands in; if one command fails, they all fail. E.g. if we want to store some data AND publish it externally, we've no way of ensuring that the storage operation and the publishing operation only succeed if BOTH succeed.
If we were using Datomic on-prem, points 1 and 2 would be resolved. And I'm hoping that I'm just missing something obvious with point 3.

jaihindhreddy14:10:51

When you say "publish it externally", unless you and the external system are doing some kind of two-phase commit, I'm not sure you can make the two operations atomic. If the external publish succeeds but the response times out, your transaction will fail, right?

kelveden14:10:40

Yes, that's true, although it's not actually an issue in my scenario. I'll be a bit more specific: what I'm thinking of is a DB write + Kafka produce scenario. So the typical approach would be 1) write to the DB; 2) produce to Kafka. If (2) fails, roll the whole transaction back. If (2) times out for some reason, then retrying the whole transaction is fine, as at worst it'll produce a duplicate Kafka message (which downstream consumers will recognise as a duplicate and ignore).
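A minimal sketch of the flow described above, assuming the Datomic Cloud client API and the Java Kafka producer. The `"events"` topic, the `store-and-publish!` name, and the `:event/id` attribute are illustrative assumptions, not from the thread; `:event/id` is assumed to be `:db.unique/identity` so that a retried write upserts rather than creating a duplicate entity.

```clojure
;; Sketch only: assumes an existing Datomic Cloud connection `conn` and a
;; configured org.apache.kafka.clients.producer.KafkaProducer `producer`.
(ns example.store-and-publish
  (:require [datomic.client.api :as d])
  (:import (org.apache.kafka.clients.producer ProducerRecord)))

(defn store-and-publish!
  "1) write to the DB; 2) produce to Kafka. If (2) fails or times out,
   the caller retries the whole thing; downstream consumers deduplicate
   on the event id, so a duplicate Kafka message is harmless."
  [conn producer {:keys [event/id] :as event}]
  (d/transact conn {:tx-data [event]})                      ; step 1: DB write
  (.get (.send producer                                     ; step 2: produce,
               (ProducerRecord. "events" id (pr-str event))))) ; blocking on ack
```

The `.get` makes the produce synchronous, so a broker failure surfaces as an exception here rather than asynchronously; the gap kelveden is pointing at is that by then step 1 has already committed.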

kelveden14:10:55

So, I guess what I really see as the limitation is that there's no way to roll back a Datomic transaction.

Joe Lane16:10:07

Why wouldn't Datomic transaction functions which publish to Kafka work here?

Joe Lane16:10:48

If the publish to Kafka fails, then you can throw an exception to abort the Datomic commit.

kelveden21:10:51

Thanks. I'd not considered using ions - I've not used them before. It should work though.
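Joe Lane's suggestion might look something like the sketch below: a transaction function, deployed as an ion, that publishes to Kafka synchronously and throws on failure, aborting the whole Datomic transaction. The namespace, topic, broker address, and event attributes are all assumptions for illustration; the function would also need to be listed under `:allow` in `ion-config.edn`.

```clojure
;; Sketch of an ion transaction function. All names here (namespace,
;; "events" topic, :event/* attributes, broker address) are illustrative.
(ns example.txfns
  (:import (org.apache.kafka.clients.producer KafkaProducer ProducerRecord)))

(def producer
  (delay
    (KafkaProducer.
     {"bootstrap.servers" "kafka:9092"   ; assumed broker address
      "key.serializer"    "org.apache.kafka.common.serialization.StringSerializer"
      "value.serializer"  "org.apache.kafka.common.serialization.StringSerializer"})))

(defn save-and-publish
  "Transaction function: runs inside the transaction, so throwing here
   aborts the commit and none of the returned datoms are asserted.
   The converse caveat still applies: if the Kafka send succeeds but the
   transaction itself then fails and is retried, a duplicate message is
   produced, which relies on idempotent downstream consumers."
  [db event-id payload]
  ;; .get blocks until Kafka acks the record, or throws on failure
  (.get (.send @producer (ProducerRecord. "events" event-id payload)))
  ;; tx-data returned by the function is committed atomically
  [{:event/id      event-id
    :event/payload payload}])
```

It would then be invoked as tx-data, e.g. `(d/transact conn {:tx-data [(list 'example.txfns/save-and-publish "e-1" payload)]})`.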