
Does anyone have some handy code for producing an infinite tx-range, where you wait for new transactions once you've exhausted what's already in the db?


The tx-report-queue has properties I don't particularly like, as it fills up memory.

Wes Hall 13:03:23

Not sure who to message with this, but I have a suggestion. I'm using Datomic Cloud and developing against it, which basically means a long-running datomic-socks-proxy process. This was quite painful due to frequent timeouts and disconnects, which kept forcing me to jump across and restart it. I installed autossh instead and hacked the script to use it, and it is now much more stable (and survives my laptop sleeping). I wonder whether it might be worth having the standard script check for an autossh installation and, if found, use that instead (and maybe print a message to the user if not found, before continuing with the regular ssh client). For anybody interested in my little hack, I just commented out the ssh command at the bottom of the script and added the autossh one. Like this:
#ssh -v -i $PK -CND ${SOCKS_PORT:=8182} ec2-user@${BASTION_IP}
autossh -M 0 -o "ServerAliveInterval 5" -o "ServerAliveCountMax 3" -v -i $PK -CND ${SOCKS_PORT:=8182} ec2-user@${BASTION_IP}


@dominicm the onyx datomic plugin has a tx-range poll with backoff


@robert-stuttaford That's not far from where I've ended up. Except with fewer volatiles 😛. A shame there isn't a clever solution to have only a sequence/channel of some kind.
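For reference, a minimal sketch of that poll-with-backoff idea using the on-prem Peer API (`d/log` / `d/tx-range`); the sleep interval is arbitrary and the fn names are mine, not from the Onyx plugin:

```clojure
(require '[datomic.api :as d])

;; Lazily walk the transaction log; when we catch up to the head,
;; sleep `poll-ms` and try again, yielding an "infinite" tx-range.
(defn infinite-tx-range
  [conn start-t poll-ms]
  (lazy-seq
    (if-let [txs (seq (d/tx-range (d/log conn) start-t nil))]
      (concat txs
              (infinite-tx-range conn (inc (:t (last txs))) poll-ms))
      (do (Thread/sleep poll-ms)
          (infinite-tx-range conn start-t poll-ms)))))
```

Unlike the tx-report-queue, this pulls from the log on demand rather than buffering reports in memory, at the cost of polling latency.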


@(d/transact conn [[:custom-fn1][:custom-fn2]])
Is it possible for custom-fn2 to see the changes caused by custom-fn1?


@lboliveira No - the atomicity of transactions means that there is no ‘before’ or ‘after’ within a transaction - everything either happens or doesn’t and it all occurs “at the same time”


you can enforce arbitrary constraints within a single transaction function, but there is no way to “inspect” the results of one txn-fn from another


@marshall Thanks for the clarification.


oh. one thing to note - you can call a txn function from within a txn-fn


which might provide the semantics you need


but two top-level calls (like you’ve shown) can’t interact


could you please show an example of calling a txn function from within a txn-fn ?


the output of any transaction function must be only valid tx-data


essentially lists that look like: [[:db/add 124215 :db/doc "test"] [:db/retract 11111 :person/name "Marshall"]]


you can emit something like [[:my-fun-2 arg1 arg2]]


which is valid tx-data, and when it’s processed it will invoke the :my-fun-2 txn function
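A sketch of that nesting, with two hypothetical tx fns (assumes the on-prem Peer API's `d/function`):

```clojure
(require '[datomic.api :as d])

;; :set-doc produces raw tx-data.
(def set-doc
  (d/function '{:lang   :clojure
                :params [db eid doc]
                :code   [[:db/add eid :db/doc doc]]}))

;; :touch emits a *call* to :set-doc rather than raw tx-data;
;; Datomic expands that call in turn before committing.
(def touch
  (d/function '{:lang   :clojure
                :params [db eid]
                :code   [[:set-doc eid "touched"]]}))

;; After installing both fns (transacting {:db/ident :set-doc :db/fn set-doc} etc.),
;; @(d/transact conn [[:touch some-eid]]) expands :touch, whose output
;; [[:set-doc some-eid "touched"]] is itself expanded before commit.
```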


keep in mind that all transaction functions are serialized in the single writer thread of the transactor, so i tend to avoid them if possible


thank you. I am experimenting with transaction functions now and they are great.


I promise I will take care. 😃


@lboliveira Think of tx-fns as something like macro-expansion


@lboliveira If you really need to see the result of a tx and do something conditionally, you can make a tx fn whose argument is a full transaction, call d/with inside that tx fn to see what the db result would be, then emit the final tx
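A hedged sketch of that pattern; the fn name and the `valid?` predicate are placeholders:

```clojure
(require '[datomic.api :as d])

;; Hypothetical tx fn: speculatively apply the proposed tx-data with
;; d/with, inspect the would-be result, then either emit the final
;; tx-data or abort the whole transaction.
(def checked-tx
  (d/function
    '{:lang   :clojure
      :params [db tx-data]
      :code   (let [{db-after :db-after} (datomic.api/with db tx-data)]
                (if (valid? db-after)          ; placeholder invariant check
                  tx-data
                  (throw (ex-info "invariant violated" {:tx tx-data}))))}))
```

Throwing inside the tx fn aborts the transaction, so nothing is committed unless the speculative result passes the check.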


mind blowing


You can also do this on the peer, but include a tx fn which asserts some invariant


so the tx fails if something else happened in the meantime which would have invalidated your conditional


the peer would have to know to regenerate the tx and retry (and eventually to give up)
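The peer-side retry loop described above might look like this; `:assert-invariant` is a hypothetical guard tx fn and `make-tx` a function from a db value to tx-data:

```clojure
(require '[datomic.api :as d])

;; Build the tx against a current db value, append an invariant-asserting
;; tx fn, and retry a bounded number of times if the guard throws because
;; a conflicting write landed in the meantime.
(defn transact-with-retry
  [conn make-tx max-tries]
  (loop [tries 1]
    (let [db (d/db conn)
          tx (conj (make-tx db) [:assert-invariant])]
      (or (try @(d/transact conn tx)
               (catch Exception _ nil))
          (when (< tries max-tries)
            (recur (inc tries)))))))
```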


@favila Thank you. You gave me soo much to study.


this is exactly what I need. @val_waeselynck


thank you! 😃


@val_waeselynck from what I can gather, your write-up mostly comes from your experience working with Mongo; would you have felt the same if you had come from PostgreSQL?


@U92K3MU66 yes, the differences between MongoDB and PostgreSQL are mostly insignificant for this discussion. Both of them are "mutable-cells", client-server, "monolithic process" DBMS.


Note: that does NOT mean that the differences between MongoDB and PostgreSQL are irrelevant in general. I personally cannot really think of a use case where MongoDB is an optimal choice, except maybe when it's part of some framework like Meteor - whatever problem I consider, I would always replace it with either Datomic, Postgres, ElasticSearch, Firebase, Cassandra, Redis...


oh ok, just thought you were enlightened when going mongo -> datomic, which is no surprise I guess 😉


besides having history for free (which is something you can achieve in Postgres with some effort), do you really feel more productive in Datomic? if so, are the productivity gains worth using Datomic over an open-source, widely used, and mature SQL database like PostgreSQL?


@U92K3MU66 I am curious; how would you achieve "history for free" in Postgres? I've gone through many approaches and have yet to find a satisfactory one


Short answer: yes and yes, mostly because of the workflow, testing, and programmability possibilities Datomic enables.


@U92K3MU66 have you used this approach in production?


@U92K3MU66 from what I gather, this approach is much more coarse-grained (rows) than what Datomic offers.


yes, it works if you just want history on data


@val_waeselynck Hi! I’ll take the opportunity of having you around to ask a quick question: on what order of magnitude does your business operate in terms of number of datoms?


The trend is: more and more of the schema lives in Datomic, and more and more of the data lives outside 🙂


@val_waeselynck oh, how so? I am curious 😛 Why do you push data out of Datomic?


And I assume you ensure that the external data storage is immutable?


doesn’t that impede querying a bit?


Either because of data size (too big to fit in Datomic) or privacy regulations (GDPR). Querying is solved by sending data to materialized views, which is unusually easy in Datomic, because Change Data Capture is trivial to implement.
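One way to do the change-data-capture side on a peer is to consume the tx-report-queue; this sketch assumes a hypothetical `update-view!` that pushes each report to the downstream store (ElasticSearch, Postgres, ...):

```clojure
(require '[datomic.api :as d])

;; Consume transaction reports as they arrive and feed them to a
;; materialized-view updater. Each report carries the tx's datoms
;; (:tx-data) plus the db value after the tx (:db-after).
(defn start-cdc!
  [conn update-view!]
  (let [queue (d/tx-report-queue conn)]
    (future
      (loop []
        (let [{:keys [db-after tx-data]} (.take queue)]
          (update-view! db-after tx-data))
        (recur)))))
```

In production you'd also want error handling and a way to resume from the log (`d/tx-range`) after a restart, since the report queue only sees transactions while the peer is up.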


@val_waeselynck how does it help with GDPR? Wouldn’t excision in Datomic be an option? Sorry for the flow of questions!


And what’s your preferred out-of-datomic storage option?


Depends on what you need; I like ElasticSearch for fast aggregations and search, Postgres for general-purpose complementary storage, S3 for complementary BLOB storage... no reason to limit yourself really


Datomic excision is useful to check the 'I can delete data' legal box, but not really a practical solution given its performance and availability impact. A complementary store can make it easier to delete data. Will blog about this soon


@val_waeselynck looking forward to the blog post!


@val_waeselynck So just to see if I got this straight: you use Datomic (to some extent) as storage for pointers to immutable data on external storage (unless forced to remove it for compliance), and then build materialised views for querying purposes by listening to the transaction stream?


@U5ZAJ15P0 yes; in some cases (such as personal data) the data in the external storage can be "exceptionally mutable".


@val_waeselynck that’s quite neat actually 🙂 “Datomic as a timeline”


Hi! Is there a recommended solution for backing up Datomic when using Cloud? I found one for on-prem but didn't see anything for Cloud.