#datomic
2018-03-19
dominicm13:03:42

Does anyone have some handy code for producing an infinite tx-range, i.e. one that waits for new transactions once you run out of what's already in the db?

dominicm13:03:02

The tx-report-queue has properties I don't particularly like; it buffers transactions in memory without bound if the consumer falls behind.

Wes Hall13:03:23

Not sure who to message with this, but I have a suggestion. I'm using Datomic Cloud and developing against it, which basically means a long-running datomic-socks-proxy process. This was quite painful due to frequent timeouts and disconnects, which meant I had to keep jumping across and restarting it. I installed autossh instead and hacked the script to use it, and it is now much more stable (and survives my laptop sleeping). I wonder whether it might be worth having the standard script check for an autossh installation and, if found, use that instead (and maybe print a message to the user if not found, before continuing with the regular ssh client). For anybody interested in my little hack, I just commented out the ssh command at the bottom of the script and added the autossh one. Like this...

#ssh -v -i $PK -CND ${SOCKS_PORT:=8182} ec2-user@${BASTION_IP}
autossh -M 0 -o "ServerAliveInterval 5" -o "ServerAliveCountMax 3" -v -i $PK -CND ${SOCKS_PORT:=8182} ec2-user@${BASTION_IP}

robert-stuttaford13:03:34

@dominicm the onyx datomic plugin has a tx-range poll with backoff

dominicm14:03:40

@robert-stuttaford That's not far from where I've ended up, except with fewer volatiles 😛. A shame there isn't a clever solution that exposes just a sequence/channel of some kind.
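
(A minimal sketch of such a poll loop over the peer API; the fixed poll-interval-ms backoff and the fn name are illustrative, not anyone's actual code:)

(require '[datomic.api :as d])

(defn tx-seq
  "Lazy, unbounded seq of log entries starting at t.
  Sleeps when it catches up to the head of the log.
  Caveat: holding the head of this seq retains every tx read so far."
  [conn t poll-interval-ms]
  (lazy-seq
    (if-let [txs (seq (d/tx-range (d/log conn) t nil))]
      (concat txs (tx-seq conn (inc (:t (last txs))) poll-interval-ms))
      (do (Thread/sleep poll-interval-ms)
          (tx-seq conn t poll-interval-ms)))))

;; usage: (doseq [{:keys [t data]} (tx-seq conn nil 500)] (prn t (count data)))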

lboliveira17:03:49

@(d/transact conn [[:custom-fn1] [:custom-fn2]])
Is it possible for :custom-fn2 to see the changes caused by :custom-fn1?

marshall17:03:27

@lboliveira No - the atomicity of transactions means that there is no ‘before’ or ‘after’ within a transaction - everything either happens or doesn’t and it all occurs “at the same time”

marshall17:03:47

you can enforce arbitrary constraints within a single transaction function, but there is no way to “inspect” the results of one txn-fn from another

lboliveira17:03:25

@marshall Thanks for the clarification.

marshall17:03:12

oh. one thing to note - you can call a txn function from within a txn-fn

marshall17:03:23

which might provide the semantics you need

marshall17:03:36

but two top-level calls (like you’ve shown) can’t interact

lboliveira17:03:28

could you please show an example of calling a txn function from within a txn-fn?

marshall17:03:49

the output of any transaction function must be only valid tx-data

marshall17:03:30

essentially lists that look like: [[:db/add 124215 :db/doc "test"] [:db/retract 11111 :person/name "Marshall"]]

marshall17:03:51

you can emit something like [[:my-fun-2 arg1 arg2]]

marshall17:03:20

which is valid tx-data, and when it’s processed it will invoke the :my-fun-2 txn function
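
(Concretely, a hedged sketch; it assumes some :my-fun-2 txn function is already installed, and the string tempid and :db/doc assertion are just for illustration:)

(require '[datomic.api :as d])

;; install :my-fun-1, whose output tx-data includes a call to :my-fun-2
@(d/transact conn
   [{:db/ident :my-fun-1
     :db/fn (d/function
              '{:lang   :clojure
                :params [db arg]
                :code   [[:db/add "note" :db/doc (str "my-fun-1 saw " arg)]
                         [:my-fun-2 arg]]})}])

;; transacting [[:my-fun-1 "hello"]] expands to tx-data that contains
;; [:my-fun-2 "hello"], which the transactor expands in turn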

marshall17:03:50

keep in mind that all transaction functions are serialized in the single writer thread of the transactor, so I tend to avoid them if possible

lboliveira17:03:34

thank you. I am experimenting with transaction functions now and they are great.

lboliveira18:03:51

I promise I will be careful. 😃

favila18:03:20

@lboliveira Think of tx-fns as something like macro-expansion

favila18:03:30

@lboliveira If you really need to see the result of a tx and do something conditionally, you can make a tx fn whose argument is a full transaction, call d/with inside that tx fn to see what the db result would be, then emit the final tx
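
(A sketch of that pattern; the balance invariant and the guarded-tx name are invented for illustration:)

;; txn fn: speculatively apply tx-data with d/with, inspect the resulting
;; db value, and only emit the tx-data if the invariant holds
(def guarded-tx
  (d/function
    '{:lang   :clojure
      :params [db tx-data]
      :code   (let [{:keys [db-after]} (datomic.api/with db tx-data)
                    total (or (datomic.api/q
                                '[:find (sum ?amt) .
                                  :with ?e
                                  :where [?e :account/balance ?amt]]
                                db-after)
                              0)]
                (if (neg? total)
                  (throw (ex-info "balance invariant violated" {:total total}))
                  tx-data))}))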

lboliveira18:03:30

mind blowing

favila18:03:46

You can also do this on the peer, but include a tx fn which asserts some invariant

favila18:03:17

so the tx fails if something else happened in the meantime which would have invalidated your conditional

favila18:03:52

the peer would have to know to regenerate the tx and retry (and eventually to give up)
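
(On the peer, that loop might look like the sketch below. The :assert-basis invariant fn is hypothetical, and comparing basis-t is the bluntest possible invariant, since it fails if any tx at all landed in between; the built-in :db.fn/cas is the finer-grained version of the same idea.)

;; hypothetical invariant fn, installed once: abort unless the db is still
;; at the basis-t the peer built the tx against
@(d/transact conn
   [{:db/ident :assert-basis
     :db/fn (d/function
              '{:lang   :clojure
                :params [db expected-t]
                :code   (if (= (datomic.api/basis-t db) expected-t)
                          []
                          (throw (ex-info "db advanced; regenerate tx" {})))})}])

(defn transact-with-retry
  "build-tx is a fn of db -> tx-data; retries up to max-retries times."
  [conn build-tx max-retries]
  (loop [attempt 0]
    (let [db  (d/db conn)
          tx  (conj (build-tx db) [:assert-basis (d/basis-t db)])
          res (try @(d/transact conn tx)
                   (catch Exception e
                     (if (< attempt max-retries) ::retry (throw e))))]
      (if (= res ::retry)
        (recur (inc attempt))
        res))))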

lboliveira18:03:52

@favila Thank you. You gave me so much to study.

lboliveira22:03:37

this is exactly what I need. @val_waeselynck

lboliveira22:03:46

thank you! 😃

JJ22:03:27

@val_waeselynck from what I can gather, your write-up https://medium.com/@val.vvalval/what-datomic-brings-to-businesses-e2238a568e1c mostly comes from your experience working with Mongo; would you have felt the same if you had come from PostgreSQL?

val_waeselynck22:03:39

@U92K3MU66 yes, the differences between MongoDb and PostgreSQL are mostly insignificant for this discussion. Both of them are "mutable-cells", client-server, "monolithic process" DBMS.

val_waeselynck22:03:27

Note: that does NOT mean that the differences between MongoDB and PostgreSQL are irrelevant in general. I personally cannot really think of a use case where MongoDB is an optimal choice, except maybe when it's part of some framework like Meteor - whatever problem I consider, I would always replace it with either Datomic, Postgres, ElasticSearch, Firebase, Cassandra, Redis...

JJ22:03:02

oh ok, just thought you'd been enlightened when doing mongo -> datomic, which is no surprise I guess 😉

JJ22:03:57

besides having history for free (which is something you can achieve in Postgres with some effort), do you really feel more productive in Datomic? If so, are the productivity gains worth using Datomic over an open-source, widely used, and mature SQL database like PostgreSQL?

hmaurer22:03:49

@U92K3MU66 I am curious; how would you achieve "history for free" in Postgres? I've gone through many approaches and have yet to find a satisfactory one

val_waeselynck22:03:55

Short answer: yes and yes, mostly because of the workflow, testing, and programmability possibilities Datomic enables.

hmaurer22:03:41

@U92K3MU66 have you used this approach in production?

val_waeselynck22:03:41

@U92K3MU66 from what I gather, this approach is much more coarse-grained (rows) than what Datomic offers.

JJ00:03:50

yes, it works if you just want history on data

hmaurer22:03:34

@val_waeselynck Hi! I’ll take the opportunity of having you around to ask a quick question: on what order of magnitude does your business operate in terms of number of datoms?

val_waeselynck22:03:55

The trend is: more and more of the schema lives in Datomic, and more and more of the data lives outside 🙂

hmaurer22:03:44

@val_waeselynck oh, how so? I am curious 😛 Why do you push data out of Datomic?

hmaurer22:03:00

And I assume you ensure that the external data storage is immutable?

hmaurer22:03:14

doesn’t that impede querying a bit?

val_waeselynck22:03:06

Either because of data size (too big to fit in Datomic) or privacy regulations (GDPR). Querying is solved by sending data to materialized views, which is unusually easy in Datomic, because Change Data Capture is trivial to implement.
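
(The CDC half of that can be as small as the sketch below on a peer; update-view! is a placeholder for whatever writes to ElasticSearch/Postgres/etc. The tx-range polling approach from earlier in this log works too, and avoids the unbounded in-memory queue.)

(defn follow-transactions!
  "Feed every committed transaction to update-view!. Runs until interrupted."
  [conn update-view!]
  (let [queue (d/tx-report-queue conn)] ; a java.util.concurrent.BlockingQueue
    (future
      (loop []
        (let [{:keys [db-after tx-data]} (.take queue)]
          (update-view! db-after tx-data)
          (recur))))))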

hmaurer22:03:50

@val_waeselynck how does it help with GDPR? Wouldn’t excision in Datomic be an option? Sorry for the flow of questions!

hmaurer22:03:17

And what’s your preferred out-of-datomic storage option?

val_waeselynck22:03:52

Depends on what you need; I like ElasticSearch for fast aggregations and search, Postgres for general-purpose complementary storage, S3 for complementary BLOB storage... no reason to limit yourself really

val_waeselynck23:03:12

Datomic excision is useful to check the 'I can delete data' legal box, but not really a practical solution given its performance and availability impact. A complementary store can make it easier to delete data. Will blog about this soon

hmaurer23:03:58

@val_waeselynck looking forward to the blog post!

hmaurer11:03:20

@val_waeselynck So just to see if I got this straight: you use Datomic (to some extent) as storage for pointers to immutable data on external storage (unless forced to remove it for compliance), and then build materialised views for querying purposes by listening to the transaction stream?

val_waeselynck12:03:08

@U5ZAJ15P0 yes; in some cases (such as personal data) the data in the external storage can be "exceptionally mutable".
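
(Schema-wise, something like the following, with invented attribute names; the blob lives in S3 and Datomic keeps only the pointer:)

[{:db/ident       :document/s3-key
  :db/valueType   :db.type/string
  :db/cardinality :db.cardinality/one
  :db/doc         "Key of the blob in external (normally immutable) storage"}]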

hmaurer14:03:22

@val_waeselynck that’s quite neat actually 🙂 “Datomic as a timeline”

dehli23:03:09

Hi! Is there a recommended solution for backing up Datomic when using Cloud? I found https://docs.datomic.com/on-prem/backup.html for on-prem but didn’t see anything for Cloud.