Matt Butler 10:03:34

Hi @favila. Done some thinking. Yesterday you asked: > are you creating and destroying connections frequently, maybe inadvertently? My understanding was that Datomic connections are cached: calling (d/connect) on the same URI multiple times just returns the same connection, and it is never destroyed. Could you elaborate on what you meant by creating/destroying a connection? Let's say my code does something like this, for example:

(doseq [n (range 10000)]
  (d/transact (d/connect uri) tx))
Is calling d/connect in quick succession inadvisable? Should I be passing around a single connection when doing frequent transactions? Could this be the cause?


Hi. Anyone deployed Datomic onto Docker + OpenShift?


onto docker yes, openshift no


docker on kubernetes


any learnings?


@mbutler I was talking about a d/connect/d/release lifecycle pair. I was speculating that you had some lifecycle management in your app. I think release is async, or else maybe you exposed a race in Datomic. All pure speculation; none of it ended up applying to your case.

Matt Butler 14:03:58

Okay, no problem. So in theory, calling (d/connect uri) is functionally identical to passing around an established connection. I performed the refactor anyway, as I'm at a loss 🙂


@jonpither very easy to get running, but we haven't tested a production workload and aren't yet using a high-availability transactor


the backend is a cassandra cluster and is also running inside kubernetes


d/release is only necessary if you know you are not going to use that connection again, and the java process is going to stick around. For processes which keep a database connection for their entire lifetime, d/release is not needed.
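A minimal sketch of that pattern, assuming a short-lived task inside a long-running JVM (the `uri` and `tx-data` names here are illustrative, not from the thread):

```clojure
(require '[datomic.api :as d])

;; One-off job in a process that keeps running afterwards:
;; release the connection when we know we won't use it again.
(defn run-once [uri tx-data]
  (let [conn (d/connect uri)]
    (try
      @(d/transact conn tx-data)
      (finally
        ;; Safe here because this connection is not reused;
        ;; a process that holds its connection for life skips this.
        (d/release conn)))))
```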


d/connect does cache connections based on the URI, so calling d/connect repeatedly should have no adverse effect.
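So the loop from the start of the thread can be read as if the connection were hoisted out; a hedged sketch of the equivalence (again with illustrative `uri`/`tx` names):

```clojure
(require '[datomic.api :as d])

;; These two forms should behave the same, since d/connect
;; returns the cached connection for an already-connected URI:
(doseq [n (range 10000)]
  @(d/transact (d/connect uri) tx))   ; lookup hits the cache each time

(let [conn (d/connect uri)]           ; connect once, reuse explicitly
  (doseq [n (range 10000)]
    @(d/transact conn tx)))
```

The second form is still the clearer style, since it makes the connection's lifetime explicit rather than relying on the cache.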