#datomic
2017-03-07
Matt Butler 10:03:34

Hi @favila Done some thinking. Yesterday you asked:
> are you creating and destroying connections frequently, maybe inadvertently?
It was my understanding that Datomic connections are cached: calling (d/connect) on the same URI multiple times just returns the same connection, and it is never destroyed. Could you elaborate on what you meant by creating/destroying a connection? Let's say my code does something like this, for example:

;; calls d/connect afresh on every iteration
(doseq [n (range 10000)]
  (d/transact (d/connect uri) tx))
Is calling d/connect in quick succession not advised, and should I be passing a connection around whenever doing frequent transactions? Could this be the cause?
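
For comparison, the same loop with the connection hoisted out and reused — a minimal sketch, assuming uri and tx are bound as in the snippet above:

(require '[datomic.api :as d])

;; establish (or fetch the cached) connection once, outside the loop
(let [conn (d/connect uri)]
  (doseq [_ (range 10000)]
    ;; deref the returned future so each transaction completes before the next
    @(d/transact conn tx)))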

jonpither 13:03:52

Hi. Anyone deployed Datomic onto Docker + OpenShift?

stijn 13:03:22

onto docker yes, openshift no

stijn 13:03:15

docker on kubernetes

jonpither 13:03:22

any learnings?

favila 14:03:46

@mbutler I was talking about a d/connect / d/release lifecycle pair. I was speculating that you had some lifecycle management in your app. I think release is async, or else maybe you exposed a race in Datomic. All pure speculation; none of it ended up applying to your case.

Matt Butler 14:03:58

Okay, no problem. So in theory calling (d/connect uri) is functionally identical to passing around an established connection. I performed the refactor anyway, as I'm at a loss 🙂

stijn 14:03:02

@jonpither very easy to get running, but we haven't tested a production workload and aren't running a high-availability transactor yet

stijn 14:03:07

the backend is a Cassandra cluster, which is also running inside Kubernetes

Lambda/Sierra 15:03:45

d/release is only necessary if you know you are not going to use that connection again and the Java process is going to stick around. For processes that keep a database connection for their entire lifetime, d/release is not needed.
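
A sketch of the case where d/release does apply — a one-off connection in a long-running JVM. Here other-uri is a hypothetical second database, not one from the discussion above:

(require '[datomic.api :as d])

(let [conn (d/connect other-uri)]   ; other-uri: hypothetical one-off database
  (try
    ;; do some one-off work against this database
    (d/q '[:find (count ?e) . :where [?e :db/ident]] (d/db conn))
    (finally
      ;; we know we won't use this connection again, so free its resources
      (d/release conn))))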

Lambda/Sierra 15:03:06

d/connect does cache connections based on the URI, so calling d/connect repeatedly should have no ill effect.
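
A quick way to observe that caching at a REPL — a sketch assuming a reachable uri:

(require '[datomic.api :as d])

;; both calls should return the same cached Connection object
(identical? (d/connect uri) (d/connect uri))
;; => true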