
Continuing with my Kubernetes rant. If I had 2 pods on the same service, that sounds like it would cause problems, as traffic would get split across 2 transactors and then you can't ensure atomic transactions. To get the H/A effect, would it make sense to have one deployment with one pod with a service URI of "datomic" and another deployment with one pod called "datomic-failover", where each pod starts a transactor with host: "datomic", alt-host: "datomic-failover"? Would that in effect be what the H/A is doing?
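To make the idea concrete, the setup described above would put something like this in the first pod's transactor.properties (a hypothetical sketch of the question, not a tested recommendation — the protocol and port are placeholders; note that in Datomic, alt-host is a second address for reaching the *same* transactor, not the standby's address):

```properties
# Hypothetical transactor.properties for the pod reachable as "datomic".
protocol=sql                # placeholder: use your actual storage protocol
host=datomic                # the address this transactor writes into storage
alt-host=datomic-failover   # alternate address for this same transactor, if any
port=4334                   # Datomic's default transactor port
```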


So if writes to datomic fail, would it then shoot subsequent requests over to datomic-failover?

Linus Ericsson 06:05:57

From my understanding, every transactor writes its connection details into the storage backend. Failover is handled by a heartbeat-based protocol: if the first connected transactor fails to write its heartbeat, the peers connect to the second transactor, whose heartbeat is succeeding, and so on. There's probably more to it, but that's the main idea. In practice you can just make sure to have two transactors running and they will sort out the failover stuff internally.
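A conceptual sketch of that heartbeat protocol (this is not Datomic's actual implementation — the staleness threshold, storage shape, and names are all invented for illustration):

```python
# Conceptual sketch of heartbeat-based failover, NOT Datomic's actual
# implementation. The active transactor periodically writes a heartbeat
# (its address plus a timestamp) into shared storage; a standby takes
# over once that heartbeat goes stale.

STALE_AFTER = 3.0  # assumed: seconds of silence before a standby takes over

storage = {}  # stands in for the shared storage backend


def write_heartbeat(host, now):
    """The active transactor records its address and a timestamp."""
    storage["heartbeat"] = {"host": host, "ts": now}


def should_take_over(now):
    """A standby checks whether the active transactor's heartbeat is stale."""
    hb = storage.get("heartbeat")
    return hb is None or (now - hb["ts"]) > STALE_AFTER


# Simulate: "datomic" heartbeats at t=0 and then dies; at t=5 the standby
# "datomic-failover" sees a stale heartbeat and takes over.
write_heartbeat("datomic", 0.0)
assert not should_take_over(2.0)  # heartbeat still fresh
assert should_take_over(5.0)      # stale: time for failover
write_heartbeat("datomic-failover", 5.0)
print(storage["heartbeat"]["host"])  # -> datomic-failover
```

The point of the sketch is that peers decide by reading storage, not by routing traffic — which is why the Service-level load balancing in the question doesn't enter into it.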

👍 1
Linus Ericsson 06:05:20

This traffic should not be load-balanced - the transactors and peers use other mechanisms to figure out which transactor is active, etc.

Jem McElwain 19:05:37

the service is irrelevant for the transactor, since the peer pods will read its address directly from storage in order to speak to it. you just have to make sure that it's routable from inside the cluster. so there are no load-balancing considerations as far as k8s is concerned.
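For the "routable from inside the cluster" part, a headless Service per transactor pod is one way to get a stable in-cluster DNS name. This manifest is a hypothetical sketch; the names, labels, and port are assumptions, not from the thread:

```yaml
# Hypothetical headless Service giving the transactor pod a stable DNS
# name ("datomic") inside the cluster; a second Service named
# "datomic-failover" would select the standby pod. clusterIP: None means
# DNS resolves straight to the pod IP -- no load balancing in the path.
apiVersion: v1
kind: Service
metadata:
  name: datomic
spec:
  clusterIP: None
  selector:
    app: datomic-primary   # assumed label on the transactor pod
  ports:
    - port: 4334           # Datomic's default transactor port
```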


Hey there, building my first Clojure library, and it aims to combine Datomic and Lacinia! 😳 Still in draft and all, but happy to read your thoughts! (And also keen to get coding feedback 🙊)

❤️ 4
Jem McElwain 19:05:12

hi, just cut our peers over to use valcache, and we're seeing some "errors" every once in a while in our logs under the key :valcache/put-exception

java.nio.file.NoSuchFileException: /opt/valcache/a73/61ef3bea-4f63-4d27-92fa-cda51dcf0a73
these are logged at info, so i'm assuming they're not significantly impacting our availability, but i'd like to understand a bit better what's going on.

Jem McElwain 19:05:34

right now we are provisioning fresh disks every time the application starts, so it's always a cold start. we have plans to snapshot/reuse disks, and i'm wondering if that would help


Are you using a version >= 1.0.6202?

Jem McElwain 20:05:23

yup, looks like we're currently on 1.0.6316


Another possibility is the filesystem itself. What are you using? Are you mounting with strictatime and lazytime?
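For reference, mounting with those flags might look like this in /etc/fstab (the device path is a placeholder, not from the thread):

```
# Hypothetical fstab entry for the valcache volume; replace /dev/nvme1n1
# with the actual device backing /opt/valcache.
/dev/nvme1n1  /opt/valcache  ext4  strictatime,lazytime  0 2
```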


I wouldn’t expect to see these, and I don’t see them on our valcache systems


we use xfs, but I don’t think the docs specify

Jem McElwain 20:05:29

using ext4, great questions on the mount flags, let me double check to make sure


also do you see these before the valcache fills?


or only when it’s full?

Jem McElwain 20:05:47

no, these are while it's filling

Jem McElwain 20:05:56

confirmed the disk still has plenty of space


I mean has disk utilization reached datomic.valcacheMaxGb yet?


(which can be less than filesystem size)
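For context, valcache on a peer is configured with JVM system properties, so the cache is "full" once it reaches datomic.valcacheMaxGb even if the filesystem still has free space (the 100 here is an arbitrary example value):

```
-Ddatomic.valcacheDir=/opt/valcache
-Ddatomic.valcacheMaxGb=100
```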

Jem McElwain 20:05:58

yup we provision based on the valcacheMaxGb setting

Jem McElwain 20:05:13

yeah just checked and we're missing the flags, that seems like a likely culprit

Jem McElwain 20:05:19

okay, thanks for your help - i'll try to ensure those get set

Jem McElwain 18:05:36

unfortunately that didn't seem to solve it! will have to dig deeper...


I would open a support ticket