This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2022-05-06
Channels
- # announcements (7)
- # aws (8)
- # babashka (9)
- # babashka-sci-dev (11)
- # beginners (37)
- # calva (50)
- # cider (15)
- # clj-kondo (30)
- # clj-otel (3)
- # cljdoc (16)
- # cljs-dev (26)
- # cljsrn (4)
- # clojure (168)
- # clojure-doc (1)
- # clojure-europe (17)
- # clojure-gamedev (4)
- # clojure-nl (3)
- # clojure-norway (1)
- # clojure-spec (17)
- # clojure-uk (16)
- # clojurescript (27)
- # community-development (3)
- # css (3)
- # cursive (9)
- # datomic (25)
- # emacs (1)
- # events (4)
- # fulcro (2)
- # google-cloud (2)
- # graphql (11)
- # gratitude (9)
- # humbleui (16)
- # hyperfiddle (2)
- # jobs (1)
- # london-clojurians (1)
- # lsp (16)
- # malli (2)
- # off-topic (71)
- # pedestal (4)
- # polylith (9)
- # portal (94)
- # reagent (6)
- # reitit (2)
- # releases (1)
- # remote-jobs (2)
- # sci (9)
- # shadow-cljs (49)
- # spacemacs (8)
- # tools-build (2)
- # tools-deps (39)
- # vim (7)
- # xtdb (6)
Continuing with my Kubernetes rant. If I had 2 pods on the same service, that sounds like it would cause problems, as traffic would be split between 2 transactors and then you can't ensure atomic transactions. To get the H/A effect, would it make sense to have 1 deployment with 1 pod with a service URI of "datomic" and another deployment with 1 pod called "datomic-failover", each pod starting a transactor with host: "datomic", alt-host: "datomic-failover"... would that in effect be what the H/A is doing?
So if writes to datomic fail, it would then shoot subsequent requests over to datomic-failover?
From my understanding, every transactor writes its connection details into the backing store. Failover is handled by a protocol based on heartbeats: if the first connected transactor fails to write its heartbeat, the peers connect to the second connected transactor that succeeds with its heartbeat, etc. There's probably more to it, but that's the main idea. In practice you can just make sure to have two transactors running and they will sort out the failover internally.
This process should not be load-balanced; the transactors and peers use other mechanisms to figure out which transactor is active.
The service is irrelevant for the transactor, since the pods will read the address directly from storage in order to speak to it. You just have to make sure that it's routable from inside the cluster, so there are no load-balancing considerations as far as k8s is concerned.
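To make the setup above concrete, here is a sketch of what one pod's transactor.properties might contain. Everything here is illustrative, not a Datomic default: the host name assumes a headless Service giving each pod a stable in-cluster DNS name, and the protocol depends on your storage backend. The standby pod runs the same file with its own host value; the active/standby handoff is negotiated via the heartbeat protocol in storage, as described above.

```properties
# Illustrative transactor.properties for one pod (names are assumptions).
# The standby pod uses the same file with its own host value.
protocol=ddb
# this pod's in-cluster routable address, e.g. from a headless Service
host=datomic-0.datomic-internal.default.svc.cluster.local
port=4334
```

Since peers discover the active transactor's address from storage, no k8s Service needs to load-balance across the two pods; each just needs its own routable name.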
Hey there, building my first Clojure library, and it aims to combine Datomic and Lacinia! 😳 Still in draft and all, but happy to read your thoughts! (And also keen to get coding feedback 🙊) https://github.com/nottmey/datomic-lacinia
Hi, just cut our peers over to use valcache, and we're seeing some "errors" every once in a while in our logs under the key :valcache/put-exception
java.nio.file.NoSuchFileException: /opt/valcache/a73/61ef3bea-4f63-4d27-92fa-cda51dcf0a73
These are logged at info, so I'm assuming they are not significantly impacting our availability, but I'd like to understand a bit better what's going on. Right now we are provisioning fresh disks every time the application starts, so it's always a cold start. We have plans to snapshot/reuse disks, and I'm wondering if that would help.
yup, looks like we're currently on 1.0.6316
Another possibility is the filesystem itself. What are you using? Are you mounting with strictatime and lazytime?
using ext4, great questions on the mount flags, let me double check to make sure
no, these are while it's filling
confirmed the disk still has plenty of space
yup we provision based on the valcacheMaxGb setting
Yeah, just checked and we're missing the flags; that seems like a likely culprit.
Okay, thanks for your help, I'll try to ensure those get set.
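For reference, a minimal sketch of verifying and setting those mount flags. The /opt/valcache path comes from the exception above; `check_flags` is a hypothetical helper for inspecting a mount's option string, and the findmnt/mount invocations in the comments are the standard Linux tools, shown here only as an assumed setup.

```shell
# Hypothetical helper: given a mount's option string, report whether
# both strictatime and lazytime are present.
check_flags() {
  opts="$1"
  case "$opts" in
    *strictatime*lazytime*|*lazytime*strictatime*) echo "ok" ;;
    *) echo "missing flags" ;;
  esac
}

# In practice, inspect the live mount and remount with the flags (root):
#   check_flags "$(findmnt -no OPTIONS /opt/valcache)"
#   mount -o remount,strictatime,lazytime /opt/valcache
```

For a persistent setup, the same options would go into the fstab entry (or the volume mount options in k8s) so they survive reprovisioning.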
unfortunately that didn't seem to solve it! will have to dig deeper...