This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2023-08-25
I recorded a short video with @danstone yesterday discussing some REPL workflow thoughts - not so much focused on XT itself, but it might be of interest! https://twitter.com/xtdb_com/status/1695072870089589060
I have a quick question: in dev mode we sometimes "wipe" the db, `tx_events` included.
We have an XTDB listener running, and when we do that the connection poller (`xtdb.tx.subscribe$handle_polling_subscription`) throws and does not seem to retry "enough". Is there a knob to configure that?
The actual error I get, after recreating tx_events, is:
> xtdb.tx.subscribe - Error polling for txs, will retry
> java.sql.SQLTransientConnectionException: HikariPool-1 - Connection is not available, request timed out after 30000ms.
This is actually a problem in our scripts, it seems: we take down the Docker container running Postgres
Hey @U0C8489U6! So you're expecting the node to be able to resume seamlessly somehow? Doing that naively wouldn't be safe if you're actually modifying tx_events out-of-band - like if you wipe the table and then start submitting new transactions, you're going to get into a pretty funky state
ah, so you really want to wipe the local node state (index-store) also - is cycling the node(s) not possible?
well, I have considered pushing the team for a dev-time-only drop! API before, but we've never spec'd it out or decided that it's definitely a bad idea - in lieu of that I'm struggling to think of other options
another good use case for that, for us, is E2E testing: we reset Postgres before re-seeding with the data we need (hitting the APIs)
back to the original question more directly: I suspect it would be non-trivial for a node to somehow know for sure that it should 'reset' itself based on a wiped/modified tx_events - at least without introducing a new coordination mechanism (e.g. storing some UUID at the beginning of the log that tracks such epoch changes)
assuming you're embedding XT nodes in your own code anyway I guess you can at least hack something together for this in userspace
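A userspace version of that epoch idea might look something like the sketch below: write a fresh UUID into a side table whenever tx_events is recreated, and have the app compare it at startup against the epoch it last saw. The `tx_epoch` table, the `dev.epoch` namespace, and the local epoch file are all hypothetical names invented for illustration - nothing here is an XTDB API.

```clojure
(ns dev.epoch
  (:require [next.jdbc :as jdbc]
            [clojure.java.io :as io]))

(defn current-epoch
  "Reads the epoch UUID written alongside a freshly (re)created tx_events.
   Assumes a one-row `tx_epoch` side table - purely illustrative."
  [ds]
  (:tx_epoch/epoch (jdbc/execute-one! ds ["SELECT epoch FROM tx_epoch"])))

(defn stale-node-state?
  "True when the epoch in Postgres differs from the one recorded locally,
   i.e. tx_events was wiped behind the node's back and local state
   (index-store, checkpoints) should be discarded before restarting."
  [epoch-file ds]
  (let [seen (let [f (io/file epoch-file)]
               (when (.exists f) (slurp f)))]
    (not= seen (str (current-epoch ds)))))
```

On a mismatch the app would delete its local state, start a fresh node, and write the new epoch back to the file.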
I am thinking maybe restarting the XTDB component might work, given we wipe that table plus all the checkpointing dirs?
if there are no checkpoints and you haven't configured a durable KV store (Rocks / LMDB), then yep, you should be okay to restart your component
right, didn't think about the kv store...
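For anyone following along, a minimal sketch of that dev-only restart (XTDB 1.x `xtdb.api/start-node` and `.close` are real API; the `node` atom, `node-opts`, and checkpoint-dir handling are illustrative, and this assumes tx_events has already been truncated out-of-band with no durable KV store configured):

```clojure
(ns dev.reset
  (:require [xtdb.api :as xt]
            [clojure.java.io :as io]))

(defonce node (atom nil))

(defn wipe-and-restart!
  "Dev-only reset: close the running node, delete local checkpoint
   files, then start a fresh node that replays the (now empty) log.
   NOT safe in production - only for dev/E2E environments."
  [node-opts checkpoint-dir]
  (when-let [n @node]
    (.close n))
  ;; delete children before parents so directories are empty when removed
  (doseq [f (reverse (file-seq (io/file checkpoint-dir)))]
    (io/delete-file f true))
  (reset! node (xt/start-node node-opts)))
```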