#xtdb
2019-08-29
hoppy13:08:56

is there a handy example on how to setup the jdbc store?

refset17:08:11

@hoppy not "handy" but I can accelerate our release plan a little 🙂 which backend DB are you hoping to use?

hoppy17:08:56

postgres 11.5

hoppy18:08:28

I'm basically hunting the recipe that aligns with the start-standalone-node / start-cluster-node thingy. I can puzzle through that and see how to do it eventually.

hoppy18:08:49

I'm hoping to house this in an existing db, if that's possible

refset21:08:27

@hoppy this seems to work:

(ns crux-jdbc-example
  (:require [crux.api :as crux])
  (:import (crux.api Crux ICruxAPI)))

(def opts {:dbtype "postgresql"
           :port 5432
           :dbname "postgres"
           :user "postgres"
           :kv-backend "crux.kv.memdb.MemKv"
           :db-dir "kv-store"})

(def node (Crux/startJDBCNode opts))
in conjunction with running the standard postgres docker image using docker run -p 5432:5432 postgres
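
For completeness: recent official postgres images refuse to start without auth configuration, so the docker command may need a POSTGRES_PASSWORD env var, mirrored in the options map. A sketch, assuming the 19.x crux-jdbc option keys; :host and :password are additions here, not part of the snippet above:

```clojure
;; Sketch: connecting to a password-protected Postgres instance.
;; :host and :password are assumptions, not shown in the snippet above.
;; Matching docker invocation would be something like:
;;   docker run -p 5432:5432 -e POSTGRES_PASSWORD=postgres postgres
(def opts {:dbtype     "postgresql"
           :host       "localhost"
           :port       5432
           :dbname     "postgres"
           :user       "postgres"
           :password   "postgres"
           :kv-backend "crux.kv.memdb.MemKv"
           :db-dir     "kv-store"})
```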

refset21:08:49

I can't see an obvious reason why reusing an existing db within a running instance would be a bad idea, as long as you have plenty of headroom for the storage of backups -- but I'm not a database operations expert 🙂

hoppy21:08:14

I'm playing with ^^^

hoppy21:08:16

I'm wanting to do that (move in) because I have a hybrid situation and I want a singular consistent backup story

hoppy21:08:49

Oh, and I'll stick with rocks for the kv-store

👍 4
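Swapping the in-memory KV store for RocksDB is a one-key change in the options map. A sketch; the class name assumes the crux-rocksdb artifact of that era is on the classpath:

```clojure
;; Same JDBC options as above, but with RocksDB as the local KV index store.
;; Requires the juxt/crux-rocksdb dependency on the classpath.
(def opts {:dbtype     "postgresql"
           :port       5432
           :dbname     "postgres"
           :user       "postgres"
           :kv-backend "crux.kv.rocksdb.RocksKv"
           :db-dir     "rocks-kv-store"})
```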
hoppy21:08:25

I've surmised that if I completely wipe the rocks stuff, it will rebuild itself, is that correct?

refset21:08:34

Cool, yeah the sweet spot for JDBC mode is that it fits into an existing ops landscape trivially

refset21:08:56

yep, any kind of cluster node (i.e. JDBC or Kafka) can rebuild itself

refset21:08:03

I wouldn't necessarily plan on doing that too much given the time it takes to rebuild will only increase, but I'm guessing you don't have huge quantities of data

hoppy21:08:04

so the other fun question. what if the kv store is older than the database backup. Does it "catch up" or is that a nuclear event

hoppy21:08:47

aka, my strategy is to not care about the kv store aside from say, back it up once a month.

hoppy21:08:12

postgres would be either replicated or wal archived

refset22:08:48

yep, it catches up regardless of how old it is, and that's the basis of how clustering works: each node continuously tries to stay up to date with the transaction log independently

jjttjj23:08:46

Is there a way to block until a submit-tx completes?

(let [id (m/random-uuid)] ;; m/random-uuid: presumably medley.core/random-uuid
  (crux/submit-tx node [[:crux.tx/put {:crux.db/id id}]])
  ;;??
  (crux/entity (crux/db node) id)) ;;how to make this return the submitted entity

jjttjj23:08:22

oh wait, I think sync is what I want

👍 4
jjttjj23:08:24

Here we go, working:

(let [id (m/random-uuid)]
  (let [result (crux/submit-tx node [[:crux.tx/put {:crux.db/id id}]])]
    ;; Duration here is java.time.Duration; parsing is case-insensitive,
    ;; so "pt1s" gives a 1-second timeout
    (crux/sync node (:crux.tx/tx-time result) (Duration/parse "pt1s")))
  (crux/entity (crux/db node) id))

🙂 4
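Later Crux releases also added crux.api/await-tx, which blocks on the transaction map returned by submit-tx directly. A sketch; availability of await-tx depends on your Crux version, and java.util.UUID/randomUUID stands in for m/random-uuid to keep it self-contained:

```clojure
;; Sketch using await-tx: block until the submitted transaction has been
;; indexed, then read the entity back.
(import 'java.time.Duration)

(let [id (java.util.UUID/randomUUID)
      tx (crux/submit-tx node [[:crux.tx/put {:crux.db/id id}]])]
  (crux/await-tx node tx (Duration/ofSeconds 1))
  (crux/entity (crux/db node) id))
```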
hoppy23:08:28

@taylor.jeremydavid the above sorta works (the first time).

hoppy23:08:46

after that, it tries to do the create_table again on the 2nd run

refset08:08:35

@U050DD55V any advice on how this should be handled?

jonpither08:08:52

the create-table should be idempotent - i.e. create table if not exists - so it should be ok to call repeatedly. if there is a bug / problem then please raise an issue and we'll fix it

hoppy14:09:45

I retried this with master last night. It seems the SQL has been updated there, so it is working now.

parrot 8