#datahike
2023-05-17
alekcz10:05:24

How often does datahike persist data to disk? I want to migrate my datahike server to a new cloud provider

timo10:05:49

I am not the one qualified to answer, but I guess it depends on how much you transact. @UB95JRKM3 @j.massa @U1C36HC6N

alekcz11:05:01

How do I force it to persist? So I can point traffic to the new instance

Judith Massa11:05:06

The data is indeed persisted to the konserve backend on every transaction, so there shouldn't be a problem moving the data with the file backend. I can't say anything about other backends though or specifics of datahike-server.
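As a sketch of what that looks like with the file backend (illustrative only, not from the thread: it assumes datahike's public `datahike.api` namespace and the built-in `:file` backend, and the path is a placeholder):

```clojure
;; Hedged sketch: assumes datahike.api (d/create-database, d/connect,
;; d/transact) and the built-in :file backend; the path is a placeholder.
(require '[datahike.api :as d])

(def cfg {:store {:backend :file
                  :path    "/tmp/datahike-demo"}
          ;; schema-on-read, so the transact below needs no schema
          :schema-flexibility :read})

(d/create-database cfg)
(def conn (d/connect cfg))

;; On every transact the updated index roots are written through konserve,
;; so the data should be on disk as soon as the call returns.
(d/transact conn [{:name "Alice"}])
```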

timo11:05:14

huh :thinking_face: I guess all backends should then do the same, at least from what I know.

Judith Massa11:05:04

well, other external backends are basically services themselves, so it depends on when they persist their data to disk

alekcz11:05:41

As long as datahike dispatches the write to the backend on each transaction I'm good

alekcz11:05:06

I know konserve-jdbc also dispatches the write immediately

Judith Massa11:05:30

Then you should be good without any further action

alekcz08:05:21

I checked last night. I'm on 0.1510. It's definitely not syncing. I connected 3 datahike-server instances. And they didn't have the same data.

alekcz08:05:31

It's super worrying because now I don't know whether I'll lose data if I reboot

timo08:05:03

What do you mean with 0.1510? And why three datahike-server instances?

alekcz08:05:00

I'm on datahike 0.1510. I connected 3 different datahike-servers to the same mysql db.
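A setup like that would be configured roughly as follows (a sketch assuming datahike-jdbc's `:jdbc` backend; every connection detail below is a placeholder, not a value from the thread):

```clojure
;; Sketch: the same :jdbc store config used by each datahike-server
;; instance. Assumes datahike-jdbc; host/user/password/dbname are
;; placeholders.
(def cfg {:store {:backend  :jdbc
                  :dbtype   "mysql"
                  :host     "db.example.com"
                  :port     3306
                  :user     "datahike"
                  :password "secret"
                  :dbname   "datahike"}})
;; Several instances pointing at this cfg share the backing store, but
;; each connection caches its own view of the current index root.
```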

timo08:05:26

0.6.1510?

alekcz08:05:24

Ah sorry, missed a number: 0.5.1510

👍 2
alekcz08:05:07

The running instance has the most up to date data. The other two are quite far behind.

timo08:05:51

Datahike did not update the pointer when data was transacted on another instance; you need https://github.com/replikativ/datahike/releases/tag/0.6.1539

alekcz08:05:00

But if I reboot my server there won't be any data loss right?

timo08:05:19

all the transacted data should be there

timo08:05:53

but keep in mind that when reading and transacting from different server instances you can get inconsistencies

alekcz09:06:25

I don't plan on doing both at the same time. It's just when I want to switch providers. I can power down one and move to another.

👍 2
alekcz11:05:15

@j.massa @timok could we release the updated konserve-jdbc to clojars?

timo11:05:23

the pipeline has an error

timo12:05:45

clojars is down today

timo08:05:56

released it now 0.1.2

alekcz06:05:31

Thanks. I'm trying to update datahike-jdbc but I'm getting a whole bunch of errors. I'm not too familiar with deps.edn, but my guess is that the pom isn't reflecting what's in the deps.edn

timo06:05:09

try clj -X:deps mvn-pom

alekcz07:05:11

If I don't add

    com.github.seancorfield/next.jdbc {:mvn/version "1.3.874"}
    com.mchange/c3p0 {:mvn/version "0.9.5.5"}

it gives me a class-not-found error

timo07:05:15

are you using the development branch?

alekcz07:05:55

Tried both. Is my understanding correct that the pom.xml should match deps.edn?

alekcz07:05:17

in konserve-jdbc

timo07:05:52

the pom is generated with the command above

alekcz07:05:20

What I mean is: That command generates a pom.xml. Presumably it runs before deploying to clojars.

👍 2
alekcz07:05:13

The current pom.xml doesn't match deps.edn in the repo, which means clojars doesn't have the required deps either

alekcz07:05:30

• konserve on clojars is 0.6.0; in deps.edn it's 0.7.285

alekcz07:05:07

Should I run that command and do a pull request?

timo07:05:32

when you look at the circleci config you can see that first the pom is updated and then the jar is built. To avoid confusion the pom should not have dependencies in it; somehow it got committed. But from what I see the jar is always built with an updated pom.xml, even if it's not committed to the git repo.

timo07:05:19

I assume that clojars is pulling the deps from the repo, not from the built jar. Don't know why, though.

timo07:05:14

you want to do a short call?

alekcz07:05:48

In 14 min?

timo07:05:00

ok, send me a link

alekcz11:05:55

Awesome 🎉