#datomic
2016-02-09
onetom 07:02:09

I'm trying to find a minimal example of fast in-memory Datomic db tests which doesn't require a global connection object and uses d/with. I found http://yellerapp.com/posts/2014-05-07-testing-datomic.html but it doesn't show what that (empty-db) function does to avoid recreating the db and reconnecting to it.

onetom 07:02:36

I found https://gist.github.com/vvvvalvalval/9330ac436a8cc1424da1 too, but it seems a bit harsh, and it doesn't show what solution it is comparing against.
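[The article leaves (empty-db) undefined. A plausible minimal reconstruction, not the article's actual code: memoize the creation of one in-memory database, transact the schema once, and hand out the resulting immutable db value, so tests only ever layer d/with on top. The URI and the :user/name schema here are assumptions for illustration.

(require '[datomic.api :as d])

;; hypothetical test schema -- a single attribute is enough for the sketch
(def schema
  [{:db/id                 (d/tempid :db.part/db)
    :db/ident              :user/name
    :db/valueType          :db.type/string
    :db/cardinality        :db.cardinality/one
    :db.install/_attribute :db.part/db}])

(def empty-db
  (memoize
   (fn []
     (let [uri "datomic:mem://empty-db"]  ; assumed URI
       (d/create-database uri)
       (let [conn (d/connect uri)]
         @(d/transact conn schema)       ; pay the schema cost exactly once
         (d/db conn))))))                ; immutable db value, safe to share
]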

pesterhazy 08:02:49

@onetom, interesting article!

pesterhazy 08:02:02

do I read the article correctly: you can use d/with to do multiple "transactions" one after another where the second one builds on the first?

pesterhazy 08:02:42

so you can basically completely emulate, or "fork", a connection

onetom 09:02:45

yup, that's the idea
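[A minimal sketch of that chaining, reusing the hypothetical empty-db and :user/name attribute from above: each d/with returns a map whose :db-after feeds the next d/with, so successive speculative transactions build on each other without ever touching storage.

(let [db0 (empty-db)
      r1  (d/with db0 [{:db/id (d/tempid :db.part/user)
                        :user/name "alice"}])
      r2  (d/with (:db-after r1) [{:db/id (d/tempid :db.part/user)
                                   :user/name "bob"}])]
  ;; both speculative writes are visible in the final db value
  (d/q '[:find ?n :where [_ :user/name ?n]] (:db-after r2)))
;; => #{["alice"] ["bob"]}
]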

robert-stuttaford 10:02:01

we do this with great success

robert-stuttaford 10:02:24

you do need to shepherd the temp ids from prior d/with’s to later ones if you mean to ultimately transact something for real

robert-stuttaford 10:02:30

make tx -> d/with. query with db, make another tx (now using ids that look like ones in storage but actually just came from d/with) -> d/with. repeat N times. actually commit the final tx to storage, which includes all the intermediate txes together, after swapping the d/with “real” ids back out for the tempids, so that the final tx has all the right real and temp ids.
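[A rough sketch of the id bookkeeping this describes, under the same assumed schema (this is not robert-stuttaford's actual code): d/with returns a :tempids map, and d/resolve-tempid translates your tempid into the entity id the speculative db assigned, which later txes in the chain can then reference.

(let [alice-tid (d/tempid :db.part/user)
      r1        (d/with (empty-db) [{:db/id alice-tid :user/name "alice"}])
      ;; this id looks like a storage id but exists only in the forked db
      alice-id  (d/resolve-tempid (:db-after r1) (:tempids r1) alice-tid)
      r2        (d/with (:db-after r1)
                        [[:db/add alice-id :user/name "alicia"]])]
  ;; to commit for real, alice-id would have to be swapped back to a
  ;; tempid before transacting the accumulated tx data on a connection
  (d/q '[:find ?n :where [?e :user/name ?n]] (:db-after r2)))
;; => #{["alicia"]}
]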

robert-stuttaford 10:02:37

i have code for this if anyone wants

robert-stuttaford 10:02:00

multiple onyx tasks each doing their own work, but each building on the data of the previous one. only actually goes into storage at the end.

pesterhazy 10:02:05

in my use case, I'm not planning to actually "really" commit anything

robert-stuttaford 10:02:16

they each use d/with and return tx data

pesterhazy 10:02:17

curious, why would you want to commit at the end?

robert-stuttaford 10:02:03

using the onyx-datomic commit-bulk-tx plugin

robert-stuttaford 10:02:30

you’ll see if you scan my post

jgdavey 15:02:25

@bkamphaus: Can you elaborate on this bullet: * Improvement: connection caching behavior has been changed so that peers can now connect to the same database served by two (or more) different transactors.

jgdavey 15:02:02

More than one transactor can serve a single Datomic database?

marshall 15:02:28

@jgdavey: That bullet specifically deals with peers connecting to multiple databases that originated from the same call to create-database. I.e. if you have a staging database that is restored locally (dev) from a backup of a production database on some other storage, you can now launch a single JVM peer that can connect to both the staging and the production instance.

jgdavey 15:02:58

Just to make sure I’m understanding correctly: is the connection caching now based on URI and database id?

Ben Kamphaus 15:02:23

@jgdavey: aspects of the connection+storage config, but caching in that respect is just an implementation detail. The contract from this release forward is that two different transactors, one serving a database and the other a restored copy of that database in a different storage, can be reached from the same peer.

jgdavey 16:02:06

That makes sense. Whereas before, peers wouldn’t be able to simultaneously connect to a db and a restored copy of it on another transactor.

jgdavey 16:02:23

And/or the behavior was undefined/unsupported

jgdavey 16:02:43

Not trying to beat a dead horse, just want to make sure I understand simple_smile

kschrader 16:02:10

@jgdavey: before, if you tried to establish a second connection, it would stay connected to the first DB

kschrader 16:02:30

assuming that I’m understanding this change correctly, this fixes that

kschrader 16:02:23

if you did (def prod-conn (d/connect PROD_URI))

kschrader 16:02:54

and then (def local-copy-of-prod (d/connect LOCAL_COPY_URI))

kschrader 16:02:26

local-copy-of-prod would actually be pointing at PROD_URI

kschrader 16:02:29

which was bad
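[Putting kschrader's fragments together (the URIs are placeholders, not real databases): before this release, the second d/connect could hand back the connection already cached for the first database, since the copy shares the original's database identity; from this release forward, each URI resolves to its own connection.

(def PROD_URI       "datomic:dev://prod-host:4334/my-db")  ; placeholder
(def LOCAL_COPY_URI "datomic:dev://localhost:4334/my-db")  ; placeholder

(def prod-conn          (d/connect PROD_URI))
(def local-copy-of-prod (d/connect LOCAL_COPY_URI))
;; pre-fix:  local-copy-of-prod could silently be prod-conn
;; post-fix: two distinct connections, one per transactor/storage
]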

pesterhazy 19:02:21

yeah I'm happy that's getting fixed

jgdavey 19:02:01

Thank you everyone for the clarification. simple_smile