This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2016-02-09
Channels
- # announcements (4)
- # beginners (71)
- # boot (258)
- # braid-chat (7)
- # business (3)
- # cider (5)
- # cljs-dev (5)
- # cljsrn (64)
- # clojure (154)
- # clojure-canada (1)
- # clojure-poland (112)
- # clojure-russia (290)
- # clojurebridge (1)
- # clojurescript (60)
- # community-development (1)
- # core-async (25)
- # cursive (9)
- # data-science (1)
- # datomic (40)
- # editors (14)
- # events (2)
- # hoplon (2)
- # jobs (3)
- # ldnclj (51)
- # lein-figwheel (2)
- # luminus (1)
- # off-topic (5)
- # om (57)
- # onyx (29)
- # overtone (1)
- # parinfer (52)
- # portland-or (1)
- # proton (17)
- # quil (2)
- # re-frame (77)
- # reagent (1)
- # ring-swagger (20)
- # spacemacs (1)
- # test-check (4)
- # testing (13)
- # yada (1)
I'm trying to find the most minimal example of fast in-memory Datomic db tests which doesn't require a global connection object and uses d/with.
I found http://yellerapp.com/posts/2014-05-07-testing-datomic.html but it doesn't show what that (empty-db)
function does to avoid recreating the db and reconnecting to it.
I found https://gist.github.com/vvvvalvalval/9330ac436a8cc1424da1 too, but it seems a bit harsh and doesn't show what solution it's being compared against.
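One possible shape for such a fixture (a sketch only, assuming the datomic.api peer library; the `schema` tx data and attribute names are illustrative, not from either article):

```clojure
(require '[datomic.api :as d])

;; A fresh in-memory db value per call; no global connection is kept around.
;; `schema` is an ordinary vector of schema tx maps supplied by the caller.
(defn empty-db [schema]
  (let [uri (str "datomic:mem://" (d/squuid))] ; unique URI so each call is isolated
    (d/create-database uri)
    (let [conn (d/connect uri)]
      @(d/transact conn schema)
      (d/db conn))))

;; Tests then layer speculative transactions on the db *value* with d/with:
(let [db (empty-db [{:db/id                 (d/tempid :db.part/db)
                     :db/ident              :user/name
                     :db/valueType          :db.type/string
                     :db/cardinality        :db.cardinality/one
                     :db.install/_attribute :db.part/db}])
      {db' :db-after} (d/with db [{:db/id     (d/tempid :db.part/user)
                                   :user/name "alice"}])]
  (d/q '[:find ?n . :where [_ :user/name ?n]] db'))
```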
ah, i see vvvvalvalval has a recent article on this topic http://vvvvalvalval.github.io/posts/2016-01-03-architecture-datomic-branching-reality.html
@onetom, interesting article!
do I read the article correctly: you can use d/with
to do multiple "transactions" one after another where the second one builds on the first?
so you can basically completely emulate, or "fork", a connection
you totally can
we do this with great success
you do need to shepherd the temp ids from prior d/with calls to later ones if you mean to ultimately transact something for real
make tx -> d/with. query with the db, make another tx (now using ids that look like ones in storage but actually just came from d/with) -> d/with. repeat N times. finally commit the accumulated tx to storage, which includes all the intermediate txes together, after swapping the d/with "real" ids back out for tempids, so that the final tx has all the right real and temp ids.
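The chaining step described above might be sketched like this (not the speaker's actual code; just the bare reduce over d/with, with the final tempid-swapping commit step omitted):

```clojure
(require '[datomic.api :as d])

;; Thread :db-after through successive speculative transactions,
;; each tx building on the entities created by the previous ones.
(defn with-chain
  "Apply txes one after another with d/with. Returns the final
   d/with result map (:db-after, :tempids, :tx-data, ...)."
  [db txes]
  (reduce (fn [res tx] (d/with (:db-after res) tx))
          {:db-after db}
          txes))

;; To carry an id forward, resolve the tempid against a step's result:
;; (d/resolve-tempid (:db-after res) (:tempids res) my-tempid)
```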
i have code for this if anyone wants
we use it here: http://www.stuttaford.me/2016/01/15/how-cognician-uses-onyx/
interesting
multiple onyx tasks each doing their own work, but each building on the data of the previous one. only actually goes into storage at the end.
in my use case, I'm not planning to actually "really" commit anything
they each use d/with and return tx data
curious, why would you want to commit at the end?
using the onyx-datomic commit-bulk-tx plugin
you’ll see if you scan my post
@robert-stuttaford: will do, thanks
Datomic 0.9.5350 is now available https://groups.google.com/d/msg/datomic/TIGnE3Dtjgs/PEAWEQdcEgAJ
@bkamphaus: Can you elaborate on this bullet: * Improvement: connection caching behavior has been changed so that peers can now connect to the same database served by two (or more) different transactors.
@jgdavey: That bullet specifically deals with peers connecting to multiple databases that originated from the same call to create-database. I.e. if you have a staging database that is restored locally (dev) from a backup of a production database on some other storage, you can now launch a single JVM peer that can connect to both the staging and the production instance.
Just to make sure I’m understanding correctly: is the connection caching now based on URI and database id?
@jgdavey: aspects of the connection+storage config, but caching in that respect is just an implementation detail. The contract-level from this release forward is that two different transactors, one serving a database and the other a restored copy of that database in a different storage, can be reached from the same peer.
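To illustrate the contract (the URIs here are hypothetical; staging is assumed to be restored from a backup of production):

```clojure
(require '[datomic.api :as d])

;; One JVM peer, two transactors: production, and a restored copy of the
;; same database served from a different storage.
(def prod-conn    (d/connect "datomic:ddb://us-east-1/prod-storage/app"))
(def staging-conn (d/connect "datomic:dev://staging-host:4334/app"))
```

Prior to 0.9.5350, the second d/connect would have handed back the cached connection to the first database instead of reaching the second transactor.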
That makes sense. Whereas before, peers wouldn’t be able to simultaneously connect to a db and a restored copy of it on another transactor.
@jgdavey: before if you tried to establish a second connection it would stay connected to the first DB
yeah I'm happy that's getting fixed