This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2023-11-10
Hi guys, are there any resources on testing Datomic transactions and queries? Preferably ones that touch on best practices. I’m using the client api.
what kind of system behaviors do you want your tests to verify? ymmv but I very rarely use test doubles w/datomic. most of my testing needs are usually best served by a mem db w/peer; or a datomic system on storage (even if just file system/dev) with fake data.
since queries return data, queries are data, transactions are data, etc., a lot of the classic OO use of test doubles isn’t really needed. most of the time you’re in a domain where you’re just making sure you’re not throwing invalid data shapes into the system, which can be verified with specs or other structural tests. or you need to ensure that you hit a particular db state after a sequence of transactions, in which case you should use an in-mem or persistent db rather than implement one yourself in test objects.
if you find yourself reaching for test doubles because you’re transacting inside a lot of different functions, it may be better to refactor the app so you’re separating the concerns of generating and transforming data to be transacted from the logic of transacting and ensuring that transactions succeed. this way you can unify logic like e.g. annotating a tx with a uuid or other unique identifier to make strong guarantees about succeeding exactly once on retry.
it’s also very valuable at the repl to have a durable db in tests, so that when you’re debugging strange heisenbugs, you can simply comment out delete-db from teardown, then use query/the api to inspect the sequence of transactions and their contents.
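That separation of concerns could be sketched roughly like this (all names here, such as `build-user-txs`, `transact-with-id!`, and the `:tx/id` attribute, are hypothetical illustrations, not from the thread): pure functions only build tx-data, and a single boundary function transacts it, tagging each transaction with a unique id so a retry of an already-applied tx can be detected rather than double-applied.

```clojure
(ns example.tx
  (:require [datomic.client.api :as d]))

;; Pure: builds transaction data only. Easy to check with specs or
;; plain structural assertions; no database needed.
(defn build-user-txs [{:keys [email name]}]
  [{:user/id    (random-uuid)
    :user/email email
    :user/name  name}])

;; Side-effecting boundary: the only place that calls d/transact.
;; Annotates the reified transaction entity ("datomic.tx") with a
;; unique id; assuming :tx/id is a :db.unique/value attribute in the
;; schema, retrying the same tx-id a second time fails instead of
;; applying the data twice.
(defn transact-with-id! [conn tx-id tx-data]
  (d/transact conn {:tx-data (conj tx-data
                                   {:db/id "datomic.tx"
                                    :tx/id tx-id})}))
```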
i'm also curious what specific problem you are facing. we are also using the datomic client api, because our production system is Datomic Cloud. for testing we use https://docs.datomic.com/cloud/datomic-local.html#memdb. we have some convenience layer around initializing new databases with their schemas. then a test setup looks something like:
(deftest x-test
  (let [{:keys [datomic/xxx-db]} (fake-svc/mk)]
    ; GIVEN
    (-> [[{:some 'data}]
         [{:some/more 'data}
          [`in/another :transaction]]]
        (->> (run! (partial dc/tx! xxx-db))))
    (is
     ; THEN
     {:expectation met?}
     ; WHEN
     (-> [(some-tx-generating-fn {:with 'some} "params")]
         (->> (dc/tx! xxx-db))
         ; and
         ; 1. interrogate the :db-after via some datalog query
         ; 2. pull some entity from :db-after based on some temp ID
         ;    converted to an actual ID via :tempids
         ))))
where dc/tx! is something like:
(defn tx! [conn tx-data]
  (d/transact conn {:tx-data tx-data}))
alternatively, u can test multiple scenarios, which share a common setup (the GIVEN part), by not transacting within the is, but instead using d/with, like
; WHEN
(-> {:tx-data [(some-tx-generating-fn {:with 'some} "params")]}
    (->> (d/with (d/with-db xxx-db)))
    :db-after
    ...)
furthermore, it might be worth using actual https://docs.datomic.com/cloud/transactions/transaction-functions.html, instead of functions which just return transaction data, because they can also be used in the GIVEN section to set up a realistic database state concisely, like i showed in the [`in/another :transaction] case above.
then the transaction-(data)-under-test would just become
[[`some-tx-fn {:with 'some} "params"]]
of course u can directly unit test some-tx-fn too and assert what tx-data it generates, but to make sure that the generated tx-data is actually transactable, u want the other kind of tests above, which actually attempt to transact it too.
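For reference, a classpath transaction function in the client api is just a function whose first argument is the current database value and which returns tx-data (or throws to abort). A hypothetical sketch of what `some-tx-fn` could look like in that style (the attributes and the no-op-if-present logic are made up for illustration):

```clojure
(ns example.tx-fns
  (:require [datomic.client.api :as d]))

;; Invoked inside a transaction as [`some-tx-fn {:with 'some} "params"];
;; Datomic prepends the current db value as the first argument.
(defn some-tx-fn [db arg-map params]
  (let [v        (:with arg-map)
        existing (d/q '[:find ?e .
                        :in $ ?v
                        :where [?e :some/attr ?v]]
                      db v)]
    (if existing
      []                              ; value already present: no-op
      [{:some/attr v
        :some/note params}])))
```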
another trick u might want to utilize is to name your test entities. that simplifies both test setup and the querying of the resulting database value, because u don't have to deal with numerical entity IDs that much. eg, instead of
; GIVEN
(-> [[{:our.user/id (random-uuid)
       :our.user/email ""}]
     [{:some.entity/owner [:our.user/email ""]
       :some.entity/attr 'ibute}]]
    (->> (run! (partial dc/tx! xxx-db))))
u can just write something like
; GIVEN
(-> [[{:db/ident :u1
       :our.user/id (random-uuid)
       :our.user/email (str (gensym "user-") "@gmail.com")}]
     [{:some.entity/owner :u1
       :some.entity/attr 'ibute}]]
    (->> (run! (partial dc/tx! xxx-db))))
and even extract the random user attribute generation into some helper function
OR use some clojure.spec or malli data generation lib
OR maybe the details of the user entity are not even important, so u can just use {:db/ident :u1}
instead.
making a new database for every test case takes less than 10 milliseconds on an Apple M2 Pro machine, so that shouldn't really be your concern.
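Such an extracted helper could look like this (hypothetical; `test-user` is not from the thread, and the email domain is arbitrary):

```clojure
;; Hypothetical helper: a named test user with randomized attributes,
;; overridable per test case.
(defn test-user
  ([ident] (test-user ident {}))
  ([ident overrides]
   (merge {:db/ident       ident
           :our.user/id    (random-uuid)
           :our.user/email (str (gensym "user-") "@example.com")}
          overrides)))

;; GIVEN
;; (dc/tx! xxx-db [(test-user :u1)])
```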
when u r unit testing just transaction functions or queries, then it's also possible to use a transduction to speculatively apply your GIVEN scenario onto a common db, which already has the schema transacted, but that's added complexity.
regardless, as a reference, here is an example of how such a function looks in our system:
(defn with-txs
  "Speculatively apply the `txs` transactions to a `dc|txr|with-db`.
   Transform the last transaction result with `completing-fn`, if specified.
   As its name suggests, `dc|txr|with-db` can be:
   1. `dc` - Datomic component, with a `:branch` key, containing a `d/with-db`
      or an `rmap/rval`, which would evaluate to a with-db.
   2. `txr` - A \"transaction result\" map, as returned by `d/with`, allowing
      the chaining of `with-txs` calls.
   3. `with-db` - a with-db, directly."
  ([dc|txr|with-db* txs*] (with-txs dc|txr|with-db* txs* identity))
  ([dc|txr|with-db txs completing-fn]
   (transduce
    (map txm)
    (fn reduce-tx
      ([] {:db-after (or (-> dc|txr|with-db (rmap/get! :branch))
                         (-> dc|txr|with-db :db-after)
                         dc|txr|with-db)})
      ([txr] (completing-fn txr))
      ([txr tx] (-> txr :db-after (d/with tx))))
    txs)))
dc is short for "datomic component"; it's a concept specific to our system.
txr is short for "transaction result" or "receipt".
with-db just means the object returned by d/with-db, which is the same kind of object u get from the :db-after of a d/with call.
so there is some monad lurking in there, where the monadic values are {:db-after <with-db>}, but i haven't felt the need to be that abstract about this problem.
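A hypothetical usage of with-txs, chaining per case 2 of its docstring (`base-with-db` and the attributes are made up; it is assumed the schema is already in the base with-db and that `txm` wraps each tx into a `{:tx-data …}` arg-map):

```clojure
;; Layer the GIVEN txs speculatively onto a base with-db, then apply
;; the tx-under-test and extract the final speculative db value.
(-> base-with-db
    (with-txs [[{:our.user/email "a@example.com"}]])          ; GIVEN
    (with-txs [[{:some.entity/owner [:our.user/email "a@example.com"]
                 :some.entity/attr  'ibute}]]                 ; WHEN
              :db-after))                                     ; THEN
```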
Thanks a lot @U06GLTD17 @U086D6TBN! Your responses have been very helpful. I started working with Datomic recently and wasn’t sure how to go about testing: how to set up/tear down, best practices, etc.
we don't even tear down the dbs the tests create, because we usually restart our REPLs about daily, so there's little chance to accumulate too much db garbage.
i used https://github.com/vvvvalvalval/datomock in the past which is really nice, but it does not support the client api
(usual caveats apply: there are always exceptions and outliers, there are no perfect solutions only tradeoffs, etc etc)