
Using the client API and a query like this: `(d/q '[:find (pull ?e [*]) :in $ [?vals ...] :where [?e :myns/myattr ?vals]] db vals)` it seems the Datomic peer-server won't return more than 1000 results; there appears to be some kind of cut-off, because when I provide 2000 items in the vals collection, I get no more than 1000 results (I'm sure it should be 2000). Can I configure this cut-off point somewhere?


aha 🙂, it's in the docs

:chunk - Optional. Maximum number of results that will be returned
  for each chunk, up to 10000. Defaults to 1000.
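A sketch of raising that cap via the client API's arg-map form of q (assumes a client-API connection `conn`; `:myns/myattr` and `vals` are from the question above):

```clojure
(require '[datomic.client.api :as d])

(d/q {:query '[:find (pull ?e [*])
               :in $ [?vals ...]
               :where [?e :myns/myattr ?vals]]
      :args  [(d/db conn) vals]
      :chunk 10000})  ; raise the per-chunk result count from the default 1000
```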


In Datomic, can I transact a schema with a one-to-many relationship by using a :db.unique/identity value?


[{:db/ident :parent/children
  :db/valueType :db.type/ref
  :db/cardinality :db.cardinality/many}
 {:db/ident :child/id
  :db/valueType :db.type/keyword
  :db/cardinality :db.cardinality/one
  :db/unique :db.unique/identity}]

[{:parent/children [:child-key-1 :child-key-2]}]


I would say yes, but it's also really easy to try it out


Something like that


I needed to do it like this:

[{:parent/children [{:db/id [:child/id :child-key-1]} ...]}]
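Putting the pieces together, a sketch of the full transaction using lookup refs against the schema above (assumes the client API and that the :child/id entities exist or are asserted in the same transaction):

```clojure
;; Lookup refs like [:child/id :child-key-1] resolve because :child/id
;; was declared :db.unique/identity in the schema.
(d/transact conn
  {:tx-data [{:child/id :child-key-1}
             {:child/id :child-key-2}
             {:parent/children [{:db/id [:child/id :child-key-1]}
                                {:db/id [:child/id :child-key-2]}]}]})
```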


Can I trust my entity ids to be the same every time for my tests using an in memory db? They seem to be


I would definitely not rely on that, especially if stuff starts getting concurrent


Also, this seems super fragile - adding some new data in your test db could break a lot of subsequent tests


Ahh cheers. I can also see at the bottom of this page they don’t recommend it either


I wouldn't rely on that across instantiations.


I need to cascade data down my environments, from production to staging and from staging to developer local databases. I can use Datomic's backup and restore for that, has anyone tried this? I'd really like to be able to spin up a new environment for each pull request and populate the db from the latest backup, but I'm worried it would take too long.


@conan Maybe you could have each environment use an in memory fork of a common staging database


How can I create that fork?


I like the sound of in-memory, but you can’t restore a backup to a mem transactor


@conan you can't restore to in-memory, but you can do something even better: use Datomock. Just stick in a (datomock.api/fork-conn my-shared-staging-conn) and you're good to go.


The fact that you can do that is one of the unsung superpowers of Datomic - it's based on the datomic.api/with API.


OK this looks really great - so I could set up my database component to connect to a staging database, with a switch that in review instances forks the connection so no changes are actually written to the db? This would then allow me to test schema changes, run test suites and everything without acutally modifying the staging db. Then I can just have a cron job that restores the staging db from production every night or so. If that works then it completely justifies the choice of using datomic.
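A sketch of that switch, using the fork-conn call from the messages above (the config keys and function name here are hypothetical):

```clojure
(require '[datomic.api :as d]
         '[datomock.api :as dm])

(defn make-conn
  "Connect to the shared staging db; in review instances, hand back an
  in-memory fork so no writes ever reach staging."
  [{:keys [uri review-instance?]}]
  (let [conn (d/connect uri)]
    (if review-instance?
      (dm/fork-conn conn)   ; forked: transactions stay local to this process
      conn)))               ; real connection for everything else
```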


> If that works then it completely justifies the choice of using datomic.

It does, doesn't it? 🙂

Vincent Cantin 15:01:02

It totally does


@conan i believe datomic does incremental restores, so if you ran into perf issues maybe you could do a copy at the storage level and freshen from there.. just an idea


We routinely copy prod data to staging using backup/restore. It's a mild pain in the butt to do all the transactor / peer restarts, but it's perfectly doable.


We also wrote some tools to do the same for locally-running transactors on dev boxes and pull in S3 backups from other environments


It turned out to be easiest to do this with the transactor in a docker container since it's way easier to bring it up/down that way

Drew Verlee 21:01:53

With Datalog, can you return the query results as maps with the attributes as keys? I'm actually asking for DataScript, but it seems reasonable it would work the same way in both, right?


@drewverlee (d/q '[:find (pull ?e [*]) :where [?e :user/name]] db)
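For example, with DataScript (the entity id and result shape shown are illustrative; pull [*] returns each entity as a map of its attributes):

```clojure
(require '[datascript.core :as d])

(def conn (d/create-conn {}))
(d/transact! conn [{:user/name "Ada"}])

(d/q '[:find (pull ?e [*])
       :where [?e :user/name]]
     @conn)
;; => a set of one-element tuples, each holding a map
;;    like {:db/id 1, :user/name "Ada"}
```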

Drew Verlee 22:01:24

Thanks! That's perfect.