#datomic
2018-01-12
hansw11:01:41

Using the client-api and a query like this: `(q '[:find (pull ?e [*]) :in $ [?vals ...] :where [?e :myns/myattr ?vals]] db vals)`, it seems to me that the datomic peer-server won't return more than 1000 results, i.e. there seems to be some kind of cut-off: when I provide 2000 items in the vals collection, I get no more than 1000 results (I'm sure it should be 2000). Can I configure this cut-off point somewhere?

hansw11:01:28

aha 🙂, it's in the docs

:chunk - Optional. Maximum number of results that will be returned
  for each chunk, up to 10000. Defaults to 1000.
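
For reference, a minimal sketch of passing :chunk via q's arg-map form, assuming the docstring above applies to datomic.client.api/q and that db and vals are bound as in the question:

(require '[datomic.client.api :as d])

;; db and vals are assumed to be bound as in the question above.
;; :chunk per the docstring quoted above (an assumption that it is
;; accepted in the arg-map here).
(d/q {:query '[:find (pull ?e [*])
               :in $ [?vals ...]
               :where [?e :myns/myattr ?vals]]
      :args  [db vals]
      :chunk 10000})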

caleb.macdonaldblack12:01:59

In Datomic, can I transact data into a one-to-many (cardinality-many ref) relationship by referring to the children via a :db.unique/identity value?

caleb.macdonaldblack12:01:03

[{:db/ident :parent/children
  :db/valueType :db.type/ref
  :db/cardinality :db.cardinality/many}
 {:db/ident :child/id
  :db/valueType :db.type/keyword
  :db/cardinality :db.cardinality/one
  :db/unique :db.unique/identity}]

[{:parent/children [:child-key-1 :child-key-2]}]

conan15:01:21

I would say yes, but it's also really easy to try it out

caleb.macdonaldblack12:01:24

Something like that

caleb.macdonaldblack12:01:33

I needed to do it like this:

[{:parent/children [{:db/id [:child/id :child-key-1]} ...]}]
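
Putting the two snippets together, a minimal end-to-end sketch with the Peer API (the in-memory db name and keyword values are made up; a bare lookup-ref alternative is noted in a comment):

(require '[datomic.api :as d])

(def uri "datomic:mem://children-demo") ; made-up in-memory db
(d/create-database uri)
(def conn (d/connect uri))

;; Schema from the earlier message, with :db.type/ref spelled out.
@(d/transact conn
   [{:db/ident       :parent/children
     :db/valueType   :db.type/ref
     :db/cardinality :db.cardinality/many}
    {:db/ident       :child/id
     :db/valueType   :db.type/keyword
     :db/cardinality :db.cardinality/one
     :db/unique      :db.unique/identity}])

;; The children must already exist for the lookup refs below to resolve.
@(d/transact conn [{:child/id :child-key-1}
                   {:child/id :child-key-2}])

;; Transaction in the form described above; bare lookup refs such as
;; [:child/id :child-key-1] are also accepted as ref values.
@(d/transact conn
   [{:parent/children [{:db/id [:child/id :child-key-1]}
                       {:db/id [:child/id :child-key-2]}]}])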

caleb.macdonaldblack14:01:53

Can I trust my entity ids to be the same every time for my tests using an in-memory db? They seem to be.

val_waeselynck14:01:35

I would definitely not rely on that, especially if stuff starts getting concurrent

val_waeselynck14:01:36

Also, this seems super fragile - adding some new data in your test db could break a lot of subsequent tests

caleb.macdonaldblack00:01:17

Ahh, cheers. I can also see at the bottom of this page http://docs.datomic.com/entities.html that they don't recommend it either.

hansw14:01:50

I wouldn't rely on that across instantiations.
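
One way to keep tests independent of concrete entity ids, sketched here by reusing conn and the :child/id attribute from the example above (everything else is made up): resolve ids at runtime rather than hard-coding them.

(require '[datomic.api :as d])

;; conn and the :child/id attribute come from the sketch above.
;; Resolve the entity id at runtime (here via a lookup ref) instead of
;; assuming a literal eid in the test.
(let [{:keys [db-after]} @(d/transact conn [{:child/id :child-key-3}])
      eid (d/entid db-after [:child/id :child-key-3])]
  (d/pull db-after '[*] eid))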

conan15:01:23

I need to cascade data down my environments, from production to staging and from staging to developer local databases. I can use Datomic's backup and restore for that, has anyone tried this? I'd really like to be able to spin up a new environment for each pull request and populate the db from the latest backup, but I'm worried it would take too long.

val_waeselynck18:01:34

@conan Maybe you could have each environment use an in-memory fork of a common staging database.

conan10:01:02

How can I create that fork?

conan10:01:01

I like the sound of in-memory, but you can’t restore a backup to a mem transactor

val_waeselynck11:01:28

@conan you can't restore to in-memory, but you can do something even better. Just use Datomock: https://github.com/vvvvalvalval/datomock . Stick in a `(datomock.api/fork-conn my-shared-staging-conn)` and you're good to go.

val_waeselynck11:01:49

The fact that you can do that is one of the unsung superpowers of Datomic - it's based on the `datomic.api/with` API.
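
To illustrate the primitive involved, a minimal sketch with the Peer API: `d/with` applies tx-data against a database value and returns the speculative result without writing to the transactor or storage (my-shared-staging-conn as in the messages above; the attribute is made up):

(require '[datomic.api :as d])

(let [db (d/db my-shared-staging-conn)
      ;; Speculative transaction: nothing is written anywhere.
      {db-after :db-after} (d/with db [{:db/ident       :demo/note
                                        :db/valueType   :db.type/string
                                        :db/cardinality :db.cardinality/one}])]
  ;; Reads against db-after see the speculative changes.
  (d/q '[:find ?e . :where [?e :db/ident :demo/note]] db-after))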

conan14:01:17

OK, this looks really great - so I could set up my database component to connect to a staging database, with a switch that, in review instances, forks the connection so no changes are actually written to the db? This would then allow me to test schema changes, run test suites and everything without actually modifying the staging db. Then I can just have a cron job that restores the staging db from production every night or so. If that works then it completely justifies the choice of using datomic.
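
A rough sketch of that switch (all names here are made up; the fork-conn call is as quoted earlier in the thread, and the Datomock README is the authority on the exact namespace):

(require '[datomic.api :as d]
         '[datomock.api :as dm]) ; namespace as written in the message above

;; Hypothetical component constructor: review instances get a forked
;; connection, so nothing is ever written to the shared staging db.
(defn start-conn [{:keys [staging-uri review-instance?]}]
  (let [conn (d/connect staging-uri)]
    (if review-instance?
      (dm/fork-conn conn)
      conn)))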

val_waeselynck15:01:58

> If that works then it completely justifies the choice of using datomic.
It does, doesn't it? 🙂

Vincent Cantin15:01:02

It totally does the job.

spieden18:01:15

@conan I believe Datomic does incremental restores, so if you ran into perf issues maybe you could do a copy at the storage level and freshen from there... just an idea

timgilbert18:01:57

We routinely copy prod data to staging using backup/restore. It's a mild pain in the butt to do all the transactor / peer restarts, but it's perfectly doable.

timgilbert18:01:01

We also wrote some tools to do the same for locally-running transactors on dev boxes, pulling in S3 backups from other environments.
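
For reference, the CLI this workflow leans on, wrapped in a small Clojure sketch (URIs and bucket names are placeholders; pointing backup-db at the same backup URI makes subsequent backups incremental):

(require '[clojure.java.shell :as sh])

;; Run from the Datomic distribution directory (where bin/datomic lives).
(sh/sh "bin/datomic" "backup-db"
       "datomic:ddb://us-east-1/prod-table/app" "s3://my-backups/app")

(sh/sh "bin/datomic" "restore-db"
       "s3://my-backups/app" "datomic:dev://localhost:4334/app")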

timgilbert18:01:48

It turned out to be easiest to do this with the transactor in a Docker container, since it's way easier to bring it up/down that way.

Drew Verlee21:01:53

With datalog, can you return the query results as maps with the attributes as keys? I'm actually asking for DataScript, but it seems reasonable it would work the same way in both, right?

souenzzo21:01:26

@drewverlee `(d/q '[:find (pull ?e [*]) :where [?e :user/name]] db)`

Drew Verlee22:01:24

Thanks! That's perfect.
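
Since the original question was about DataScript, the same pattern there as a small self-contained sketch (the :user/name value is made up):

(require '[datascript.core :as ds])

;; No schema entry is needed for :user/name in DataScript here.
(def db
  (ds/db-with (ds/empty-db)
              [{:db/id -1 :user/name "Ada"}]))

;; pull in the :find clause returns each matching entity as a map.
(ds/q '[:find (pull ?e [*]) :where [?e :user/name]] db)
;; => #{[{:db/id 1, :user/name "Ada"}]}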