This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2018-01-12
Channels
- # arachne (1)
- # aws (2)
- # beginners (123)
- # boot (22)
- # boot-dev (8)
- # chestnut (3)
- # cider (38)
- # clara (36)
- # cljs-dev (148)
- # clojars (2)
- # clojure (76)
- # clojure-austin (2)
- # clojure-greece (1)
- # clojure-italy (6)
- # clojure-russia (5)
- # clojure-spec (8)
- # clojure-uk (65)
- # clojurescript (45)
- # core-async (38)
- # cursive (9)
- # data-science (5)
- # datomic (28)
- # docs (1)
- # emacs (2)
- # fulcro (34)
- # hoplon (18)
- # jobs-discuss (7)
- # keechma (8)
- # lumo (5)
- # om (3)
- # onyx (31)
- # parinfer (1)
- # pedestal (1)
- # re-frame (20)
- # reagent (5)
- # ring-swagger (16)
- # shadow-cljs (56)
- # spacemacs (11)
- # specter (8)
- # sql (5)
- # unrepl (29)
- # yada (6)
Using the client-api and a query like this:
```
(q '[:find (pull ?e [*])
     :in $ [?vals ...]
     :where
     [?e :myns/myattr ?vals]]
   db vals)
```
It seems to me that the datomic peer-server won't return more than 1000
results, i.e. there seems to be some kind of cut-off, because when I provide 2000 items in the vals
collection, I get no more than 1000 results (I'm sure it should be 2000).
Can I configure this cut-off point somewhere?
aha 🙂, it's in the docs
:chunk - Optional. Maximum number of results that will be returned
for each chunk, up to 10000. Defaults to 1000.
In datomic can I transact a schema with a one-to-many relationship by using a :unique/identity value?
```
[{:db/ident :parent/children
  :db/valueType :db.type/ref
  :db/cardinality :db.cardinality/many}
 {:db/ident :child/id
  :db/valueType :db.type/keyword
  :db/cardinality :db.cardinality/one
  :db/unique :db.unique/identity}]

[{:parent/children [:child-key-1 :child-key-2]}]
```
Something like that
I needed to do it like this:
`[{:parent/children [{:db/id [:child/id :child-key-1]} ...]}]`
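A fuller sketch of how that might fit together (hedged and untested; assumes the Datomic peer API with an in-memory database, and illustrative `:child-key-*` values). Lookup refs resolve against the database value before the transaction, so the children have to exist before the parent can reference them by `[:child/id ...]`:

```clojure
(require '[datomic.api :as d])

(def uri "datomic:mem://example")
(d/create-database uri)
(def conn (d/connect uri))

;; schema: a cardinality-many ref on the parent, a unique identity on the child
@(d/transact conn
   [{:db/ident :parent/children
     :db/valueType :db.type/ref
     :db/cardinality :db.cardinality/many}
    {:db/ident :child/id
     :db/valueType :db.type/keyword
     :db/cardinality :db.cardinality/one
     :db/unique :db.unique/identity}])

;; transact the children first so the lookup refs can resolve...
@(d/transact conn [{:child/id :child-key-1}
                   {:child/id :child-key-2}])

;; ...then point the parent at them via lookup refs
@(d/transact conn [{:parent/children [[:child/id :child-key-1]
                                      [:child/id :child-key-2]]}])
```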
Can I trust my entity ids to be the same every time for my tests using an in memory db? They seem to be
I would definitely not rely on that, especially if stuff starts getting concurrent
Also, this seems super fragile - adding some new data in your test db could break a lot of subsequent tests
Ahh cheers. I can also see at the bottom of this page http://docs.datomic.com/entities.html they don’t recommend it either
I need to cascade data down my environments, from production to staging and from staging to developer local databases. I can use Datomic's backup and restore for that, has anyone tried this? I'd really like to be able to spin up a new environment for each pull request and populate the db from the latest backup, but I'm worried it would take too long.
@conan Maybe you could have each environment use an in memory fork of a common staging database
@conan you can't restore to in-memory, but you can do something even better. Just use Datomock: https://github.com/vvvvalvalval/datomock . Just stick a (datomock.api/fork-conn my-shared-staging-conn)
and you're good to go.
The fact that you can do that is one of the unsung superpowers of Datomic - it's based on the datomic.api/with
API.
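A minimal sketch of the forking idea (hedged; assumes the Datomock library linked above and a reachable peer connection, with a placeholder storage URI). Reads through the forked connection see all of staging's data, while writes land only in the fork:

```clojure
(require '[datomic.api :as d]
         '[datomock.api :as dm])

;; placeholder URI for the shared staging database
(def staging-conn (d/connect "datomic:dev://localhost:4334/staging"))

;; fork: same history as staging, but divergent from here on
(def review-conn (dm/fork-conn staging-conn))

;; this transaction is visible via review-conn only, never written to staging
@(d/transact review-conn [{:db/doc "review-instance-only datom"}])
```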
OK this looks really great - so I could set up my database component to connect to a staging database, with a switch that in review instances forks the connection so no changes are actually written to the db? This would then allow me to test schema changes, run test suites and everything without actually modifying the staging db. Then I can just have a cron job that restores the staging db from production every night or so. If that works then it completely justifies the choice of using datomic.
> If that works then it completely justifies the choice of using datomic. It does, doesn't it? 🙂
It totally does the trick
@conan i believe datomic does incremental restores, so if you ran into perf issues maybe you could do a copy at the storage level and freshen from there.. just an idea
We routinely copy prod data to staging using backup/restore. It's a mild pain in the butt to do all the transactor / peer restarts, but it's perfectly doable.
We also wrote some tools to do the same for locally-running transactors on dev boxes and pull in S3 backups from other environments
It turned out to be easiest to do this with the transactor in a docker container since it's way easier to bring it up/down that way
With datalog, can you return the query results as maps with the attributes as keys? I’m actually asking for datascript, but it seems reasonable it would work the same way in both right?
@drewverlee (d/q '[:find (pull ?e [*]) :where [?e :user/name]] db)
Thanks! That's perfect.
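For reference, a self-contained sketch of that pattern in DataScript (hedged; assumes the `datascript.core` namespace and illustrative `:user/*` attributes):

```clojure
(require '[datascript.core :as d])

(def db
  (-> (d/empty-db)
      (d/db-with [{:user/name "Ada" :user/email "ada@example.com"}])))

;; (pull ?e [*]) puts a full entity map into each result tuple,
;; so attributes come back as map keys rather than positional values.
(d/q '[:find (pull ?e [*]) :where [?e :user/name]] db)
;; returns a set of one-element tuples, each holding an entity map
;; like {:db/id 1, :user/name "Ada", :user/email "ada@example.com"}
```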