Can I create a Datomic database with a past, fixed basis rather than just "now"?
Use case: I create a Datomic database once per unit test. A given unit test must be able to create an entity with a txInstant in the past (~10 minutes ago).
But if the database was created just now, I won't be able to add such an entity:
`Time conflict: Tue Apr 23 05:31:11 CEST 2019 is older than database basis`
If you are getting this error, then you already transacted something without overriding its txInstant into the past, so the database basis is "now". Maybe your schema transaction?
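A minimal sketch of the backdating idea, assuming the Datomic peer API with an in-memory database; the URI and the `:note/text` attribute are hypothetical. The transaction entity itself is addressed with a tempid in the tx partition so `:db/txInstant` can be asserted explicitly. Each txInstant must be newer than the latest one already in the database, so the earliest transactions (e.g. schema) have to be backdated first:

```clojure
(require '[datomic.api :as d])

(def uri "datomic:mem://txinstant-demo")   ; hypothetical URI
(d/create-database uri)
(def conn (d/connect uri))

;; A wall-clock instant ten minutes in the past.
(def ten-minutes-ago
  (java.util.Date. (- (System/currentTimeMillis) (* 10 60 1000))))

;; Backdate the schema transaction by reifying the transaction entity
;; and asserting :db/txInstant on it alongside the schema datoms.
@(d/transact conn
   [{:db/id        (d/tempid :db.part/tx)
     :db/txInstant ten-minutes-ago}
    {:db/ident       :note/text           ; hypothetical attribute
     :db/valueType   :db.type/string
     :db/cardinality :db.cardinality/one}])
```

After this, later test transactions can use any txInstant between `ten-minutes-ago` and now, which gives the test a fixed, past basis to build on.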
I’m writing a query to support pagination for a web client. The q API supports :limit, which seems perfect, but I’m having trouble ensuring a sort order for the results. How do people handle pagination for web apps when using Datomic?
I think limit+offset is only somewhat reliable when reusing the same db (anchored to a specific t) in fairly rapid succession
if your result is a set, you should expect a repeatable order if the results are the same
(the repeatable order is due to hashing order, so additional caveat that the result items are hashable)
That’s about where I landed as well. Here I’ve been worried about memory usage, but that’s probably unnecessary. Thanks!
even in on-prem, going `:find (pull ?x [*]) .` to get only one result will pull everything, then take the first item
so try to return as little as you can manage from your "big" query and followup with additional queries or pulls later, using that result as input
along the same lines, if you have the kind of query which has a large known set based on a raw datomic index, you can use d/datoms to select a subset, then feed that as input to your query
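A sketch of the d/datoms approach described above, assuming the peer API and a hypothetical `:post/published-at` attribute with `:db/index true` (so it appears in AVET). We walk the raw index, which is already sorted by value, take one page of entity ids, and feed those ids into the query as a collection input:

```clojure
(require '[datomic.api :as d])

(defn page-of-eids
  "Entity ids for one page, straight off the AVET index (sorted by value)."
  [db attr offset limit]
  (->> (d/datoms db :avet attr)
       (drop offset)
       (take limit)
       (map :e)))

(defn page-of-posts
  "Run the 'real' query only over the pre-selected page of entities."
  [db offset limit]
  (d/q '[:find ?e ?title
         :in $ [?e ...]
         :where [?e :post/title ?title]]
       db
       (page-of-eids db :post/published-at offset limit)))
```

The index walk is lazy, so only `offset + limit` datoms are realized rather than the full result set.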
Yeah, good idea. We’re doing a pull of eids and txInstant, sorting, then dropping and taking depending on the pagination, and only then querying for the full data for just the page’s worth of entities.
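A sketch of that two-step pattern, assuming the peer API; the `:post/title` attribute is hypothetical. The "big" query returns only eid and txInstant pairs, the page is selected in memory with a stable sort, and full entities are pulled only for the page:

```clojure
(require '[datomic.api :as d])

(defn page
  "One page of fully-pulled entities, ordered by transaction instant."
  [db offset limit]
  (let [rows (d/q '[:find ?e ?inst
                    :where
                    [?e :post/title _ ?tx]        ; bind the asserting tx
                    [?tx :db/txInstant ?inst]]
                  db)
        eids (->> rows
                  (sort-by second)                 ; stable order by txInstant
                  (drop offset)
                  (take limit)
                  (map first))]
    ;; Pull full data only for the page's entities.
    (map #(d/pull db '[*] %) eids)))
```

Sorting happens on the small eid/instant pairs, so memory stays proportional to the result count, not to the size of the pulled entities.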
this only works for query shapes where each item in the first "clause" of the query is merely filtered by subsequent clauses, not expanded by joins
(the query doesn't see the entire result so it can't do the deduping normally done by the result set)
That makes sense — the raw indices don’t seem to fit just right in our use case, but it’s a good idea.