This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2020-04-15
Channels
- # announcements (3)
- # architecture (1)
- # babashka (52)
- # beginners (228)
- # calva (1)
- # chlorine-clover (31)
- # cider (9)
- # clj-kondo (16)
- # cljs-dev (25)
- # cljsrn (21)
- # clojure (116)
- # clojure-argentina (8)
- # clojure-europe (18)
- # clojure-france (17)
- # clojure-germany (1)
- # clojure-nl (5)
- # clojure-spec (49)
- # clojure-uk (63)
- # clojurescript (59)
- # community-development (14)
- # conjure (89)
- # core-matrix (1)
- # cursive (18)
- # data-science (1)
- # datomic (27)
- # exercism (4)
- # figwheel-main (5)
- # fulcro (38)
- # ghostwheel (8)
- # graalvm (5)
- # hoplon (2)
- # jobs-discuss (17)
- # juxt (1)
- # lambdaisland (5)
- # luminus (1)
- # lumo (9)
- # malli (7)
- # off-topic (32)
- # planck (24)
- # re-frame (14)
- # reagent (14)
- # reitit (14)
- # rum (23)
- # shadow-cljs (80)
- # spacemacs (2)
- # sql (6)
- # unrepl (1)
- # xtdb (2)
hello. I am running the following query to get changes that happened over the last few minutes:
(d/q '[:find [?e ...]
:in $ ?log ?from-t ?to-t
:where
[(tx-ids ?log ?from-t ?to-t) [?tx ...]]
[(tx-data ?log ?tx) [[?e]]]
[?e :entity/status]]
(d/as-of db timestamp-to)
log
timestamp-from
timestamp-to)
Recently I've noticed that if I rerun the same query after some time, I sometimes get more items returned.
Is it possible that on the original run I am getting a stale view from the log, since I am reusing the same log instance for the whole batch? Or is there some other reason?
Hi Ignas, that's surprising. Is it possible that your timestamp-to
is in the future the first time it runs, and the second run picks up more transactions?
The peer and the transactor might not be in sync. I would look at the extra items you get in your second query and see when they were transacted. Otherwise I don't see what could have happened, sorry 🙂
That is not a guarantee that the db contains tx info up to and including timestamp. Due to physics, the information may not have arrived yet
IOW you should either derive “now” from the datomic peer’s clock (whatever the latest t is that it knows about) or use d/sync to make sure the peer caught up to a wall-clock time you want to sync to
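A minimal sketch of the two options described above, assuming an on-prem peer connection `conn` via `datomic.api` (the name `t-from-elsewhere` is illustrative, standing for a t obtained from some coordinating process):

```clojure
(require '[datomic.api :as d])

;; Option A: derive "now" from the peer itself -- the basis-t of the
;; latest db value it already holds. No wall-clock coordination needed.
(let [db (d/db conn)
      t  (d/basis-t db)]
  ;; t is the newest transaction this peer knows about; use it as the
  ;; upper bound of the window instead of a wall-clock timestamp.
  t)

;; Option B: if another process hands you a t, block until this peer
;; has caught up to it before querying.
@(d/sync conn t-from-elsewhere)  ;; derefs to a db with basis-t >= t-from-elsewhere
```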
Oh! It's the as-of
that might not see everything. Thanks @U09R86PA4.
Was it correct that he could also have missed txs if the peer's clock was ahead of the transactor's?
Dbs have an inherent basis-t, which is the newest t they know about. As-of is a filter on top of that, but doesn’t alter the basis
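The point about as-of being a filter can be seen directly in the API (sketch only, assuming `datomic.api` as `d` and an existing connection `conn`):

```clojure
;; as-of narrows what a query can *see*, but the db value's basis-t --
;; the newest t the peer knows about -- is unchanged.
(let [db      (d/db conn)
      past-db (d/as-of db #inst "2020-04-01")]
  [(d/basis-t past-db)   ;; same basis as db: as-of did not alter it
   (d/as-of-t past-db)]) ;; the filter point, reported separately
```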
as I don't have a t
to sync to, would just using (d/db conn)
instead of the whole as-of
work here:
(d/q '[:find [?e ...]
:in $ ?log ?from-t ?to-t
:where
[(tx-ids ?log ?from-t ?to-t) [?tx ...]]
[(tx-data ?log ?tx) [[?e]]]
[?e :entity/status]]
(d/db conn)
log
timestamp-from
timestamp-to)
If you are coordinating between peers or with some other wallclock system you need sync. If you aren’t coordinating and just want the latest thing that peer can see, use the basis t of any db value as the “latest”
I am not sure which you are doing though. The use of timestamp arguments is suspicious to me and suggests maybe you are coordinating with a clock somewhere
I have a service that exports changes to another database. It uses a sliding window with overlap so that it can catch up if there is a delay, or so we can replay from a certain point in time. This is where the wall clock comes in: we use it to log which periods were exported and to determine what to export next. Ideally I just want to get everything that happened in the last few minutes (and the window gets bigger if it's catching up).
So it seems that it might have been a design flaw to choose timestamps as a way to track the export windows
On the other hand, it gives us more visibility into what data has already been exported, e.g. being able to easily say that data is currently served with a 5-hour delay.
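Following the advice above, the export loop could be keyed by t instead of wall-clock timestamps. A hypothetical sketch, assuming `datomic.api` as `d`; `export!` and `last-exported-t` are illustrative names, not part of any API:

```clojure
(defn export-window!
  "Export entities touched since last-exported-t, returning the t to
  persist as the next window's starting point."
  [conn log last-exported-t]
  (let [db   (d/db conn)
        to-t (d/basis-t db)  ;; the peer's own notion of \"now\"
        es   (d/q '[:find [?e ...]
                    :in $ ?log ?from-t ?to-t
                    :where
                    [(tx-ids ?log ?from-t ?to-t) [?tx ...]]
                    [(tx-data ?log ?tx) [[?e]]]
                    [?e :entity/status]]
                  db log last-exported-t to-t)]
    (export! es)  ;; hypothetical sink writing to the other database
    ;; tx-ids treats the range as start-inclusive / end-exclusive, so
    ;; reusing to-t as the next from-t leaves no gap and no overlap.
    to-t))
```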
Is there something like Datomic Console for Datomic Cloud, or a way to use it with Cloud?
@auroraminor Use REBL