2018-05-15
Is there a way to mark a var as final in the same way you can mark a Java variable as final, to prevent it from being re-bound to another value?
Sorry I’m in the wrong channel! Ignore me
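(Editorial sketch, not from the thread: the closest Clojure analogue I know of is ^:const, which only applies to vars holding compile-time constants and inlines the value at call sites rather than truly forbidding a re-def.)

;; minimal sketch, assuming the value is a compile-time constant
(def ^:const max-retries 3)

(defn retry-budget []
  ;; max-retries is inlined here when retry-budget is compiled,
  ;; so a later (def max-retries 5) would not change this function
  (* 2 max-retries))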
how do I use :db-after in a query? The documentation https://docs.datomic.com/cloud/transactions/transaction-processing.html#results says "Both :db-before and :db-after are database values that can be passed to the various query APIs", but :db-after returned from a transaction is an ArrayMap and I can't find a way to use that map to create a datomic.client.impl.shared.Db for use in (d/q)
an answer to @octo221’s question might help me solve the problem of keeping two datomic cloud clients in sync https://stackoverflow.com/questions/50347307/how-do-i-keep-two-datomic-cloud-clients-in-sync
@octo221 @joshkh it looks as though you use (d/as-of db t) where t is (get-in transaction-result [:db-after :t])
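A minimal sketch of that suggestion, assuming the Datomic Cloud client API and the :person/first-name attribute used later in this thread (note that the rest of the thread finds as-of alone does not advance a stale db in another process):

(require '[datomic.client.api :as d])

;; conn is assumed to be an existing client connection
(let [tx-result (d/transact conn {:tx-data [{:person/first-name "Alice"}]})
      t         (get-in tx-result [:db-after :t])  ; basis t after the transaction
      db        (d/as-of (d/db conn) t)]           ; db value pinned to that t
  (d/q '[:find ?e
         :in $ ?person-name
         :where [?e :person/first-name ?person-name]]
       db "Alice"))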
as-of doesn't get a "future" db, just a past db (as-of is a filter) https://docs.datomic.com/on-prem/filters.html#as-of-not-branch
hmm, agreed. using as-of with a future value still doesn't "catch up" the db to the future.
unfortunately it's a no go. if someone from the Cognitect team finds this, any advice would be very much appreciated! i've been trying to work it out for a few days and running queries twice is a nasty hack.
@joshkh client does not currently have the feature to ‘get the latest db available’; that would be a good suggestion for a feature request in our feature portal. currently, passing a t value is the correct method to ensure multiple clients share a basis
thanks, @marshall. does passing a future t value to the client's d/as-of (as @octo221 suggested) update the database value? i ran a test and unless i'm mistaken it suggests not.
i'm still out of the datomic nomenclature loop! if repl 1 transacts and results with a :t 100, and repl 2 is still at :t 99 and uses (as-of (d/db conn) 100) as the db value in a query, should the transaction from repl 1 be found?
REPL 1 (db-after t value is 770)
(d/transact @conn {:tx-data [{:person/first-name "Alice"}]})
=>
{:db-before {:database-id "...",
             :db-name "datomic-test",
             :t 769,
             :next-t 770,
             :history false,
             :type :datomic.client/db},
 :db-after {:database-id "...",
            :db-name "datomic-test",
            :t 770,
            :next-t 771,
            :history false,
            :type :datomic.client/db},
 :tx-data [#datom[13194139534082 50 #inst"2018-05-15T13:14:44.154-00:00" 13194139534082 true]
           #datom[70804150782265703 73 "Alice" 13194139534082 true]],
 :tempids {}}
REPL 2 with explicit t value at 770 (is currently at 669)
(d/q '[:find (pull ?person [*])
       :in $ ?person-name
       :where
       [?person :person/first-name ?person-name]]
     (d/as-of (d/db @conn) 770)
     "Alice")
=> []
... whereas the act of executing that query does update the t value in REPL 2, and re-running the query a second time does return the transacted value.
thanks, @marshall. very much appreciated. it's a hurdle for us as we're sharing a cloud db across two applications. thanks for looking into it - i'm sure i'm just missing something obvious.
@joshkh @marshall @favila I can confirm: 2 repls, in one I transact a datum, then wait plenty of time (seconds), then in the second repl I query for the datum just transacted using d/db to get the latest db value, but the result is [], then I immediately try the same query again and this time I get the result (using Datomic Cloud)
what's more, it's the use of d/q which refreshes the db connection (or something), not the use of d/db, since if I first use a previous db value (without calling d/db in my first d/q), then use d/q with d/db, I get the same behaviour
repl 1 vs repl 2 was a contrived example but representative of the larger services that we've already built, and folding them into a monolithic application that shares a single client connection isn't really an option for us as we're scaling across containers. i'm open to any work-around that gets us past this show stopper, even if it's the phantom query method, but i'm hoping that's not the case unless it's bullet proof.
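A hedged sketch of that "phantom query" workaround (the helper name is illustrative, not from the thread): run the query once to nudge the client forward, then rely on the second run.

(require '[datomic.client.api :as d])

(defn q-twice
  "Runs the query once against a fresh db value (result discarded), then
  again, returning the second result; relies on the observation above that
  executing a query appears to advance the client's basis."
  [conn query & args]
  (apply d/q query (d/db conn) args)    ; first run may see a stale basis
  (apply d/q query (d/db conn) args))   ; second run was observed to be caught up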
@joshkh we’re looking into the issue; in the interim, you can also create a new connection to “force” update
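A minimal sketch of that interim workaround, assuming a client created with d/client and a db name (both placeholders, not from the thread):

(require '[datomic.client.api :as d])

(defn fresh-db
  "Creates a brand-new connection and returns its current db value,
  as a way to 'force' the client to see the latest basis."
  [client db-name]
  (d/db (d/connect client {:db-name db-name})))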
thanks a lot, @marshall. that wasn't meant to sound all gloom-and-doom. just highlighting the problems it's causing downstream.
@marshall is there a trackable issue we can check on for an update? slack is great but discussions tend to evaporate upwards over time. 🙂
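(Editorial aside, hedged: the client API also exposes d/sync, which takes a basis t and returns a db value whose basis is at least that t; whether it was available at the time of this log isn't confirmed here, so verify against your client version.)

(require '[datomic.client.api :as d])

;; REPL 1 transacts and shares the resulting basis t out of band, e.g.
;;   (get-in (d/transact conn {:tx-data [...]}) [:db-after :t])
;; REPL 2 then asks for a db guaranteed to include that t:
(defn db-at-least [conn t]
  (d/sync conn t))   ; assumed signature: returns a db with basis >= t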
@favila I had this problem, hence I made a feature for that in datalog-rules: https://github.com/vvvvalvalval/datalog-rules#reversed-rules-generation-experimental . Granted, it's a bit of a hack, I wish that was part of Datomic's API
Short of datomic actually doing some query optimization, I wish I could use the bracket syntax on arbitrary arguments
I know...