James Vickers 01:05:24

Is there a way to mark a var as final in the same way you can mark a Java variable as final, to prevent it from being re-bound to another value?


You mean a datalog var?

James Vickers 02:05:36

Sorry I’m in the wrong channel! Ignore me


how do I use :db-after in a query? The documentation says "Both :db-before and :db-after are database values that can be passed to the various query APIs", but the :db-after returned from a transaction is an ArrayMap, and I can't find a way to use that map to create a datomic.client.impl.shared.Db for use in (d/q)


an answer to @octo221’s question might help me solve the problem of keeping two datomic cloud clients in sync


@octo221 @joshkh it looks as though you use (d/as-of db t) where t is (get-in transaction-result [:db-after :t])


as-of doesn't get a "future" db, just a past db (as-of is a filter)


hmm, agreed. using as-of with a future t value still doesn't "catch up" the db to the future.


unfortunately it's a no-go. if someone from the Cognitect team finds this, any advice would be very much appreciated! i've been trying to work it out for a few days, and running queries twice is a nasty hack.


@joshkh client does not currently have the feature to ‘get the latest db available’; that would be a good suggestion for a feature request in our feature portal. currently, passing a t value is the correct method to ensure multiple clients share a basis
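a minimal sketch of what "passing a t value to share a basis" might look like with the client API (the names conn-1 and conn-2 are hypothetical stand-ins for two separate client connections; note that as-of is a filter, so, as discussed in this thread, it does not by itself pull a client forward to a t it has not yet seen):

```clojure
(require '[datomic.client.api :as d])

;; Client 1 transacts and records the basis t of the resulting db.
(def tx-result (d/transact conn-1 {:tx-data [{:person/first-name "Alice"}]}))
(def basis-t (get-in tx-result [:db-after :t]))

;; Client 2 queries against a db filtered to that same basis t.
(d/q '[:find (pull ?person [*])
       :in $ ?person-name
       :where [?person :person/first-name ?person-name]]
     (d/as-of (d/db conn-2) basis-t)
     "Alice")
```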


thanks, @marshall. does passing a future t value to the client's d/as-of (as @octo221 suggested) update the database value? i ran a test and unless i'm mistaken it suggests not.


a ‘future t’ ?


i'm still out of the datomic nomenclature loop! if repl 1 transacts and the result has :t 100, and repl 2 is still at :t 99 and uses (d/as-of (d/db conn) 100) as the db value in a query, should the transaction from repl 1 be found?


i believe so yes


i just gave it a shot and came up blank but i'll try again now


REPL 1 (db-after t value is 770)

(d/transact @conn {:tx-data [{:person/first-name "Alice"}]})
{:db-before {:database-id "...",
             :db-name "datomic-test",
             :t 769,
             :next-t 770,
             :history false,
             :type :datomic.client/db},
 :db-after {:database-id "...",
            :db-name "datomic-test",
            :t 770,
            :next-t 771,
            :history false,
            :type :datomic.client/db},
 :tx-data [#datom[13194139534082 50 #inst"2018-05-15T13:14:44.154-00:00" 13194139534082 true]
           #datom[70804150782265703 73 "Alice" 13194139534082 true]],
 :tempids {}}
REPL 2 with explicit t value of 770 (is currently at 769)
(d/q '[:find (pull ?person [*])
       :in $ ?person-name
       :where
       [?person :person/first-name ?person-name]]
     (d/as-of (d/db @conn) 770)
     "Alice")
=> []


... whereas the act of executing that query does update the t value in REPL 2, and re-running the query a second time does return the transacted value.


understood - looking into it


@marshall this is an issue where d/sync is needed
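for reference, a rough sketch of the d/sync pattern @favila is pointing at. d/sync has long existed in the peer API (where it returns a future), and a sync of roughly this shape was later exposed in the client API; this is an assumption-laden sketch, not a confirmation that it was available at the time of this thread:

```clojure
(require '[datomic.client.api :as d])

;; Ask the connection for a db value whose basis t is at least basis-t,
;; blocking/coordinating as needed, then query against it.
;; conn-2 and basis-t are hypothetical names carried over from above.
(let [db (d/sync conn-2 basis-t)]
  (d/q '[:find ?e
         :where [?e :person/first-name "Alice"]]
       db))
```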


thanks, @marshall. very much appreciated. it's a hurdle for us as we're sharing a cloud db across two applications. thanks for looking into it - i'm sure i'm just missing something obvious.


@joshkh @marshall @favila I can confirm: 2 repls. in one I transact a datum, then wait plenty of time (seconds); then in the second repl I query for the datum just transacted, using d/db to get the latest db value, but the result is []. then I immediately try the same query again, and this time I get the result


(using Datomic Cloud)


what's more, it's the use of d/q which refreshes the db connection (or something), not the use of d/db: if I first query with a previous db value (without calling d/db in my first d/q) and then use d/q with d/db, I get the same behaviour


yes, exactly.


repl 1 vs repl 2 was a contrived example, but it's representative of the larger services we've already built, and folding them into a monolithic application that shares a single client connection isn't really an option for us, as we're scaling across containers. i'm open to any workaround that gets us past this showstopper, even the phantom-query method, but i'm hoping that's not the case unless it's bulletproof.

👍 4

@joshkh we’re looking into the issue; in the interim, you can also create a new connection to “force” update
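a sketch of the interim workaround @marshall describes: creating a fresh connection, which will see the latest t. the names client and "datomic-test" are assumed to be a client object and db name already in scope:

```clojure
(require '[datomic.client.api :as d])

;; A fresh connection "forces" an update to the latest basis t.
(let [conn (d/connect client {:db-name "datomic-test"})]
  (d/q '[:find (pull ?person [*])
         :in $ ?person-name
         :where [?person :person/first-name ?person-name]]
       (d/db conn)
       "Alice"))
```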


thanks a lot, @marshall. that wasn't meant to sound all gloom-and-doom. just highlighting the problems it's causing downstream.



@marshall is there a trackable issue we can check on for an update? slack is great but discussions tend to evaporate upwards over time. 🙂


(we're also snapshot friendly and happy to test anything unofficial)


Is there any way to use rules "bi-directionally" with decent performance?


@favila I had this problem, so I made a feature for that in datalog-rules: . Granted, it's a bit of a hack; I wish it were part of Datomic's API

👍 4

Short of datomic actually doing some query optimization, I wish I could use the bracket syntax on arbitrary arguments


And have datomic use that info to select the right impl at runtime
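for context, a sketch of the existing bracket syntax being discussed: in a Datomic rule head, brackets mark the arguments that must be bound when the rule is invoked, which fixes one execution direction at definition time. the attribute :person/parent and the rule name are made up for illustration:

```clojure
;; ancestor requires ?a to be bound at the call site, so the rule
;; always runs "downward" from ?a; the wish above is to put brackets
;; on arbitrary arguments and have the engine pick the right
;; implementation per call site at runtime.
(def rules
  '[[(ancestor [?a] ?b)
     [?a :person/parent ?b]]
    [(ancestor [?a] ?b)
     [?a :person/parent ?x]
     (ancestor ?x ?b)]])
```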