This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
Hey, I have a Datomic query performance question. We are trying to get all entities of a certain type (transactions) that have changed in the last 5 minutes. We are currently using the default "now" db to get all the transactions and then a "since" db to shave off the past. The query works but gets exponentially slower as data grows. Maybe there is a better way to write this query?
(def since-5min (d/since db #inst "2020-01-28T15:45"))

(d/q '[:find [?e ...]
       :in $ $since
       :where
       [$ ?e :transaction/status]
       (not [$ ?e :transaction/type :transaction.type/rejected])
       [$since ?e]]
     db since-5min)
Five minutes is not very many transactions. Maybe just look at the transaction log directly?
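For reference, reading the log directly looks something like this with the Peer API's `d/log` and `d/tx-range` (a sketch, not from the conversation; `conn` and the start instant are assumptions):

```clojure
(require '[datomic.api :as d])

;; Sketch: collect the distinct entity ids touched since the given
;; instant by walking the transaction log directly.
(let [log (d/log conn)]
  (->> (d/tx-range log #inst "2020-01-28T15:45" nil) ; txs from instant to now
       (mapcat :data)    ; datoms asserted/retracted in each tx
       (map :e)          ; entity id of each datom
       distinct))
```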
yes, I won't get the ones that changed in the last 5 minutes but did not change the :transaction/status property
For the transaction log, can we use both :db/txInstant to filter for the last 5 minutes and filter by :transaction/status in the same query?
in a way that would also include transactions that changed on some property other than :transaction/status
(d/q '[:find [?e ...]
       :in $ ?log ?from-t ?to-t
       :where
       [(tx-ids ?log ?from-t ?to-t) [?tx ...]]
       [(tx-data ?log ?tx) [[?e ?a ?v _ ?op]]]
       [?e :transaction/status]
       (not [?e :transaction/type :transaction.type/rejected])]
     (d/as-of (d/db conn) #inst "2020-01-28T15:50")
     (d/log conn)
     #inst "2020-01-28T15:45"
     #inst "2020-01-28T15:50")
Look at everything that happened in the last five minutes; if you see any datoms in the tx log whose entity currently has a :transaction/status and does not have a rejected :transaction/type, you know that transaction entity changed
this strategy might not make sense for longer time periods or higher transaction loads because it depends on transaction time being the most selective thing available
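The check described above can also be sketched outside the query engine, which may make the load behaviour easier to profile (untested; `conn`, the time window, and the assumption that :transaction/type is an enum-style ref are all mine, not from the conversation):

```clojure
(require '[datomic.api :as d])

(defn changed-transactions
  "Entities touched between `from` and `to` whose *current* value has a
  :transaction/status and whose :transaction/type is not rejected."
  [conn from to]
  (let [db  (d/db conn)
        log (d/log conn)]
    (->> (d/tx-range log from to)
         (mapcat :data)          ; datoms in each transaction
         (map :e)
         distinct
         (filter (fn [e]
                   (let [ent (d/entity db e)]
                     (and (:transaction/status ent)
                          (not= :transaction.type/rejected
                                (:db/ident (:transaction/type ent))))))))))
```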
Thank you! The query seems to do exactly what I want. I'll now try to benchmark it in different load scenarios
Is there a faster alternative for developing against Datomic Cloud than using the Bastion/tunnel? For my development purposes, I sometimes need to run about 100 queries in a batch, which is super slow. I don't need help with best practices etc., as I've used Datomic for 5 years now. The point is, when executed within Datomic Cloud this takes milliseconds, but locally it can take half a minute, which breaks any kind of dynamic/interactive flow during dev. Right now I'm thinking of using VNC to develop on a remote appliance within AWS.
Would you consider running WireGuard or ZeroTier on the server? Then you can connect directly to the server, skipping SSH 🙂 (the connection is encrypted and secure with either WireGuard or ZeroTier)
Well, with a bastion you're jumping through a machine in the middle to reach the actual target server, so that's an additional network hop
the latency from bastion -> query group is far less than the latency from laptop -> bastion
Perhaps, I don't know, but if one can remove a network hop, that's good
Cloud running locally would be great, like datomic-memdb but supporting tx functions and tuples. I'm also annoyed by queries that run fast in the cloud but very slowly when developing.
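As an aside, Datomic's dev-local (part of Cognitect dev-tools) later shipped roughly this: an in-process implementation of the Client API that supports tuples. A hedged setup sketch, to be checked against the dev-local docs (the :system and :db-name values are placeholders):

```clojure
(require '[datomic.client.api :as d])

;; dev-local runs the Client API entirely in-process, no bastion or
;; network round-trips involved.
(def client (d/client {:server-type :dev-local
                       :system "dev"}))

(d/create-database client {:db-name "scratch"})
(def conn (d/connect client {:db-name "scratch"}))
```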
@ghadi I'm pretty sure the performance difference is mainly due to a Datomic Ion having more "Datomic Peer like" querying performance, with segment caching and whatnot, whereas the client via the bastion is a flat HTTP client.