This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2017-03-15
Channels
- # arachne (6)
- # aws-lambda (3)
- # beginners (14)
- # boot (56)
- # cider (8)
- # cljs-dev (5)
- # cljsrn (11)
- # clojure (240)
- # clojure-dusseldorf (3)
- # clojure-greece (165)
- # clojure-italy (5)
- # clojure-romania (1)
- # clojure-russia (24)
- # clojure-uk (30)
- # clojure-ukraine (3)
- # clojurescript (29)
- # core-async (6)
- # css (1)
- # cursive (25)
- # datascript (6)
- # datomic (61)
- # dirac (1)
- # events (3)
- # hoplon (1)
- # instaparse (3)
- # jobs (4)
- # juxt (28)
- # lein-figwheel (7)
- # leiningen (19)
- # luminus (1)
- # lumo (2)
- # nyc (1)
- # off-topic (19)
- # om (25)
- # onyx (4)
- # parinfer (2)
- # pedestal (23)
- # perun (20)
- # re-frame (44)
- # reagent (20)
- # remote-jobs (3)
- # ring (3)
- # ring-swagger (5)
- # rum (12)
- # slack-help (3)
- # spacemacs (25)
- # specter (62)
- # sql (16)
- # unrepl (313)
- # yada (4)
I am wondering about this case I am encountering when setting txInstant in a unit test:
(let [t1 #inst "2001-01-01"
      t2 #inst "2002-01-01"
      conn (scratch-conn)
      _ (init-schemas conn)]
  (d/transact conn [{:db/id (d/tempid :db.part/tx) :db/txInstant t1}
                    {:db/id (d/tempid :db.part/user) :bygning/id 1 :bygning/attr1 "1"}])
  (d/transact conn [{:db/id (d/tempid :db.part/tx) :db/txInstant t2}
                    {:db/id (d/tempid :db.part/user) :bygning/id 2 :bygning/attr1 "2-1"}])
  (d/transact conn [{:db/id (d/tempid :db.part/tx) :db/txInstant t2}
                    {:db/id (d/tempid :db.part/user) :bygning/id 1 :bygning/attr1 "2"}])
  (prn "without as-of" (d/q '[:find ?v ?tx
                              :where
                              [_ :bygning/attr1 ?v ?tx]
                              [?tx :db/txInstant ?txInst]]
                            (d/history (d/db conn))))
  (prn "with as-of" (d/q '[:find ?v ?tx
                           :where
                           [_ :bygning/attr1 ?v ?tx]
                           [?tx :db/txInstant ?txInst]]
                         (d/as-of (d/history (d/db conn)) t2))))
prints
"without as-of" #{["2-1" 13194139534315] ["1" 13194139534313] ["2" 13194139534317] ["1" 13194139534317]}
"with as-of" #{["2-1" 13194139534315] ["1" 13194139534313]}
The second and third transactions are both at :db/txInstant t2, but when doing an as-of at t2, I don't get the one on :bygning/id 1 with :bygning/attr1 "2".
@casperc as-of with an instant is resolved to a tx value, as with (-> (d/datoms db :avet :db/txInstant the-instant) first :tx)
So the as-of point is precisely 13194139534315, because that is the first match for that instant
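The resolution rule described above can be sketched as a small helper (an illustrative function, not part of the Datomic API; `db` and `the-instant` are assumed bindings):

```clojure
(require '[datomic.api :as d])

(defn instant->tx
  "Return the tx id of the first transaction whose :db/txInstant
  equals the-instant, found via the AVET index. This is the effective
  as-of point when you pass an instant instead of a tx id."
  [db the-instant]
  (-> (d/datoms db :avet :db/txInstant the-instant)
      first
      :tx))
```

In the example above this would return 13194139534315 for t2, which is why the later datom asserted at 13194139534317 (also at t2) falls outside the as-of window.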
Guys, this query [:find (count ?e) :where [?e :а-entity/а-attribute]] returns an OutOfMemoryError. How can I count all the entities?
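One common workaround (a sketch, not an answer given in this thread) is to stream over the AEVT index with d/datoms instead of aggregating inside the query, so the full result set is never realized at once. `attr` stands in for the attribute above:

```clojure
(require '[datomic.api :as d])

(defn count-entities
  "Count the datoms for attr by streaming the :aevt index.
  For a cardinality-one attribute this equals the entity count.
  Only a running counter is held in memory, not the result set."
  [db attr]
  (reduce (fn [n _] (inc n)) 0 (d/datoms db :aevt attr)))
```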
I had assumed that deref'ing a transaction would be a form of backpressure, but I'm starting to question it. In the docstring the "completion" of the transaction is mentioned, but I'm not sure what it means in this context
I think total datom count can also be monitored from the transactor: http://docs.datomic.com/monitoring.html#sec-3
@favila I am doing a large batch of transactions (divided up into chunks), in some cases transacting millions of datoms. I want to avoid overwhelming the transactor
For example, (run! #(deref (d/transact-async conn %)) txes)
would ensure only one tx is in flight at a time
@favila We've just increased the size of the individual txes, to increase the throughput of our queries to generate the txes.
Will take a look into pipelining. I had wondered how you'd keep a few in flight at a time.
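One simple way to keep a few transactions in flight at a time (a sketch with an assumed helper name, not code from this thread) is to submit a window of async calls and deref them all before submitting the next window:

```clojure
(require '[datomic.api :as d])

(defn pipeline-transact
  "Submit txes to conn with at most n in flight at once.
  mapv is eager, so all n transactions in a window are submitted
  before the first deref blocks; deref'ing provides backpressure."
  [conn n txes]
  (doseq [window (partition-all n txes)]
    (run! deref (mapv #(d/transact-async conn %) window))))
```

A limitation of this windowed approach is that each window waits for its slowest transaction before the next one starts; a core.async pipeline can keep the window full continuously, at the cost of more machinery.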
There was some docs somewhere on tuning for a bulk import job too, but I can't find them now
I'd been given the 10k datoms number. Hmm. We've just increased from ~10 datoms/tx to ~10,000 (chunking the job into batches of 1000)
there were some transactor tuneables to set, as well as temporarily raising storage if you use e.g. dynamo
e.g. raising the memory-index-threshold to postpone indexing jobs as long as possible, then doing an explicit requestIndex at the end
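Concretely, the memory-index settings live in the transactor properties file (values below are illustrative, not recommendations):

```
memory-index-threshold=32m
memory-index-max=512m
```

and the explicit index request at the end of the import is `(datomic.api/request-index conn)` from a peer.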
another technique is to do the bulk index locally (dev storage) on a big machine, then backup+restore to remote storage
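The backup+restore step uses the Datomic CLI; a sketch with placeholder URIs (database names, paths, and the DynamoDB table are assumptions):

```shell
# 1. Back up the locally-built dev-storage database to a file
bin/datomic backup-db datomic:dev://localhost:4334/my-db file:/backups/my-db

# 2. Restore that backup into the remote storage (DynamoDB here)
bin/datomic restore-db file:/backups/my-db datomic:ddb://us-east-1/my-table/my-db
```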
Am I correct in assuming that, in pull expressions, defaults can't be supplied for reverse lookups?
setting the object-cache size on the transactor controls the object-cache size of the transactor, right? If I wanted to set that on a peer I would do so via a java option on the peer, is that right?
@djjolicoeur correct
Memory index setting on the transactor is used by all peers Object cache size is set independently on each, but defaults to 50% of the heap
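Per the answer above, the peer-side setting is a JVM system property on the peer process (the value and jar/main names here are illustrative):

```shell
# Override the peer's default object cache (50% of heap) via
# the datomic.objectCacheMax system property
java -Ddatomic.objectCacheMax=1g -cp my-app.jar my.app.Main
```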
@marshall thanks, that's what I thought but wanted to make sure
@marshall is there a good way to ensure that a transactor is running, i.e. something we would monitor? we have an HA setup, and we want to have some monitoring on both the primary transactor and the failover to ensure we get alerted if either dies
@marshall those are settings internal to both the primary and standby, right? I was looking for something we might be able to monitor externally
our transactors don’t run on AWS, so we don’t use CloudWatch. we track metrics in Riemann with the yeller/datomic-riemann-reporter. but what I’m looking for is actually less around metrics and more around telling our automation framework that the transactor is up and ready. a port that is open or something along those lines. and if a call to that fails, or a number of calls fail, restart the transactor.
if no such thing exists off the top of your head, that is fine, we will figure something out.
thanks, I’ll look into what we can do with the heartbeat