This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2017-03-06
Channels
- # architecture (25)
- # bangalore-clj (1)
- # beginners (21)
- # boot (45)
- # cljs-dev (38)
- # clojure (272)
- # clojure-austin (7)
- # clojure-finland (7)
- # clojure-france (3)
- # clojure-italy (7)
- # clojure-japan (1)
- # clojure-russia (13)
- # clojure-spec (36)
- # clojure-uk (31)
- # clojurescript (96)
- # core-async (15)
- # cursive (16)
- # datascript (3)
- # datomic (97)
- # emacs (107)
- # hoplon (16)
- # jobs (9)
- # keechma (1)
- # luminus (1)
- # off-topic (19)
- # om (39)
- # onyx (15)
- # pedestal (3)
- # planck (22)
- # protorepl (4)
- # re-frame (20)
- # reagent (3)
- # ring-swagger (25)
- # specter (26)
- # test-check (19)
- # testing (1)
- # untangled (381)
Hi guys
I'm reading the docs and I see three supported SQL engines: PostgreSQL, MySQL, and Oracle. In my understanding SQLite is missing from the list, so could I configure SQLite as the backend storage engine for a transactor?
@lowl4tency I have done this just for fun, but why not use the dev transactor?
hello, is it possible to increase the transaction timeout for a specific transaction in Datomic?
we are having an issue where a particular transaction (which has to be atomic, we can't break it up) needs more time to process. We are wondering if it's possible to change the timeout for this specific transaction while it runs, without having to change the general configuration. Is that possible?
@wilkerlucio Unfortunately the timeout is system-wide
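(A possible workaround sketch, not an official recipe; `conn` and `tx-data` are placeholders: `d/transact-async` returns a future immediately, so the caller can at least bound how long it waits, even though the transactor-side timeout itself stays system-wide.)

```clojure
;; Sketch, assuming the Datomic peer API is on the classpath.
(require '[datomic.api :as d])

;; transact-async does not block; deref with a local timeout so the
;; caller gives up after 60s instead of hanging. The transactor's own
;; timeout is still whatever the system-wide configuration says.
(let [fut (d/transact-async conn tx-data)]
  (deref fut 60000 ::timed-out))
```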
Getting the following error when querying a Datomic db frequently, mapping over d/datoms (lazily, I believe):
2017-03-06 20:43:49,580[ISO8601] [clojure-agent-send-off-pool-56] WARN datomic.slf4j - {:message "Caught exception", :pid 1297, :tid 202}
org.hornetq.api.core.HornetQNotConnectedException: HQ119010: Connection is destroyed
at org.hornetq.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:296)
at org.hornetq.core.client.impl.ClientSessionImpl.deleteQueue(ClientSessionImpl.java:365)
at org.hornetq.core.client.impl.ClientSessionImpl.deleteQueue(ClientSessionImpl.java:375)
at org.hornetq.core.client.impl.DelegatingSession.deleteQueue(DelegatingSession.java:326)
at datomic.hornet$delete_queue.invokeStatic(hornet.clj:256)
at datomic.hornet$delete_queue.invoke(hornet.clj:252)
at datomic.connector$create_hornet_notifier$fn__8108$fn__8111.invoke(connector.clj:210)
at datomic.connector$create_hornet_notifier$fn__8108.invoke(connector.clj:206)
at clojure.lang.AFn.applyToHelper(AFn.java:152)
at clojure.lang.AFn.applyTo(AFn.java:144)
at clojure.core$apply.invokeStatic(core.clj:657)
at clojure.core$apply.invoke(core.clj:652)
at datomic.error$runonce$fn__48.doInvoke(error.clj:148)
at clojure.lang.RestFn.invoke(RestFn.java:397)
at datomic.connector$create_hornet_notifier$fn__8085$fn__8086$fn__8089$fn__8090.invoke(connector.clj:204)
at datomic.connector$create_hornet_notifier$fn__8085$fn__8086$fn__8089.invoke(connector.clj:192)
at datomic.connector$create_hornet_notifier$fn__8085$fn__8086.invoke(connector.clj:190)
at clojure.core$binding_conveyor_fn$fn__6772.invoke(core.clj:2020)
at clojure.lang.AFn.call(AFn.java:18)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Code looks something like this, causing roughly 20 Datomic queries a second against a single db snapshot:
(let [db (d/db (d/connect uri))]
  (doseq [element (->> (d/datoms db :eavt) ; index arg elided in the original pseudocode
                       (map #(.e %))
                       (filter #(some-query db %))
                       (map #(some-other-query db %))
                       (map #(another-query db %)))]
    (try
      (transact-and-http-io element)
      (catch Exception e
        (log-error e)))))
Do I need to look at refactoring my code to reduce the frequency of queries? It happens at different times during processing of the datoms.
Thanks 🙂
This single function call uses the same db snapshot, created from a single connection
Say, is there a (prev-t) function to get the t value of the transaction immediately before (basis-t), or should I just grab it out of (d/tx-range)?
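A sketch of the tx-range approach (prev-t is a hypothetical helper name; assumes the Datomic peer API):

```clojure
(require '[datomic.api :as d])

(defn prev-t
  "Hypothetical helper: the :t of the last transaction strictly before
   this db's basis-t, or nil if there is none."
  [conn]
  (let [db    (d/db conn)
        basis (d/basis-t db)]
    ;; tx-range is inclusive of start and exclusive of end, so passing
    ;; basis as the end excludes the basis transaction itself.
    (->> (d/tx-range (d/log conn) nil basis)
         (map :t)
         last)))
```

(Walking the log from nil is wasteful; in practice you'd start the range somewhere near basis.)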
Added the db binding for clarity, in case it's doing something I don't understand fully
@mbutler the d/datoms call is pseudocode right? you are including index and segment etc (I would expect a different error)
sorry, yes. The code works perfectly in localdev
I would think maybe I'm accidentally realising the sequence (which is ~700k datoms) and running out of memory, but the JVM doesn't die
I don't understand: if I call d/db once at startup, my entire app would use the same snapshot, no?
no, this code runs once, and maps over a seq of datoms once, calling Datomic queries on the sequence as I go.
I should update the code snippet, as I think it's not quite clear
I understand. But the stack trace is related to connection failures with HornetQ, which is transactor communication
I think that error might actually be unrelated
So in short, I see the seq being consumed and performing some-io
but after a random amount of time the code just stops running
That error appeared at the same time in my logs and was Datomic related; if it's transactor related, then it's likely something else in my app being unhappy that it can't transact.
However, the above code stopped running again, but there was no HornetQ error this time (in the logs).
What would happen if there was a network issue when querying a datomic db?
But one would expect an error? This is run in a Java thread surrounded with a try/catch
and my system has an uncaughtExceptionHandler
If there is network disruption between the Peer and Transactor, you will see an exception only when connecting or transacting. If there is network disruption between the Peer and Storage, you will see exceptions when reading (querying).
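Given that, a minimal sketch of making read-side failures visible (the query and log-error are placeholders): Datomic reads are lazy, so realization has to happen inside the handler for Storage exceptions to be caught there.

```clojure
;; Force realization inside the try/catch; otherwise a Storage read
;; exception can surface later, outside any handler, and the thread
;; dies with nothing logged.
(try
  (doall (d/q '[:find ?e :where [?e :some/attr]] db))
  (catch Exception e
    (log-error e)
    (throw e)))
```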
Yes, I am now assuming that the stack trace was an anomaly
In addition, the Peer's background threads will log connection errors with the Transactor, which may show up as HornetQ / Artemis exceptions.
Maybe suggesting that there is a connection issue, but not in the code above, which is the code that fails.
So I am left with no error, but code that silently dies
The CPU usage on the box drops to idle levels and the (some-io) that happens in the doseq stops being logged
has anyone written a way to serialize EntityMaps by db/id and entity-t? or is that a bad idea?
so due to the doseq, nil is returned. Not sure where I'd expect this to end up.
The reasoning for saying the code is not finished is that if I call that function again, it starts up again and carries on a little further
based on the i/o at the end, elements would be filtered out at the filter stage
Yes, those might be good steps. One of those unfortunate prod-only bugs that happen when dealing with a large (d/datoms)
I forgot something that might matter: the (some-io) does transact, but again it only produced the HornetQ error when it ran/failed the first time. On subsequent runs no error was logged.
Is it possible something is caching the error?
I don't know what's in some-io. The behavior of things when flooded is not always sensible
it constructs a map, does another query, and performs an HTTP request
I think that maybe removing the I/O part would be a good idea. Maybe my HTTP request (sync) is hanging forever
Yeah, my first instinct is to simplify this and get solid confirmation that the process is hanging
I'm using clj-http, which you'd expect to time out.
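As far as I know, clj-http's underlying Apache HttpClient applies no timeouts unless you set them, so a hung connection can block forever. A hedged sketch (URL and payload are hypothetical; the option names are per recent clj-http versions, older ones use :conn-timeout):

```clojure
(require '[clj-http.client :as http])

;; Timeouts are in milliseconds; without them a sync call can hang
;; indefinitely on a dead connection.
(defn post-element! [url element]
  (http/post url
             {:body               (pr-str element)
              :socket-timeout     10000   ; max wait for data on the socket
              :connection-timeout 5000})) ; max wait to establish the connection
```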
probably a wise bit of advice
The some-io + doseq logs, but none of the map/filter stages do
which is probably a mistake
the latter
around 701k
actually I can take a specific number of datoms off the front for now
so I can say exactly which datom + how many
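A sketch of that idea (index choice and the processing fn are placeholders):

```clojure
;; Consume a fixed prefix of the datom seq and log positions, so the
;; exact datom (and how many were processed) is pinned down if it stops.
(let [db (d/db (d/connect uri))]
  (doseq [[i datom] (map-indexed vector (take 1000 (d/datoms db :eavt)))]
    (println i (:e datom))
    (process! datom)))
```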
totally
this is between an EC2 node and DynamoDB
and the storage metrics seem super low vs my provisioned capacity
thanks 🙂
splitting it into generating the seq and consuming it was a good idea, so thanks 🙂
yeah, seems to be super low, single-digit % of provisioned read. Getting late here; going to do some further testing tomorrow and report back 🙂