2022-04-05
Channels
- # announcements (15)
- # aws (7)
- # babashka (105)
- # beginners (35)
- # biff (5)
- # calva (48)
- # cider (5)
- # clj-kondo (25)
- # cljdoc (14)
- # clojure (84)
- # clojure-czech (2)
- # clojure-dev (6)
- # clojure-europe (58)
- # clojure-nl (6)
- # clojure-norway (19)
- # clojure-portugal (2)
- # clojure-uk (5)
- # clojurescript (23)
- # cloverage (5)
- # code-reviews (5)
- # conjure (28)
- # data-science (1)
- # datomic (53)
- # events (6)
- # exercism (7)
- # fulcro (16)
- # graalvm-mobile (2)
- # honeysql (29)
- # improve-getting-started (2)
- # kaocha (32)
- # lambdaisland (2)
- # lsp (29)
- # malli (3)
- # overtone (1)
- # pedestal (8)
- # polylith (3)
- # portal (6)
- # quil (2)
- # rdf (15)
- # releases (2)
- # rewrite-clj (14)
- # sci (9)
- # shadow-cljs (7)
- # specter (5)
- # sql (5)
- # xtdb (38)
Hi guys, I wanna restrict the query based on a "<" predicate. submitted_at is a date in epoch format, so it's an integer. I know that I can use ?nps based on transaction time, but in this specific case I can't use it because this data was transferred from a different db. How do I restrict based on an attribute?
[??? :strive_form_data/submitted_at ?some-value][(< ?some-value 1)]
will work. But only you know what the ??? should be.
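(For reference, not from the thread: a minimal sketch of what the full query might look like, assuming the peer API is required as d, that db is a database value, and that ?e stands in for the ??? placeholder; 1 is just the example cutoff.)
(d/q '[:find ?e ?some-value
       :where
       [?e :strive_form_data/submitted_at ?some-value]
       [(< ?some-value 1)]]
     db)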
@U09R86PA4 amazing, thank you so much! Can I ask, if I wanted ?some-value to be between 1 and 100 for example, how would I go about doing that? Using Clojure syntax inside the query keeps leading to an error.
Cool, that's how I did it. But I was afraid it would be inefficient, since this basically does an implicit join?
Ohh ok. Yes exactly how you just said
so the entity id is already known and bound; this is just retrieving the value and filtering
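(The actual clauses used aren't shown above; as a hedged sketch, the "between 1 and 100" case, assuming ?e is bound as before, is just two predicate clauses, which combine as a logical AND:)
[?e :strive_form_data/submitted_at ?some-value]
[(< 1 ?some-value)]
[(< ?some-value 100)]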
@U09R86PA4 thank you very much! I wanted to ask, is there a way to do computation at the Datomic query level? For example, in SQL we can use functions that divide numbers and use the result as input of a child query. Is that possible in Datalog as well?
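(The thread doesn't answer this here, but as a general illustration: Datomic's Datalog supports function expression clauses that call a function and bind its result, which later clauses can then use. The attribute and names below are placeholders, not from the thread.)
(d/q '[:find ?e ?half
       :where
       [?e :strive_form_data/submitted_at ?v]
       [(quot ?v 2) ?half]
       [(< ?half 50)]]
     db)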
With Datomic Cloud, does it ever happen to you that you can't deploy anymore because there is no space left on the disk attached to the instance? Automatic rollbacks then fail for the same reason. Did we do something we shouldn't??
Cannot allocate memory
Obviously, those zip files aren't the cause - they're less than 1 MB. And the dependencies in .m2 and gitlibs that must be copied from the S3 bucket don't weigh more than 150 MB.
Ok, so regarding the problem I mentioned yesterday, I have more input, I get this on the transactor side:
o.a.activemq.artemis.core.client - AMQ212037: Connection failure to /172.20.0.4:57298 has been detected: AMQ229014: Did not receive data from /172.20.0.4:57298 within the 10,000ms connection TTL. The connection will now be closed. [code=CONNECTION_TIMEDOUT]
2022-04-05 16:29:14.005 WARN default o.a.activemq.artemis.core.server - AMQ222061: Client connection failed, clearing up resources for session 9d3f8e8c-b4f8-11ec-bb6d-0242ac140004
2022-04-05 16:29:14.014 INFO default datomic.update - {:task :reader, :event :update/loop, :msec 2110000.0, :phase :end, :pid 1, :tid 34}
2022-04-05 16:29:14.015 WARN default o.a.activemq.artemis.core.server - AMQ222107: Cleared up resources for session 9d3f8e8c-b4f8-11ec-bb6d-0242ac140004
2022-04-05 16:29:14.198 WARN default o.a.activemq.artemis.core.server - AMQ222061: Client connection failed, clearing up resources for session 9d435f1d-b4f8-11ec-bb6d-0242ac140004
2022-04-05 16:29:14.199 WARN default o.a.activemq.artemis.core.server - AMQ222107: Cleared up resources for session 9d435f1d-b4f8-11ec-bb6d-0242ac140004
2022-04-05 16:29:14.560 INFO default o.a.activemq.artemis.core.server - AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.17.0 [9a4ab8f7-b4f8-11ec-85e0-0242ac140003] stopped, uptime 35 minutes
And this on the peer side:
clojure.lang.ExceptionInfo: Error communicating with HOST 0.0.0.0 or ALT_HOST 172.20.0.3 on PORT 4334
at datomic.connector$endpoint_error.invokeStatic(connector.clj:53)
at datomic.connector$endpoint_error.invoke(connector.clj:50)
at datomic.connector$create_hornet_factory.invokeStatic(connector.clj:134)
at datomic.connector$create_hornet_factory.invoke(connector.clj:118)
at datomic.connector$create_transactor_hornet_connector.invokeStatic(connector.clj:308)
at datomic.connector$create_transactor_hornet_connector.invoke(connector.clj:303)
at datomic.connector$create_transactor_hornet_connector.invokeStatic(connector.clj:306)
at datomic.connector$create_transactor_hornet_connector.invoke(connector.clj:303)
at datomic.peer.Connection$fn__12046.invoke(peer.clj:217)
at datomic.peer.Connection.create_connection_state(peer.clj:205)
at datomic.peer$create_connection$reconnect_fn__12124.invoke(peer.clj:469)
at clojure.core$partial$fn__5857.invoke(core.clj:2627)
at datomic.common$retry_fn$fn__827.invoke(common.clj:543)
at datomic.common$retry_fn.invokeStatic(common.clj:543)
at datomic.common$retry_fn.doInvoke(common.clj:526)
at clojure.lang.RestFn.invoke(RestFn.java:713)
at datomic.peer$create_connection$fn__12126.invoke(peer.clj:473)
at datomic.reconnector2.Reconnector$fn__11300.invoke(reconnector2.clj:57)
at clojure.core$binding_conveyor_fn$fn__5772.invoke(core.clj:2034)
at clojure.lang.AFn.call(AFn.java:18)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
The system works fine for over half an hour, but then basically just dies with those 2 errors in the logs. On the transactor console output I just get Heartbeat failed
Any ideas?
yes, but one is the peer, and the other is the transactor
different machines
it works for like 30 mins or so, and then dies. I am using prod-level JVM args, 4 GB for heap
On Google Cloud I remember some issue where their networking stack just drops idle TCP connections. I had to add keepalives into the kernel options somehow. IIRC it manifested like this: the peer looked like it went away, and it only happened when the system was quiet
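(The exact settings aren't given in the message above; as an illustration only, the standard Linux TCP keepalive knobs look like this, with placeholder values rather than the ones actually used:)
sysctl -w net.ipv4.tcp_keepalive_time=60
sysctl -w net.ipv4.tcp_keepalive_intvl=10
sysctl -w net.ipv4.tcp_keepalive_probes=6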
no, I am developing a data processing framework with Clojure, and currently I am stress testing it, and I am writing the results to Datomic
so every second it has 2 tx of 100 datoms to save
so basically 200 a second, split between 2 transactions
I should also mention that these are dockerized, and I have also modified the keep-alive values for both containers, to make sure this isn't a TCP connection issue
is it possible that the peer really did just go away for 10 secs, e.g. a long gc pause?
well, on the transactor I have set the max GC pause to be 50ms
on the peer, I haven’t modified that
is that something one should do?
those targets don’t apply when there’s a full GC and memory pressure. I’m really just suggesting that if you know the peer is busy or could have memory pressure on it, rule out that the timeout is due to a GC pause
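(One hedged way to rule that out, assuming a JDK 8 peer: turn on GC logging and check whether a full-GC pause lines up with the 10,000ms connection TTL. The 50ms target mentioned above corresponds to -XX:MaxGCPauseMillis=50; the log file name here is just an example.)
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:peer-gc.log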
oh, ok, thanks for the pointers, much appreciated 🙂
I will check that out
in the same transaction, is it possible to both retract an older entity, and create a new one, where an identity attribute is shared between both?
I’m guessing no, as there’s no way to disambiguate which entity is being referred to via attribute identities.. ?
ie:
[[:db.fn/retractEntity [:a/id1 "a"]]
 [:db.fn/retractEntity [:a/id2 "b"]]
 {:thing/id    "thing"
  :thing/stuff {:a/id1 "a"
                :a/id2 "b"}}]
this can’t work, even with db/ids?
You should run this to know for sure. I would expect all lookups to happen before retraction, so this will cause {:a/id1 "a" :a/id2 "b"} to expand to [:db/add id1a-entity-id :a/id1 "a"] etc. Since those same assertions are being retracted in the same transaction via the retractEntity, the transaction will fail with a conflict.
In general all lookups happen on the “before” db and all operations in a transaction are applied atomically. (Exceptions are composite tuples, which do read an intermediate state to know what to update the new values to; and entity predicates, which read an “after” db right before commit.) So it’s not ambiguous at all what a lookup in a transaction will do.
there’s no way, even in a transaction function, to see an “in-progress” or “partially-applied” database value
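(A minimal sketch of "run this to know for sure", assuming the on-prem peer API (datomic.api) and an existing connection conn: d/with applies the tx-data against an in-memory copy of the db without committing anything, so a conflict shows up as a thrown exception rather than bad data.)
(require '[datomic.api :as d])

(def tx-data
  [[:db.fn/retractEntity [:a/id1 "a"]]
   [:db.fn/retractEntity [:a/id2 "b"]]
   {:thing/id    "thing"
    :thing/stuff {:a/id1 "a"
                  :a/id2 "b"}}])

;; returns a map with :db-before, :db-after and :tx-data on success,
;; or throws if the expanded assertions conflict with the retractions
(try
  (d/with (d/db conn) tx-data)
  (catch Exception e
    (ex-data e)))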