Shuky Badeer06:04:58

Hi guys, I wanna restrict the query based on a "<" predicate. submitted_at is a date in epoch format so it's an integer. I know that I can use ?nps based on transaction time, but in this specific case I can't use it because this data was transferred from a different db. How do I restrict based on an attribute?


[??? :strive_form_data/submitted_at ?some-value]
[(< ?some-value 1)]
will work. But only you know what the ??? should be.
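Fleshed out, the full query might look like this (a sketch: the `?e` variable, the `:find` spec, and the cutoff value are assumptions; only the attribute name comes from the thread):

```clojure
;; Sketch: restricting on an attribute value with a predicate clause.
;; ?e and the epoch cutoff 1649116800 are illustrative.
[:find ?e ?some-value
 :where
 [?e :strive_form_data/submitted_at ?some-value]
 [(< ?some-value 1649116800)]]
```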

❤️ 1
Shuky Badeer07:04:57

@U09R86PA4 amazing, thank you so much! Can I ask, if I wanted ?some-value to be between 1 and 100 for example, how would I go about doing that? Using Clojure syntax inside the query keeps leading to an error


[(<= 1 ?some-value)]
[(<= ?some-value 100)]


<= < > >= = != are special in queries. They’re not the normal Clojure comparators
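Put together, a between-1-and-100 restriction is just the data clause plus the two predicate clauses (sketch; `?e` is an assumed entity variable):

```clojure
[:find ?e
 :where
 [?e :strive_form_data/submitted_at ?some-value]
 [(<= 1 ?some-value)]     ; lower bound, inclusive
 [(<= ?some-value 100)]]  ; upper bound, inclusive
```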

Shuky Badeer07:04:49

Cool, that's how I did it. But I was afraid it would be inefficient since this basically does an implicit join?


all binding does an implicit join


how do you get the ?some-value we’ve been talking about in your query?


is it just [?nps :strive_form_data/submitted_at ?some-value]?


or is it on some other entity?

Shuky Badeer07:04:44

Ohh ok. Yes exactly how you just said


so the entity id is already known and bound; this is just retrieving the value and filtering


i.e. applying two predicates


this query is already scanning all answers/belongs_to


are either the second or third clause indexed?

Shuky Badeer12:04:14

@U09R86PA4 thank you very much! I wanted to ask, is there a way to do computation at the Datomic query level? For example, in SQL we can use functions that divide numbers and use the result as input of a child query. Is that possible in Datalog as well?


Query rules are one way. You can also (on on-prem) call any function
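For the computation part, a function-expression clause can bind its result to a new variable, which later clauses can then use (sketch; the attribute names here are made up for illustration):

```clojure
;; Divide two attribute values and filter on the result.
;; :order/total and :order/count are hypothetical attributes.
[:find ?e ?avg
 :where
 [?e :order/total ?total]
 [?e :order/count ?n]
 [(/ ?total ?n) ?avg]   ; compute and bind
 [(> ?avg 50)]]         ; filter on the computed value
```

On on-prem, any function on the classpath can be called the same way, e.g. `[(my.ns/score ?x) ?y]` (the function name here is hypothetical).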

👍 1
Daniel Jomphe14:04:30

With Datomic Cloud, does it ever happen to you that you can't deploy anymore because there's no space left on the disk attached to the instance? Automatic rollbacks then also fail for the same reason. Did we do something we shouldn't? The error we see is Cannot allocate memory

Daniel Jomphe15:04:39

Obviously, those zip files aren't the cause - they're less than 1 MB. And the dependencies in .m2 and gitlibs that must be copied from the S3 bucket don't weigh more than 150 MB.


Ok, so regarding the problem I mentioned yesterday, I have more input, I get this on the transactor side:

o.a.activemq.artemis.core.client - AMQ212037: Connection failure to / has been detected: AMQ229014: Did not receive data from / within the 10,000ms connection TTL. The connection will now be closed. [code=CONNECTION_TIMEDOUT]
2022-04-05 16:29:14.005 WARN  default    o.a.activemq.artemis.core.server - AMQ222061: Client connection failed, clearing up resources for session 9d3f8e8c-b4f8-11ec-bb6d-0242ac140004
2022-04-05 16:29:14.014 INFO  default    datomic.update - {:task :reader, :event :update/loop, :msec 2110000.0, :phase :end, :pid 1, :tid 34}
2022-04-05 16:29:14.015 WARN  default    o.a.activemq.artemis.core.server - AMQ222107: Cleared up resources for session 9d3f8e8c-b4f8-11ec-bb6d-0242ac140004
2022-04-05 16:29:14.198 WARN  default    o.a.activemq.artemis.core.server - AMQ222061: Client connection failed, clearing up resources for session 9d435f1d-b4f8-11ec-bb6d-0242ac140004
2022-04-05 16:29:14.199 WARN  default    o.a.activemq.artemis.core.server - AMQ222107: Cleared up resources for session 9d435f1d-b4f8-11ec-bb6d-0242ac140004
2022-04-05 16:29:14.560 INFO  default    o.a.activemq.artemis.core.server - AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.17.0 [9a4ab8f7-b4f8-11ec-85e0-0242ac140003] stopped, uptime 35 minutes
And this on the peer side:
clojure.lang.ExceptionInfo: Error communicating with HOST or ALT_HOST on PORT 4334
	at datomic.connector$endpoint_error.invokeStatic(connector.clj:53)
	at datomic.connector$endpoint_error.invoke(connector.clj:50)
	at datomic.connector$create_hornet_factory.invokeStatic(connector.clj:134)
	at datomic.connector$create_hornet_factory.invoke(connector.clj:118)
	at datomic.connector$create_transactor_hornet_connector.invokeStatic(connector.clj:308)
	at datomic.connector$create_transactor_hornet_connector.invoke(connector.clj:303)
	at datomic.connector$create_transactor_hornet_connector.invokeStatic(connector.clj:306)
	at datomic.connector$create_transactor_hornet_connector.invoke(connector.clj:303)
	at datomic.peer.Connection$fn__12046.invoke(peer.clj:217)
	at datomic.peer.Connection.create_connection_state(peer.clj:205)
	at datomic.peer$create_connection$reconnect_fn__12124.invoke(peer.clj:469)
	at clojure.core$partial$fn__5857.invoke(core.clj:2627)
	at datomic.common$retry_fn$fn__827.invoke(common.clj:543)
	at datomic.common$retry_fn.invokeStatic(common.clj:543)
	at datomic.common$retry_fn.doInvoke(common.clj:526)
	at clojure.lang.RestFn.invoke(
	at datomic.peer$create_connection$fn__12126.invoke(peer.clj:473)
	at datomic.reconnector2.Reconnector$fn__11300.invoke(reconnector2.clj:57)
	at clojure.core$binding_conveyor_fn$fn__5772.invoke(core.clj:2034)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(
	at java.util.concurrent.ThreadPoolExecutor$
The system works fine for over half an hour, but then basically just dies with those 2 errors in the logs. On the transactor console output I just get Heartbeat failed. Any ideas?

favila17:04:56


yes, but one is the peer, and the other is the transactor


different machines


it works for like 30 mins or so, and then dies. I am using prod level jvm args, 4gb for heap


Is the system quiet in that time? Eg no transactions?


On google cloud I remember some issue where their networking stack just drops idle tcp connections. I had to add keepalives into the kernel options somehow. IIRC it manifested like this, the peer looked like it went away and it only happened when the system was quiet
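For reference, the Linux knobs in question are the TCP keepalive sysctls; a fragment like this (values purely illustrative, not recommendations) makes idle connections send probes before a cloud networking layer can drop them as dead:

```
# /etc/sysctl.conf fragment -- illustrative values
net.ipv4.tcp_keepalive_time = 60      # idle seconds before the first probe
net.ipv4.tcp_keepalive_intvl = 10     # seconds between probes
net.ipv4.tcp_keepalive_probes = 5     # unanswered probes before the kernel drops the connection
```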


This was more than 5 years ago, my memory is hazy


no, I am developing a data processing framework with Clojure, and currently I am stress-testing it, and I am writing the results to Datomic


so every second it has 2 tx of 100 datoms to save


so basically 200 a second, split between 2 transactions


I should also mention that these are dockerized, and I have also modified the keep alive values, for both containers, to make sure this isn’t a tcp connection issue


is it possible that the peer really did just go away for 10 secs, e.g. a long gc pause?


well, on the transactor i have set the max gc pause to be 50ms


on the peer, I haven’t modified that


is that something one should do?


those targets don’t apply when there’s a full GC and memory pressure. I’m really just suggesting that if you know the peer is busy or could have memory pressure on it, rule out that the timeout is due to a GC pause


there are jvm startup flags that will log GC pause activity


to console or to a file
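Concretely, the standard flags (these are stock JVM options, not Datomic-specific):

```
# JDK 9+ unified logging: GC events to a file with timestamps
-Xlog:gc*:file=gc.log:time,uptime

# JDK 8 equivalents
-XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:gc.log
```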


oh, ok, thanks for the pointers, much appreciated 🙂


I will check that out


in the same transaction, is it possible to both retract an older entity, and create a new one, where an identity attribute is shared between both?


I’m guessing no, as there’s no way to disambiguate which entity is being referred to via attribute identities…?



[[:db.fn/retractEntity [:a/id1 "a"]]
 [:db.fn/retractEntity [:a/id2 "b"]]
 {:thing/id "thing"
  :thing/stuff {:a/id1 "a"
                :a/id2 "b"}}]
this can’t work, even with db/ids?


You should run this to know for sure. I would expect all lookups to happen before retraction, so this will cause {:a/id1 "a" :a/id2 "b"} to expand to [:db/add id1a-entity-id :a/id1 "a"] etc. Since those same assertions are being retracted in the same transaction via the retractEntity, the transaction will fail with a conflict.

👍 1

In general all lookups happen on the “before” db and all operations in a transaction are applied atomically. (Exceptions are composite tuples, which do read an intermediate state to know what to update the new values to; and entity predicates, which read an “after” db right before commit.) So it’s not ambiguous at all what a lookup in a transaction will do.


there’s no way, even in a transaction function, to see a “in-progress” or “partially-applied” database value


it does indeed fail with a conflict - will try with concrete IDs too (doesn’t work, sadly)


concrete ids will work IFF the entity ids for the retracts and the new assertions are different