#datomic
2023-02-14
kenny 18:02:17

Oftentimes programs need to apply certain constraints to Datomic queries at runtime, and working with the map form of queries is the easiest way to do so. Of course, writing queries in the list form is nice, and we do not want to remove that nicety from end users. As such, to build our dynamic query at runtime, we need a way to ensure every query passed in gets fully transformed into the map form before doing said dynamic manipulation. In a non-public ns there is a function, datomic.query.support/listq->mapq, which does exactly that. Would the team consider adding such a function to the public API?
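For anyone reading later, these are the two query forms being discussed; a minimal sketch with made-up attribute names, showing why the map form is the convenient target for runtime manipulation:

```clojure
;; List form, as users typically write it
(def listq '[:find ?e
             :in $ ?status
             :where [?e :order/status ?status]])

;; Equivalent map form, which is plain data and easy to build up at runtime
(def mapq '{:find  [?e]
            :in    [$ ?status]
            :where [[?e :order/status ?status]]})

;; Adding a constraint at runtime is just a data update on the map form
(update mapq :where conj '[?e :order/archived? false])
```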

wox 19:02:59

Has anyone been encountering increased memory usage with the latest on-prem version, 1.0.6610? We’ve been running it in production since Feb 8, and the oom-killer has now twice terminated the k8s pod, while no other settings have changed. We run the pod with a 5G memory limit, while the transactor is configured with Xmx4g and uses the recommended defaults from the sample properties, i.e. memory-index-threshold=32m, memory-index-max=512m, object-cache-max=1g. I can’t really correlate the events with anything special in our application, and the CloudWatch metrics for the transactor don’t show anything interesting either. I suppose I can increase the pod memory limit, but is there some way to predict how much memory will be needed on top of the 4g heap?
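For reference, a sketch of only the settings mentioned above (storage and license settings omitted), assuming the standard on-prem layout where the heap size is passed as JVM flags when launching the transactor:

```
# config/transactor.properties (memory-related settings only)
memory-index-threshold=32m
memory-index-max=512m
object-cache-max=1g

# Heap size is given as JVM flags when starting the transactor, e.g.:
#   bin/transactor -Xms4g -Xmx4g config/transactor.properties
```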

jaret 14:02:23

Hey @U7PQLLK0S I'd chart your availableMB over time and your MemoryIndexMB over time. By far, the most intensive memory usage comes during indexing, so ensuring that you have enough heap to cover the peaks during indexing is what you want. I'm assuming here that the k8s pod is your transactor and not your peer. If you'd like, we'd be happy to look at your logs and metrics in a support setting: shoot a case to [email protected] or log a request at https://support.cognitect.com/hc/en-us/requests/new.

wox 14:02:39

Right, yeah, but I don’t think it ran out of heap: availableMB is showing over 2 gigabytes at the time this last happened, and MemoryIndexMB just a few megabytes, if I’m reading that correctly. My assumption is that I would get an OutOfMemoryError from the JVM if I ran out of heap? To be clear, it’s also Xms4g, so the heap is reserved from the start. That’s what got me asking why the OS oom-killer would end up killing the container; I think the process is trying to reserve too much memory on top of the heap, but I don’t know why or how much it needs.

wox 14:02:39

But sure, I can make an official support case and gather the logs if this still happens.

wox 16:02:59

Just a note in case anyone ever finds this: the transactor seems to have stabilized at 4.82 GiB of memory with the 4g (= 4 GiB) heap setting, which explains why 5G (= 5 gigabytes = 4.66 GiB) was not enough.
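To spell that out with the numbers from this thread (a rough sketch; note that a Kubernetes limit of "5G" means decimal gigabytes, while "5Gi" would mean gibibytes):

```clojure
(def pod-limit-bytes (* 5 1000 1000 1000))                      ; k8s "5G" limit
(def pod-limit-gib   (/ pod-limit-bytes (* 1024.0 1024 1024)))  ; ≈ 4.66 GiB
(def observed-gib    4.82)                                      ; reported transactor usage
(> observed-gib pod-limit-gib)                                  ; => true, so the oom-killer fires
```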

wox 16:02:54

Well, maybe not stabilized: memory use has increased by 0.11 GiB in a week, so it's perhaps still growing very slowly.