This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2022-02-02
Channels
- # announcements (26)
- # architecture (29)
- # babashka (8)
- # beginners (91)
- # calva (70)
- # cider (7)
- # circleci (2)
- # cljs-dev (1)
- # clojure (79)
- # clojure-australia (2)
- # clojure-dev (3)
- # clojure-europe (40)
- # clojure-italy (2)
- # clojure-losangeles (4)
- # clojure-nl (4)
- # clojure-uk (4)
- # clojurescript (34)
- # cursive (13)
- # datomic (16)
- # defnpodcast (2)
- # emacs (11)
- # events (1)
- # fulcro (13)
- # graalvm (17)
- # gratitude (3)
- # instaparse (10)
- # introduce-yourself (2)
- # jobs (1)
- # jobs-discuss (5)
- # juxt (3)
- # kaocha (5)
- # meander (5)
- # membrane (2)
- # nextjournal (43)
- # off-topic (42)
- # pathom (52)
- # pedestal (8)
- # portal (3)
- # rdf (2)
- # re-frame (10)
- # reveal (21)
- # shadow-cljs (56)
- # slack-help (7)
- # vim (33)
- # xtdb (43)
Hi everyone ☀️ — a couple of questions about xt:fn
— assuming I submit the transaction
[[:xtdb.api/fn :my-func {...}] [:xtdb.api/fn :my-func {...}] ...]
where :my-func returns false when it detects some inconsistency, and otherwise returns a :xtdb.api/put transaction.
• does the context passed to each function call hold the speculative database so far?
• is the entire transaction aborted if any of the db function calls returns false?
Intuitively I’d assume the answer is yes to both questions, but I’m having a hard time confirming this.
Thank you!!
> Intuitively I’d assume the answer is yes to both questions
yes and yes 🙂 glad you got to the bottom of it in the end!
you're not the first person that's been caught out by that, if that helps, what with Clojure often treating false and nil as equivalent
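For readers landing here from a search, a minimal sketch of the semantics discussed above, assuming the XTDB 1.x transaction-function API (the function body, ids, and node setup are illustrative, not from the thread):

```clojure
(require '[xtdb.api :as xt])

;; in-memory node, purely for illustration
(def node (xt/start-node {}))

;; a transaction function is installed as a document under :xt/fn;
;; `ctx` reflects the speculative db state, i.e. the effects of
;; earlier ops in the same transaction
(xt/submit-tx node
  [[::xt/put
    {:xt/id :my-func
     :xt/fn '(fn [ctx {:keys [id balance]}]
               (if (neg? balance)
                 false                          ; false aborts the ENTIRE transaction
                 [[:xtdb.api/put {:xt/id id :balance balance}]]))}]])

;; invoke it; if any call returns false, none of the ops take effect
(xt/submit-tx node [[::xt/fn :my-func {:id :acct-1 :balance 100}]
                    [::xt/fn :my-func {:id :acct-2 :balance -5}]])
```

Note the gotcha from the reply above: the function must return false (not nil) to signal an abort, since a nil return would otherwise be easy to produce by accident.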
How costly is xt/db? I'm getting OutOfMemoryErrors from simple calls to it with a largeish LMDB index (details in thread)
java.lang.OutOfMemoryError: Direct buffer memory
at java.base/java.nio.Bits.reserveMemory(Bits.java:175)
at java.base/java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:118)
at java.base/java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:317)
at org.lwjgl.BufferUtils.createByteBuffer(BufferUtils.java:75)
at org.lwjgl.system.MemoryStack.create(MemoryStack.java:86)
at org.lwjgl.system.MemoryStack.create(MemoryStack.java:75)
at java.base/java.lang.ThreadLocal$SuppliedThreadLocal.initialValue(ThreadLocal.java:305)
at java.base/java.lang.ThreadLocal.setInitialValue(ThreadLocal.java:195)
at java.base/java.lang.ThreadLocal.get(ThreadLocal.java:172)
at org.lwjgl.system.MemoryStack.stackGet(MemoryStack.java:790)
at org.lwjgl.system.MemoryStack.stackPush(MemoryStack.java:799)
at xtdb.lmdb$new_transaction.invokeStatic(lmdb.clj:69)
at xtdb.lmdb$new_transaction.invoke(lmdb.clj:66)
at xtdb.lmdb.LMDBKv.new_snapshot(lmdb.clj:220)
at xtdb.kv.index_store.KvIndexStore.open_index_snapshot(index_store.clj:1110)
at xtdb.query.QueryEngine.db(query.clj:1944)
at xtdb.node.XtdbNode.db(node.clj:104)
at xtdb.node.XtdbNode.db(node.clj:100)
at <my code calling (xt/db node)>
the db checkpoint was around 1.5GB, and the machine has 8GB of memory with the Java max heap set to 6GB
I'm adding a db to each request, as nearly all service calls need to do some queries. This is happening in an AWS Fargate Linux env; I never encountered it locally on my dev MacBook
The xt/db call itself shouldn't be too costly, but I suspect you may need to leave more native memory in reserve for LMDB itself to use. For instance, you could try setting the max heap to 3GB, which leaves 3GB for off-heap (by default it's 1:1 IIRC), and implicitly somewhere in the region of 1.5GB-2GB for native allocations
ah, I had it backwards... my first intuition was to increase the JVM max heap as I was getting OOME thrown
but is the cost of xt/db directly proportional to the db size? as this only happens when the db is big
> is the cost of `xt/db` directly proportional to the db size?
no, it shouldn't be. There are per-db caches that are created, but by default they are quite conservative
any recommendations on how much to leave for Linux kernel buffers? if it's memory-mapped files, I guess those should benefit from kernel buffers
and another peculiar thing is that it doesn't seem to resolve itself: the instance needs to be restarted to recover... so it's not just a load spike with many requests taking up memory
the system started up, replayed a long tx log, was idle for a while and saved a checkpoint... then served a few requests and started throwing OOME.
> any recommendations on how much to leave for Linux kernel buffers
hmm, I guess that's what I really meant by "native allocations"(!)
feel free to open an issue to dig into this further, and if you are able to generate a ~minimal repro that would certainly be helpful 🙏
yeah, that might be difficult, but I'll share what I find later (luckily we are not in production now, so this isn't causing actual down time for anyone... just an issue that we need to resolve before too long)
it might just be that we need to throw more memory at the problem, but I want to understand the reasons before doing that
-XX:MaxDirectMemorySize=512m
via https://www.java67.com/2014/01/how-to-fix-javalangoufofmemoryerror-direct-byte-buffer-java.html
one other idea to try is setting MaxDirectMemorySize to a good value (== -Xmx), since by default it is (probably) unbounded
so I'll take a couple of GB from the heap and move that to max direct memory... that should probably fix it
> By default, JVM allows 64MB for direct buffer memory
that sounds like a very small default, but I guess most programs don't use big memory-mapped databases
IIRC the 64MB is just a placeholder that gets overwritten quite early on when the JVM starts up - either to -XX:MaxDirectMemorySize, unbounded, or to the same as -Xmx, depending on the flavour and version of JVM
but yes, in either of the two latter cases, this could potentially cause an OOM if you have 6GB heap and an 8GB memory limit
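Putting the suggestions in this thread together, a sketch of the flag combination being discussed (the exact values and the jar name are illustrative, not from the thread):

```shell
# Leave headroom under the 8GB container limit for LMDB's mmap'd files
# and other native allocations by bounding both the heap and direct
# buffer memory explicitly, rather than letting direct memory default
# to unbounded or to -Xmx.
java -Xmx3g -XX:MaxDirectMemorySize=3g -jar my-service.jar
```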
I doubt xt/db itself is the overall cause, tbh - it does very little beyond creating a handful of small objects - but it could well be the straw that broke the camel's back
if you can, it might be worth checking some of the JVM monitoring tools - you can get usage stats from JMX, a few profilers show it if it's a local JVM, and I daresay if you have any other monitoring tools attached they may well do so too
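As one concrete way to get those JMX stats from a REPL, a sketch using the standard BufferPoolMXBean (this snippet is an illustration, not from the thread):

```clojure
(import '[java.lang.management ManagementFactory BufferPoolMXBean])

;; the pool named "direct" is the one counted against
;; -XX:MaxDirectMemorySize; "mapped" covers memory-mapped files
(doseq [^BufferPoolMXBean pool
        (ManagementFactory/getPlatformMXBeans BufferPoolMXBean)]
  (println (.getName pool)
           "used:" (.getMemoryUsed pool)
           "capacity:" (.getTotalCapacity pool)))
```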
thanks, I've been meaning to add the cloudwatch reporter with jvm metrics... I've only had the CPU/mem % that fargate reports so far
leaving this here for Googlers in the future: got a curious error message java.lang.IllegalArgumentException: No implementation of method: :latest-completed-tx of protocol: #'xtdb.db/LatestCompletedTx found for class: xtdb.kv.index_store.KvIndexStoreTx, which ended up being because I was using xt/with-tx inside a transaction function - that (very reasonably) seems to be unsupported. Managed to work around it with no issue
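For context, a sketch of xt/with-tx used outside a transaction function, where it is supported: it returns a speculative db value without submitting anything to the log (the node and document are illustrative assumptions):

```clojure
(require '[xtdb.api :as xt])

;; `node` is assumed to be a started XTDB node
(let [db  (xt/db node)
      ;; apply tx ops speculatively; nothing reaches the tx log
      db' (xt/with-tx db [[::xt/put {:xt/id :test-doc :value 42}]])]
  ;; the speculative db sees the new doc; the real node does not
  (xt/entity db' :test-doc))
```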
It's definitely something we hope to be able to support at some point, and is (I think) covered by this existing issue https://github.com/xtdb/xtdb/issues/1434 Out of interest, were you trying to implement integrity constraint validations using Datalog?
cool! nah, not doing that, doing something else that I don't even know how to put concisely
@U49U72C4V We're huge fans of the Clojurians archives — the volunteers who maintain them are saints. But, fwiw, http://discuss.xtdb.com is a saner (and mutable, which in this case is a good thing) home for tidbits you'd like to show up in Google searches. 🙂 Please feel free to cross-post there.