#xtdb
2022-11-29
zeitstein09:11:09

Is there an efficient way to obtain a list of docs that have been ::xt/deleted (at any time before now)?

refset12:11:11

Unfortunately not. You would need to model this information explicitly, using additional entities, for it to be efficient. Or you could create a secondary index of some kind (in approx. the same vein as the Lucene module)

zeitstein13:11:40

Thanks! I've been thinking about implementing "delete" through attributes. Storing roots of deleted subtrees in a 'special' entity ("trash") might be enough, though.

👍 1
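(A minimal sketch of that attribute-based approach in Clojure, assuming a hypothetical :trash entity and made-up attribute names:)

(require '[xtdb.api :as xt])

;; throwaway in-memory node, just for the sketch
(def node (xt/start-node {}))

;; "delete" by attribute rather than ::xt/delete, and record subtree
;; roots in a single :trash entity (all names here are illustrative)
(xt/submit-tx node
  [[::xt/put {:xt/id :note-1 :note/title "hello" :deleted? true}]
   [::xt/put {:xt/id :trash :trash/roots #{:note-1}}]])
(xt/sync node)

;; all soft-deleted docs are then a cheap query away...
(xt/q (xt/db node)
      '{:find [?e]
        :where [[?e :deleted? true]]})

;; ...as are the roots of deleted subtrees
(xt/q (xt/db node)
      '{:find [?root]
        :where [[:trash :trash/roots ?root]]})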
Martynas Maciulevičius11:11:15

Hey. Is it me, or did XTDB start using more memory with release 1.22? Previously I used 1.21.0.1 and my tests ran with "-Xmx2g" (even though that's already a lot), but now I have to increase it to "-Xmx4g", as it still crashes with 3 gigs... Is this expected? :thinking_face: What is a good way to debug it?

thomas12:11:53

Do you get OOM errors with 3 (or less) Gigs?

refset12:11:52

Hey @U028ART884X it's possible that this is a consequence of us switching the default caching implementation to LRU to improve overall stability. See the first note in the 1.22.0 release notes: https://github.com/xtdb/xtdb/releases/tag/1.22.0

refset12:11:33

You could verify that theory by attempting to manually specify (override) the cache to use the "Second Chance" cache again, or perhaps even by analysing memory snapshots with a profiler. But from a practical perspective, our decision to switch (back) to LRU at this stage is due to outstanding bugs that we discovered with the Second Chance implementation, see:
https://github.com/xtdb/xtdb/issues/1818
https://github.com/xtdb/xtdb/pull/1821
https://github.com/xtdb/xtdb/issues/1822
...unfortunately we can't recommend people attempt to use the Second Chance cache again until those issues are resolved (which we aren't currently expecting to achieve in the near future, but happy to discuss/review that!)
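(For the profiler route, the standard HotSpot flags below write a heap dump on OOM that can then be opened in a profiler such as VisualVM or Eclipse MAT; the path is only an example:)

"-XX:+HeapDumpOnOutOfMemoryError"
"-XX:HeapDumpPath=/tmp/xtdb-oom.hprof"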

Martynas Maciulevičius12:11:36

> Do you get OOM errors with 3 (or less) Gigs?
yeah :thinking_face:

👍 1
refset12:11:42

In the meantime you could also reduce the LRU cache sizes so you can use a smaller machine again, but that will have some impact on performance

thomas12:11:41

(The way I read your question, it wasn't clear that you were getting OOM, but you are, so please ignore my (non)answer)

Martynas Maciulevičius12:11:40

> Do you get OOM errors with 3 (or less) Gigs?
I get the errors with 3GB, but not 100% of the time. When I increased it to 4GB it worked several times. The only thing I insert is some number of docs, let's say 100. And then I have... well, I already showed Jeremy Taylor in a presentation that we connect two nodes in a weird way, and I only have one transaction function. It shouldn't be a problem as I have two distinct nodes. The interesting thing is that when it gets OOM, sometimes it can spin indefinitely. Then I notice that a build is still spinning after 2.5 hours... and I have to kill it. It was failing the same way when it was getting OOM a month ago. That's when I added this 2GB limit, as it was stable at that time and didn't exhaust the resources of the runner.

thomas12:11:54

> The interesting thing is that when it gets OOM, sometimes it can spin indefinitely.
I think when you hit OOM, things become very unpredictable. Once you hit that, you don't know where you are going to end up. Best to exit then (and if I remember correctly there is a JVM option for that)

☝️ 1
Martynas Maciulevičius12:11:55

This is the option (ofc for Linux):

"-XX:OnOutOfMemoryError=kill -9 %p"