
anybody happen to have ballpark estimates of the xtdb backends' disk footprint? I'm using a single node, so all components matter. I'm using LMDB at the moment and it seems rather disk-hungry; wondering how the others compare before I commit to an implementation

👀 1

LMDB doesn't compress its indexes on disk, AFAIK. RocksDB does however, so anecdotally it is slightly slower but uses less disk space...

👍 1
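For reference, swapping the KV backend is just a config change. A minimal sketch of an XTDB (1.x) single-node config using RocksDB for all three components; the module names come from the xtdb-rocksdb / xtdb-lmdb artifacts, and the directory paths are illustrative:

```clojure
;; Sketch only - swap 'xtdb.rocksdb/->kv-store for 'xtdb.lmdb/->kv-store
;; to compare backends; :db-dir values are illustrative.
{:xtdb/index-store
 {:kv-store {:xtdb/module 'xtdb.rocksdb/->kv-store
             :db-dir "data/index-store"}}
 :xtdb/document-store
 {:kv-store {:xtdb/module 'xtdb.rocksdb/->kv-store
             :db-dir "data/doc-store"}}
 :xtdb/tx-log
 {:kv-store {:xtdb/module 'xtdb.rocksdb/->kv-store
             :db-dir "data/tx-log"}}}
```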

compared with Rocks, and for my use case (lots of tiny records with 2-10 attrs each), it looks like Rocks uses about an order of magnitude less disk space. very good to know, thanks for the pointer!

👍 2

update: actually, after indexing was all done and settled, LMDB appeared to be bigger by only a factor of 2

👍 1

interesting update, it always pays to measure 🙂
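For anyone wanting to run the same comparison, a quick du-style sketch that sums the on-disk size of each backend's data directory (the paths in the commented usage are hypothetical):

```python
import os

def dir_size_bytes(path):
    """Recursively sum the sizes of regular files under path (symlinks skipped)."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            if not os.path.islink(fp):
                total += os.path.getsize(fp)
    return total

# Usage sketch - compare the two index-store directories:
# for backend in ("data/lmdb", "data/rocksdb"):
#     print(backend, dir_size_bytes(backend) / 1024 ** 2, "MiB")
```

Note this counts logical file sizes, not allocated blocks, so it can differ slightly from `du`; it's enough for an order-of-magnitude comparison like the one above.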


Hello ppl. I've been playing around with xtdb using Kafka for the transaction log. I have an AWS MSK cluster with 2 brokers, and I can only transact when I override the default producer configuration for idempotence. I have other services using this same cluster, so I'm not sure if this is a misconfiguration on MSK or something wrong with xtdb. It looks like I can't get acks from all brokers; as soon as I override the default properties, it works

:properties-map    {"acks"               "1"
                    "enable.idempotence" "false"}


Hi @U01UAG3P57B - are you using a Kafka .properties file as well? If so, please can you share it (with secrets redacted)?


@U899JBRPF Im not, the only config other than this is bootstrap-servers


hmm :thinking_face: well, I know we've not thoroughly validated the impact of setting "enable.idempotence" to false, so I can't officially recommend it, but it seems strange that your producer isn't working otherwise. Are there any errors shown anywhere?


@U899JBRPF I think I may have narrowed down the issue: for some reason I'm not getting acks from all brokers (there are 2). The event still gets through even though the transaction returns a timeout error on produce - I know this because I can see it in the tx_event table in Postgres


hmm :thinking_face: does the topic look like it's being replicated across both brokers? I wonder if the replication factor has some impact. Also, and this may well be completely unrelated to your issue, but I believe 3 brokers is the recommended minimum


I had issues before with 2 brokers, I think because of the leader election algorithm. But the thing you said about replication factor may be at play here, because in this development setup the topic has a replication factor of 1 :thinking_face:

💡 1
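For anyone hitting the same thing: with `acks=all` (which idempotent producers require), the leader only acks once the write is replicated to `min.insync.replicas` in-sync replicas, so produces can fail or time out when the topic's in-sync replica count falls below that threshold. A hedged sketch of the broker-side settings worth checking on a small dev cluster (these are standard Kafka broker config names; the values are illustrative, not a recommendation):

```properties
# Illustrative values for a 2-broker dev cluster - check what MSK has set.
default.replication.factor=2   # new topics replicated to both brokers
min.insync.replicas=1          # acks=all succeeds while the leader is in sync
```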

Thank you so much, helpful as usual @U899JBRPF

🙏 1

You're welcome! So, did you get it working?


yeah, I did. It's working well enough for dev environment, thanks

🙌 1