#xtdb
2021-11-29
lgessler 05:11:43

Anybody happen to have ballpark estimates on xtdb backends' disk footprint? Using a single node, so all components matter. Using LMDB at the moment and it seems rather disk hungry; wondering how the others compare before I implement a switch.

👀 1
xlfe 05:11:01

LMDB doesn't compress its indexes on disk afaik. RocksDB does, however, so anecdotally it is slightly slower but uses less disk space...

👍 1
lgessler 07:11:27

Compared with Rocks, and for my use case (lots of tiny records with 2-10 attrs each) it looks like Rocks uses about an order of magnitude less disk space. Very good to know, thanks for the pointer!

👍 2
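
For reference when comparing the two backends, here is a minimal single-node config sketch with all three components on disk; the :db-dir paths are placeholders, and the module symbols (xtdb.rocksdb/->kv-store, xtdb.lmdb/->kv-store) are assumed from the xtdb 1.x docs, so check them against your version:

;; RocksDB variant: each component backed by its own on-disk KV store
{:xtdb/index-store    {:kv-store {:xtdb/module xtdb.rocksdb/->kv-store
                                  :db-dir      "data/index-store"}}
 :xtdb/document-store {:kv-store {:xtdb/module xtdb.rocksdb/->kv-store
                                  :db-dir      "data/doc-store"}}
 :xtdb/tx-log         {:kv-store {:xtdb/module xtdb.rocksdb/->kv-store
                                  :db-dir      "data/tx-log"}}}

;; LMDB variant: same shape, only the module changes, e.g.
;; {:kv-store {:xtdb/module xtdb.lmdb/->kv-store :db-dir "data/index-store"}}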
lgessler 15:11:57

Update: actually, after indexing was all done and settled, LMDB appeared to be bigger only by a factor of 2.

👍 1
refset 16:11:43

interesting update, it always pays to measure 🙂

kschltz 18:11:09

Hello ppl. I've been playing around with xtdb using Kafka for the transaction log. I have an AWS MSK cluster with 2 brokers, and I can only transact stuff when I override the default producer configuration for idempotence. I have other services using this same cluster, so I'm not sure if this is a misconfiguration on MSK or something wrong with xtdb. It looks like I can't get all the acks; as soon as I override the default properties, it works:

:properties-map    {"acks"               "1"
                    "enable.idempotence" "false"}

refset 20:11:03

Hi @U01UAG3P57B - are you using a Kafka .properties file as well? If so, please can you share it (with secrets redacted)?

kschltz 20:11:51

@U899JBRPF I'm not, the only config other than this is bootstrap-servers.

refset 20:11:07

Hmm :thinking_face: well, I know we've not thoroughly validated the impact of setting "enable.idempotence" to false before... so I can't officially recommend it, but it seems strange that your producer isn't working otherwise. Are there any errors shown anywhere?

kschltz 11:11:08

@U899JBRPF I think I may have narrowed the issue down: for some reason I'm not getting acks from all brokers (there are 2). The event still gets through even though the transaction returns a timeout error on produce; I know this because I can see it in the tx_event table in Postgres.

refset 11:11:30

Hmm :thinking_face: does the topic look like it's being replicated across both brokers? I wonder if the replication factor has some impact. Also, and this may well be completely unrelated to your issue, but I believe 3 brokers is the recommended minimum https://stackoverflow.com/questions/58761164/in-kafka-ha-why-minimum-number-of-brokers-required-are-3-and-not-2

kschltz 12:11:04

I had issues before with 2 brokers, I think because of the leader election algorithm. But the thing you said about the replication factor may be at play here, because in this development setup this topic is set with a replication factor of 1 :thinking_face:

💡 1
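
One possible explanation, offered as a guess rather than something verified against this MSK cluster: enabling idempotence on a Kafka producer implies "acks" "all", and an acks=all produce only succeeds while the topic's in-sync replicas satisfy the brokers' min.insync.replicas setting, so a replication-factor-1 topic on a cluster configured for a higher min.insync.replicas could fail or time out on produce, much like the symptoms above. Annotated, the two producer property sets discussed in this thread are:

;; default behaviour per this thread: idempotent producer,
;; which implies acks=all and therefore depends on the topic's
;; replication factor and the brokers' min.insync.replicas
{"acks"               "all"
 "enable.idempotence" "true"}

;; the workaround from earlier in the thread: leader-only acks,
;; no dependency on other replicas, at the cost of weaker durability
{"acks"               "1"
 "enable.idempotence" "false"}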
kschltz 13:11:46

Thank you so much, helpful as usual @U899JBRPF

🙏 1
refset 13:11:36

You're welcome! So, did you get it working?

kschltz 15:12:49

Yeah, I did. It's working well enough for the dev environment, thanks.

🙌 1