#xtdb
2020-10-11
maleghast 13:10:14

Hello All... I am mulling over an idea and I was just wondering whether or not Crux has a theoretical limit of number of documents that can be indexed...

maleghast 13:10:55

In other words, if money were no object could I just keep adding nodes and storage into the billions of documents..?

maleghast 13:10:53

And if I did, is there likely to be a point at which queries would become un-fixably slow, specifically in reading..?

maleghast 13:10:30

(Clearly it does occur to me that the obvious fix is to have more than one Crux Database, as long as there is a way to partition the data such that I would not need to interrogate it across the partitions / DBs)

malcolmsparks 13:10:42

Hi Oli. If you're using RocksDB as the index store, it's very high (100s of TB). You should do some research to find existing use cases similar to what you have in mind.
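
For anyone following along, a minimal sketch of a RocksDB-backed Crux node using the module-based configuration found in newer Crux releases — exact keys vary between versions (and the crux.* prefixes later became xtdb.* in XTDB 1.x), so treat this as illustrative and check the docs for the version in use. Requires the crux-core and crux-rocksdb dependencies.

```clojure
;; Sketch only: config keys differ between Crux releases; directories are
;; placeholders. Each of the three stores is backed by its own RocksDB
;; instance here, with the index store being the one relevant to query scale.
(require '[crux.api :as crux]
         '[clojure.java.io :as io])

(def node
  (crux/start-node
   {:crux/tx-log         {:kv-store {:crux/module 'crux.rocksdb/->kv-store
                                     :db-dir      (io/file "data/tx-log")}}
    :crux/document-store {:kv-store {:crux/module 'crux.rocksdb/->kv-store
                                     :db-dir      (io/file "data/doc-store")}}
    :crux/index-store    {:kv-store {:crux/module 'crux.rocksdb/->kv-store
                                     :db-dir      (io/file "data/index-store")}}}))
```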

maleghast 13:10:45

That's A LOT of data in terms of the index - I can definitely take that as "would handle my needs" 🙂

maleghast 13:10:17

I will go and have a deeper look into RocksDB and some of the other options - I did expect to need to do that anyway, but I was just wondering if there was a theoretical ballpark, in the way that Cognitect say that if you want more than 1Bn datoms in Datomic you should "call them" 😉

refset 12:10:29

Yeah, there are no theoretical limits from our side, for the moment 🙂

refset 14:10:36

Of course, benchmarking the use-case with representative hardware is always highly recommended. For instance, it may well be that some shapes of queries degrade too rapidly as the total db size grows.
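
A minimal sketch of that kind of benchmarking, assuming a running node — the document shape and the :order/status attribute are hypothetical stand-ins for your own representative data. Re-run the timing after each bulk load and watch how latency scales with total db size.

```clojure
;; Sketch only: substitute a query and attributes representative of the
;; real workload. crux/q returns an eagerly-evaluated result set, so the
;; elapsed time covers the full query execution.
(require '[crux.api :as crux])

(defn query-time-ms
  "Runs a representative query against the current db snapshot and
   returns wall-clock time in milliseconds."
  [node]
  (let [db    (crux/db node)
        start (System/nanoTime)]
    (crux/q db '{:find  [order]
                 :where [[order :order/status :dispatched]]})
    (/ (- (System/nanoTime) start) 1e6)))

;; e.g. measure after loading 1M, 10M, 100M docs and compare:
;; (query-time-ms node)
```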