2020-10-11
Channels
- # announcements (1)
- # babashka (132)
- # beginners (52)
- # calva (46)
- # clj-kondo (8)
- # cljdoc (17)
- # clojure (13)
- # clojure-australia (1)
- # clojure-dev (3)
- # clojure-europe (4)
- # clojurescript (4)
- # cloverage (1)
- # conjure (22)
- # datomic (9)
- # emacs (2)
- # fulcro (16)
- # leiningen (5)
- # malli (26)
- # off-topic (16)
- # pathom (3)
- # portal (5)
- # reagent (10)
- # reitit (5)
- # rewrite-clj (1)
- # ring (1)
- # shadow-cljs (14)
- # spacemacs (6)
- # tools-deps (10)
- # vim (11)
- # vscode (1)
- # xtdb (10)
Hello All... I am mulling over an idea and I was just wondering whether Crux has a theoretical limit on the number of documents that can be indexed...
In other words, if money were no object could I just keep adding nodes and storage into the billions of documents..?
And if I did, is there likely to be a point at which queries would become unfixably slow, specifically for reads..?
(Clearly it does occur to me that the obvious fix is to have more than one Crux Database, as long as there is a way to partition the data such that I would not need to interrogate it across the partitions / DBs)
Hi Oli. If you're using RocksDB as the index store, the limit is very high (100s of TB). It's worth doing some research to find use cases similar to what you have in mind.
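(For reference, a minimal sketch of starting a Crux node with RocksDB as the index store, assuming the topology-style configuration from the Crux docs of this era; the exact keys and module names varied between releases, so treat them as assumptions:)

```clojure
;; Deps (assumed): juxt/crux-core and juxt/crux-rocksdb.
(require '[crux.api :as crux])

;; A standalone Crux node with RocksDB as the KV/index store.
;; The topology vector and dir keys follow the pre-1.12 config
;; style; later releases moved to a module-map configuration.
(def node
  (crux/start-node
    {:crux.node/topology '[crux.standalone/topology
                           crux.kv.rocksdb/kv-store]
     :crux.standalone/event-log-dir "data/event-log"
     :crux.kv/db-dir "data/db-dir"}))
```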
That's A LOT of data in terms of the index - I can definitely take that as "would handle my needs" 🙂
I will go and have a deeper look into RocksDB and some of the other options - I did expect to need to do that anyway, but I was just wondering if there was a theoretical ballpark, in the way that Cognitect say that if you want more than 1Bn datoms in Datomic you should "call them" 😉
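(And on the partitioning idea raised earlier, a hypothetical sketch: one independent Crux deployment per partition, with each query routed to the node that owns its partition. The helper names and partition keys here are illustrative, not a Crux feature:)

```clojure
(require '[crux.api :as crux])

;; Hypothetical helper: one standalone RocksDB-backed node per
;; partition (same assumed config style as the sketch above).
(defn start-partition-node [dir]
  (crux/start-node
    {:crux.node/topology '[crux.standalone/topology
                           crux.kv.rocksdb/kv-store]
     :crux.standalone/event-log-dir (str dir "/event-log")
     :crux.kv/db-dir (str dir "/db")}))

(def nodes
  {:eu   (start-partition-node "data/eu")
   :us   (start-partition-node "data/us")
   :apac (start-partition-node "data/apac")})

;; Route each query to the single node owning its partition, so
;; no query ever needs to cross partitions.
(defn q-in-partition [partition-key query]
  (crux/q (crux/db (nodes partition-key)) query))
```

(Writes would need the same routing, and rebalancing data across partitions is the hard part this sketch ignores.)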