
I am trying to figure out whether Crux works well with an application I am writing. The app will run as several instances and autoscale. Do I understand correctly that I have two options if I want persistent storage with a Postgres backend?

1. Set up a dedicated Crux node which I query from my application nodes. This means more infrastructure to manage.
2. Embed Crux in the application nodes. This means the KV store I choose will be duplicated across all the nodes.

I can't imagine the second case scaling well as data grows. Also, memory is the most expensive resource in the cloud, which leaves me with option one only, right?


There are 3 separate 'stores' to a Crux system. Let's call them txes (transactions, mostly hashes), docs, and idxs (indexes). With Crux, txes and docs are shared between nodes. Only idxs is local to each node, for performance. Your idxs store holds indexes of the top-level entries of your documents.


Memory is expensive, but Crux does not require you to keep the entire index in memory. I like to keep my idxs in RocksDB, which stores them on disk.
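For reference, pointing the local index store at RocksDB is a small piece of node configuration. This is a sketch assuming the module-style config map of Crux 1.x; key names have changed between versions (and again after the rename to XTDB), so check the docs for the version you run. The `:db-dir` path is a made-up example:

```clojure
;; Sketch only — assumes the Crux 1.x module-style configuration map.
;; The :db-dir path is a hypothetical example.
{:crux/index-store
 {:kv-store {:crux/module 'crux.rocksdb/->kv-store
             :db-dir "/var/crux/indexes"}}}
```

Because the indexes live on local disk rather than in RAM, each node only needs enough memory for RocksDB's cache, not the whole index.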


You can choose RDS for txes and docs, shared across all your Crux nodes.
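Putting the pieces together, a node config along these lines shares the tx-log and document store via Postgres (e.g. on RDS) while keeping the indexes local. Again, this is a hedged sketch against the Crux 1.x JDBC modules; the exact module and connection keys vary by version, and the host/credentials are placeholders:

```clojure
(require '[crux.api :as crux])

;; Sketch only — assumes the Crux 1.x JDBC modules; verify the key names
;; against the version you are running. Host/user/password are placeholders.
(def node
  (crux/start-node
    {:crux.jdbc/connection-pool
     {:dialect {:crux/module 'crux.jdbc.psql/->dialect}
      :db-spec {:host     "my-rds-instance.example.com"
                :dbname   "crux"
                :user     "crux"
                :password "secret"}}
     ;; txes and docs, shared by every node via Postgres:
     :crux/tx-log         {:crux/module 'crux.jdbc/->tx-log
                           :connection-pool :crux.jdbc/connection-pool}
     :crux/document-store {:crux/module 'crux.jdbc/->document-store
                           :connection-pool :crux.jdbc/connection-pool}
     ;; idxs, local to this node, on disk via RocksDB:
     :crux/index-store    {:kv-store {:crux/module 'crux.rocksdb/->kv-store
                                      :db-dir "/var/crux/indexes"}}}))
```

Every application node started with the same txes/docs config reads the same shared transaction log and builds its own local indexes from it.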


I have set up an example app with Postgres for docs and txes, and it works well. You are right, of course, that having the idxs in the pods/dynos won't affect memory much when they are stored on the filesystem. Maybe I'll have to crunch some numbers to get an idea of how big the data might become. If it becomes big enough to be an issue, setting up a dedicated node wouldn't be too big a problem.


Thanks for your response!
