
heyo, any other jdbc+sqlite users out there? sometimes if there's an error or i interrupt the process the sqlite file gets and stays locked. curious if there's anything i could do to avoid that and if there's any workarounds


nvm, realized there wasn't a reason for me to not use lmdb (i just wanted the most performant and persistent JAR-embeddable implementation)

David Pham 06:07:49

What is the scaling story of crux? Datomic has a single writer, but reads can be scaled horizontally. What is the similar statement for crux?


Reads scale horizontally also. Each node handles writes by itself, so you have a single writer limitation there too, effectively.


@dominicm That is, as long as you use Kafka? Using RocksDB as the backend will restrict you to one node, as RocksDB only supports one writer. Do I understand that right?


@sveri right. Rocksdb is not yet distributed (although I vaguely recall such a thing existing).

David Pham 09:07:09

What about JDBC backend?

David Pham 09:07:08

Would it be possible to have multiple tx_events tables in a database with JDBC?


The key thing to understand is that each node synchronizes with your event store. Your bottleneck is constrained by:
- time to write to event storage
- time to read event storage into indexes on each node


JDBC's write throughput will be dictated by your underlying database & table. I don't know how the underlying table looks, but I suspect a TimescaleDB hypertable would perform far better than a normal table. Essentially you're limited by how fast you can append to that table, I think. I don't know much about the JDBC implementation, so take these guesses with a grain of salt and measure, measure, measure.


We currently only use JDBC for the golden stores - the transaction log and the document store (i.e. not for the query indices). Regarding the performance characteristics we rely on for those: the transaction log appends rows to the end of a table, and the document store is essentially a content-addressable KV store.


It'd be quite reasonable to mix-and-match those, too - Kafka for the transaction log (for its write characteristics) and JDBC for the document store
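As a rough sketch of what such a mixed topology could look like (the exact module symbols and option keys vary by Crux version, and the server addresses/paths here are made up - treat this as illustrative, not authoritative):

```clojure
;; Hypothetical node config: Kafka transaction log, JDBC (Postgres)
;; document store, RocksDB query indices. Check the docs for your
;; Crux version -- module names and options have changed over releases.
(require '[crux.api :as crux])

(def node
  (crux/start-node
   {:crux/tx-log {:crux/module 'crux.kafka/->tx-log
                  :kafka-config {:bootstrap-servers "localhost:9092"}}
    :crux/document-store {:crux/module 'crux.jdbc/->document-store
                          :connection-pool {:dialect {:crux/module 'crux.jdbc.psql/->dialect}
                                            :db-spec {:dbname "crux-docs"}}}
    :crux/index-store {:kv-store {:crux/module 'crux.rocksdb/->kv-store
                                  :db-dir "/var/crux/indexes"}}}))
```

Each component is configured independently, which is what makes the mix-and-match possible.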


Each Crux node then consumes the transaction log and document store, and updates its own set of query indices - this is where RocksDB/LMDB shine


Do you have some docs or examples where you configure such a setup?


Thank you, that's helpful 🙂


Is there any best practice when using Crux to implement a "global" counter, such as a human-readable integer order number? One possible solution could be to have a document holding the current value of the counter and then use a transaction-fn to read the value and add it to any new order; another could be to keep an index of orders in a document, e.g. {:1 :internal-order-id-1 :2 :internal-order-id-2}, again updating and creating within a transaction-fn. What would be the trade-offs, and am I missing any obvious solution?


This seems reasonable - it's naturally a problem that requires serialised transactions so a transaction function seems a good fit.


At first glance, I'd consider a put-order tx-fn that could take the order doc, increment the counter document and assoc the new order-id into the order before putting it
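A minimal sketch of that idea, assuming a counter document at :order-counter and a tx-fn installed as :put-order (both names are made up; this uses the Crux 1.x transaction-function API, where the fn body is quoted code stored in a document):

```clojure
;; Install the transaction function as a document.
(crux.api/submit-tx node
  [[:crux.tx/put
    {:crux.db/id :put-order
     :crux.db/fn '(fn [ctx order-doc]
                    (let [db      (crux.api/db ctx)
                          counter (or (crux.api/entity db :order-counter)
                                      {:crux.db/id :order-counter :value 0})
                          n       (inc (:value counter))]
                      ;; Both puts commit atomically in one transaction,
                      ;; so the counter can never be handed out twice.
                      [[:crux.tx/put (assoc counter :value n)]
                       [:crux.tx/put (assoc order-doc :order-number n)]]))}]])

;; Usage: submit an order and let the fn assign the next number.
(crux.api/submit-tx node
  [[:crux.tx/fn :put-order {:crux.db/id :order-1 :item "widget"}]])
```

Because tx-fns run serially at transaction time on each node, contention on the counter document is handled for you.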


ok, sounds good, thanks for your input!


Are there plans to implement something like :patch instead of only :put for crux? I imagine that the challenges can be incredibly hard, like patching a document with a future valid-time, but I also think it can be a killer feature :)


I think there's an implementation in user space, I asked about this recently and @taylor.jeremydavid answered 🙂


@U3Y18N0UC here's the chat mentioned from a few days ago: I think it is definitely solvable in user-space using transaction functions, but in general we would only be likely to add a "native" patch operation if we change the nature of the underlying data model in some way. We might also consider it if we could make stronger guarantees about the efficiency, for instance with a doc-store that benefits from structural sharing.
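A user-space patch along those lines might look like this sketch (Crux 1.x API; the :patch fn-id and entity id are hypothetical):

```clojure
;; Install a tx-fn that merges a partial map into an entity's
;; current document -- an atomic read-modify-write.
(crux.api/submit-tx node
  [[:crux.tx/put
    {:crux.db/id :patch
     :crux.db/fn '(fn [ctx eid m]
                    (let [db (crux.api/db ctx)
                          e  (crux.api/entity db eid)]
                      [[:crux.tx/put (merge e m)]]))}]])

;; Usage: merge {:age 42} into :person/alice's current document.
(crux.api/submit-tx node
  [[:crux.tx/fn :patch :person/alice {:age 42}]])
```

Note this only patches the current valid-time version of the entity; patching at a future valid-time needs considerably more care.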

David Pham 17:07:52

Thanks a lot for all the discussion! May I ask what is a query index? Is it the eavt/aevt/avet B-Tree?


Hi 🙂 Crux indexes are stored using a single set of sorted key-value pairs in Rocks/LMDB (Rocks is an LSM tree, but LMDB is a kind of B-tree). We don't use the eavt/aevt/avet structures specifically, but the general idea is the same. For a good overview I can recommend this talk: This is where our index prefixes are defined:


@dominicm do you think it's possible to, like, transact a change in :age field with valid-time in the future with this userspace implementation, and it'll respect changes on other fields that happened before that valid-time?


I think this can be made to work, but you must funnel all updates to your entity through your tx-fn, so that the future valid-time version is always correct


This is a kind of transactor, in Datomic parlance, no?

✔️ 3

Hi friends. We are firmly considering adopting Crux as a persistence solution in places where we would use an RDBMS or Datomic. It'd be nice to know about production use cases that support this choice.

👋 6

Has the beta programme already finished, or is it still running?


Hi! We are working with a few orgs on various aspects of our beta plans, but we are still open to new joiners :) the programme will keep running until crux-core drops the -beta suffix (I estimate another 3 months at this point). Feel free to DM me

👍 6