
Hey folks, this post seemed very relevant for Crux folks, so please allow me to share it. It introduces the very interesting concept of a changelog and of populating aggregated views from it.

👀 3
🙏 3

Thanks, I hadn't spotted that post yet.

> In a future release, ksqlDB will support the same operation but with order defined in terms of timestamps, which can handle

^ This linked presentation is interesting and visualises how ksqlDB's Stream-Tables implement temporal joins. It's certainly a familiar-looking problem and tech stack. I spent time last year reflecting on how Crux's model of bitemporality relates to the event-streaming ecosystem (in preparation for our Strange Loop talk), and concluded that all the windowed-join mechanisms in KStreams/ksqlDB/Flink/etc. are necessarily constrained means of dealing with out-of-order data at a scale far larger than a single-writer system like Crux. Constrained, because the windows have to be finite and proportional to the workload.

Perhaps the most important difference from Crux is that a large streaming system has to be designed ahead of time - there's no equivalent of Crux's ad-hoc historical queries. If Crux is a car, then ksqlDB is a public transportation network 🙂


The two definitely have different scalability characteristics, though. In a way, ksqlDB is able to handle sharding semi-automatically - that's quite a feature, actually.

💯 3

Thank you for the accurate response - we are having ordering issues here too, and queue ingestion timestamps feel like quite a hacky way to handle out-of-order data. Bitemporality seems like a far superior paradigm in that respect.

🙂 3

Indeed, the scale that Kafka (and friends) is capable of addressing is immense, and way beyond the single-writer realities of where Crux is today. It's interesting to hear about your experiences though, and I hope Crux will be of some use!


Hello. I just started playing with Crux, and I wonder why I cannot update individual attributes on a document. Is there a rationale? Or am I missing something?


Hi 🙂 This is a deep and cross-cutting discussion which deserves more explanation than is currently communicated publicly, but the brief summary is that we think the document model provides the simplest foundation for both users and developers to reason about.

There is a "transaction function" feature that allows you to model any other semantics you might wish, including "update individual attributes on a document" - but we don't want to complicate the core API by shipping such capabilities as first-class. You can write and install your own transaction function to add a single attribute-value "triple" to an entity.

One particularly useful consequence of designing everything around documents is that documents can be optimistically committed and indexed without any lookups or validation against existing indexes, which is very valuable for high-throughput use cases.

From a philosophical standpoint we regard triples as merely one possible information model, not an ideal, and therefore we have focused on providing the means for users to experiment with new and (hopefully) better models. Perhaps the fundamental question to ponder in this debate is:

> When does it make sense to simply add a new triple to an entity without first looking at the state of that entity?
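A minimal sketch of such a transaction function, assuming Crux's `:crux.db/fn` API (the `:assoc-attr` name and the example entity/attribute are invented for illustration; exact namespaces may vary between Crux versions):

```clojure
;; Install a transaction function that adds a single attribute-value
;; pair to an existing entity. The function body runs inside the
;; transaction, so it sees a consistent snapshot of the database.
(crux/submit-tx node
  [[:crux.tx/put
    {:crux.db/id :assoc-attr
     :crux.db/fn '(fn [ctx eid k v]
                    (let [db  (crux.api/db ctx)
                          doc (crux.api/entity db eid)]
                      [[:crux.tx/put (assoc doc k v)]]))}]])

;; Invoke it to set a single attribute on an entity:
(crux/submit-tx node
  [[:crux.tx/fn :assoc-attr :some-entity :email "alice@example.com"]])
```

Note that the function still reads the current document in order to emit a new version of it - the document model stays the unit of ingestion; the "single-attribute update" is layered on top.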

thanks 3
Toyam Cox 01:10:07

Perhaps when adding a default user attribute across the board, you can do so without reading the user. But I agree that a patch mechanism would be confusing...


@U013YH4QPD0 when you say "a default user attribute", do you mean in terms of something used for auditing purposes? Or something used for tracking "ownership"?

Toyam Cox 20:10:09

I mean a system where users are stored in Crux, and after the data is created it's realised that each user needs a new attribute - an attribute that can safely be set to a default value. Of course there are other ways to solve this too, but it's one possible case.
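That backfill could be done with a plain query-then-put loop; here is a hedged sketch (the `:user/name` attribute and `backfill-default-attr!` helper are hypothetical names, not part of the Crux API):

```clojure
(require '[crux.api :as crux])

(defn backfill-default-attr!
  "Reads every entity that has a :user/name attribute and re-puts it
  with a default value for the new attribute, skipping documents that
  already carry it."
  [node attr default]
  (let [db       (crux/db node)
        user-ids (map first
                      (crux/q db '{:find  [?e]
                                   :where [[?e :user/name]]}))]
    (crux/submit-tx node
      (vec (for [eid user-ids
                 :let  [doc (crux/entity db eid)]
                 :when (not (contains? doc attr))]
             [:crux.tx/put (assoc doc attr default)])))))

;; e.g. (backfill-default-attr! node :user/notifications-enabled? true)
```

Because each put replaces the whole document, concurrent writers could race with this backfill; wrapping the same logic in a transaction function would make it atomic per entity.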

👍 3