#xtdb
2020-01-13
refset00:01:02

Hi @caio I haven't looked into NATS much, but it seems like it may not be a viable fit to replace Kafka's role within Crux. For instance, a lack of multi-publisher ordering guarantees means it may not be possible to model the linear transaction log that Crux relies on: https://docs.nats.io/faq#does-nats-offer-any-guarantee-of-message-ordering Message persistence also seems to be missing unless you use NATS Streaming on top, and even that doesn't feel like it would give you strong enough durability guarantees to rely on for DR scenarios. Perhaps it's okay if you only want to run a single-node Crux cluster... I'd be happy to brainstorm further in any case!

refset00:01:26

As for nippy, it has been a useful and sensible default for us so far, but we've not spent much time benchmarking and comparing it in isolation. There's an open issue for pluggable serialisation if you fancy chiming in at all: https://github.com/juxt/crux/issues/151

refset00:01:20

If you want to use a custom TxLog backend, it's currently just a matter of implementing the protocol, but we want to simplify all this further still
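For anyone curious, a custom TxLog backend might look roughly like the following sketch. The protocol and method names here are illustrative stand-ins, not the exact `crux.db` interface, which may differ:

```clojure
;; Illustrative only -- the real protocol lives in crux.db and its
;; method names/signatures may differ from this sketch.
(defprotocol TxLog
  (submit-tx [this tx-events]
    "Append tx-events to the log; returns a deref-able of the assigned tx-id.")
  (open-tx-log [this after-tx-id]
    "Return the transactions after the given tx-id, in log order."))

;; An in-memory stand-in: fine for a single node, but with none of
;; Kafka's durability or replication.
(defrecord AtomTxLog [!log]
  TxLog
  (submit-tx [_ tx-events]
    (let [log (swap! !log conj {:tx-events tx-events
                                :tx-time (java.util.Date.)})]
      (delay {:crux.tx/tx-id (count log)})))
  (open-tx-log [_ after-tx-id]
    (drop (or after-tx-id 0) @!log)))
```

The key property any real backend would need is the one discussed above for NATS: a single, totally ordered, durable log shared by all submitters.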

caio02:01:16

oh, yeah, I was just talking about nats streaming. DR should be fine on nats streaming since it replicates the log using raft, but I'll have to look more into the multi-publisher ordering guarantees because I don't really know anything about it

caio02:01:56

anyway, I just thought it'd be a fun project. no real use cases that are not covered by crux-jdbc

👍 4
refset15:01:04

Hmm, I'm still not sure. Bear in mind that Crux relies on a single-partition topic for the TxLog. Let me know if you find out anything else though, as it would be interesting to make it work

caio21:01:58

well, that makes it even better hahah. nats streaming doesn't have partitions

teodorlu17:01:33

Hello, everyone! I couldn't find a recommendation on how to do one-to-many relations with Crux in the documentation. Is a vector of IDs on the "left" entity the way to go?

teodorlu17:01:27

In Datomic, I'd reach for :db.cardinality/many

teodorlu18:01:04

I guess I could just refer from the many-part to the one-part of the relation. That way I wouldn't have to update the one-part of the relation each time I wanted to add an item, either.

refset18:01:56

Hi @teodorlu! Your observations are accurate. With the first approach you also have the choice of using either a vec or a set, where a vec implicitly records the order of the relations. Deciding which option is best will depend on the context, e.g. the default shape of data arriving from upstream systems and the kinds of ingestion latencies you are willing to tolerate. Ideally you always want documents to be small, and "shredded" if necessary, especially if they are being updated frequently, so the second option is probably a safe bet
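The two options can be sketched as Crux documents (the entity IDs and attribute names here are made up for illustration):

```clojure
;; Option 1: the "one" side holds a vec of IDs, which preserves order.
;; Adding an item means re-submitting this (ever-growing) document.
{:crux.db/id :playlist/road-trip
 :playlist/name "Road trip"
 :playlist/tracks [:track/one :track/two :track/three]}

;; Option 2: each "many" side document refers back to the "one" side.
;; Adding an item only writes one new, small document.
{:crux.db/id :track/four
 :track/name "Fourth track"
 :track/playlist :playlist/road-trip}
```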

teodorlu18:01:14

Thanks for your reply!

👍 4
teodorlu18:01:23

Can I read more about how references are managed somewhere in the docs?

refset19:01:06

Hmm, have you looked through the Space Tutorials already? Or the Bitemporal Tale? Both of them discuss modelling options a bit. In terms of how Datalog relations work under the hood, it's all achieved via extensive hashing and query-time traversals
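As a tiny illustration of that query-time traversal, a Datalog query can follow a reference held in a document attribute. This assumes a running `node` and hypothetical entity IDs/attributes where each child document points at its parent:

```clojure
(require '[crux.api :as crux])

;; Find all tracks that reference a given playlist, joining through
;; the :track/playlist attribute at query time.
(crux/q (crux/db node)
        '{:find [?track ?name]
          :where [[?track :track/playlist :playlist/road-trip]
                  [?track :track/name ?name]]})
```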

teodorlu08:01:04

That sounds like a good idea. The fact that I'm asking these questions might just reflect that I need some more time working on fundamentals. Thanks.

👍 4
refset11:01:24

No problem, it's great to discuss these things. Certainly the docs always need expanding 🙂