#off-topic
2023-12-28
Tharaka M.04:12:24

Hi, This is an interesting conversation that shows the unique power of lisp! https://youtu.be/fytGL8vzGeQ?si=5vrr9mBnESs1eYSD

👍 4
p-himik16:12:42

> So I'm only developing now in "forever" languages that are going to be around for 20, 30, 40, 50, 100 years from now.

Love this part.

👍 2
Daniel Gerson14:01:43

Subscribed recently. I dug the Crash Bandicoot reference, which I didn't know about.

Jason Bullers15:12:24

I'm curious if any of you have had experience doing event sourcing with Clojure. Conceptually, it seems like it would be a good fit for data-oriented FP. If you have used it (especially if you've used it at scale), how did you keep things efficient and deal with the increased storage costs of tracking events rather than mutating records in the database? If you chose not to use it, what were your reasons against?

Jason Bullers15:12:42

I've heard some arguments that you can't possibly know what your future requirements will be for the data, so it's better to collect and store what you can rather than prematurely discarding and mutating in place. I'm on board with that idea (even in git, I'll rarely ever squash commits), but it's not exactly a bulletproof argument: it's based on speculation and doesn't account for practical considerations like increased storage needs and the time to reconstruct state from events.

Ben Sless15:12:22

Cost depends on how many events you're dealing with, but you can probably think of a few ways of reducing storage costs. Even batching and gzipping batches will go a long way (assuming a bunch of events are similar). Over time you can move to cheaper storage, and for daily usage work with a snapshot plus the latest events; you don't have to get the entire event log out of the freezer every time. I'd prefer to use event sourcing in all cases where it's possible and a good idea.
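
A minimal sketch of the batching-and-gzip idea, using only the JDK (the `event-log` var and the 1000-event batch size are hypothetical):

```clojure
(require '[clojure.java.io :as io]
         '[clojure.edn :as edn])

(import '(java.io ByteArrayInputStream ByteArrayOutputStream)
        '(java.util.zip GZIPInputStream GZIPOutputStream))

(defn gzip-batch
  "Serialize a batch of events as EDN lines and gzip them.
  Similar events compress well because their keys and shapes repeat."
  ^bytes [events]
  (let [baos (ByteArrayOutputStream.)]
    (with-open [w (io/writer (GZIPOutputStream. baos))]
      (doseq [e events]
        (.write w (prn-str e))))
    (.toByteArray baos)))

(defn gunzip-batch
  "Read a gzipped batch back into a vector of events."
  [^bytes bs]
  (with-open [r (io/reader (GZIPInputStream. (ByteArrayInputStream. bs)))]
    (mapv edn/read-string (line-seq r))))

;; Archive the cold part of a (hypothetical) event log in fixed-size batches:
(def archived (mapv gzip-batch (partition-all 1000 event-log)))
```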

p-himik15:12:29

You'll probably find useful information in any article that talks about Datomic or XTDB.

2
jpmonettas15:12:40

Using Datomic is a way of implementing event sourcing for your system that is known to scale (Nubank), and it already provides solutions for many different aspects of it.
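
For what it's worth, a minimal sketch of that view using the on-prem Datomic peer API (the connection URI is hypothetical): the transaction log is the event log, and any past state is a ready-made projection.

```clojure
(require '[datomic.api :as d])

(def conn (d/connect "datomic:dev://localhost:4334/app"))

;; Every transaction is an immutable event; read them back in order.
(def all-txes
  (seq (d/tx-range (d/log conn) nil nil))) ; maps of {:t ... :data [datoms]}

;; Rebuilding state for a point in time is just an as-of view.
(def db-then (d/as-of (d/db conn) #inst "2023-12-01"))
```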

Martín Varela06:12:12

I implemented an event-sourced analytics system at $PREVIOUS_JOB, on top of Kafka Streams and XTDB. It was not a panacea, but it worked pretty well. The main issue I had (which came from some historical baggage in the origins of the project, and the way Kafka Streams works) was dealing with the ordering of events (think: dependencies) when replaying from the log. It was solvable, but it required some ingenuity. There were also some performance considerations. In the end, I implemented the so-called "Kappa architecture" to allow for updates to the stream-processing side with no downtime.
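
One way to sketch the replay-ordering fix (the :entity-id and :seq keys are hypothetical, and real cross-entity dependencies need more care than a plain sort):

```clojure
(defn replay
  "Restore a per-entity order before folding the events into state."
  [apply-event init events]
  (->> events
       (sort-by (juxt :entity-id :seq))
       (reduce apply-event init)))
```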

Jakob Durstberger08:12:01

I have worked on a system where all our microservices used event sourcing with PostgreSQL. The implementation was really straightforward: we wrote the event and updated the projection within the same transaction. Of course, that only works if you have a reasonable number of events per stream, but there are ways to optimise later. I have to say that event sourcing has become one of my favourite practices. Projecting the events in Clojure is also simple, as it is just a reduce over a multimethod.
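
A minimal sketch of that "reduce over a multimethod" shape (the event types and keys are made up for illustration):

```clojure
(defmulti apply-event (fn [_state event] (:type event)))

(defmethod apply-event :account/opened
  [state {:keys [account-id]}]
  (assoc state account-id {:balance 0}))

(defmethod apply-event :account/deposited
  [state {:keys [account-id amount]}]
  (update-in state [account-id :balance] + amount))

(defn project
  "Fold an event stream into the current read model."
  [events]
  (reduce apply-event {} events))

(project [{:type :account/opened    :account-id "a1"}
          {:type :account/deposited :account-id "a1" :amount 100}])
;; => {"a1" {:balance 100}}
```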

xbrln16:01:21

I used Redis pub/sub and Redis queues to implement something similar. I'm happy with how it has adapted to changing requirements so far 🙂
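
A minimal sketch of the queue half of that approach with carmine, a common Clojure Redis client (the key name and connection details are hypothetical):

```clojure
(require '[clojure.edn :as edn]
         '[taoensso.carmine :as car :refer [wcar]])

(def conn {:pool {} :spec {:uri "redis://localhost:6379"}})

(defn enqueue-event! [event]
  (wcar conn (car/rpush "events" (pr-str event))))

(defn next-event!
  "Block up to 5s for the next event; nil on timeout."
  []
  (when-let [[_k v] (wcar conn (car/blpop "events" 5))]
    (edn/read-string v)))
```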