How do we analyze and decide whether PostgreSQL or Kafka should be used for a specific problem or service? I think the fundamental question to ask is whether we need a relational database for storing facts or an event-driven queue and k-v store. But there is a lot of overlap between them. Views? Experiences?
the question is confusing to say the least 🙂
if you need relational queries and ACID transactions, then you need a relational database....
and if you need a messaging system with persistent storage, then kafka is a good candidate ... but since these are such different things (at least to my ears), your question sounds like asking whether one should pick a boat or a greenhouse.
event sourcing or event-driven mechanisms can be built on both, but you should really go back to the whiteboard to figure out what you are doing in the first place :thinking_face:
@U6MHHF36J thanks. I wanted to ask from the perspective of different kinds of applications, in the sense of the appropriate tool for the job. Like if I'm building a ticketing application for buses, which one should I choose? I think the question is very broad; that's why it's confusing.
if you get to a point where you need to do a cross join between tickets, bus companies, and start and end station towns .... you will likely start to want a relational db ....
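To make the point above concrete, here is a hedged sketch of the relational case for the bus-ticketing example: once tickets, companies, and stations need to be joined, a relational database is the natural fit. All table and column names are made up for illustration, and `sqlite3` stands in for PostgreSQL.

```python
# Hypothetical schema for the bus-ticketing example discussed above.
# sqlite3 is used here only as a stand-in for PostgreSQL; the names
# (company, station, ticket, etc.) are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE company (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE station (id INTEGER PRIMARY KEY, town TEXT);
CREATE TABLE ticket (
    id INTEGER PRIMARY KEY,
    company_id INTEGER REFERENCES company(id),
    from_station INTEGER REFERENCES station(id),
    to_station INTEGER REFERENCES station(id),
    price REAL
);
INSERT INTO company VALUES (1, 'Acme Buses');
INSERT INTO station VALUES (1, 'Springfield'), (2, 'Shelbyville');
INSERT INTO ticket VALUES (1, 1, 1, 2, 9.5);
""")

# A join across tickets, companies, and start/end stations -- the kind
# of query that is natural in SQL but awkward to express over a k-v
# store or a Kafka topic.
row = conn.execute("""
    SELECT c.name, s1.town, s2.town, t.price
    FROM ticket t
    JOIN company c ON c.id = t.company_id
    JOIN station s1 ON s1.id = t.from_station
    JOIN station s2 ON s2.id = t.to_station
""").fetchone()
print(row)  # ('Acme Buses', 'Springfield', 'Shelbyville', 9.5)
```

The same query would work in PostgreSQL, with the added benefit of wrapping ticket sales in ACID transactions so a seat is never sold twice.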
@UCMNZLJ93 For those interested in this area, I strongly recommend reading Designing Data-Intensive Applications.
I’d say, always go PostgreSQL. If you can’t, then try really hard to find a way. If that doesn’t work, and DB experts say you should use Kafka in that specific case, you can try using it cautiously :)
Kafka is great if 1/ you can afford the added complexity and ops, 2/ you have or will have multiple producers, consumers, or data sinks that need to share data, and 3/ you have a sufficient amount of data to deal with, unless 1 is a non-issue (e.g. ops is someone else's problem)
If you do not have complex querying logic but do have data that keeps accumulating, you could imagine using Kafka. It was used successfully for a redirect service at a past job: the service would load all redirects from Kafka on startup, and you could see every operation ever made on a redirect, so you could manage them properly, including detecting unused ones and deleting them after checking with the original owners.
The volume of redirects was on the order of a few tens of thousands, and the logging of their usage was done on a separate topic from the config one.
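The load-everything-on-startup pattern described above can be sketched as a fold over the event log. This is only an illustration, not the actual service: a plain Python list stands in for the Kafka topic, and the event shapes (`op`, `key`, `target`) are invented for the example.

```python
# Minimal sketch of rebuilding redirect state by replaying an event log,
# as the redirect service above did on startup. A list stands in for a
# Kafka topic; the event format is hypothetical.
events = [
    {"op": "set",    "key": "/docs", "target": "https://example.com/docs"},
    {"op": "set",    "key": "/blog", "target": "https://example.com/news"},
    {"op": "delete", "key": "/docs"},
]

def replay(events):
    """Fold the full operation history into the current redirect map."""
    redirects = {}
    for e in events:
        if e["op"] == "set":
            redirects[e["key"]] = e["target"]
        elif e["op"] == "delete":
            redirects.pop(e["key"], None)
    return redirects

print(replay(events))  # {'/blog': 'https://example.com/news'}
```

Because the whole history is retained, unused or stale redirects can be found by scanning the log rather than by querying a database, which is what makes this workload a reasonable fit for Kafka despite the lack of relational queries.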
that feeling when a great conference is happening and you are on the other side of the world, looking at tweets from attendees about all the awesome presentations