This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2018-05-16
Channels
- # architecture (12)
- # aws (8)
- # bangalore-clj (1)
- # beginners (172)
- # boot (25)
- # chestnut (3)
- # cider (15)
- # cljsrn (5)
- # clojure (170)
- # clojure-india (1)
- # clojure-italy (21)
- # clojure-nl (87)
- # clojure-romania (3)
- # clojure-sg (1)
- # clojure-spec (1)
- # clojure-uk (79)
- # clojurescript (79)
- # cursive (2)
- # datomic (29)
- # dirac (26)
- # emacs (7)
- # fulcro (13)
- # jobs (4)
- # juxt (22)
- # lein-figwheel (1)
- # leiningen (2)
- # lumo (39)
- # nrepl (1)
- # off-topic (54)
- # onyx (124)
- # pedestal (1)
- # planck (4)
- # portkey (1)
- # re-frame (36)
- # reagent (2)
- # ring-swagger (8)
- # shadow-cljs (107)
- # spacemacs (1)
- # specter (25)
- # sql (7)
- # tools-deps (5)
- # vim (10)
- # yada (25)
Hi, I have a kind of general architecture question. I’m looking to use Onyx for a commander-ish pattern implementation. One of the things I’m working through is the best way to manage/maintain the state a command processor needs in order to do its thing and issue the appropriate event(s). I’ve done stuff with ES/CQRS frameworks like Axon that have an explicit ‘event sourcing repository’: my order command processor, say, would ask the repo for order 27, and it would return the state as a function of all its stored events. I’d been considering using Datomic as this ‘aggregate repo’ or whatever, but initially I had some heartburn, as it could possibly violate the principle that the events are the source of record for everything. Now I’m thinking that as long as the Datomic state is a function of applying events, and can be rebuilt as needed, then it actually is OK, and potentially makes for a better ‘repository’ implementation: it’s essentially an ongoing snapshot, as opposed to other libs/approaches where you maintain the snapshot and still read N events to get to the current state. Sorry for the ramble lol, but just wanted to see what you folks thought about this
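The “state as a function of all its stored events” idea above can be sketched as a plain fold over the event history. This is a minimal illustration, not from Axon or any particular framework; the event shapes and `apply-event` cases are made up:

```clojure
;; Hypothetical sketch: rebuilding an aggregate by folding its events.
;; Event shapes here are illustrative only.
(defn apply-event
  "Applies a single event to the current aggregate state."
  [state {:keys [event/type] :as event}]
  (case type
    :order/created    (assoc state
                             :order/id (:order/id event)
                             :order/items [])
    :order/item-added (update state :order/items conj (:order/item event))
    state))

(defn rehydrate
  "Rebuilds an aggregate's state from its event history, oldest first."
  [events]
  (reduce apply-event {} events))

(rehydrate
 [{:event/type :order/created :order/id 27}
  {:event/type :order/item-added :order/item {:sku "A-1" :qty 2}}])
;; => {:order/id 27, :order/items [{:sku "A-1", :qty 2}]}
```

Keeping a Datomic database whose state equals `(rehydrate all-events)` is then exactly the “ongoing snapshot” described above.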
@eoliphant Been doing a little thinking about this myself. I am currently implementing this with Kafka Streams, since I don't have Onyx available to me in this case. If using Onyx and Datomic, you could probably use Datomic as the state store and have an Onyx job read the Datomic log for the events
yeah @camechis I’ve been looking at Kafka Streams also, and trying to decide how that might fit in, pros/cons, etc. It’s the Tyranny of Good Choices lol
And yeah, pulling stuff from the Datomic log is yet another dilemma lol. Because in that case, strictly speaking, the Datomic log(s) are the SOR and the event stream/store is derived, so it’s not event ‘sourcing’ per se. I know the Nubank guys did that (microservice Datomic log -> Kafka), but I saw a talk by their CTO recently in which he indicated that if he had to do it again, he’d have flipped it around
i think datomic could be fine for this, but i don't think it's that good of an event store
@eoinhurrell have you seen the latest project by the onyx guys ? http://pyrostore.io/
so what issues have you had with it from the event storage perspective? I’ve seen some rumblings along these lines lol
if you want to allow your data scientists to query the event store directly, it sucks
if you use kafka as the event store, imho it's not a great tool for ad-hoc querying and data exploration
did you see this ? https://yuppiechef.github.io/cqrs-server/
since this is business stuff, as opposed to just streams of data from IoT or something
i came to the conclusion that the only reliable way to deal with it is eventual consistency
if your aggregate processors detect a conflicting operation (e.g. deleting the same user twice), they do conflict resolution at that point
imho the cqrs / event sourcing pattern demands that kind of conflict resolution. you cannot achieve strong consistency like an RDBMS anymore.
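The conflict-resolution point above (e.g. the same user deleted twice) can be sketched as a decision function in the aggregate processor. All names and shapes here are hypothetical, just to show the idea of resolving a conflict instead of assuming RDBMS-style strong consistency:

```clojure
;; Hypothetical sketch: an aggregate processor's decide step.
;; Given current (rehydrated) state and a command, it returns the
;; events to emit; a conflicting duplicate delete resolves to nothing.
(defn decide
  [state {:keys [command/type] :as command}]
  (case type
    :user/delete
    (if (:user/deleted? state)
      []  ;; conflict detected: already deleted, resolve by emitting no events
      [{:event/type :user/deleted :user/id (:user/id command)}])
    []))

(decide {} {:command/type :user/delete :user/id 7})
;; => [{:event/type :user/deleted, :user/id 7}]
(decide {:user/deleted? true} {:command/type :user/delete :user/id 7})
;; => []
```

Idempotent resolutions like this are what make the eventually-consistent write path tolerable: replaying or duplicating the command converges to the same state.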
i would explore it, because if it's possible, you give yourself more freedom in choice of database
ClojureScript in the browser, this backend stuff we’re discussing, EDN/Transit all over, but basically, throughout the system, :application/id means the same thing, can be validated the same way, etc. Then some translators, automated to the extent possible, for typical REST/GraphQL for clients who aren’t fortunate enough to be using this cool stuff lol
@lmergen I'm using the SQL plugin to get some initial values, and then a downstream task uses those to call out to SQL. I'm thinking now that the downstream task is actually the issue here. I'm still struggling to get it to consistently call out to SQL from within a function task. I'm not sure how the SQL plugin is able to do it so consistently with the PooledDataSource, but for me, even with an input sequence, it's not able to get consistent results back from a SQL call
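One pattern that helps with the problem described above: since an Onyx function task is just a plain function of a segment, share one pooled datasource per peer instead of opening a connection per segment, and inject the query function so the task stays testable. This is a hedged sketch under those assumptions; `datasource`, `query-fn`, and `enrich-order` are all made-up names, not part of Onyx or the SQL plugin:

```clojure
;; Hypothetical sketch: a function task that enriches a segment via SQL,
;; reusing one pooled datasource rather than connecting per segment.
(defonce datasource
  ;; In real use this would be a c3p0/HikariCP pool, built once per peer.
  (delay {:pool "placeholder for a pooled datasource"}))

(defn enrich-order
  "Function task: looks up customer data for the segment's :customer-id.
  `query-fn` is injected (e.g. via :onyx/params) so it can be stubbed."
  [query-fn segment]
  (assoc segment :customer (query-fn @datasource (:customer-id segment))))

;; At the REPL / in a test, stub the query instead of hitting a database:
(enrich-order (fn [_pool id] {:id id :name "ACME"}) {:customer-id 42})
;; => {:customer-id 42, :customer {:id 42, :name "ACME"}}
```

With the pool created once and only dereferenced inside the task, each segment reuses a pooled connection, which is the kind of consistency the SQL plugin gets from its PooledDataSource.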