2018-05-11
Channels
- # aws (6)
- # beginners (105)
- # boot (6)
- # cider (50)
- # cljsrn (10)
- # clojure (41)
- # clojure-brasil (6)
- # clojure-italy (25)
- # clojure-nl (17)
- # clojure-russia (4)
- # clojure-serbia (1)
- # clojure-spec (8)
- # clojure-uk (242)
- # clojurescript (27)
- # core-async (10)
- # cursive (5)
- # data-science (9)
- # datomic (43)
- # emacs (6)
- # fulcro (6)
- # graphql (1)
- # javascript (3)
- # juxt (4)
- # lein-figwheel (1)
- # mount (1)
- # onyx (19)
- # parinfer (2)
- # portkey (15)
- # protorepl (1)
- # re-frame (30)
- # reagent (3)
- # ring-swagger (1)
- # shadow-cljs (22)
- # sql (6)
- # tools-deps (23)
- # vim (13)
@lucasbradstreet thanks so much! That makes a lot of sense
👏 Congrats on releasing Pyrostore. Looks awesome! It's the exact tool to fit the architecture I've been working on for the last couple of years. Wish I had it when I started 🙂 Keep up the great work!
For Pyrostore, I am trying to understand a bit more. I see this on the blog:
Pyrostore's consumer reads records directly out of cloud storage, and it's intelligent enough to cross its reads back into Kafka when records are not yet available in the archive.
I don’t think I follow what "cross its reads back into Kafka" means. Is Pyrostore intended to be a Kafka-history-stream alongside Kafka, to be used by select consumers? Or would you envision all consumers reading from "cloud storage" as a scalable stream-with-cost-effective-scalable-storage around Kafka? Both/and?
@nrako Both, sort of. The archive in cloud storage will always be a little behind what's actually in Kafka because, physics. Consumers can choose the policy for which storage they read out of when the records they want exist in both.
It lets you trade off read scalability (better against the cloud), latency (better against Kafka), availability (probably better against the cloud), etc.
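For intuition only, here's a tiny Clojure sketch of that read-policy choice. This is not Pyrostore's actual API; the keys and helper names are made up purely to illustrate the trade-off described above.

```clojure
;; Purely illustrative, NOT Pyrostore's API: a consumer-side policy map.
(def consumer-policy
  {;; prefer the cloud archive for cheap, scalable re-reads of history
   :read-preference :archive
   ;; fall back to Kafka for records not yet archived (lower latency)
   :fallback        :kafka})

(defn storage-for-offset
  "Hypothetical helper: pick where to read a record from, given the
  latest offset already present in the archive."
  [policy offset archived-up-to]
  (if (<= offset archived-up-to)
    (:read-preference policy)
    (:fallback policy)))
```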
I see. Thanks for the note. So Pyrostore proposes to be an infinite, cost-effective replication of the Kafka stream, and where Pyrostore consumers subscribe (Kafka itself or the archive) is a configurable implementation detail.
That’s a pretty good summary, yes.
Sounds great. Just now reading Designing Data-Intensive Applications and thinking through what an implementation would look like. An infinite Kafka stream seems critical. Thanks for the feedback...
Great book 🙂
Hi, I have a quick conceptual question. I’ve played around with Onyx for some simple use cases. I’m now looking into implementing something along the lines of calderwood’s commander pattern, and just discovered there’s already an Onyx example 🙂 My question is more around 'units of deployment' with Onyx. Let’s say I take your commander example, it’s my ‘accounts’ processor, all good. Now I want to add a ‘customers’ processor to the mix, keeping its state in its own Datomic db. Is it simply a matter of a similar project that I jar up and point at the same ZooKeeper/Kafka/etc.?
@eoliphant Onyx is pretty flexible in this respect. The main thing is that the jar that is started for a given tenancy contains all of the code necessary to run the jobs for that tenancy.
@eoliphant so you could have two separate jars on two separate tenancies, each from a project that runs its own code. Or you could have a jar that is able to run code for both, on the same tenancy.
Or lastly you could have a jar that can run code for both on separate tenancies, which gives you some more scheduling / node isolation.
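To make that concrete, here is a minimal sketch of a per-tenancy peer config, assuming the standard Onyx peer-config keys; the tenancy name and addresses are illustrative:

```clojure
(require '[onyx.api])

;; Each service ("accounts", "customers", ...) gets its own tenancy id,
;; but they can all point at the same ZooKeeper/Kafka infrastructure.
(def peer-config
  {:onyx/tenancy-id "accounts-processor"      ;; illustrative name
   :zookeeper/address "zk1:2181,zk2:2181"     ;; illustrative address
   :onyx.peer/job-scheduler :onyx.job-scheduler/greedy
   :onyx.messaging/impl :aeron
   :onyx.messaging/peer-port 40200
   :onyx.messaging/bind-addr "localhost"})

;; Peers started from this config will only pick up jobs submitted
;; to the "accounts-processor" tenancy.
(def peer-group (onyx.api/start-peer-group peer-config))
```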
ok that helps. In my case these guys are basically microservice/command processors. so yeah they have all the code they need for the commands they handle and processing events they may be interested in. So conceptually they should be relatively independent. Based on what you’re saying, it sounds like, for me, each service/processor should be in its own tenancy
Sounds right. It’ll be easier to schedule as you can just add more nodes to a tenancy as you wanna scale up
so beyond that, say these guys are dockerized, etc. I’d just run 1 to n copies for reliability, etc?
Yeah, you can add more peers than you need so the job will continue running as nodes fail
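As a rough sketch of that, continuing the hypothetical peer-config above: if the job's tasks need, say, 4 peers in total, each copy of the service can start a couple of extra virtual peers so the minimum is still met when a node dies.

```clojure
;; Start more virtual peers than the job strictly needs; Onyx keeps the
;; job running as long as enough peers remain after a node failure.
(def v-peers (onyx.api/start-peers 6 peer-group))

;; Submit the job against the same tenancy; spare peers sit idle until needed.
;; (onyx.api/submit-job peer-config the-job)

;; On shutdown:
;; (doseq [v-peer v-peers] (onyx.api/shutdown-peer v-peer))
;; (onyx.api/shutdown-peer-group peer-group)
```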