This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2017-01-19
@michaeldrogalis @mariusz_jachimowicz if you want, feel free to guide/prod me in the direction you're thinking regarding the new action and I can attempt to contribute the change
also, is it required to shut down a peer group manually when closing/shutting down an app, or is it closed by virtue of no peers being around (the peers get shut down first)?
https://mariusz-jachimowicz-83.gitbooks.io/mastering-onyx/content/main-usage.html
@rc1140 That is working as designed. The peer group initializes shared resources for the virtual peers.
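For context, a minimal sketch of the usual peer-group lifecycle using the `onyx.api` functions; the config maps and peer count here are placeholder assumptions, and the exact config keys vary by Onyx version:

```clojure
(require '[onyx.api])

;; Hypothetical peer config for illustration; real values depend on your deployment.
(def peer-config
  {:onyx/id "dev-tenancy"
   :zookeeper/address "127.0.0.1:2188"})

;; The peer group holds the resources shared by the virtual peers,
;; so it is started first and shut down last.
(def peer-group (onyx.api/start-peer-group peer-config))
(def peers (onyx.api/start-peers 3 peer-group))

;; On app shutdown: stop the peers first, then the group.
(doseq [p peers]
  (onyx.api/shutdown-peer p))
(onyx.api/shutdown-peer-group peer-group)
```

So the peer group is not torn down implicitly when the peers stop; it must be shut down explicitly.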
Hey all! I've got a weird problem and I'm absolutely unsure how to tackle it. There is this seemingly usual job, which reads messages from Kafka, makes a bunch of writes to Postgres and Elasticsearch, and then writes a message to another topic in Kafka. There are two of those jobs, and one works flawlessly. The other, though, is strange: it never updates its offset in Kafka, and processes the same messages (I presume a batch?) over and over again. Any thoughts on what can cause that?
@asolovyov it sounds like messages aren’t getting fully acked and are getting retried
If you have onyx-metrics set up, you can check whether that job is seeing retries
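For reference, wiring up onyx-metrics is typically just a lifecycle entry added to the job's `:lifecycles`. This is a sketch based on the onyx-metrics README of the era; the Riemann sender keys and exact option names are assumptions and may differ across versions:

```clojure
;; Added to the job's :lifecycles vector. :lifecycle/task :all instruments
;; every task, reporting throughput and retry counts per peer
;; (the retry metric is the one to watch for this problem).
{:lifecycle/task :all
 :lifecycle/calls :onyx.lifecycle.metrics.metrics/calls
 :metrics/buffer-capacity 10000
 :metrics/workflow-name "my-workflow"          ; hypothetical name
 :metrics/sender-fn :onyx.lifecycle.metrics.riemann/riemann-sender
 :riemann/address "localhost"                  ; assumed Riemann endpoint
 :riemann/port 5555
 :lifecycle/doc "Sends task metrics to Riemann"}
```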
@lucasbradstreet oh, right, I'll try to add that to my charts
I don't know, it looks like there are no retries: https://monosnap.com/file/KTJSxYL7WYNZJ0VoSuguQGKlc6OH6u also, from time to time it writes a bunch of feedbacks, but the read throughput has a weird pattern
@asolovyov Is the misbehaving job running on the same hardware with the same networking rules as the correct one?
not sure why there are no retries, but the serialization functions for Kafka are different - I'll try to unify them and see what happens
Cool, yeah keep us posted.
btw, I have a feeling that I asked this once before already, but still - why is the throughput of the input task so much higher than the throughput of the output task?
@asolovyov Either because your job is stripping off segments before they reach the output by using flow conditions, or your metrics aren’t reporting correctly.
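To illustrate the first case: a flow condition whose predicate returns false drops segments before they reach the downstream task, so the output task's throughput is lower than the input's. The task and predicate names here are hypothetical:

```clojure
;; In the job's :flow-conditions. Segments for which the predicate
;; returns false never reach :send-email, so its throughput drops
;; relative to :read-email.
{:flow/from :read-email
 :flow/to [:send-email]
 :flow/predicate ::should-send?}

;; Onyx flow predicates take [event old-segment segment all-new-segments].
(defn should-send? [event old-segment segment all-new]
  (boolean (:email segment)))
```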
Interesting. So I have this job sending emails and metrics look like this: https://monosnap.com/file/DY6PyR5rF7q294781gQwN3O29QzRKu I'm pretty sure I send all emails I have to 🙂
read-email is an input task reading from Kafka; send-email is an output task writing via onyx-http
It’s also possible that you have one input peer and multiple function/output peers, and you’re not summing the throughputs of the multiple peers
Metrics are great and absolutely essential - but be sure they’re not lying to you 🙂
Anytime 🙂
Yeah, it’s either that or you’re filtering somewhere
But probably Riemann