This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2016-04-22
Channels
- # admin-announcements (7)
- # beginners (56)
- # boot (69)
- # cider (168)
- # cljs-dev (2)
- # clojure (170)
- # clojure-austin (25)
- # clojure-beijing (3)
- # clojure-belgium (2)
- # clojure-france (3)
- # clojure-poland (17)
- # clojure-russia (115)
- # clojure-uk (40)
- # clojurebridge (3)
- # clojurescript (87)
- # cursive (9)
- # datomic (30)
- # dirac (18)
- # editors (3)
- # emacs (14)
- # hoplon (195)
- # immutant (14)
- # jobs (3)
- # jobs-discuss (4)
- # leiningen (11)
- # melbourne (5)
- # mount (42)
- # off-topic (5)
- # om (24)
- # onyx (48)
- # parinfer (53)
- # proton (1)
- # protorepl (2)
- # re-frame (3)
- # reactive (2)
- # reagent (29)
- # rum (5)
- # spacemacs (4)
- # untangled (91)
- # yada (1)
What happens in the scenario where the batch-write in the output plugin fails, in terms of segment acking and retrying? Would that mean that retry-segment is called for all the root segments in the batch?
Yes, that's right
It keeps the pending segments in memory and depends on the input source being replayable
The plugin's ability to read from the input source more than once
Yup. No worries
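The bookkeeping described above — pending segments held in memory, replayed on retry because the input source is replayable — can be sketched roughly like this. The function and key names are illustrative only, not the actual Onyx plugin interface:

```clojure
;; Hypothetical sketch of an input reader's pending/retry bookkeeping.
;; Names here are illustrative, not the Onyx plugin API.
(defn make-reader [segments]
  (atom {:unread segments   ;; not yet handed out
         :pending {}}))     ;; handed out, awaiting ack

(defn read-segment! [reader id]
  (let [[seg] (:unread @reader)]
    (when seg
      (swap! reader #(-> %
                         (update :unread rest)
                         (assoc-in [:pending id] seg)))
      seg)))

(defn ack-segment! [reader id]
  ;; Fully acked segments can be forgotten.
  (swap! reader update :pending dissoc id))

(defn retry-segment! [reader id]
  ;; Because the source is replayable, a failed segment is simply
  ;; re-queued to be read again.
  (swap! reader (fn [{:keys [pending] :as state}]
                  (-> state
                      (update :unread conj (get pending id))
                      (update :pending dissoc id)))))
```

So when a downstream batch-write fails, retrying a root segment just moves it from the pending map back onto the unread queue.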
http://www.onyxplatform.org/docs/user-guide/latest/scheduling.html > For example, if you have 2 jobs - A and B, you’d give each of these a percentage value - say 70% and 30%, respectively. If you had 100 virtual peers running, 70 would be allocated to A, and 30 to B. If you then added 100 more peers to the cluster, job A would be allocated 140 peers, and job B 60.
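The quoted allocation rule is simple percentage arithmetic; a quick illustrative calculation (a sketch, not Onyx's actual scheduler code):

```clojure
;; Illustrative percentage-scheduler arithmetic: each job gets
;; its configured percentage of the current peer count.
(defn allocate [n-peers jobs]
  (into {} (map (fn [[job pct]]
                  [job (long (* n-peers (/ pct 100)))])
                jobs)))

(allocate 100 {:A 70 :B 30}) ;; => {:A 70, :B 30}
(allocate 200 {:A 70 :B 30}) ;; => {:A 140, :B 60}
```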
I’m interested in understanding how :batch-functions work as described here: https://github.com/onyx-platform/onyx/blob/cf577d415a9b3f4016cfbf8675fb751a1671bb8d/doc/design/proposals/batch_primitives.md#batch-functions
Is there an example somewhere?
Batch primitives were cancelled in favour of better streaming techniques. We may pick them up again in the future
Ah ok, good to know. I thought i was missing something
I'm seeing 'Stopping task lifecycle' messages in the Onyx log, but my plugin doesn't seem to be seeing the seal-resource call
I believe the idea is for seal-resource to be called to close off the output stream when the job is done, e.g. in the case of core.async, writing a :done to the channel
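That idea might look roughly like this for a core.async output — a hedged sketch, not the real onyx-core-async plugin code:

```clojure
(require '[clojure.core.async :as async])

;; Sketch of a seal-resource implementation for a core.async output
;; (illustrative; see the actual core.async plugin for the real code).
(defn seal-resource [output-ch]
  ;; The job is done: emit the :done sentinel so downstream
  ;; consumers of the channel know no more segments are coming.
  (async/>!! output-ch :done))
```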
I'm having a hard time understanding how best to 'close' my reader. I'd like to use a sentinel like :done, but I end up just passing :done along the workflow, which isn't what I want, clearly. Usually, during read-batch I am returning {:onyx.core/batch [state]}, where state is (t/input id message) - what should I return/do after I see that sentinel?
You need to make sure drained? returns true after there are no more messages left and all messages have been acked
Oh I think you need to return done too
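Putting those two answers together, sentinel handling in an input plugin's read loop might look roughly like this; the reader shape and names are hypothetical, not the actual Onyx plugin interface:

```clojure
;; Hypothetical sketch of sentinel handling in an input plugin.
;; `reader` is an atom like {:unread [...] :pending {} :draining? false};
;; the shape and names are illustrative, not the Onyx plugin API.
(defn read-batch [reader]
  (let [[seg] (:unread @reader)]
    (cond
      (nil? seg)    {:onyx.core/batch []}
      (= :done seg) (do ;; Don't pass the sentinel along the workflow;
                        ;; just note that the source is exhausted.
                        (swap! reader #(-> %
                                           (update :unread rest)
                                           (assoc :draining? true)))
                        {:onyx.core/batch []})
      :else         (do (swap! reader update :unread rest)
                        {:onyx.core/batch [seg]}))))

(defn drained? [reader]
  ;; True only once the sentinel has been seen AND every outstanding
  ;; segment has been acked (the pending map is empty).
  (and (:draining? @reader)
       (empty? (:pending @reader))))
```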
I believe the code works fine; we just dropped read support because we didn't want folks using it thinking that it could checkpoint/recover.
@gardnervickers: Yes, it does work fine, but my downstream workflow is receiving :done.
Is your downstream task receiving the done or is it being put on the medium by the output plugin?
I've moved some lifecycle stuff around and the problem has disappeared now... but for a while I was definitely getting :done on an output async chan
Yeah, you should get it on a core.async output channel, if you're using the core.async plugin
Nope, it needs to be :done
Any time
@rasom: Can you please open an issue on GitHub for those things? Happy to fix them if I can find them later.
@rasom: thanks for the report, we definitely appreciate them
Is onyx a reasonable choice for a continuous integration style app? It's not computationally intense, but the work has many steps with a complex dependency tree.
@dg: Yeah, I think it would do well in that domain. The windowing features should help a lot. We're missing first-class iteration right now, but the workarounds are reasonable, and we expect to land iteration within 6 months.
Cool, thanks. Didn't want to dive in too deep if someone here said, "no it's just for big data processing, dummy"
Hehe, not at all. Happy to answer any questions, including beginner material.