This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
- # announcements (1)
- # aws (11)
- # beginners (3)
- # boot (63)
- # cbus (1)
- # cljs-dev (4)
- # clojure (96)
- # clojure-dev (5)
- # clojure-germany (2)
- # clojure-japan (43)
- # clojure-poland (2)
- # clojure-russia (38)
- # clojure-sg (2)
- # clojurescript (138)
- # clojurex (1)
- # cursive (3)
- # datomic (16)
- # docs (6)
- # emacs (3)
- # events (2)
- # ldnclj (42)
- # off-topic (6)
- # om (384)
- # onyx (122)
- # spacemacs (6)
Maybe this is my misconception about how onyx works then... can't I run the same job multiple times with different inputs?
Ideally different input/output channels, since maybe they are both running at the same time
If you want to keep their results separate, I believe it is idiomatic to launch two jobs in this case.
So you are saying, there is only one output channel for a particular workflow, even if I submit multiple jobs for it?
that makes sense to me - you define the number of workers reading from that channel
You can have multiple output channels, how you route messages internal to that workflow is up to you
If you wanted to do something like route based on an id, there are flow conditions for that.
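The routing idea mentioned above can be sketched with Onyx flow conditions. This is a hedged, illustrative example: the task names (`:process`, `:output-a`, `:output-b`) and the predicate `my.ns/route-a?` are hypothetical, and the exact predicate arity should be checked against the Onyx version in use.

```clojure
(ns my.ns)

;; Hypothetical predicate: decide routing from a key on the segment.
;; Onyx flow-condition predicates receive the event map, the upstream
;; segment, the new segment, and all new segments.
(defn route-a? [event old-segment new-segment all-new-segments]
  (= :a (:route new-segment)))

;; Flow conditions are plain data submitted with the job: segments
;; leaving :process go to :output-a only when the predicate holds.
(def flow-conditions
  [{:flow/from :process
    :flow/to [:output-a]
    :flow/predicate :my.ns/route-a?}])
```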
If each is reading from the same channel, then one will get messages meant for the other and vice versa
I submit the job, send the network info in on the input channel, and read the result from the output channel
I think output channels are only for things you would combine and / or reduce over
You would submit separate jobs, especially in this case as it (I assume) involves aggregation.
@gardnervickers Right, that is what I want, to literally call
(submit-job) from two different processes, put input on two different channels and read output from two different channels
The thing is, I don't see how to provide different input channels during submit job, since the lifecycle map refers to def'd functions as keywords
As long as you have a reference to it somewhere it shouldn't be GC'd, so you could throw a bunch of channels in a map
Well, how do you dissoc a memoized function? I mean, memoizing a function is basically using a map behind the scenes really
especially if they’re closed (which you should do when you finish a particular graph processing)
So, every time I submit a job, assoc the chan into a map, do the processing, close the channel?
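The per-job channel bookkeeping described here can be sketched with plain core.async and an atom. The registry name `job-channels` and the helper functions are hypothetical, not part of Onyx; the point is just that holding channels in a map keyed by job id keeps them reachable (so they aren't GC'd) and lets you close and drop them when a job finishes.

```clojure
(require '[clojure.core.async :as async])

;; Hypothetical registry: one input/output channel pair per submitted job.
(def job-channels (atom {}))

(defn register-job-channels!
  "Create and remember a channel pair for job-id; returns the pair."
  [job-id]
  (let [chans {:in (async/chan 100) :out (async/chan 100)}]
    (swap! job-channels assoc job-id chans)
    chans))

(defn release-job-channels!
  "Close and forget the channels for job-id once processing is done."
  [job-id]
  (when-let [{:keys [in out]} (get @job-channels job-id)]
    (async/close! in)
    (async/close! out)
    (swap! job-channels dissoc job-id)))
```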
Honestly the data-driven approach makes this kind of stuff easy to do programmatically
Also, the Gitter chat room is far more likely to get a response in my experience, Mike and Lucas are on there reliably
@gardnervickers Ah, I was trying to avoid having another communication channel open : P
@spangler: The core.async I/O plugins are pretty much only useful for dev. Have you seen the application template? We have a few idioms in there to use multiple channels.
@michaeldrogalis I don't know if you have read through the entire history here... are you referring to the memoizing the channel function based on id?
But all the examples I have seen the lifecycle refers to functions to inject the channels by keyword
Storing them in a map is functionally equivalent to just making a bunch of
(def a (chan)), (def b (chan)) but way easier to manipulate with code.
It would be awesome if I could pass the channels I need into the call to
submit-job somehow... or the call to
submit-job returned the channels I need. But I can basically create an abstraction that does this
A quick explanation of how things work might help: -everything- you pass into
onyx.api/submit-job is data, and is strictly not shared across jobs. You can reuse those things freely, and they need not exist on the peers at compile time. Lifecycles are a construct that helps you create side effects. You're feeling stuck because you're sharing the "recipe" for creating core.async channels, but the actual implementation of how channel access works is where you want to focus. I think you have what you're looking for, but that's another way to phrase it.
Everything that goes into submit-job must be serializable, we drop it into ZooKeeper. Keep learning to use lifecycles, they're what you want.
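The lifecycle idiom being pointed at here might look like the sketch below, loosely following the core.async plugin convention of injecting a channel in a `:lifecycle/before-task-start` hook. The lifecycle entry submitted to `onyx.api/submit-job` is pure serializable data naming a calls var; the function itself lives on the peer and can close over, or look up, a per-job channel. The `job-channels` registry and the `:my/job-id` key are assumptions for illustration.

```clojure
(ns my.ns
  (:require [clojure.core.async :as async]))

;; Hypothetical per-job channel registry living on the peer.
(def job-channels (atom {}))

;; Resolved on the peer at task start; extra keys on the lifecycle
;; entry (here :my/job-id) ride along as plain data.
(def in-calls
  {:lifecycle/before-task-start
   (fn [event lifecycle]
     {:core.async/chan (get-in @job-channels [(:my/job-id lifecycle) :in])})})

;; Serializable lifecycle data, distinct per submitted job.
(defn lifecycles [job-id]
  [{:lifecycle/task :in
    :lifecycle/calls :my.ns/in-calls
    :my/job-id job-id}])
```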
So if I need channels for each job, I need to refer to a function that generates them
I wonder, is this how onyx was intended to be used? Or should I be decomposing my problem another way?
Other possibly helpful idioms: https://github.com/onyx-platform/learn-onyx/blob/master/src/workshop/workshop_utils.clj
The "new year's resolution" of the software world, an empty git repo: https://github.com/gardnervickers/onyx_cookbook
@shaunxcode: There's a patch in develop that implements the grouping by window behavior that we discussed.
Haven't documented it yet - basically if you use
:onyx/group-by-fn, window state is maintained as a map rather than a scalar.
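A catalog entry using that option might look like the following sketch. The task name and the grouping function `my.ns/segment-key` are hypothetical; the idea, per the message above, is that with `:onyx/group-by-fn` set, window state is kept as a map keyed by group rather than a single scalar.

```clojure
;; Hedged sketch of a grouped task in the catalog.
{:onyx/name :aggregate-task
 :onyx/fn :clojure.core/identity
 :onyx/type :function
 :onyx/group-by-fn :my.ns/segment-key  ;; hypothetical grouping fn
 :onyx/flux-policy :kill
 :onyx/min-peers 1
 :onyx/batch-size 20}
```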