This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2015-10-16
replacing in-calls with its version for each job
Maybe this is my misconception about how onyx works then... can't I run the same job multiple times with different inputs?
Ideally different input/output channels, since maybe they are both running at the same time
If you want to keep their results separate, I believe it is idiomatic to launch two jobs in this case.
spangler: sorry to butt in, but why not sort them by task-id on the way back out?
So you are saying, there is only one output channel for a particular workflow, even if I submit multiple jobs for it?
that makes sense to me - you define the number of workers reading from that channel
the channel itself should not be a bottleneck
You can have multiple output channels, how you route messages internal to that workflow is up to you
remember the peers config
If you wanted to do something like route based on an id, there are flow conditions for that.
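For reference, flow conditions are declared as data alongside the workflow. A minimal sketch of routing by an id, where the task names and predicates are hypothetical (not from this conversation):

```clojure
;; Hypothetical sketch of Onyx flow conditions that route segments by id.
;; :identity, :out-a, :out-b, and the predicate names are made up.
(def flow-conditions
  [{:flow/from :identity
    :flow/to [:out-a]
    :flow/predicate ::job-a?}
   {:flow/from :identity
    :flow/to [:out-b]
    :flow/predicate ::job-b?}])

;; Flow predicates receive the event map, the old segment, the new
;; segment, and all new segments; here we only inspect :job-id.
(defn job-a? [event old-segment segment all-new]
  (= :a (:job-id segment)))

(defn job-b? [event old-segment segment all-new]
  (= :b (:job-id segment)))
```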
Am I thinking of something different than you?
Do you mean the workflow varied for each kind of work you wanted to do?
If each is reading from the same channel, then one will get messages meant for the other and vice versa
You can have two different channels as input in your workflow
something like this
[[:in1 :inc]
 [:in2 :inc]
 [:inc :out]]
A job in the docs is an entire work-unit. Lifecycle+Catalog+Workflow
I submit the job, send the network info in on the input channel, and read the result from the output channel
I think output channels are only for things you would combine and / or reduce over
they aren't for specific task results
You would submit separate jobs, especially in this case as it (I assume) involves aggregation.
You can get around this, but I don't see the benefit
@gardnervickers Right, that is what I want, to literally call (submit-job)
from two different processes, put input on two different channels and read output from two different channels
Yea, I’m sorry, was that what you were asking about all along?
Shoot, didn't pick up on that for some reason haha
The thing is, I don't see how to provide different input channels during submit job, since the lifecycle map refers to def'd functions as keywords
hold on I have a snippet of code from a while ago doing this
As long as you have a reference to it somewhere it shouldn't be GC’d, so you could throw a bunch of channels in a map
Having some trouble finding this code
Yea I believe the Leiningen template does something similar
stores them by GUID
or UUID
they actually have a really good setup in there for this kind of stuff
yea that’s it
Or you can just leave them hanging around, they’re so lightweight
dissoc should toss them to the GC though
I’m not sure it's the standard approach, but that’s the one I took
Well, how do you dissoc a memoized function? I mean, memoizing a function is basically using a map behind the scenes
Oh I meant just dissoc from a hash map
but don't even worry about that, chans use next to nothing in terms of resources
especially if they’re closed (which you should do when you finish processing a particular graph)
So, every time I submit a job, assoc the chan into a map, do the processing, close the channel?
Yea you can have that be part of the lifecycle
to close the channel after :done
or something
if they still use :done
😕
it’s been a while
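A sketch of what that close-on-completion lifecycle could look like, assuming the core.async plugin's :core.async/chan convention (the exact hook names may differ across Onyx versions, and all other names here are hypothetical):

```clojure
(require '[clojure.core.async :as async])

;; One output channel for this job; sized arbitrarily for the sketch.
(def out-chan (async/chan 100))

(defn inject-out-ch [event lifecycle]
  ;; The core.async plugin reads the channel from this key.
  {:core.async/chan out-chan})

(defn close-out-ch [event lifecycle]
  ;; Close the channel once the task stops, so readers see the end
  ;; of the stream and the channel can be garbage collected.
  (async/close! out-chan)
  {})

(def out-calls
  {:lifecycle/before-task-start inject-out-ch
   :lifecycle/after-task-stop close-out-ch})

(def lifecycles
  [{:lifecycle/task :out
    :lifecycle/calls ::out-calls}])
```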
Honestly the data-driven approach makes this kind of stuff easy to do programmatically
Also, the Gitter chat room is far more likely to get a response in my experience, Mike and Lucas are on there reliably
Hello everyone.
@gardnervickers Ah, I was trying to avoid having another communication channel open : P
@spangler: The core.async I/O plugins are pretty much only useful for dev. Have you seen the application template? We have a few idioms in there to use multiple channels.
@michaeldrogalis I don't know if you have read through the entire history here... are you referring to memoizing the channel function based on id?
Yeah. I read through some of it. Still stuck, should I reread it?
But all the examples I have seen the lifecycle refers to functions to inject the channels by keyword
Storing them in a map is functionally equivalent to just making a bunch of (def a (chan)), (def b (chan))
but way easier to manipulate with code.
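For illustration, the map-of-channels idea might look like this (a hedged sketch; the registry and its helpers are made-up names, not from any Onyx template):

```clojure
(require '[clojure.core.async :as async])

;; A channel registry: equivalent to def'ing channels one by one,
;; but manipulable with ordinary map operations.
(defonce channels (atom {}))

(defn get-channel!
  "Return the channel stored under id, creating it if absent."
  [id]
  (or (get @channels id)
      (let [ch (async/chan 100)]
        (swap! channels assoc id ch)
        ch)))

(defn release-channel!
  "Close the channel and drop it from the registry so it can be GC'd."
  [id]
  (when-let [ch (get @channels id)]
    (async/close! ch)
    (swap! channels dissoc id)))

;; Usage: key channels by a freshly generated per-job id.
(def job-id (java.util.UUID/randomUUID))
(get-channel! job-id)
```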
@gardnervickers is correct.
One for the cookbook/faq/tips&tricks 😉
It would be awesome if I could pass the channels I need into the call to submit-job
somehow... or if the call to submit-job
returned the channels I need. But I can basically create an abstraction that does this
Yea that sounds interesting
A quick explanation of how things work might help: -everything- you pass into onyx.api/submit-job
is data, and is strictly not shared across jobs. You can reuse those things freely, and they need not exist on the peers at compile time. Lifecycles are a construct that helps you create side effects. You're feeling stuck because you're sharing the "recipe" for creating core.async channels, but the actual implementation of how channel access works is where you want to focus. I think you have what you're looking for, but that's another way to phrase it.
Everything that goes into submit-job must be serializable, we drop it into ZooKeeper. Keep learning to use lifecycles, they're what you want.
Indeed, @gardnervickers, need to write that cookbook sometime.
So if I need channels for each job, I need to refer to a function that generates them
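One way to sketch that: since a lifecycle entry is plain (serializable) data, you can stash a per-job id in it and have the injection function generate or look up the channel from that id. All names below are hypothetical, assuming the core.async plugin's :core.async/chan convention:

```clojure
(require '[clojure.core.async :as async])

;; Registry of per-job input channels, created on demand.
(defonce input-channels (atom {}))

(defn input-channel-for [job-id]
  (or (get @input-channels job-id)
      (let [ch (async/chan 100)]
        (swap! input-channels assoc job-id ch)
        ch)))

;; The extra :acme/job-id key rides along in the lifecycle data
;; and is visible to the injection function on the peer.
(defn inject-in-ch [event lifecycle]
  {:core.async/chan (input-channel-for (:acme/job-id lifecycle))})

(def in-calls
  {:lifecycle/before-task-start inject-in-ch})

(defn build-lifecycles [job-id]
  [{:lifecycle/task :in
    :lifecycle/calls ::in-calls
    :acme/job-id job-id}])
```

Each call to submit-job would then use (build-lifecycles (java.util.UUID/randomUUID)), giving every job its own input channel while the lifecycle itself stays pure data.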
I wonder, is this how onyx was intended to be used? Or should I be decomposing my problem another way?
That's a normal usage pattern.
Other possibly helpful idioms: https://github.com/onyx-platform/learn-onyx/blob/master/src/workshop/workshop_utils.clj
The 'new-years resolution' of the software world, empty git repo https://github.com/gardnervickers/onyx_cookbook
now’s as good a time as any to transfer my notes
Well, thanks for your help @michaeldrogalis and @gardnervickers. I will probably be asking more questions in the future : )
@gardnervickers: Hah, thanks buddy.
@spangler: Np. Good luck!
@spangler: always ask I love working on this stuff.
@shaunxcode: There's a patch in develop that implements the grouping by window behavior that we discussed.
Haven't documented it yet - basically if you use :onyx/group-by-key or :onyx/group-by-fn, window state is maintained as a map rather than a scalar.
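For context, a hedged catalog-entry sketch using :onyx/group-by-key (grouped tasks also need a flux policy and a peer minimum; the task name and key are illustrative, not from this discussion):

```clojure
;; Hypothetical catalog entry: segments are grouped by :user-id, so any
;; window over this task keeps its state per group (a map keyed by
;; :user-id) rather than as a single scalar.
{:onyx/name :sum-by-user
 :onyx/fn :clojure.core/identity
 :onyx/type :function
 :onyx/group-by-key :user-id
 :onyx/flux-policy :kill
 :onyx/min-peers 1
 :onyx/batch-size 20}
```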
awesome, looking at it now!
Hey @ericlavigne! How's it going?
@cddr Andy? Having a great time. Trying to choose a scalable database at the moment.