2016-06-15
Channels
- # admin-announcements (7)
- # alda (1)
- # aws-lambda (1)
- # beginners (12)
- # boot (20)
- # cider (59)
- # cljs-dev (4)
- # cljsrn (69)
- # clojure (232)
- # clojure-austin (3)
- # clojure-austria (1)
- # clojure-belgium (2)
- # clojure-canada (3)
- # clojure-dev (16)
- # clojure-greece (33)
- # clojure-nl (4)
- # clojure-quebec (12)
- # clojure-russia (12)
- # clojure-spec (27)
- # clojure-uk (38)
- # clojurescript (29)
- # community-development (7)
- # component (53)
- # core-async (16)
- # core-logic (1)
- # datascript (7)
- # datomic (11)
- # editors (7)
- # emacs (69)
- # hoplon (157)
- # keechma (1)
- # lambdaisland (2)
- # lein-figwheel (31)
- # leiningen (8)
- # mount (3)
- # off-topic (11)
- # om (23)
- # onyx (64)
- # planck (2)
- # re-frame (18)
- # reagent (21)
- # specter (118)
- # untangled (145)
- # yada (1)
Oh that's awesome. So medium memory is basically shm?
Yea it's stored in /dev/shm
on the host
Excellent
@devth I found this code useful to manage development jobs https://gist.github.com/jeroenvandijk/95a7d09b3c8cb7904547c95c3b4a3360#file-user-clj-L32-L47. Maybe a (simple) variation would be useful for managing remote jobs
@emil0r: possibly, depends on exactly what you're trying to do
So https://github.com/onyx-platform/lib-onyx/blob/master/doc/server-api.md doesn't include an endpoint to kill or restart a job?
my company just asked them to figure out how to chain MapReduce jobs together.
@drewverlee: Time to go job hunting.
@devth: Not yet it doesn't!
PR's always welcome, this is definitely on our radar. My approach was going to be allowing lookups by job metadata
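(Note: nothing like this exists in lib-onyx's server API yet. Purely as a hedged sketch of the metadata-lookup idea, using the public onyx.api/kill-job call; the registry and function names below are hypothetical.)

```clojure
(require '[onyx.api])

;; Hypothetical registry mapping a user-chosen job name to the job-id
;; that onyx.api/submit-job returns.
(defonce job-registry (atom {}))

(defn submit-named-job!
  [peer-config job-name job]
  (let [{:keys [job-id]} (onyx.api/submit-job peer-config job)]
    (swap! job-registry assoc job-name job-id)
    job-id))

(defn kill-named-job!
  "What a kill/restart endpoint could call after looking the job up by metadata."
  [peer-config job-name]
  (when-let [job-id (get @job-registry job-name)]
    (onyx.api/kill-job peer-config job-id)))
```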
Btw, Onyx dashboard is so cool! Watching it react in realtime to submitting and killing jobs
Yea it's pretty slick
However, I still don't understand why a newly submitted job isn't getting any host allocation
Your cluster needs at least 1 peer for every task in each of its jobs
So if you have 2 jobs that each have 5 tasks, you will need your host machines to run JVM's that spawn a total of 10 peers
More than a 1:1 peer-to-task ratio means more concurrency
From a performance standpoint there's no benefit to running more peers than cores on a box, but it doesn't hurt to have many more peers than cores.
I recommend getting your jobs set up first and tuning that stuff last; while developing, give yourself enough peers to run your jobs
Also if your jobs might be idle for some time, like when handling a streaming workload, it's reasonable to over-allocate peers on a box.
Yea the peers terminology is a bit tricky, most immediately think of physical nodes or hosts. It's more akin to a thread pool
Yea nodes are the machines/hosts that are running a JVM
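(As a rough sketch of the node/peer distinction: each node runs one JVM, and that JVM starts some number of virtual peers via onyx.api. The config values below are hypothetical and the exact keys vary by Onyx version.)

```clojure
(require '[onyx.api])

;; Hypothetical peer configuration for one host JVM; real keys and values
;; depend on your cluster and Onyx version.
(def peer-config
  {:zookeeper/address "127.0.0.1:2188"
   :onyx/id "dev-cluster"
   :onyx.peer/job-scheduler :onyx.job-scheduler/balanced
   :onyx.messaging/impl :aeron
   :onyx.messaging/bind-addr "localhost"})

;; Two jobs with five tasks each need at least 10 peers cluster-wide.
;; On a single dev box you could start all 10 in this one JVM; peers
;; behave more like a thread pool than like extra machines.
(def peer-group (onyx.api/start-peer-group peer-config))
(def peers (onyx.api/start-peers 10 peer-group))
```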
Yea a diagram would be helpful
Lol yup
@devth: A PR would be great if you add any new endpoints. I don't actively work on that project.
Cool, yeah if anything comes up as needed, just send it in. Thanks!
This stuff generally turns out better if people contribute directly from the projects they work on
@devth: I was able to get going with the official java alpine image. Thanks for your PR.
gardnervickers: great! sorry I didn't get around to figuring out the issue. glad you found it.
It was, in a sense: there was no /bin/bash, so "file not found" was appropriate
But yea I get what you mean
@gardnervickers: I wish to build a modular financial system with a set of core modules and the ability to extend the system by adding more modules. A module would be an area of interest for the system, such as accounting, accounts, etc.
@emil0r: If your system is flowing information unidirectionally and asynchronously, it would be a good fit.
As far as I understand the model, the whole cluster needs to have the same code available. If that's true, the above idea is feasible if designed around shared primitives (running on Onyx). Right, Mike?
So you wouldn't be able to have a job called foo.bar running in the cluster and then add another called foo.baz which would have a completely different code base but still use Onyx?
You can technically have different code running on each node so long as the interfaces match up, but the only time you'd want to do that is during a rolling deployment.
@emil0r: Think of it like this. You have a jar running on each node in your cluster with some Clojure functions on it. You can submit as many jobs, which are all data, as needed to the cluster for concurrent execution that uses those functions in different configurations.
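(To make the "jobs are data" point concrete, here's a hedged sketch: two jobs wiring the same functions into different shapes, submitted with the hypothetical peer-config from the sketch above. The my.app/parse and my.app/enrich functions are made up and would need to be on every peer's classpath; lifecycles for the core.async plugin are omitted for brevity.)

```clojure
(require '[onyx.api])

;; A job is just a data structure: a workflow (task graph) and a catalog
;; describing each task.
(def job-a
  {:workflow [[:in :parse] [:parse :out]]
   :catalog  [{:onyx/name :in    :onyx/plugin :onyx.plugin.core-async/input
               :onyx/type :input :onyx/medium :core.async :onyx/batch-size 10}
              {:onyx/name :parse :onyx/fn :my.app/parse
               :onyx/type :function :onyx/batch-size 10}
              {:onyx/name :out   :onyx/plugin :onyx.plugin.core-async/output
               :onyx/type :output :onyx/medium :core.async :onyx/batch-size 10}]
   :task-scheduler :onyx.task-scheduler/balanced})

;; A second job reuses the same functions in a different configuration.
(def job-b
  (-> job-a
      (assoc :workflow [[:in :parse] [:parse :enrich] [:enrich :out]])
      (update :catalog conj {:onyx/name :enrich :onyx/fn :my.app/enrich
                             :onyx/type :function :onyx/batch-size 10})))

;; Both run concurrently on whatever peers the cluster has available.
(onyx.api/submit-job peer-config job-a)
(onyx.api/submit-job peer-config job-b)
```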
Fn signature
@andrewhr: Yeah, definitely the right idea with shared primitives.