2017-06-29
@michaeldrogalis How many people use the Amazon S3 plugin on 0.10.x? Are there good experiences with it?
@eelke hard to know how many people are using it at the moment. We've used it without too many issues in the past, but for pretty restricted use cases
We are setting up a load test now to check whether we can reproduce the situation under heavier load
Does it have a special fault-tolerance philosophy/architecture? Can extra nodes be added on the fly to share the workload, without restarting the entire application cluster?
Yes, you can absolutely use it for a message driven architecture.
It uses asynchronous barrier snapshotting to take state snapshots that are consistent over the cluster. The process is described here https://github.com/onyx-platform/onyx/blob/0.10.x/ABS_RELEASE.md
Extra nodes can be added on the fly, though not currently for any peers that are processing state, as we do not support dynamic re-partitioning yet. The input, output, and transformation peers can be dynamically resized, plugin-dependent.
well @lucasbradstreet thanks, I think this gives me the right pointers to explore more now! 🙂
Just not sure what adding a peer node to an existing cluster would look like ― does the entire cluster need to be restarted if it's not an input or output peer but a processing peer? I am not sure where the different peer types are discussed in the user guide (http://www.onyxplatform.org/docs/user-guide/0.10.x/)
@matan You can add machines to the cluster dynamically. What @lucasbradstreet was saying is about adding more resources to a running job.
If a job is doing stateful operations, more available resources won’t get allocated to tasks that accrete state at the moment. You’re probably not going to run into that case much.
@michaeldrogalis Thanks for putting it in perspective
I should probably start with better understanding workflow definitions, in http://www.onyxplatform.org/docs/user-guide/0.10.x/#_workflow
To be a little more direct: peers get added to the cluster by connecting to a common ZooKeeper address, and, as @camechis says, using the same tenancy ID. They’ll start picking up work automatically from there.
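As a sketch, joining a cluster comes down to starting peers with a peer configuration along these lines (key names follow the Onyx 0.10.x peer-config conventions; the ZooKeeper address and tenancy ID shown are placeholder values that every node in the same cluster would share):

```clojure
;; Hypothetical peer-config sketch. Any machine started with the same
;; :zookeeper/address and :onyx/tenancy-id joins the same cluster and
;; begins picking up work automatically.
(def peer-config
  {:zookeeper/address "zk1.example.com:2181"          ; common ZooKeeper ensemble (placeholder)
   :onyx/tenancy-id "my-cluster"                      ; shared tenancy ID (placeholder)
   :onyx.peer/job-scheduler :onyx.job-scheduler/balanced
   :onyx.messaging/impl :aeron
   :onyx.messaging/bind-addr "localhost"})
```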
Also, regarding this particular topic, almost all the content in this talk is still accurate as of the latest release: https://www.youtube.com/watch?v=KVByn_kp2fQ&t=2s
That's easy to understand, thanks. I guess the tenancy ID is like a cluster ID (or application ID)
Just one question until then, regarding elasticity. In http://www.onyxplatform.org/docs/user-guide/0.10.x/#_workflow, do the nodes in the workflow graph represent a "worker type" or a specifically named "worker instance"?
Nodes in the workflow graph represent tasks. Tasks are fanned out across peers, which are closer to a worker. Each task can run on one or more peers concurrently depending on how you configure the scheduler.
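To illustrate, a workflow with three hypothetical task names might look like this ― each keyword names a task, and each pair is a directed edge in the graph:

```clojure
;; Sketch of an Onyx workflow: :read-segments, :process, and
;; :write-segments are hypothetical task names, not part of any API.
(def workflow
  [[:read-segments :process]     ; input task feeds the transformation
   [:process :write-segments]])  ; transformation feeds the output task
```

How many peers run each task is then configured separately (for example via `:onyx/min-peers` and `:onyx/max-peers` on the catalog entry), subject to the scheduler.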
Sure, anytime!