This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2016-02-18
Channels
- # admin-announcements (3)
- # announcements (7)
- # aws (1)
- # beginners (76)
- # boot (340)
- # cider (9)
- # clara (35)
- # cljs-dev (7)
- # cljsjs (16)
- # cljsrn (11)
- # clojars (1)
- # clojure (192)
- # clojure-dev (6)
- # clojure-madison (8)
- # clojure-russia (373)
- # clojurebridge (1)
- # clojured (9)
- # clojurescript (172)
- # community-development (1)
- # core-async (2)
- # cursive (7)
- # data-science (2)
- # datomic (12)
- # devcards (1)
- # dirac (63)
- # emacs (3)
- # events (10)
- # gsoc (3)
- # hoplon (1)
- # jobs (1)
- # juxt (20)
- # ldnclj (4)
- # lein-figwheel (12)
- # leiningen (1)
- # off-topic (21)
- # om (232)
- # onyx (64)
- # parinfer (8)
- # proton (21)
- # re-frame (8)
- # reagent (1)
- # ring (3)
- # ring-swagger (3)
- # slack-help (4)
- # spacemacs (6)
- # testing (3)
@lsnape @robert-stuttaford & anyone else that wants to help out with this project: voila! https://github.com/MichaelDrogalis/onyx-log-subscriber-demo#onyx-log-subscriber-demo This is a self-contained project that shows how you can subscribe to the Onyx log for incremental changes of the cluster state over time. It includes a Docker container preloaded with some activity on an Onyx cluster I spun up earlier - meaning you won't need to run Onyx at all to try this. Once the container is up, just change the IP and port to match your Docker container, and away it goes.
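For anyone who wants to see the shape of the subscription API before cloning the demo, here is a rough sketch using the Onyx 0.8-era names. The ZooKeeper address and :onyx/id below are placeholders, and the :replica key on the subscription is my reading of the API at the time; check the linked demo project for the exact usage.

```clojure
(require '[clojure.core.async :refer [chan <!!]]
         '[onyx.api]
         '[onyx.extensions])

;; Placeholder config: point :zookeeper/address at your Docker container.
(def peer-config
  {:zookeeper/address "127.0.0.1:2181"
   :onyx/id "my-onyx-id"})

;; Log entries are put onto this channel as they are read from ZooKeeper.
(def ch (chan 100))

;; Subscribe to the cluster log. Successive entries taken from ch can be
;; applied to the replica to step the cluster state forward in time.
(def subscription (onyx.api/subscribe-to-log peer-config ch))

;; Starting from the subscription's initial replica, fold each log entry
;; into the replica and watch :allocations evolve.
(loop [replica (:replica subscription)]
  (let [entry (<!! ch)
        replica' (onyx.extensions/apply-log-entry entry replica)]
    (println (:allocations replica'))
    (recur replica')))
```

Stepping *backwards* through the log is what the undo support mentioned below would add; the sketch above only moves forward.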
You can poke around at the replica more to get an idea of what sort of information you have access to. Other interesting things in there are the IP addresses and ports of every peer in the cluster. Most relevant for what we're doing right now is :allocations, though. That key maps :job-id -> :task-id -> [:peer-ids]
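To make that shape concrete, here is a toy slice of a replica's :allocations key with a small lookup helper. Real replicas use UUIDs for the job, task, and peer ids; the keywords here are made up for illustration:

```clojure
;; A toy :allocations value: job-id -> task-id -> vector of peer-ids.
(def replica
  {:allocations {:job-1 {:task-a [:peer-1 :peer-2]
                         :task-b [:peer-3]}
                 :job-2 {:task-c [:peer-4]}}})

(defn peers-for-task
  "Returns the peer ids currently allocated to task-id under job-id."
  [replica job-id task-id]
  (get-in replica [:allocations job-id task-id]))

(peers-for-task replica :job-1 :task-a)
;; => [:peer-1 :peer-2]
```

A visualization would essentially diff this structure between successive log entries to show peers moving between tasks.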
If we can get a visualization that lets you step through the log, showing the movement of peers to tasks over time, I think that's already a big win.
I need to make a few changes to Onyx itself to support undo - that is, moving backwards through the log over time. It's actually easy to implement, I just need some spare time to do it. Still, being able to incrementally move forward through time and see what's going on is awesome. 😄
Signing off for tonight, I'll be around tomorrow in the AM. Exciting!!
@michaeldrogalis: i think you should capture what you’re hoping to see in the project readme, so that we can tweet the project link
@lsnape: I'm thinking about quitting to find time to do some Clojure / open-source / personal projects / maybe freelance
@michaeldrogalis: Nice! I’m going to be busy today but hopefully tomorrow I can spend a few hours on this. I will need to take some time to reinforce my understanding of the information contained in the logs.
@lsnape: please feel free to hit me up if you have any questions and @michaeldrogalis isn’t around
@lucasbradstreet: will do :thumbsup:
@nha: the idea of working on open source full-time is very attractive if you’re able to support yourself financially, or are confident that you can easily get work if needed. I guess that entirely depends on your circumstances
@lsnape: agreed. I think I can support myself financially (through freelance) but that remains to be seen
I've been working on Onyx full time for almost a year with very little income to try to bootstrap it
The picture is looking rosier now thankfully
The technical experience that I'll be able to show to an employer, as a backup plan, was definitely part of my thinking
will submitting a job via onyx.api/submit-job succeed even if there are no active peers/peer-groups?
that is, does it matter if i warm peers up first and then submit, or can i do it in any order?
1. take down peers. 2. warm up peers. a. kill jobs. b. start jobs. could 1,2 happen after a,b?
You can do it in any order
brilliant
Though you probably don't want to have both onyx-ids running at the same time
if i do 1,2,a,b, does a need to wait for 1 to finish?
that is, does the job-stop/start entry point need to wait for the peer-stop/start process to finish - would killing jobs while killing peers muddy the waters
Right, so it's preferable you kill the job first for clean shutdown, then take down the peers, then bring up the peers on the new onyx-id
The submit-job to a new onyx-id can happen at any time
a1 b2/2b
cool. i got it, thanks
I'd probably do the submit job first
Just to get it out of the way
But if you do that you need to make sure the peers are really down, or the job is killed before the new peers come up
cool. going to try to make this work. thanks!
You could even do ba12
That would be my preference
interesting. wouldn’t that attempt to use any idle peers?
given that we’re likely to have surplus
Sorry, we're talking about running on a new onyx-id, right?
The submit job will be scoped to the new onyx-id, so there won't be any peers to run on the job until 2
right, so they’d be compartmentalised, got it
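The "ba12" ordering being recommended here (submit to the new id first, then clean up the old deployment) can be sketched with the standard onyx.api calls. The configs, the `job` map, `old-job-id`, `old-peers`, and the peer count below are all placeholders, and this obviously needs a live cluster behind it:

```clojure
(require '[onyx.api])

;; Placeholder configs: the two deployments are isolated by their ids.
(def old-peer-config {:onyx/id "old-deployment"
                      :zookeeper/address "127.0.0.1:2181"})
(def new-peer-config {:onyx/id "new-deployment"
                      :zookeeper/address "127.0.0.1:2181"})

;; b: submit the job scoped to the new id. No peers exist under that id
;;    yet, so nothing runs it until step 2.
(def submitted (onyx.api/submit-job new-peer-config job))

;; a: kill the old job for a clean shutdown (not strictly required once
;;    its peers are gone, but it keeps things tidy).
(onyx.api/kill-job old-peer-config old-job-id)

;; 1: take down the peers on the old id.
(onyx.api/shutdown-peers old-peers)

;; 2: warm up peers under the new id; they pick up the job from step b.
(def peer-group (onyx.api/start-peer-group new-peer-config))
(def v-peers (onyx.api/start-peers 8 peer-group))
```

Because submit-job is scoped to the new id, the only real constraint is that the old peers are down (or the old job killed) before the new peers come up.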
Yeah, we're about to rename onyx/id as onyx/tenancy-id
One pattern that could be used is submit, stop peers, start peers, check metrics. This would allow you to roll back to the previous jar / onyx-id and have the jobs keep running in case of issues.
I'd have to think about it and try it out to make sure it's a good idea
It might be better to just use the same procedure to rollback
interesting. we’re not there, yet. but that sounds like a good place to get to
Agreed
we’re using aws codedeploy which has rollback capability
if the validate step fails, it rolls back. validate is a .sh on-server, so it could do this
The kill job isn't strictly necessary since there won't be any peers running, but it's still mostly a good idea
yeah. cleaning up is good
@robert-stuttaford: Yep, I'll write something up in the README later today. I'd prefer not to tweet about it though. I made that mistake with the CheatSheet project. Lots of people said they wanted to help, but no one ended up taking responsibility. We can try to involve a wider audience once there is progress and someone is clearly in charge.
that’s a wise plan
Agreed. We definitely need help to get it done, since we're pretty busy with core work
dude, i’ve got too many of those already
running a 13 person team is hard work!
no matter how awesome the language and tools
People are hard