This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2016-06-25
Channels
- # beginners (21)
- # boot (37)
- # cljsjs (1)
- # cljsrn (1)
- # clojure (48)
- # clojure-greece (3)
- # clojure-poland (1)
- # clojure-quebec (4)
- # clojure-spec (40)
- # clojure-uk (1)
- # clojurescript (113)
- # cursive (13)
- # events (3)
- # hoplon (183)
- # jobs (5)
- # off-topic (2)
- # onyx (49)
- # planck (35)
- # re-frame (8)
- # reagent (2)
- # sim-testing (1)
- # specter (4)
- # spirituality-ethics (2)
- # untangled (1)
- # vim (2)
- # yada (1)
@lucasbradstreet: Let me know where you are in the process of coming up with a kubernetes tutorial. I was going to work on putting something together but haven’t circled back to it. If it’s really a hot issue then I would really like to take a shot at putting it together.
@drewverlee: @gardnervickers is starting on the tutorial now. Maybe you guys could help each other out with it?
I'm not sure how far along he is with it. I think he's mostly been doing all the leg work to get stuff like bookkeeper working nicely under kubernetes
Hey @drewverlee there are a few snags I’m not sure how best to resolve.
So both ZooKeeper and BookKeeper need https://github.com/kubernetes/kubernetes/issues/260 to work properly
hmmm, I’m guessing he will spend more time explaining what he is trying to do than me helping him.
This is a pretty good go at things right now https://github.com/onyx-platform/onyx-twitter-sample/tree/master/kubernetes
It’ll get a working kubernetes cluster up for you running the twitter-sample-job
I recommend using https://github.com/redspread/spread to get a local kubernetes cluster going
then it’s `spread cluster start`, then `spread build kubernetes`
The biggest issue right now is there’s no clean way to get services like BK and ZK working with Kubernetes, as both services tie their persistent data to their runtime state; if either changes, the data is considered invalid.
PetSets, or Nominal Services, fix that. In the meantime it’s possible to pre-define a set of ReplicationControllers that each spawn a single container that maintains the runtime attributes you want, tied to the persistent volumes. That requires you to have a setup that looks like zookeeper1, zookeeper2, zookeeper3, etc…
Same thing with BookKeeper, the only caveat being you’ll have to redefine your container’s hostname to be the name of a Kubernetes service, like bookkeeper1, bookkeeper2, bookkeeper3, because BookKeeper draws its identity from its hostname (not from config), while with ZooKeeper you can get away with setting its “ID" from an environment variable.
@gardnervickers: So Kubernetes tries to change ZooKeeper’s runtime state?
This is all in the context of recovering from container (pod) crashes.
With both ZK and BK, recovering persisted data requires that the restarted container have the same identity attributes it had when the data was written
For ZK, it’s the ID config, for BK, it’s the hostname
We can fake this, but it’s hacky and forces cluster size to be pre-declared.
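The pre-declared workaround described above can be sketched roughly as follows — a hypothetical generator that emits one single-replica ReplicationController per instance, pinning ZooKeeper’s ID through an environment variable and BookKeeper’s identity through the hostname. The manifest shapes, the `ZOO_MY_ID` variable name, and the `zookeeper<i>`/`bookkeeper<i>` naming are illustrative assumptions, not the actual Onyx manifests:

```python
# Sketch: pre-declare one single-replica ReplicationController per instance
# so each pod keeps a stable identity across restarts. Simplified manifest
# shapes -- not drop-in Kubernetes configs.

def zk_controller(i):
    """ZooKeeper can take its ID from an environment variable."""
    return {
        "kind": "ReplicationController",
        "metadata": {"name": "zookeeper{}".format(i)},
        "spec": {
            "replicas": 1,  # exactly one pod, so identity stays fixed
            "template": {"spec": {"containers": [{
                "name": "zookeeper",
                "env": [{"name": "ZOO_MY_ID", "value": str(i)}],
            }]}},
        },
    }

def bk_controller(i):
    """BookKeeper draws its identity from its hostname, so the pod's
    hostname must be pinned to a stable service name like bookkeeper<i>."""
    return {
        "kind": "ReplicationController",
        "metadata": {"name": "bookkeeper{}".format(i)},
        "spec": {
            "replicas": 1,
            "template": {"spec": {
                "hostname": "bookkeeper{}".format(i),
                "containers": [{"name": "bookkeeper"}],
            }},
        },
    }

# The hacky part: cluster size must be declared up front.
CLUSTER_SIZE = 3
manifests = (
    [zk_controller(i) for i in range(1, CLUSTER_SIZE + 1)]
    + [bk_controller(i) for i in range(1, CLUSTER_SIZE + 1)]
)
```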
I suppose I’m shocked that this isn’t a major problem for everyone using k8s… ZooKeeper seems to be at the heart of everyone’s system.
It is 😄
It’s also a really difficult problem to get right
If you take a skim through the github issue I posted, it was originally conceived to deal with ZooKeeper
Yea, I’ll do that. It was probably the longest GitHub issue I have ever seen 😓
No need. Here’s an example of what I was talking about, with defining multiple ZK ReplicationControllers: https://hub.docker.com/r/elevy/zookeeper/
A similar approach can be taken for BookKeeper
By that I mean that’s what I plan to implement.
Reading up on PetSets requires diving pretty deep into the Kubernetes code; there are no real docs out yet, and the existing examples for using it conflate some other new features arriving in Kubernetes 1.3.
The TL;DR of the matter is: there’s a way of doing it that’s ugly, and I’ll be finishing that up over the next couple of days; in the future it’ll be less ugly with PetSets 😄
But yea @drewverlee it’s been a looong time in the making, Zookeeper support.
how does etcd fit into this? <— that’s the best I can do at 2:00 am
Same here 🙂
Do you mean etcd as in what comes with Kubernetes?
or deploying your own etcd on kubernetes
My understanding was that etcd was the de facto way to manage consensus in k8s. I suppose I can’t wrap my head around a system that uses ZK and etcd.
Yea so K8s requires an etcd cluster for dealing with consensus in much the same way that Onyx requires ZooKeeper for dealing with consensus
They have no relation to each other though, in the domain of running Onyx on kubernetes
it’s simple, just use etcd instead of ZooKeeper!
heh, you definitely don’t want to run anything except K8s on the K8s etcd cluster 🙂
But etcd even hits the problem ZK faces when deployed on K8s
It would be possible to get Onyx running on etcd, but a lot of work would have to go into replicating the functionality of ephemeral nodes. TTL values + a heartbeat renewal or something.
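The “TTL + heartbeat” emulation of ephemeral nodes might look roughly like this toy sketch. Everything here is an illustrative assumption — the in-memory store stands in for etcd (no real etcd client), and the key path is made up:

```python
import time

class FakeEtcd:
    """Toy in-memory stand-in for an etcd keyspace with per-key TTLs."""
    def __init__(self):
        self._data = {}  # key -> (value, expires_at)

    def put(self, key, value, ttl):
        self._data[key] = (value, time.monotonic() + ttl)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            # Expired: the key vanishes, like an ephemeral znode whose
            # owning session died.
            del self._data[key]
            return None
        return value

class EphemeralNode:
    """Emulate a ZooKeeper ephemeral node on a TTL store: the key only
    survives while its owner keeps heartbeating (refreshing the TTL)."""
    def __init__(self, store, key, value, ttl):
        self.store, self.key, self.value, self.ttl = store, key, value, ttl
        self.store.put(key, value, ttl)

    def heartbeat(self):
        # Re-put with a fresh TTL; must run more often than the TTL expires.
        self.store.put(self.key, self.value, self.ttl)

etcd = FakeEtcd()
node = EphemeralNode(etcd, "/onyx/peers/peer-1", "alive", ttl=0.05)
node.heartbeat()
present = etcd.get("/onyx/peers/peer-1")   # "alive" while heartbeating
time.sleep(0.1)                            # owner "dies": heartbeats stop
gone = etcd.get("/onyx/peers/peer-1")      # None: the node has expired
```

The point of the sketch is just the shape of the problem: with real ephemeral nodes the server tears the key down when the session ends, whereas here the client has to keep renewing the TTL itself, and a missed heartbeat is indistinguishable from a crash.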
> But etcd even hits the problem ZK faces when deployed on K8s
Is there a distinction here between deployed on vs deployed with?
Yup, without going too far into it, the K8s etcd instances get their “identity” from the metal hosts they’re deployed on. Running on top of Kubernetes (using the pod/service model) abstracts any uniqueness/identity away. PetSets are a way to bring this back.
I just got why they’re called PetSets
heh yea, cattle versus pets 😄
Sorry that’s the best I can do right now, crazy tired. I’ll be around tomorrow 😄 see ya!
night! thanks a lot. my head is spinning