#onyx
2016-06-25
Drew Verlee05:06:34

@lucasbradstreet: Let me know where you are in the process of coming up with a kubernetes tutorial. I was going to work on putting something together but haven’t circled back to it. If it’s really a hot issue then I would really like to take a shot at putting it together.

lucasbradstreet05:06:57

@drewverlee: @gardnervickers is starting on the tutorial now. Maybe you guys could help each other out with it?

lucasbradstreet05:06:27

I'm not sure how far along he is with it. I think he's mostly been doing all the leg work to get stuff like bookkeeper working nicely under kubernetes

gardnervickers05:06:58

Hey @drewverlee, there are a few snags I’m not sure how best to resolve.

gardnervickers05:06:24

So both ZooKeeper and BookKeeper need https://github.com/kubernetes/kubernetes/issues/260 to work properly

Drew Verlee05:06:27

hmmm, I’m guessing he will spend more time explaining what he is trying to do than me helping him.

gardnervickers05:06:11

It’ll get a working kubernetes cluster up for you running the twitter-sample-job

gardnervickers05:06:49

I recommend using https://github.com/redspread/spread to get a local kubernetes cluster going

gardnervickers05:06:22

then it’s spread cluster start, spread build kubernetes
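For reference, that sequence looks roughly like this (assuming spread and a local Docker daemon are already installed):

```sh
# From the steps above (commands taken verbatim from the chat):
spread cluster start       # bring up a local Kubernetes cluster
spread build kubernetes    # then build/deploy via spread, as described above
```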

gardnervickers05:06:58

The biggest issue right now is that there’s no clean way to get services like BK and ZK working with Kubernetes: both services tie their persistent data to their runtime state, and if either changes, the data is considered invalid.

gardnervickers05:06:25

PetSets, or Nominal Services, fix that. In the meantime it’s possible to pre-define a set of ReplicationControllers that each spawn a single container maintaining the runtime attributes you want, tied to the persistent volumes. That requires you to have a setup that looks like zookeeper1, zookeeper2, zookeeper3, etc…
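A minimal sketch of that per-member workaround, assuming a fixed Service plus a single-replica ReplicationController for member 1; the names, image, ID environment variable, and ports below are placeholders, not the final manifests:

```sh
# Hypothetical sketch: one Service + one single-replica ReplicationController per member.
# The Service gives "zookeeper1" a stable DNS name; the RC pins that pod's identity to it.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: zookeeper1
spec:
  selector:
    app: zookeeper
    member: "1"
  ports:
  - { name: client,   port: 2181 }
  - { name: peer,     port: 2888 }
  - { name: election, port: 3888 }
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: zookeeper1
spec:
  replicas: 1                      # exactly one pod for this member
  selector:
    app: zookeeper
    member: "1"
  template:
    metadata:
      labels:
        app: zookeeper
        member: "1"
    spec:
      containers:
      - name: zookeeper
        image: elevy/zookeeper     # placeholder image; exact config/env vars depend on the image
        env:
        - name: ZOOKEEPER_ID       # hypothetical variable name: this member's fixed ID
          value: "1"
        ports:
        - containerPort: 2181
        - containerPort: 2888
        - containerPort: 3888
        volumeMounts:
        - name: data
          mountPath: /var/lib/zookeeper
      volumes:
      - name: data
        emptyDir: {}               # stand-in; in practice a persistent volume tied to member 1
EOF
# Repeat with member "2", "3", ... (zookeeper2, zookeeper3) to get a full ensemble.
```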

gardnervickers05:06:50

Same thing with BookKeeper, the only caveat being you’ll have to redefine your container’s hostname to be the name of a kubernetes service, like bookkeeper1, bookkeeper2, bookkeeper3, because BookKeeper draws its identity from its hostname (not from config), while with ZooKeeper you can get away with setting its “ID” from an environment variable.

Drew Verlee05:06:22

@gardnervickers: So kubernetes tries to change ZooKeeper’s runtime state?

gardnervickers05:06:25

This is all in the context of recovering from container (pod) crashes.

gardnervickers05:06:24

With both ZK and BK, recovering persisted data requires that the restarted container comes back with the same attributes it had when the data was written

gardnervickers05:06:44

For ZK, it’s the ID config; for BK, it’s the hostname
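Concretely, the ZK “ID config” is the server list in zoo.cfg plus the myid file in the data dir; a rough illustration (paths and hostnames are just examples, not from the tutorial):

```sh
# Each ensemble member ships the same server list (paths depend on the image/distribution)
cat >> /conf/zoo.cfg <<'EOF'
server.1=zookeeper1:2888:3888
server.2=zookeeper2:2888:3888
server.3=zookeeper3:2888:3888
EOF

# ...and declares which entry it is. If a restarted container comes back with a different
# myid than the one that wrote the data under dataDir, the persisted state no longer lines up.
echo 1 > /var/lib/zookeeper/myid
```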

gardnervickers05:06:13

We can fake this, but it’s hacky and forces cluster size to be pre-declared.

Drew Verlee05:06:22

I suppose I’m shocked that this isn’t a major problem for everyone using k8s… zookeeper seems to be at the heart of everyone’s system.

gardnervickers05:06:47

It’s also a really difficult problem to get right

gardnervickers05:06:28

If you take a skim through the github issue I posted, you’ll see it was originally conceived to deal with ZooKeeper

Drew Verlee05:06:10

Yea, I’ll do that. It was probably the longest github issue I have ever seen 😓

gardnervickers05:06:30

No need. Here’s an example of what I was talking about, with defining multiple ZK ReplicationControllers: https://hub.docker.com/r/elevy/zookeeper/

gardnervickers05:06:52

A similar approach can be taken for BookKeeper

gardnervickers05:06:17

By that I mean that’s what I plan to implement.

gardnervickers05:06:44

Reading up on PetSets requires diving pretty deep into the Kubernetes code; there are no real docs out yet, and the existing examples for using them conflate some other new features arriving in Kubernetes 1.3.

gardnervickers06:06:03

The TL;DR of the matter is: there is a way of doing it that’s ugly, and I’ll be finishing that up over the next couple of days. In the future it’ll be less ugly with PetSets 😄

gardnervickers06:06:52

But yea @drewverlee it’s been a looong time in the making, Zookeeper support.

Drew Verlee06:06:40

how does etcd fit into this? <— that’s the best I can do at 2:00 am

gardnervickers06:06:59

Do you mean etcd as in the etcd that ships with kubernetes?

gardnervickers06:06:11

or deploying your own etcd on kubernetes

Drew Verlee06:06:02

My understanding was that etcd was the de facto way to manage consensus in k8s. I suppose I can’t wrap my head around a system that uses both zk and etcd.

gardnervickers06:06:49

Yea so K8s requires an etcd cluster for dealing with consensus in much the same way that Onyx requires ZooKeeper for dealing with consensus

gardnervickers06:06:13

They have no relation to each other though, in the domain of running Onyx on kubernetes

Drew Verlee06:06:22

it’s simple, just use etcd instead of zookeeper!

gardnervickers06:06:42

heh, you definitely don’t want to run anything except K8s on the K8s etcd cluster 🙂

gardnervickers06:06:59

But etcd even hits the problem ZK faces when deployed on K8s

gardnervickers06:06:33

It would be possible to get Onyx running on etcd, but a lot of work would have to be put into replicating the functionality of ephemeral nodes. TTL values + a heartbeat renewal or something.
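A very rough sketch of that idea against the etcd v2 CLI; the key names and intervals are made up:

```sh
# Emulate a ZK ephemeral node: a key that expires unless its owner keeps refreshing it
etcdctl set /onyx/pulse/peer-1 alive --ttl 15

# "Heartbeat": refresh well inside the TTL; if this peer dies, the key expires and
# other peers can react to its disappearance, roughly like a ZK ephemeral node going away
while sleep 5; do
  etcdctl set /onyx/pulse/peer-1 alive --ttl 15 >/dev/null
done
```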

Drew Verlee06:06:38

> But etcd even hits the problem ZK faces when deployed on K8s
Is there a distinction here between deployed on vs deployed with?

gardnervickers06:06:33

Yup, without going too far into it, the K8s etcd instances get their “identity” from the metal hosts they’re deployed on. Running on top of kubernetes (using the pod/service model) abstracts any uniqueness/identity away. PetSets are a way to bring this back.

Drew Verlee06:06:56

I just got why they’re called PetSets

gardnervickers06:06:18

heh yea, cattle versus pets 😄

gardnervickers06:06:48

Sorry that’s the best I can do right now, crazy tired. I’ll be around tomorrow 😄 see ya!

Drew Verlee06:06:01

night! thanks a lot. my head is spinning