#onyx
2017-01-12
andrewhr01:01:28

Links to papers and images on a changelog to explain a new implementation. You all are the best! onyx 👏

lucasbradstreet01:01:15

@andrewhr Thanks! Much of the main docs still need to be revamped before the release, so that README will have to do for now

lucasbradstreet01:01:28

Let us know if there’s anything confusing in it

boyanb13:01:39

Morning guys.

boyanb13:01:23

We're testing onyx a bit at the workplace and we're running into the persistent bookkeeper cookie in zk issue. We are using docker containers to wrap our onyx workers and are a bit unsure what would constitute a "clean shutdown".

boyanb13:01:50

As our expectation was that stopping the container would lead to cookie removal if delete-server-data? was set to true.

boyanb13:01:58

Has anybody run into this recently?

gardnervickers13:01:26

I assume you're running ZK outside your containers @boyanb?

gardnervickers14:01:48

The delete-server-data option sort of assumes you're running in-memory ZooKeeper, where ZK is created anew for each run. BookKeeper stores data in ZK with a TTL. When BK is killed and started in quick succession, it sees the old data in ZK and assumes there is already a BK bookie running for that "identity".

gardnervickers14:01:30

It's part of how BK maintains consistency in the face of node failure. Unfortunately that makes it difficult to provide a generic solution for the various development/testing configurations folks use.

gardnervickers14:01:05

If you are using something like docker-compose for this, I suggest just deleting and remaking Zookeeper every time you delete and remake your Onyx cluster. Another option is to run BookKeeper in its own docker container, so it won't rapidly restart with your Onyx peers.
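For reference, a minimal sketch of the kind of in-memory development env-config that delete-server-data? is aimed at, assuming the usual 0.9.x env-config keys; the tenancy, address, and port are placeholders:

;; Dev/test environment config where ZooKeeper and BookKeeper are embedded,
;; so tearing the environment down can also clean up the server data.
(def env-config
  {:onyx/tenancy-id "dev-tenancy"                ;; placeholder tenancy
   :zookeeper/address "127.0.0.1:2188"           ;; placeholder address/port
   :zookeeper/server? true                       ;; in-memory ZooKeeper
   :zookeeper.server/port 2188
   :onyx.bookkeeper/server? true                 ;; embedded BookKeeper
   :onyx.bookkeeper/delete-server-data? true})   ;; wipe BK data on clean shutdown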

jasonbell14:01:55

Has anyone tried consuming gzip files as messages through the Kafka-8 plugin? I’m deserializing with what I’d expect to work

(-> bytes
    ByteArrayInputStream.
    GZIPInputStream.
    io/reader
    line-seq
    vec)
but I always get an EOFException when the message is read. The incoming message is a byte array, so that should work in theory.
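For anyone hitting the same thing, here is that decompression as a self-contained fn; the namespace and fn name are illustrative, and it assumes the plugin hands the deserializer the complete gzipped message as a byte array:

(ns my.gzip-deser                                 ;; illustrative namespace
  (:require [clojure.java.io :as io])
  (:import [java.io ByteArrayInputStream]
           [java.util.zip GZIPInputStream]))

(defn gzip-bytes->lines
  "Decompress a gzipped byte array and return its lines as a vector.
   A truncated or non-gzip payload will throw, e.g. EOFException."
  [^bytes payload]
  (with-open [rdr (-> payload
                      (ByteArrayInputStream.)
                      (GZIPInputStream.)
                      io/reader)]
    (vec (line-seq rdr))))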

boyanb14:01:01

Thanks gardner, indeed, we are running zk "semi-persistent". It's provisioned outside of the onyx containers.

boyanb14:01:20

We are using ansible, and yes, the solution we came up with is removing the container at each provisioning.

boyanb14:01:35

but we wanted to make sure there isn't a way to shut down gracefully that would clean up the ZK state.

gardnervickers14:01:33

Ahh ok. Onyx namespaces clusters in ZooKeeper by the :onyx/tenancy-id. You could re-deploy your Onyx peers with a different :onyx/tenancy-id to achieve the same effect as clearing the ZK state.
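If you go that route, a sketch of picking a fresh tenancy per redeploy; base-peer-config stands in for your existing peer-config map, and the same id needs to go into the env-config as well:

;; Sketch: a fresh tenancy id per deployment namespaces the new cluster away
;; from the previous deployment's state in ZooKeeper.
(def tenancy-id (str (java.util.UUID/randomUUID)))

(def peer-config
  (assoc base-peer-config :onyx/tenancy-id tenancy-id))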

boyanb14:01:34

The zk instance is serving only the onyx workers in this case, so dropping and recreating is not an issue.

gardnervickers14:01:50

Fantastic, that’s the best option IMO.

boyanb14:01:56

Thanks again.

michaeldrogalis15:01:14

@boyanb 0.10 drops BookKeeper and RocksDB for what it’s worth, so whatever can get you around the problem for the short term is a good solution.

Travis15:01:00

Really excited to start trying out .10

mariusz_jachimowicz17:01:57

I am making a new UI for the dashboard