This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2016-05-31
Channels
- # admin-announcements (4)
- # alda (3)
- # aws (1)
- # beginners (2)
- # boot (33)
- # braid-chat (4)
- # braveandtrue (20)
- # cider (52)
- # cljs-dev (13)
- # cljsrn (55)
- # clojure (111)
- # clojure-belgium (4)
- # clojure-brasil (6)
- # clojure-dusseldorf (1)
- # clojure-greece (116)
- # clojure-mexico (1)
- # clojure-nl (3)
- # clojure-russia (56)
- # clojure-spec (72)
- # clojure-uk (13)
- # clojurescript (66)
- # community-development (2)
- # component (24)
- # core-async (1)
- # cursive (19)
- # datomic (27)
- # devcards (5)
- # emacs (1)
- # funcool (34)
- # hoplon (313)
- # jobs (1)
- # lein-figwheel (11)
- # luminus (5)
- # mount (30)
- # off-topic (63)
- # om (375)
- # onyx (67)
- # perun (8)
- # proton (1)
- # reagent (4)
- # rum (1)
- # specter (55)
- # spirituality-ethics (7)
- # test-check (2)
- # untangled (34)
- # yada (20)
Patch for reducing the number of ZooKeeper connections to 1 per machine is passing the full test suite. Moving into code review and Jepsen testing this week, hopefully releasing early next week. I'll make a blog post to talk about why this is a big step forward for scalability, since the patch also contains further changes to how we handle local replica manipulation.
Does it apply to all ZooKeeper connections or just the ones required for Onyx internals? E.g. we have 25 Kafka partitions requiring 1 virtual peer per partition, and thus 1 ZooKeeper connection per partition. Can those connections also be shared? They will probably point at a different ZooKeeper cluster than the Onyx ZooKeeper cluster
I think you will end up with one ZK connection per kafka peer, however I think that generally the Kafka peers don’t need to keep the connection open (this may still be a problem at startup however)
@lucasbradstreet: Ok cool, will give it a try when it’s ready 🙂
We'll see how long it takes to pass jepsen. It's already found one issue :)
Should the first task always be onyx/type :input? Is there a way to start the job with a :function task?
You can put a function task on an output node, but not an input node
Unfortunately it depends on a bunch of protocol functions being defined on the task. If you put a function on the input node you’ll end up with it trying to read segments from the messenger
onyx-seq is the go-to here
That thing is a swiss army knife
Well, the reason was: I was going to download an S3 file locally and then pass it to onyx-seq
Can’t you just do that in the lifecycle function?
e.g. similar to what we do for the file reader: https://github.com/onyx-platform/onyx-seq#example-use---buffered-line-reader
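For reference, a rough sketch of that approach, loosely modeled on the buffered line reader example linked above (the task name :in, the file path, and the namespace are illustrative placeholders, and the injected keys should be checked against your onyx-seq version):

```clojure
;; Illustrative sketch only: inject a locally downloaded file into an
;; onyx-seq input task via a before-task-start lifecycle, loosely
;; modeled on the onyx-seq buffered line reader example.
;; :in, the path, and :my.ns are placeholders.
(defn inject-in-seq [event lifecycle]
  (let [rdr (clojure.java.io/reader "/tmp/my-downloaded-s3-file.txt")]
    {:seq/rdr rdr
     :seq/seq (map (fn [line] {:line line}) (line-seq rdr))}))

(def in-seq-calls
  {:lifecycle/before-task-start inject-in-seq})

(def lifecycles
  [{:lifecycle/task :in
    :lifecycle/calls :my.ns/in-seq-calls}])
```

Downloading from S3 would happen inside (or before) the before-task-start fn, so the reader always opens against a file that exists locally.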
Any time
@lucasbradstreet or @michaeldrogalis: There is a podcast called “Software Engineering Daily” hosted by Jeff Meyerson, which I have really enjoyed the last 4 months. There seems to be a heavy focus on distributed systems. On a whim, I asked him whether he thought a show on Onyx would work, and he seemed excited about the idea. So obviously no pressure, but if the team wants to investigate that option he seems open to it.
I like and listen to it. He really manages to pump them out (I guess that’s the daily part)!
When we want to kill the Onyx job in case of an exception, should the handle-exception fn return false or :kill?
Go for it
I see different versions here http://www.onyxplatform.org/docs/cheat-sheet/latest/#lifecycle-calls/:lifecycle/handle-exception
It should be kill
What do you mean by different versions?
Oh, sorry, I didn’t see the second link. The second doc is out of date
cheat sheet is almost always right. We really need to get the docs generated from the information model / cheat sheet info
I’ll fix that up. Thanks!
(almost always more correct than the main docs I should say :P)
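For anyone following along, a handle-exception lifecycle entry looks roughly like this (a sketch from memory of the cheat sheet; the four-argument signature, the return keywords, and the :all task shorthand should be double-checked against your Onyx version, and :my.ns is a placeholder):

```clojure
;; Sketch: a :lifecycle/handle-exception fn returns a keyword telling
;; Onyx what to do when a task throws. Per the cheat sheet, returning
;; :kill kills the job, :restart restarts the task, and :defer defers
;; the decision to a later lifecycle.
(defn handle-exception [event lifecycle lifecycle-phase e]
  :kill)

(def exception-calls
  {:lifecycle/handle-exception handle-exception})

(def lifecycles
  [{:lifecycle/task :all
    :lifecycle/calls :my.ns/exception-calls}])
```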
@drewverlee: Thanks for the heads up!
Hi - FYI, I'm going through the README.md steps generated by lein new onyx-app my-app-name -- +docker +metrics. The ./script/build step worked fine and the docker-compose up command ran for a while before complaining:
ERROR: Service 'kafkacat' failed to build: The command '/bin/sh -c apt-get update -y && apt-get install $BUILD_PACKAGES -y && git clone && cd kafkacat && ./bootstrap.sh && make install && cd .. && rm -rf kafkacat && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*' returned a non-zero code: 100
But re-running docker-compose up seems to have corrected things and gone past this error. Now, I'm at the mysql -h $(echo $DOCKER_HOST|cut -d ':' -f 2|sed "s/\/\///g") -P3306 -uroot phase... is there a way to unblock the connection? (and perhaps add this to the excellent docs?)
Are you on Linux or OS X?
And can you see mysql running when you docker ps?
when you look at the output of docker-compose up, do you see mysql starting up?
Could you publish the log lines where that’s happening?
what's the difference between docker-compose exec db mysql -uroot and mysql -h $(echo $DOCKER_HOST|cut -d ':' -f 2|sed "s/\/\///g") -P3306 -uroot?
One’s on your OS, one’s run inside the container I would think?
docker-compose exec runs inside the mysql container
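As an aside, the pipeline in that second command just pulls the host IP out of DOCKER_HOST. For example, with a typical docker-machine value (the IP here is made up):

```shell
# DOCKER_HOST from docker-machine looks like tcp://<ip>:<port>.
# Splitting on ':' gives '//<ip>' as field 2; sed then strips the '//'.
DOCKER_HOST="tcp://192.168.99.100:2376"   # example value only
echo $DOCKER_HOST | cut -d ':' -f 2 | sed "s/\/\///g"
# prints 192.168.99.100
```

That extracted IP is what mysql -h then connects to from the host OS, versus docker-compose exec, which runs the client inside the container itself.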
It probably just needed to bootstrap
If you don’t see it explicitly exiting or throwing errors then it’s usually just still working
sorry, one last question... I can start things up...
16-May-31 11:52:13 Avram-Aelonys-MacBook-Pro.local INFO [onyx.log.zookeeper] - Starting ZooKeeper client connection. If Onyx hangs here it may indicate a difficulty connecting to ZooKeeper.
16-May-31 11:52:13 Avram-Aelonys-MacBook-Pro.local INFO [onyx.log.zookeeper] - Stopping ZooKeeper client connection
Submitted job: #uuid "5a2c8f5f-220e-4883-8b77-6bf006021200"
but checking the mysql table I don't see the data captured to recentMeetups
What am I doing wrong? ...It works now. I need to understand better why it fails at times and then works at other times
Did you restart docker machine?
At this point yes, and shut it down too. I'm new to Docker, so I still have yet to try all the combinations
There’s a problem with docker-machine I’ve run into where the DNS gets messed up and you need to restart the docker-machine
It happens especially when I sleep my laptop and change networks; docker-machine won't change the DNS server
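When that happens, the fix I've seen is restarting the machine and re-exporting its environment into the shell (assuming the conventional machine name "default"; yours may differ):

```shell
# Restart the docker-machine VM, which picks up the new network/DNS,
# then refresh this shell's DOCKER_HOST and related variables.
docker-machine restart default
eval $(docker-machine env default)
```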