The dashboard says “No log entries found.”, but jobs run fine on that tenancy. It worked fine yesterday. How can I debug this?
I restarted it and now it works again. At the end of the logs it said:

INFO [onyx-dashboard.tenancy:116] - Stopping Track Tenancy manager.
INFO [onyx.log.zookeeper:126] - Stopping ZooKeeper client connection
I have another question: the result of my job is what I write out to storage in
:lifecycle/after-task-stop of the output task, which has no normal output. My problem is that the job is considered finished before my lifecycle function returns. Is it possible to change that behaviour (to wait for all lifecycle functions)? Or is there a better way to output aggregates? My ideal workflow would be to feed my aggregates back into the workflow.
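A minimal sketch of the pattern being described: an `:lifecycle/after-task-stop` hook on the output task that flushes results to storage when the task shuts down. The namespace, the `:write-output` task name, the `:my/results` key, and `write-to-storage!` are illustrative placeholders, not part of the original conversation:

```clojure
(ns my.job.lifecycles)

;; Placeholder: replace with the real write to external storage.
(defn write-to-storage! [results]
  (println "persisting" (count results) "results"))

(def output-calls
  {:lifecycle/after-task-stop
   (fn [event lifecycle]
     ;; Runs once when the task stops. As discussed above, the job
     ;; may already be considered finished before this returns.
     (write-to-storage! (:my/results event))
     {})})

(def lifecycles
  [{:lifecycle/task :write-output
    :lifecycle/calls :my.job.lifecycles/output-calls}])
```

The `lifecycles` vector is what would be submitted alongside the job's catalog and workflow.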
There currently isn’t a good way to wait until all the after-task-stop calls have been made. The next major release of Onyx will add the ability to emit segments back into the job from trigger calls. A technical preview of it will be out soon.
That sounds good. I also experimented with triggers and talked with Michael about it. For now I’ll use an atom and after-task-stop, because my aggregation is idempotent and I don’t want to put everything into BookKeeper. In the end, though, my current approach is not very future-proof because the semantics are a bit vague. I would happily use windows and triggers if I didn’t have the overhead of BookKeeper.
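A rough sketch of the “atom plus after-task-stop” workaround mentioned here, assuming the aggregation is idempotent so that re-processing after a restart is safe. All names (`collect`, `persist!`, the `:key` field) are assumptions for illustration:

```clojure
(ns my.job.agg)

;; In-memory aggregation state, local to the peer running the task.
(def aggregates (atom {}))

;; Used as the :onyx/fn of the output task: accumulate each segment
;; into the atom, grouped by an assumed :key field.
(defn collect [segment]
  (swap! aggregates update (:key segment) (fnil conj []) segment)
  segment)

;; Placeholder for the actual storage write.
(defn persist! [state]
  (println "writing" (count state) "aggregate groups"))

(def agg-calls
  {:lifecycle/after-task-stop
   (fn [event lifecycle]
     ;; Flush the accumulated state once the task stops. Idempotency
     ;; matters: on a restart the same segments may be seen again.
     (persist! @aggregates)
     {})})
```

This trades the durability of BookKeeper-backed window state for simplicity, which is why it only works when replaying from the input produces the same result.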
Yeah, I think doing it the quick-and-dirty way for now and waiting for our next major release is a good play, since it should solve all your problems. There is a bit of risk around our release date, though.
I’m a little unclear about what you mean by that. Could you explain a bit further?
In the next release we won’t really need deduplication either, because the engine is exactly-once without needing to deduplicate.
To start with, I don’t know BookKeeper, and I’d rather avoid setting up a cluster in production.
We have RiakCS running, so checkpointing to S3 (which works with RiakCS) would be better.
You definitely need something for fault tolerance, but S3 should be simpler, though you still need a bit of an understanding of what it is checkpointing.
I’ll make the checkpointing pluggable, so you can use something else if you wish
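What pluggable checkpointing could look like in the peer-config, checkpointing to S3-compatible storage such as RiakCS instead of BookKeeper. The key names below follow the storage options that appeared in a later Onyx release; at the time of this conversation they were still upcoming, so treat the exact names and values as assumptions:

```clojure
;; peer-config sketch: point checkpoint storage at an S3-compatible
;; endpoint (e.g. RiakCS). Key names are assumptions for illustration.
{:onyx/tenancy-id "my-tenancy"
 :onyx.peer/storage :s3
 :onyx.peer/storage.s3.bucket "onyx-checkpoints"
 :onyx.peer/storage.s3.endpoint "http://riakcs.internal:8080"}
```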
I understand that checkpointing is important; otherwise you can only replay from the input. But whether a task is checkpointed should be configurable; otherwise the performance overhead could be too big.
Are you OK with the window results being incorrect? Because that will definitely happen without some kind of checkpointing if any peer/node crashes.
I would possibly be OK with making checkpointing optional, with the idea that the job will be killed and will need to be restarted if something goes wrong. That’s basically the only way we can guarantee the window results otherwise.
The results won’t be incorrect if you can just replay from the input, or from the last checkpoint, with idempotent aggregations.
Right, yes. With jobs like that it’s probably better to just restart and get it over with in a third of the time, without the checkpoints.
Yes. After some time fiddling around, my job runs really well now. So big thanks to you.
FINALLY. Unlimited peers on a datomic license at one price http://blog.datomic.com/2016/11/datomic-update-client-api-unlimited.html
Yeah, they were shooting themselves in the foot so badly with the peer count licensing