This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2017-11-08
Channels
- # bangalore-clj (4)
- # beginners (88)
- # boot (12)
- # cljs-dev (10)
- # cljsjs (1)
- # clojure (284)
- # clojure-denmark (2)
- # clojure-dev (35)
- # clojure-italy (8)
- # clojure-russia (36)
- # clojure-spec (38)
- # clojure-uk (51)
- # clojurescript (145)
- # cursive (6)
- # data-science (1)
- # datomic (8)
- # duct (43)
- # emacs (9)
- # figwheel (2)
- # fulcro (29)
- # graphql (1)
- # immutant (3)
- # instaparse (1)
- # jobs (1)
- # jobs-discuss (1)
- # lumo (16)
- # off-topic (50)
- # onyx (90)
- # re-frame (6)
- # reagent (20)
- # remote-jobs (3)
- # ring-swagger (18)
- # schema (8)
- # shadow-cljs (141)
- # slack-help (3)
- # spacemacs (36)
- # unrepl (7)
- # vim (1)
- # yada (2)
OMG it was stuck because out-chan is an atom! Why did it not throw an exception there?!
Hmm, it definitely should have thrown an exception. Weird
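A minimal sketch of the failure mode being described, assuming (hypothetically) that the plugin ended up holding the atom that wraps the output channel rather than the channel itself:

```clojure
(require '[clojure.core.async :as async])

;; Hypothetical reconstruction: the atom holds the output channel.
(def out-chan (atom (async/chan 10)))

;; Correct: deref the atom and write to the channel it holds.
(async/>!! @out-chan :segment)

;; Buggy: writing to the atom itself is a protocol error
;; ("No implementation of method: :put! ... for class clojure.lang.Atom").
;; If that exception is swallowed somewhere in the task lifecycle instead
;; of surfacing, the job would just appear stuck, matching the symptom.
(try
  (async/>!! out-chan :segment)
  (catch Exception e
    (println "threw:" (.getMessage e))))
```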
i've been noticing as well that some plugin exceptions get 'eaten' by onyx sometimes without providing output
Hmm. Not good
I'll try to reproduce the out-chan issue tomorrow. If I can reproduce it, it should be easy to fix
okay I'm almost done with onyx-http! Retry works, happy case works; the only thing which is not working yet is that the exception from async-exception-fn is not propagated. This is because write-batch is not called and I'm not sure how to force it.
I'll check it out -- that's a nasty bug.
@lmergen @asolovyov At what stage of the lifecycle is the exception being squashed? I'm having trouble reproducing it
Hmm. I'll try with a different plugin.
Hi there, I'm new to Onyx and am running into an issue when trying to add S3 checkpointing to my job. I configured my peer config with the s3.storage option and all the subsequent configuration options (auth-type, bucket, region, etc.), and it looks like my Onyx job can talk to the bucket I specified, but I am getting a 403 - Access Denied response. Has anyone encountered that before?
For context, the job is running in an AWS EC2 instance with an appropriate role (S3 full access) and the S3 bucket has granted permissions to that role to perform all S3 operations.
Okay, gonna keep trying to track it down anyway. These bugs are really annoying when they crop up
@forrest.thomas Hmm. It sounds like you still don't have the right permissions set up, but I'm not sure what could be wrong.
I thought so as well, but when I use the AWS CLI from that EC2 I can read/write to the bucket
normally, I would look at the object ACL to see what was happening, but I don't think I can do that in this case since Onyx is the creator of the checkpoint and uploads it from in-memory (as far as I can tell)
Is Onyx running in a container? Maybe it's not seeing the same AWS keys as your CLI process?
I also have an encryption policy set on the bucket that requires AES256. I set that configuration option as well. Is it possible that is getting lost somehow?
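For context, a bucket policy that enforces AES256 server-side encryption typically looks like the sketch below (bucket name is illustrative); any PutObject that omits the `x-amz-server-side-encryption: AES256` header is denied, which surfaces as exactly a 403 Access Denied:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyUnencryptedUploads",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::my-checkpoint-bucket/*",
    "Condition": {
      "StringNotEquals": {
        "s3:x-amz-server-side-encryption": "AES256"
      }
    }
  }]
}
```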
I doubt it, but you may want to try using a fresh bucket with no encryption just to check it out
I think it's encryption's fault
I see the config option but I don't think we've actually implemented it (looking at the code now)
I stand corrected
If you can test whether it works without encryption, I'm happy to add it today.
That was an oversight.
All of the code is already there; we just don't pass the setting through.
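A sketch of what "passing the setting through" might look like; the peer-config key names are modeled on Onyx's S3 storage options but should be treated as illustrative, not as Onyx's actual internals:

```clojure
;; Hypothetical sketch: thread the peer-config encryption option through
;; to the S3 put options instead of dropping it.
(defn put-object-opts
  "Build S3 put options, honoring the peer-config encryption setting."
  [peer-config]
  (cond-> {:bucket (:onyx.peer/storage.s3.bucket peer-config)
           :region (:onyx.peer/storage.s3.region peer-config)}
    (= :aes256 (:onyx.peer/storage.s3.encryption peer-config))
    (assoc :server-side-encryption "AES256")))
```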
OK cool, I'll have a fix shortly.
I forgot to mention: I'm using Onyx 0.10. Will this fix be for that version or just 0.11+?
I'd like to make it against 0.12, which we should be releasing today, but you should be able to upgrade to 0.12 pretty seamlessly.
There are some breaking changes in the last two versions, but it's not too bad.
No worries. I'll let you know when there's something to try.
Just wondering if anyone has any methods/best practices for doing blue/green or just general job upgrading methods
@lucasbradstreet I'm afraid I need more of your help to finish onyx-http. I have no idea why it doesn't call write-batch after some time, even though completed? returns false...
I'm going to leave right now, but any pointers are really welcome; I would be happy to finish it tomorrow
We're updating to Onyx 0.12.0 in anticipation of the S3 encryption fix. I've made the appropriate fixes for the breaking changes (we've been running 0.10.0), and we're testing against 0.12.0-alpha4. It appears that some window triggers are firing twice (I'm seeing two copies of the same state getting emitted), which is breaking a lot of our tests. When I try 0.11.0.1 it all works fine. Seems like a bug in 0.12?
What trigger type are you using?
This could be due to the new support for watermarks introducing a new event-type, :watermark, which is not being handled correctly by whatever trigger is being used.
And 0.11 is ok. Hmm, I'll check it out shortly. Thanks
@fellows could you please give 0.12.0-20171108.231118-16 a go?
Are you using :trigger/fire-all-extents? true?
Also, could you check from the trigger firing whether it's firing on :job-completed?
Check the event-type of the state-event. That will fire when the job completes, which is probably happening in your tests.
If you could also check the :extents key in the state-event from your sync or emit, that would help.
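For reference, a hedged sketch of the kind of trigger being asked about, with :trigger/fire-all-extents? set and a sync fn that logs the state-event's :event-type (the state-event is the fourth argument to the sync fn); the window-id and fn names here are illustrative:

```clojure
;; Hypothetical trigger entry for the scenario under debug.
(def trigger
  {:trigger/window-id :collect-segments
   :trigger/id :sync-trigger
   :trigger/on :onyx.triggers/segment
   :trigger/threshold [10 :elements]
   :trigger/fire-all-extents? true
   :trigger/sync ::write-state!})

(defn write-state!
  [event window trigger state-event state]
  ;; Log :event-type to see whether the second fire is :job-completed.
  (println (:event-type state-event) state))
```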
Did you happen to drop a :trigger/refinement :discarding from the trigger without adding :trigger/post-evictor in?
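A sketch of what that question is getting at: in recent Onyx versions the old :trigger/refinement :discarding behavior is expressed via a post-evictor, so a trigger that should clear its window contents after firing looks roughly like this (window-id and emit fn are illustrative):

```clojure
;; Hypothetical trigger: evict all window contents after each fire,
;; replacing the old :trigger/refinement :discarding.
(def discarding-trigger
  {:trigger/window-id :collect-segments
   :trigger/id :emit-and-discard
   :trigger/on :onyx.triggers/segment
   :trigger/threshold [10 :elements]
   :trigger/post-evictor [:all]
   :trigger/emit ::emit-state})
```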
ok, nothing changed there then
state-event is the fourth argument to trigger/sync and trigger/emit
no worries. Yeah, if you can get me more info about the scenario of the fires, that'd help
Cool, we do a better job of sealing on job-completed now. We do it that way because you might have a trigger set to fire on every 10 elements, but then one more element is added and you still want it to flush when the job completes.
So that will trigger even if it's going to emit a state that's identical to the previous one? Is there a way to turn that off, or do I need to specifically check for the event-type every time?
There's no way for it to know what the previous state was; otherwise the memory consumption would increase by default. You should ignore the job-completed event, dedupe yourself, or use an evictor and combine your outputs as they're evicted/synced
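One way to "ignore the job-completed event", sketched as a wrapper around a sync function (the names here are illustrative, not Onyx API):

```clojure
;; Hypothetical helper: guard the sync fn on the state-event's
;; :event-type so the final flush at job completion doesn't double-write.
(defn sync-unless-completed!
  [write-fn]
  (fn [event window trigger state-event state]
    (when-not (= :job-completed (:event-type state-event))
      (write-fn state))))
```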