This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
- # bangalore-clj (4)
- # beginners (88)
- # boot (12)
- # cljs-dev (10)
- # cljsjs (1)
- # clojure (284)
- # clojure-denmark (2)
- # clojure-dev (35)
- # clojure-italy (8)
- # clojure-russia (36)
- # clojure-spec (38)
- # clojure-uk (51)
- # clojurescript (145)
- # cursive (6)
- # data-science (1)
- # datomic (8)
- # duct (43)
- # emacs (9)
- # figwheel (2)
- # fulcro (29)
- # graphql (1)
- # immutant (3)
- # instaparse (1)
- # jobs (1)
- # jobs-discuss (1)
- # lumo (16)
- # off-topic (50)
- # onyx (90)
- # re-frame (6)
- # reagent (20)
- # remote-jobs (3)
- # ring-swagger (18)
- # schema (8)
- # shadow-cljs (141)
- # slack-help (3)
- # spacemacs (36)
- # unrepl (7)
- # vim (1)
- # yada (2)
OMG it was stuck because out-chan is an atom! Why did it not throw an exception there?! 😞
i've been noticing as well that some plugin exceptions get 'eaten' by onyx sometimes without providing output
I’ll try to reproduce the out-chan issue tomorrow. If I can reproduce it, it should be easy to fix
okay I'm almost done with onyx-http! Retry works, happy case works, the only thing which is not working yet is that the exception from async-exception-fn is not propagated. This is because write-batch is not called and I'm not sure how to force it.
Hi there, I’m new to Onyx and am running into an issue when trying to add S3 checkpointing to my job. I configured my peer config with the s3.storage option and all the subsequent configuration options (auth-type, bucket, region, etc.), and it looks like my Onyx job can talk to the bucket I specified, but I am getting a 403 Access Denied response. Has anyone encountered that before?
For context, the job is running in an AWS EC2 instance with an appropriate role (S3 full access) and the S3 bucket has granted permissions to that role to perform all S3 operations.
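For reference, the checkpoint storage settings described above go in the peer-config as a map along these lines. The key names below follow the 0.12-era cheat sheet and the auth-type, bucket, and region values are placeholders, so double-check all of them against the version you’re actually running:

```clojure
;; Sketch of a peer-config fragment for S3 checkpointing. Key names and
;; allowed values should be verified against the Onyx cheat sheet for
;; your version; the bucket and region here are placeholders.
{:onyx.peer/storage :s3
 :onyx.peer/storage.s3.auth-type :provider-chain  ; pick up the EC2 instance role
 :onyx.peer/storage.s3.bucket "my-checkpoint-bucket" ; placeholder
 :onyx.peer/storage.s3.region "us-east-1"            ; placeholder
 :onyx.peer/storage.s3.encryption :aes256}
```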
Okay, gonna keep trying to track it down anyway. These bugs are really annoying when they crop up
@forrest.thomas Hmm. It sounds like you still don’t have the right permissions set up, but I’m not sure what could be wrong.
i thought so as well, but when I use the AWS CLI from that EC2 instance I can read/write to the bucket
normally, I would look at the object ACL to see what was happening, but I don’t think I can do that in this case since Onyx creates the checkpoint and uploads it from memory (as far as I can tell)
Is Onyx running in a container? Maybe it's not seeing the same AWS keys as your CLI process?
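One quick way to rule that out is to compare what identity and write path each environment resolves to, using standard AWS CLI commands (the bucket and key names below are placeholders). Note that if the bucket policy denies puts that lack the aes256 encryption header, a cp without --sse will come back as 403 Access Denied, which would match the symptom:

```shell
# Which identity does this environment actually resolve to?
aws sts get-caller-identity

# Try a write both without and with server-side encryption (placeholder names).
echo probe > /tmp/onyx-checkpoint-probe
aws s3 cp /tmp/onyx-checkpoint-probe s3://my-checkpoint-bucket/probe
aws s3 cp /tmp/onyx-checkpoint-probe s3://my-checkpoint-bucket/probe --sse AES256
```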
I also have an encryption policy set on the bucket that requires aes256. I set that configuration option as well. is it possible that is getting lost somehow?
I doubt it, but you may want to try using a fresh bucket with no encryption just to check it out
I see the config option but I don’t think we’ve actually implemented it (looking at the code now)
If you can test whether it works without encryption, I’m happy to add it today.
I forgot to mention: I’m using Onyx 0.10. Will this fix be for that version or just 0.11+?
I’d like to make it against 0.12, which we should be releasing today, but you should be able to upgrade to 0.12 pretty seamlessly.
There are some breaking changes in the last two versions, but it’s not too bad.
Just wondering if anyone has any methods/best practices for doing blue/green deployments, or for upgrading jobs in general
@lucasbradstreet I'm afraid I need more of your help to finish onyx-http 🙂 I have no idea why it doesn't call write-batches after some time even though completed? returns false...
I'm going to leave right now, but any pointers are really welcome; I would be happy to finish it tomorrow 🙂
We're updating to Onyx 0.12.0 in anticipation of the S3 encryption fix. I've made the appropriate fixes for the breaking changes (we've been running 0.10.0), and we're testing against 0.12.0-alpha4. It appears that some window triggers are firing twice (I'm seeing two copies of the same state getting emitted), which is breaking a lot of our tests. When I try 0.11.0.1 it all works fine. Seems like a bug in
This could be due to the new support for watermarks, which introduces a new event type, :watermark, that may not be handled correctly by whatever trigger is being used.
Also, could you check from the trigger firing whether it’s firing on
Check the event-type of the state-event. That will fire when the job completes, which is probably happening in your tests.
If you could also check the :extents key in the state-event from your sync or emit, that would help.
Did you happen to drop a :discarding from the trigger, but you didn’t add
no worries. Yeah, if you can get me more info about the scenario in which it fires, that’d help
Cool, we do a better job of sealing on job-completed now. We do it like that because you might have a trigger set to fire on every 10 elements, but then one more element was added and you still want it to flush when you complete the job.
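The scenario just described would look roughly like this as a trigger entry. The trigger keys are from the Onyx windowing docs; the window id and sync fn names are placeholders:

```clojure
;; Sketch: fire every 10 segments; the runtime will also flush any
;; remaining state when the job completes (the sealing behaviour above).
{:trigger/window-id :collect-window    ; placeholder window id
 :trigger/id :dump-every-10
 :trigger/on :onyx.triggers/segment
 :trigger/threshold [10 :elements]
 :trigger/sync ::write-state!}         ; placeholder sync fn
```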
So that will trigger even if it's going to emit a state that's identical to the previous one? Is there a way to turn that off, or do I need to specifically check for the event-type every time?
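If filtering is needed, a sync function can branch on the event type along these lines. This is only a sketch: it assumes the five-argument sync signature and that the state-event carries :event-type, and the exact completion keyword (:job-completed here) should be verified against the state-event docs for the version in use:

```clojure
;; Sketch of a sync fn that skips the extra fire on job completion.
(defn write-state!
  [event window trigger {:keys [event-type] :as state-event} state]
  (when-not (= :job-completed event-type)
    ;; deliver `state` downstream here; println is a stand-in
    (println "window state:" state)))
```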