2017-10-28
@lucasbradstreet I have some flight recordings. Anything I should be looking for?
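(For reference, recordings like these are typically captured with the JDK's Flight Recorder flags; on the Oracle JDK 8 of this era that looks roughly like the following, where the duration, filename, and main class are illustrative:)
```
java -XX:+UnlockCommercialFeatures -XX:+FlightRecorder \
     -XX:StartFlightRecording=duration=120s,filename=peer.jfr \
     -cp onyx-app.jar my.app.peer-main
```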
the first thing I’d have a look at is the memory tab / under GC times
I didn’t realise that your job is actually being killed. That suggests you might need a handle-exception lifecycle to stop it from being killed when it throws an exception (depending on the exception)
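A minimal sketch of such a lifecycle, using Onyx's `:lifecycle/handle-exception` hook (the namespace and the blanket `:restart` policy here are illustrative; the hook can also return `:kill` or `:defer`):
```clojure
(ns my.app.lifecycles)

(def handle-exception-calls
  {:lifecycle/handle-exception
   (fn [event lifecycle lifecycle-name e]
     ;; Decide per exception: :restart the job, :kill it, or :defer
     ;; to other lifecycles. Restarting keeps a transient error from
     ;; killing the whole job.
     :restart)})

;; Attached to every task in the job via :lifecycle/task :all
(def lifecycles
  [{:lifecycle/task :all
    :lifecycle/calls :my.app.lifecycles/handle-exception-calls}])
```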
Any idea what the exception was in this case?
the one we keep seeing is the media driver timeout and
```
onyx.messaging.aeron.utils:76] - Error stopping publication io.aeron.exceptions.RegistrationException: Unknown publication: 73
```
Ah right, still that one. That’s still a more fundamental issue
OK, let's get into those JFRs then
let's do the peer one first
if you want to send them to me we can step through them
you can PM the files to me
there are a couple of largish pauses but nothing too brutal
Is it possible that it’s only getting 1GB of heap?
or even 768MB?
If you switch to the memory tab and look at the green line
yeah, the maximum heap size concerns me
Ah, so it should adjust based on how much memory the container gets
I’m not sure how that would change what you see in the flight recording then
that’s my best guess based on what I’m seeing. The pauses don’t seem big enough to cause it though
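(One way to make the heap actually track the container limit on the Java 8 JVMs of this period is the experimental cgroup flags, or simply pinning `-Xmx` explicitly; the fraction and sizes below are illustrative:)
```
# Java 8u131+: derive the max heap from the cgroup memory limit
java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap \
     -XX:MaxRAMFraction=2 ...

# Or pin the heap explicitly so it can't silently default to 1GB/768MB
java -Xmx2g ...
```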
there’s not that much happening CPU-wise
if you could increase the RAM and CPUs they get to test, it might be worth it. It’s not being given a lot of resources, so it might need some tuning to work under these settings
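A sketch of what that tuning might look like in the peer-config, assuming Onyx 0.10-era configuration keys; the values, and whether these particular knobs are the right ones for this job, are illustrative:
```clojure
(def peer-config
  {:onyx/tenancy-id "dev"
   :onyx.peer/job-scheduler :onyx.job-scheduler/greedy
   :onyx.messaging/impl :aeron
   :onyx.messaging/peer-port 40200
   :onyx.messaging/bind-addr "localhost"
   ;; Share the embedded media driver's threads instead of dedicating
   ;; one per role, trading some latency for lower CPU use
   :onyx.messaging.aeron/embedded-driver? true
   :onyx.messaging.aeron/embedded-media-driver-threading :shared
   ;; Loosen liveness timeouts so GC pauses on a starved box aren't
   ;; mistaken for dead peers or a timed-out media driver
   :onyx.peer/subscriber-liveness-timeout-ms 60000
   :onyx.peer/publisher-liveness-timeout-ms 60000})
```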