2017-02-20
Channels
- # aleph (19)
- # aws-lambda (8)
- # bangalore-clj (1)
- # beginners (13)
- # boot (179)
- # cljs-dev (12)
- # cljsjs (2)
- # cljsrn (6)
- # clojure (174)
- # clojure-italy (14)
- # clojure-nl (2)
- # clojure-russia (172)
- # clojure-spec (29)
- # clojure-uk (22)
- # clojurebridge (10)
- # clojureremote (1)
- # clojurescript (79)
- # cursive (46)
- # data-science (1)
- # datascript (8)
- # datomic (18)
- # defnpodcast (2)
- # emacs (9)
- # events (6)
- # hoplon (11)
- # klipse (13)
- # lein-figwheel (1)
- # leiningen (1)
- # luminus (1)
- # lumo (88)
- # numerical-computing (1)
- # off-topic (24)
- # om (33)
- # onyx (58)
- # protorepl (8)
- # re-frame (10)
- # reagent (26)
- # ring (8)
- # ring-swagger (7)
- # rum (22)
- # spacemacs (25)
- # specter (5)
- # uncomplicate (37)
- # untangled (75)
- # vim (17)
- # yada (3)
2017-02-20T08:40:28.953+0000 cantona-api-383746225-whp97 ERROR [aleph.http.server:0] - error in HTTP handler
java.lang.Thread.run Thread.java: 745
...
manifold.executor/thread-factory/reify/f executor.clj: 44
io.aleph.dirigiste.Executor$Worker$1.run Executor.java: 62
...
aleph.http.server/handle-request/fn/f--auto-- server.clj: 156
bidi.vhosts/make-handler/fn vhosts.clj: 199
bidi.ring/fn/G ring.clj: 12
yada.bidi/fn bidi.clj: 75
yada.handler/handle-request handler.clj: 169
yada.handler/handle-request-with-maybe-subresources handler.clj: 120
clojure.core/apply core.clj: 648
...
manifold.deferred/chain deferred.clj: 909
manifold.deferred/chain deferred.clj: 933
...
clojure.core/apply core.clj: 641
clojure.core/apply core.clj: 654
...
manifold.deferred/fn/chain- deferred.clj: 888
manifold.deferred.Deferred/onRealized deferred.clj: 417
manifold.deferred/add-listener! deferred.clj: 262
manifold.deferred.Deferred/addListener deferred.clj: 384
io.aleph.dirigiste.Executor.execute Executor.java: 332
java.util.concurrent.RejectedExecutionException:
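For context: `RejectedExecutionException` is what `java.util.concurrent` executors throw when a task is submitted after shutdown, or when a bounded pool is saturated and its default `AbortPolicy` kicks in. A minimal, self-contained sketch that reproduces the same exception with a plain `ThreadPoolExecutor` (not aleph's dirigiste executor; the sleep durations are arbitrary):

```clojure
(import '(java.util.concurrent ThreadPoolExecutor TimeUnit
                               ArrayBlockingQueue RejectedExecutionException))

;; One worker thread, a one-slot queue, and the default AbortPolicy:
;; the third task has nowhere to go and is rejected.
(let [pool (ThreadPoolExecutor. 1 1 0 TimeUnit/MILLISECONDS
                                (ArrayBlockingQueue. 1))]
  (try
    (dotimes [_ 3]
      (.execute pool ^Runnable (fn [] (Thread/sleep 1000))))
    (catch RejectedExecutionException e
      (println "rejected:" (str e)))
    (finally
      (.shutdown pool))))
```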
it started happening when the server was under heavy load, but it doesn't 'repair' itself once the load drops
I'm not sure what the default is in your stack, but Manifold's default execute-pool allows up to Integer/MAX_VALUE threads
so either the pool you're using has a lower limit, or something was hogging resources for a long time (see the sketch below for pinning an explicit bound)
also, I'm not sure what the behaviour is when the pool can't create a new Thread; if that can only happen due to OOM, then I think an OutOfMemoryError would have been thrown
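One way to make the handler pool's limit explicit, as suggested above: a sketch assuming aleph's documented `:executor` option on `start-server` and Manifold's `fixed-thread-executor`; the handler, port, and thread count are placeholders.

```clojure
(require '[aleph.http :as http]
         '[manifold.executor :as ex])

;; A hard cap of 64 threads instead of the effectively unbounded
;; default; under saturation this fails fast with
;; RejectedExecutionException rather than growing without limit.
(def handler-pool (ex/fixed-thread-executor 64))

;; Placeholder handler, for illustration only.
(defn handler [req]
  {:status 200
   :headers {"content-type" "text/plain"}
   :body "ok"})

(def server
  (http/start-server handler {:port 8080
                              :executor handler-pool}))
```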
well, I've seen this too. Do you, by chance, make requests to other services with aleph.http?
no, we've deliberately swapped this out for http-kit because of https://github.com/ztellman/aleph/issues/217
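For reference, the http-kit client call that replaces the aleph.http client here would look roughly like this (the URL and timeout are placeholders):

```clojure
(require '[org.httpkit.client :as http-kit])

;; http-kit's client returns a promise immediately; dereferencing it
;; blocks until the response (or an error) arrives.
(let [{:keys [status error]} @(http-kit/get "http://example.com/api"
                                            {:timeout 5000})]
  (if error
    (println "request failed:" error)
    (println "status:" status)))
```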
Yes, that's what I had in mind