2020-04-29
Channels
- # announcements (35)
- # aws (40)
- # babashka (10)
- # beginners (119)
- # calva (25)
- # cider (13)
- # clj-kondo (15)
- # cljsrn (23)
- # clojure (205)
- # clojure-dev (3)
- # clojure-europe (15)
- # clojure-germany (3)
- # clojure-italy (3)
- # clojure-nl (2)
- # clojure-uk (58)
- # clojurescript (193)
- # community-development (2)
- # conjure (147)
- # core-async (49)
- # cursive (47)
- # datomic (27)
- # duct (1)
- # fulcro (19)
- # graalvm (3)
- # graphql (1)
- # helix (3)
- # hoplon (11)
- # jackdaw (1)
- # joker (1)
- # juxt (5)
- # kaocha (1)
- # keechma (3)
- # lambdaisland (6)
- # local-first-clojure (27)
- # malli (5)
- # off-topic (41)
- # rdf (27)
- # re-frame (7)
- # reagent (15)
- # reitit (5)
- # rum (11)
- # shadow-cljs (157)
- # spacemacs (18)
- # sql (4)
- # xtdb (8)
It reads from the file system.
@kenny does that answer your question?
I've seen https://github.com/cognitect-labs/aws-api/issues/41 about the lack of timeout support in aws-api. Is there a workaround? This seems like a big hole. Am I supposed to let requests hang around forever if AWS would let them?
we're planning on supporting this explicitly, but for now you can use the async API and put an external timeout on your call
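A minimal sketch of that workaround, assuming the aws-api Lambda client and core.async are on the classpath; the helper name, op, and timeout value are just illustrative:
```clojure
(require '[cognitect.aws.client.api :as aws]
         '[cognitect.aws.client.api.async :as aws-async]
         '[clojure.core.async :as async])

(def lambda (aws/client {:api :lambda}))

(defn invoke-with-timeout
  "Invokes op-map through the async API and returns the response,
   or ::timed-out if nothing arrives within timeout-ms."
  [client op-map timeout-ms]
  (let [resp-ch    (aws-async/invoke client op-map)
        [val port] (async/alts!! [resp-ch (async/timeout timeout-ms)])]
    (if (= port resp-ch)
      val
      ::timed-out)))

;; e.g. give a Lambda call 10 seconds before giving up
(invoke-with-timeout lambda {:op :ListFunctions} 10000)
```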
Both, I suppose. I put my computer to sleep and went for lunch. When I came back, I ran an API call to AWS Lambda with a <!! around the aws-api response. It sat for several minutes with no response. I don't know exactly what caused it, but it made the issue very obvious and quite frightening. If that can happen, it could (we must assume it will) lock up the calling thread for who knows how long. I can, and should, add an alts!, but if something like this were to happen often in production, it could easily eat up a lot of resources.
I bet this isn't an HTTP timeout issue at all; my guess is a deadlocked core.async threadpool
Certainly possible. From the REPL history, I had only run the code 3 times: twice before lunch and once after. That wouldn't deadlock the threadpool, right?
It depends on what other code is running. You can look at thread dumps or manually schedule a job on the core.async threadpool to see if it runs
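A rough way to probe that from the REPL: schedule a trivial go block and race it against a timeout. If the go dispatch pool's threads are all stuck in blocking calls, the probe never completes (the one-second window is arbitrary):
```clojure
(require '[clojure.core.async :as async])

;; Returns [::pool-ok <chan>] when the dispatch pool is healthy,
;; or [nil <timeout-chan>] if nothing ran within a second.
(let [probe (async/go ::pool-ok)]
  (async/alts!! [probe (async/timeout 1000)]))
```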
The async API will also retry operations with some kind of backoff for certain errors
That could have brought the total time to return above 5 mins if the Lambda kept timing out.
It doesn't tell you when that is happening, but you can configure the retry behavior when creating a client, so you could pass it a function that logs something
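Something like this sketch, which wraps aws-api's default retry predicate so each retry decision gets logged; the :retriable? and :backoff client options come from aws-api, while the println wrapper is just an example:
```clojure
(require '[cognitect.aws.client.api :as aws]
         '[cognitect.aws.retry :as retry])

(def lambda
  (aws/client
    {:api        :lambda
     :backoff    retry/default-backoff
     :retriable? (fn [response]
                   (let [retry? (retry/default-retriable? response)]
                     (when retry?
                       (println "aws-api retrying, anomaly:"
                                (:cognitect.anomalies/category response)))
                     retry?))}))
```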