2022-04-27
I am using the Cognitect Labs aws-api and am occasionally seeing this error when calling invoke on an AWS client for the SNS/SQS API:
{:cognitect.anomalies/category :cognitect.anomalies/fault,
:cognitect.anomalies/message "Abruptly closed by peer",
:cognitect.http-client/throwable #error {:cause "Abruptly closed by peer"
:via
[{:type javax.net.ssl.SSLHandshakeException
:message "Abruptly closed by peer"
:at [org.eclipse.jetty.io.ssl.SslConnection$DecryptedEndPoint fill "SslConnection.java" 769]}]
:trace
[[org.eclipse.jetty.io.ssl.SslConnection$DecryptedEndPoint fill "SslConnection.java" 769]
[org.eclipse.jetty.client.http.HttpReceiverOverHTTP process "HttpReceiverOverHTTP.java" 164]
[org.eclipse.jetty.client.http.HttpReceiverOverHTTP receive "HttpReceiverOverHTTP.java" 79]
[org.eclipse.jetty.client.http.HttpChannelOverHTTP receive "HttpChannelOverHTTP.java" 131]
[org.eclipse.jetty.client.http.HttpConnectionOverHTTP onFillable "HttpConnectionOverHTTP.java" 172]
[org.eclipse.jetty.io.AbstractConnection$ReadCallback succeeded "AbstractConnection.java" 311]
[org.eclipse.jetty.io.FillInterest fillable "FillInterest.java" 105]
[org.eclipse.jetty.io.ssl.SslConnection$DecryptedEndPoint onFillable "SslConnection.java" 555]
[org.eclipse.jetty.io.ssl.SslConnection onFillable "SslConnection.java" 410]
[org.eclipse.jetty.io.ssl.SslConnection$2 succeeded "SslConnection.java" 164]
[org.eclipse.jetty.io.FillInterest fillable "FillInterest.java" 105]
[org.eclipse.jetty.io.ChannelEndPoint$1 run "ChannelEndPoint.java" 104]
[org.eclipse.jetty.util.thread.strategy.EatWhatYouKill runTask "EatWhatYouKill.java" 338]
[org.eclipse.jetty.util.thread.strategy.EatWhatYouKill doProduce "EatWhatYouKill.java" 315]
[org.eclipse.jetty.util.thread.strategy.EatWhatYouKill tryProduce "EatWhatYouKill.java" 173]
[org.eclipse.jetty.util.thread.strategy.EatWhatYouKill run "EatWhatYouKill.java" 131]
[org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread run "ReservedThreadExecutor.java" 409]
[org.eclipse.jetty.util.thread.QueuedThreadPool runJob "QueuedThreadPool.java" 883]
[org.eclipse.jetty.util.thread.QueuedThreadPool$Runner run "QueuedThreadPool.java" 1034]
[java.lang.Thread run nil -1]]}}
It does not happen every time, just sporadically. Has anyone seen this before, or know how I can prevent it? Is it okay to retry?
Edit: our version numbers
com.cognitect.aws/api {:mvn/version "0.8.539"}
com.cognitect.aws/endpoints {:mvn/version "1.1.12.181"}
com.cognitect.aws/sns {:mvn/version "820.2.1083.0"}
com.cognitect.aws/sqs {:mvn/version "814.2.1053.0"}
No sir, this is running in an ECS container
Though it's not marked as such, I think it's retriable, @dannyfreeman.
I'm not sure we have any way to tell; there isn't much other context to the error. We have other core.async work happening. While I don't believe it's hogging the thread pool used by core.async (and I don't really have a way to prove it right now), could that cause timeout issues, or would it be a problem with the AWS service we are hitting?
I said above it was the SNS API, but I meant SQS. This issue only happens when we call the DeleteMessage SQS API endpoint.
We're going to retry by providing an override for :retriable? when calling (aws/invoke client ...) that checks for the default retry conditions and this specific error.
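For reference, a minimal sketch of what that override could look like. It assumes the default predicate is cognitect.aws.retry/default-retriable? (per the link below), that the :retriable? fn on the op map receives the response/anomaly map, and the client var and delete-message! helper are made-up names for illustration:

(require '[cognitect.aws.client.api :as aws]
         '[cognitect.aws.retry :as retry]
         '[clojure.string :as str])

(def sqs (aws/client {:api :sqs}))

;; Retry on the default conditions, or on the sporadic
;; "Abruptly closed by peer" fault described above.
(defn fault-aware-retriable? [response]
  (or (retry/default-retriable? response)
      (and (= :cognitect.anomalies/fault
              (:cognitect.anomalies/category response))
           (str/includes? (or (:cognitect.anomalies/message response) "")
                          "Abruptly closed by peer"))))

;; Hypothetical helper; :retriable? on the op map overrides the
;; client default for this call only.
(defn delete-message! [queue-url receipt-handle]
  (aws/invoke sqs {:op         :DeleteMessage
                   :request    {:QueueUrl      queue-url
                                :ReceiptHandle receipt-handle}
                   :retriable? fault-aware-retriable?}))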
We've been cooking with a retry for a little while now and it seems to have taken care of this issue. Is this something that could be added to the default https://github.com/cognitect-labs/aws-api/blob/a1c15961b35c1a40a76fe9ab4dddfeafc4474eb1/src/cognitect/aws/retry.clj#L47-L56 @ghadi? If so, I can open up a ticket or PR in that repository.
an issue would be welcome -- http/connection faults should be classified as one of the retryable anomalies, not the fallthrough "fault"
Right on. I'll work on writing up an issue to describe what we're experiencing and how we solved it. Thanks for your help
Seeing this today is interesting. We finished the most basic solution for working around intermittent DNS issues with a condition like this:
(and (#{::anomalies/not-found} category)
     (str/includes? (or (-> e ex-data ::anomalies/message) "") ""))
Said issues only happen on a local dev machine with a wired connection to their router.
We're wondering if some other conditions could happen that would make this solution a solid bother. After all, not-found is not part of the set of retryable anomalies provided by Cognitect.
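As a rough sketch, that condition might slot into a full predicate alongside the defaults. This assumes the category and message are read off the anomaly map, that cognitect.aws.retry/default-retriable? is the default predicate, and that the "UnknownHostException" substring is only a placeholder for whatever the actual DNS error text is (the snippet above leaves it blank):

(require '[cognitect.aws.retry :as retry]
         '[clojure.string :as str])

;; Sketch only: retry on the default conditions, or on a not-found anomaly
;; whose message looks like a DNS failure. The matched substring is a
;; hypothetical placeholder, not the real message.
(defn retriable-with-dns? [response]
  (or (retry/default-retriable? response)
      (and (= :cognitect.anomalies/not-found
              (:cognitect.anomalies/category response))
           (str/includes? (or (:cognitect.anomalies/message response) "")
                          "UnknownHostException"))))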