#aws
2020-04-29
kenny16:04:48

Is it safe to assume cognitect.aws.client.api/client is non-blocking?

dchelimsky16:04:02

It reads from the file system.

dchelimsky17:04:45

@kenny does that answer your question?

kenny17:04:02

I think so. Probably safest not to create clients inside a go block then, right?

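A sketch of that arrangement: since client construction can touch the file system (credentials, config), build the client once on a regular thread rather than inside a go block. The :lambda api and the var name here are just illustrative.

(require '[cognitect.aws.client.api :as aws])

;; Built once, outside any go block, then shared by async calls elsewhere.
(def lambda-client (aws/client {:api :lambda}))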
kenny18:04:31

I've seen https://github.com/cognitect-labs/aws-api/issues/41 about the lack of timeout support in aws-api. Is there a workaround? This seems like a big hole. Am I supposed to let requests hang around forever if AWS lets them?

ghadi18:04:42

we're planning on supporting this explicitly, but for now you can use the async API and put an external timeout on your call

ghadi18:04:28

(async/alts! [aws-response your-timeout])

✔️ 4
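A minimal sketch of that approach, assuming an aws-api client and the blocking alts!! (the parking alts! from the snippet above is the same idea inside a go block). The names, the :ListFunctions op, and the 5-second budget are illustrative.

(require '[cognitect.aws.client.api :as aws]
         '[cognitect.aws.client.api.async :as aws.async]
         '[clojure.core.async :as async])

;; Illustrative client; any aws-api client works the same way.
(def lambda (aws/client {:api :lambda}))

(defn invoke-with-timeout
  "Invoke asynchronously, giving up after timeout-ms.
  Returns the aws-api response map, or ::timeout if the budget elapses first."
  [client op-map timeout-ms]
  (let [aws-response (aws.async/invoke client op-map)
        deadline     (async/timeout timeout-ms)
        [val port]   (async/alts!! [aws-response deadline])]
    (if (= port deadline)
      ::timeout
      val)))

;; e.g. (invoke-with-timeout lambda {:op :ListFunctions :request {}} 5000)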
kenny18:04:42

Couldn't this result in lots of resources left hanging though?

ghadi18:04:38

meaning open http connections?

ghadi18:04:21

quantifying it will depend on your use-case

ghadi18:04:17

my guess is that we can/will use the http client library's timeout mechanism

ghadi18:04:35

what APIs are you using, or is this an abstract question?

ghadi18:04:52

SQS ReceiveMessage offers an explicit long-polling timeout

ghadi18:04:14

for example

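For reference, a hedged sketch of that SQS case with aws-api; the queue URL is a placeholder, and WaitTimeSeconds is the server-side long-poll budget in seconds.

(require '[cognitect.aws.client.api :as aws])

(def sqs (aws/client {:api :sqs}))

;; Long-poll for up to 20 seconds; ReceiveMessage returns earlier if messages arrive.
(aws/invoke sqs {:op      :ReceiveMessage
                 :request {:QueueUrl            "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"
                           :MaxNumberOfMessages 10
                           :WaitTimeSeconds     20}})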
kenny18:04:40

Both, I suppose. I slept my computer and went to lunch. When I came back, I ran an API call to AWS Lambda with a <!! around the aws-api response. It sat for several minutes with no response. I don't know exactly what caused it, but it made the issue very obvious and quite frightening. If that can happen, it could (we must assume it will) lock up the calling thread for who knows how long. I can, and should, add the alts!, but if something like this happened often in production, it could easily eat up lots of resources.

hiredman19:04:35

I bet this isn't an HTTP timeout issue at all; my guess is a deadlocked core.async threadpool

kenny19:04:41

Certainly possible. From the REPL history, I had only run the code 3 times: twice before lunch and once after. That wouldn't deadlock the threadpool, right?

kenny19:04:20

This code is also incredibly simple. No obvious blocking calls.

hiredman19:04:24

It depends on what other code is running. You can look at thread dumps, or manually schedule a job on the core.async threadpool to see if it runs

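A sketch of the second check hiredman mentions: schedule a trivial go block and see whether it ever runs. If the dispatcher pool is wedged by blocking calls, the println never fires.

(require '[clojure.core.async :as async])

;; Should print almost immediately on a healthy dispatcher pool.
(async/go (println "core.async dispatcher is alive"))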
hiredman19:04:43

The async API will also retry operations with some kind of backoff for certain errors

kenny19:04:24

That could have brought the total time to return above 5 mins if the Lambda kept timing out.

hiredman19:04:13

It doesn't tell you when that is happening, but you can configure the retry behavior when creating a client, so you could pass it a function that logs something

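A sketch of that configuration, assuming the defaults in cognitect.aws.retry; here the default retry predicate is wrapped so each retry decision gets logged.

(require '[cognitect.aws.client.api :as aws]
         '[cognitect.aws.retry :as retry])

(def lambda
  (aws/client {:api        :lambda
               ;; log whenever aws-api decides to retry a response
               :retriable? (fn [response]
                             (let [retry? (retry/default-retriable? response)]
                               (when retry?
                                 (println "aws-api retrying:" (:cognitect.anomalies/category response)))
                               retry?))
               :backoff    retry/default-backoff}))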
ghadi18:04:48

InvokeLambda?

kenny18:04:03

:op :Invoke

kenny18:04:35

Like so

(aws-api-async/invoke
  (async/<! @*lambda-client)
  {:op      :Invoke
   :request {:FunctionName   function-name
             :InvocationType "RequestResponse"
             :Payload        (transit/encode-transit query-request)}})

kenny18:04:19

Well, that's newer. The weird (async/<! @*lambda-client) wasn't there before.

ghadi18:04:27

if you care about the lambda response, you have to wait for it though

kenny18:04:40

I care about it but not enough to wait minutes 🙂

ghadi18:04:53

in that case put a timeout on the Lambda itself

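As a sketch of that suggestion, the function-level timeout can also be set through the Lambda API itself; the function name and the 30-second value here are illustrative.

(require '[cognitect.aws.client.api :as aws])

(def lambda (aws/client {:api :lambda}))

;; Cap the function's own execution time (default 3s, max 900s).
(aws/invoke lambda {:op      :UpdateFunctionConfiguration
                    :request {:FunctionName "my-function"
                              :Timeout      30}})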
kenny18:04:08

It should never take that long. Once the lambda is warm, it executes in tens of milliseconds.

ghadi18:04:31

default timeout is 3 seconds on a lambda

ghadi18:04:37

max is 15 minutes

kenny18:04:53

This is a Datomic Ion. Guessing there's some default timeout on it.

kenny18:04:14

I see 1 min in the AWS console. I waited at least 5 mins.

ghadi18:04:37

that would be an interesting repro for the datomic team

ghadi18:04:53

the lambda proxies that Ions spin up have a default 60s timeout

kenny18:04:13

So for whatever reason, it sat for 5 mins before I restarted the REPL.