
Hi. I am using amazonica to connect to S3 to upload and download files. My requirements call for downloading a large number of files in parallel. We are running into `Unable to execute HTTP request: Timeout waiting for connection from pool`. We would like to increase the max connections and reduce the max idle time, but we are not sure how to set this configuration. We tried setting properties in defclientconfig but aren't sure we are setting it right, and we are getting NullPointerExceptions after that. Can someone help with this configuration change? Thanks in advance.


How many files are you downloading in parallel? And how exactly are you doing that?


We have multiple users downloading and uploading, so there is a constant flow of requests. There is also an option to select multiple files and download them; in that case we spawn multiple threads and download in parallel.


We use claypoole's pmap like this and I haven't seen this problem yet:

(cp/pmap 100 #(s3/get-object bucket-name %) keys)


I guess your files are probably much bigger than ours (they are quite small, a few kilobytes usually)


Do you know how many of them are running in parallel when the problem happens? If it's more than 100, I would think about introducing a queue and delaying the new requests, because I doubt you'll get better throughput by launching that many connections from a single machine. might be useful as they recommend
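To sketch what "introducing a queue" could look like: a fixed-size thread pool already queues any work beyond its capacity, so capping the pool size caps the number of concurrent S3 requests. This is a plain-Clojure sketch, not amazonica's API; `download!` is a hypothetical stand-in you would replace with the real `s3/get-object` call.

```clojure
(import '(java.util.concurrent Callable Executors))

(defn download!
  "Stand-in for (s3/get-object bucket-name k)."
  [k]
  (str "downloaded-" k))

(defn bounded-pmap
  "Run (f x) for each x on at most n threads; extra work waits in the
  executor's internal queue. Returns results in input order."
  [n f xs]
  (let [pool    (Executors/newFixedThreadPool n)
        ;; the ^Callable hint makes .submit return a Future whose .get
        ;; yields (f x) rather than nil
        futures (mapv (fn [x] (.submit pool ^Callable (fn [] (f x)))) xs)
        results (mapv (fn [fut] (.get fut)) futures)]
    (.shutdown pool)
    results))

(println (bounded-pmap 4 download! ["a" "b" "c"]))
```

With at most `n` requests in flight, the remaining keys simply wait their turn instead of fighting over the HTTP connection pool.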


Yeah, most of our files are over 10 MB; some may be around 500 MB. Also it's not just once: there will be constant hits with files in these ranges. That's why we are considering increasing the connection pool size. Any ideas on how to achieve it with amazonica?


What I'm saying is that increasing the pool size might not be the best idea, because it can already be quite big. Anyway, did you look at this? ->


Yeah. But I'm not sure how to add it to the configuration and use it while calling s3/get-object.
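For the configuration part, a hedged sketch: amazonica lets you pass a credentials map as an extra first argument to any client call, and (per my reading of its README; treat the exact key names as assumptions) a `:client-config` entry in that map is applied to the underlying AWS SDK `ClientConfiguration`, with kebab-case keys mapping onto its setters (`:max-connections` -> `setMaxConnections`, `:connection-max-idle-millis` -> `setConnectionMaxIdleMillis`, and so on).

```clojure
;; Sketch, assuming amazonica applies :client-config to the SDK's
;; ClientConfiguration. The "..." values are placeholders, not real keys.
(require '[amazonica.aws.s3 :as s3])

(def cred
  {:access-key "..."            ; or omit and use the default provider chain
   :secret-key "..."
   :endpoint   "us-east-1"
   :client-config
   {:max-connections            200    ; SDK default is 50
    :connection-max-idle-millis 30000  ; shorten idle time (default 60s)
    :connection-timeout         10000}})

;; Then pass the map explicitly on each call:
;; (s3/get-object cred bucket-name object-key)
```

If the map isn't passed, amazonica falls back to its default client, so the tuned pool only takes effect on calls that receive `cred`.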