#clojars
2020-07-24
jarohen 11:07:00

Hey folks 🙂 We're getting an 'S3 request failed' error trying to deploy a snapshot version of a Crux module to Clojars. Other modules (and indeed the jar/pom files from this module) seem to be uploading fine - does anyone know whether it's something we're doing wrong, or is it a case of waiting it out for a bit?
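(For context, here's a minimal sketch of the kind of setup behind a Clojars snapshot deploy, assuming a Leiningen-based build; the placeholder version, repository URL, and env-var credentials are illustrative, not necessarily how the Crux build is actually configured.)

;; project.clj sketch: a -SNAPSHOT version pushed with `lein deploy clojars`.
;; Leiningen resolves :env/clojars_username and :env/clojars_password from the
;; CLOJARS_USERNAME / CLOJARS_PASSWORD environment variables.
(defproject juxt/crux-http-server "x.y.z-SNAPSHOT"
  :deploy-repositories
  [["clojars" {:url           "https://repo.clojars.org/"
               :username      :env/clojars_username
               :password      :env/clojars_password
               :sign-releases false}]])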

jarohen 11:07:06

Sending juxt/crux-http-server/maven-metadata.xml (1k) to 
Could not transfer metadata juxt:crux-http-server/maven-metadata.xml from/to clojars (): Access denied to: , ReasonPhrase: Forbidden - S3 request failed.
Failed to deploy metadata: Could not transfer metadata juxt:crux-http-server/maven-metadata.xml from/to clojars (): Access denied to: , ReasonPhrase: Forbidden - S3 request failed

tcrawley 12:07:33

This might be an intermittent failure that was cached by Fastly - that error looks like your client is trying to read the file. What do you see if you visit https://repo.clojars.org/juxt/crux-http-server/maven-metadata.xml in a browser?
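(One way to check what the repo is serving, as a plain-Clojure illustration - curl or a browser works just as well:)

;; Fetch the published metadata straight from repo.clojars.org; a successful
;; deploy should leave a maven-metadata.xml listing the SNAPSHOT versions.
(println
 (slurp "https://repo.clojars.org/juxt/crux-http-server/maven-metadata.xml"))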

tcrawley 12:07:47

I see the correct file, but I would be hitting a different Fastly node.

jarohen 12:07:05

mm, I see what looks like a valid maven-metadata.xml

jarohen 12:07:04

I'll give it another go, it's been a couple of hours

tcrawley 12:07:11

ok, let me know how it goes. I suspected this was a read issue because we only write to S3 at the very end, but I realize now that "the very end" is when you upload the maven-metadata.xml file - that's the signal to finalize the deploy. So the S3 failure could be on any artifact that is part of the deploy, not just the metadata file (not that it matters here).

jarohen 12:07:51

Ah, ok, thanks

jarohen 12:07:17

I've just tried to redeploy a previous version (as the same snapshot) and that went through ok, but coming back to the current version it still fails. One difference is that our JAR's got bigger - it's gone from around 2MB to around 6MB. Do you know if there's a file size limit? (I'll look into the issue of the larger JAR separately 🙂)
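(A quick way to confirm artifact sizes locally before deploying - a sketch in plain Clojure; the target/ path is just an assumption about where the build puts its jars:)

(require '[clojure.java.io :as io])

;; Print the size of each jar under target/ so the ~2MB vs ~6MB difference
;; is easy to confirm before deploying.
(doseq [f (file-seq (io/file "target"))
        :when (.endsWith (.getName f) ".jar")]
  (println (.getName f) (format "%.1f MB" (/ (.length f) 1e6))))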

tcrawley 12:07:29

There is a limit, but I'm not sure what it is atm. It is enforced by nginx, though, so we should get a failure earlier in that case. I'll take a look at the logs to see if there is anything more useful there.

jarohen 12:07:04

Thanks 🙂

tcrawley 13:07:41

Well, nothing helpful there, just :message "S3 request failed" - no exception logged and no exception sent to Sentry :(

jarohen 13:07:44

Ah, thanks for checking 🙂

tcrawley 13:07:48

I'm adding better error reporting now, should be just a few minutes

jarohen 13:07:28

File size is looking like the likely culprit at the moment - it's consistently fine with the smaller (2MB) JAR and consistently fails with the larger (6MB) JAR.

tcrawley 13:07:41

I just deployed a change that should log the exception, so let me know if you still see the issue after figuring out the jar size and I'll take a look