Rachel Westmacott 14:11:57

I’m performing something like:

(let [s3-response (s3/get s3-component file_location)]
  (-> (:response ctx)
      (assoc :status 200
             :body (:content s3-response))
      (update :headers merge {"Content-Disposition" "attachment; filename=\"my-report.pdf\""
                              "Content-Type"        "application/pdf"
                              "Content-Length"      (-> s3-response :metadata :content-length)
                              :content-length       (-> s3-response :metadata :content-length)
                              "X-Custom-Header"     "Just to see what happens."})))

Rachel Westmacott 14:11:25

and seeing something like this at the command line from curl:

> GET /report/f7570b24-fd66-4486-ba68-f4ca7f67a1e9 HTTP/1.1
> Host:
> User-Agent: curl/7.43.0
> Accept: */*
< HTTP/1.1 200 OK
< Content-Disposition: attachment; filename="my-report.pdf"
< Content-Type: application/pdf
< X-Custom-Header: Just to see what happens.
< X-Frame-Options: SAMEORIGIN
< X-XSS-Protection: 1; mode=block
< X-Content-Type-Options: nosniff
< Content-Type: application/pdf
< Server: Aleph/0.4.1
< Connection: Keep-Alive
< Date: Fri, 03 Nov 2017 13:16:15 GMT
< transfer-encoding: chunked

Rachel Westmacott 14:11:01

weirdly I have Content-Type twice, but no Content-Length


I think I might see why


the transfer-encoding is chunked :thinking_face:

Rachel Westmacott 14:11:26

I suspect the double Content-Type is because I have defined :produces "application/pdf" for the route as well as passing the header explicitly

Rachel Westmacott 14:11:48

ah - is chunked transfer-encoding mutually exclusive (XOR) with Content-Length?


I might be wrong.


but it would seem logical to me that they're incompatible.

Rachel Westmacott 14:11:05

I’m trying to stream the data straight from S3 back to the client, but S3 tells me the content-length upfront, so it is knowable
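One other thing worth checking in the snippet above: Ring expects header values to be strings, and the S3 metadata :content-length is likely a number, which some layers may silently drop. A sketch of coercing it (attachment-headers is a hypothetical helper, and the s3-response shape is assumed from the snippet above):

```clojure
;; Sketch: build the attachment headers with Content-Length coerced to a
;; string, since Ring header values are expected to be strings.
(defn attachment-headers [s3-response filename]
  {"Content-Disposition" (str "attachment; filename=\"" filename "\"")
   "Content-Type"        "application/pdf"
   ;; :content-length in the S3 metadata is assumed to be a number:
   "Content-Length"      (str (-> s3-response :metadata :content-length))})

(attachment-headers {:metadata {:content-length 12345}} "my-report.pdf")
;; => {"Content-Disposition" "attachment; filename=\"my-report.pdf\"",
;;     "Content-Type" "application/pdf",
;;     "Content-Length" "12345"}
```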

Rachel Westmacott 14:11:22

it’s not a biggie. I just wanted people downloading documents to get a proper progress bar in their browser


I don't think there's any reason to do a chunked transfer if you know the content length?

Rachel Westmacott 14:11:28

I don’t necessarily have all the data in memory when the response begins.


I don't think that matters though

Rachel Westmacott 14:11:56

I also don’t know why it is doing chunked transfer - I don’t think I’ve explicitly asked for that anywhere


(At least, not from a fundamental perspective anyway!)

Rachel Westmacott 14:11:16

no - it can’t really - I just write the data to the network when I have it


Yep. It's not like you need to load it all into memory and send it as one big chunk, you just write it to the socket. The difference being that the client knows when you've finished.
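For intuition, that "client knows when you've finished" signal is exactly what chunked framing provides: each chunk is prefixed with its size in hex, and a zero-length chunk marks the end of the body. A toy sketch of the wire format (character counts stand in for byte counts, which matches for ASCII):

```clojure
;; Toy illustration of HTTP/1.1 chunked transfer framing:
;; each chunk is "<hex size>\r\n<data>\r\n", terminated by "0\r\n\r\n".
(defn chunk-encode [chunks]
  (str (apply str (for [c chunks]
                    (str (format "%x" (count c)) "\r\n" c "\r\n")))
       "0\r\n\r\n"))

(chunk-encode ["Hello, " "world"])
;; => "7\r\nHello, \r\n5\r\nworld\r\n0\r\n\r\n"
```

With a Content-Length instead, none of this framing is needed: the client just counts bytes, which is also what lets it render a progress bar.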


Someone more knowledgeable than me would need to explain how to turn off chunked transfer. It's a safe default.

Rachel Westmacott 14:11:07

I’m handing a type of InputStream to yada/aleph, so I can see why it might assume that was sensible.