#pedestal
2020-10-01
emccue 06:10:00

Probably a decently common problem

emccue 06:10:07

I have some images in s3

emccue 06:10:25

and I want to send them to the user

emccue 06:10:00

right now I have a basic handler that finds the right file

emccue 06:10:49

and gets the contents as an input stream

emccue 06:10:18

does anyone know why this is so slow and how can I improve it?

Karol Wójcik 08:10:57

Why are you proxying the image at all? Wouldn't it be easier to just send the client a URL to the resource?

emccue 11:10:52

I had no clue about the presigned url thing

emccue 12:10:08

It seems like a nightmarish tunnel of CORS so far
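For reference, the presigned-URL approach can be sketched like this, assuming the amazonica S3 wrapper (the bucket and key names are hypothetical): the server signs a short-lived GET URL and hands it to the client, so the image bytes never pass through the web server.

```clojure
(ns example.presign
  (:require [amazonica.aws.s3 :as s3])
  (:import [java.util Date]))

(defn presigned-image-url
  "Presign a GET URL for object-key, valid for 15 minutes.
  The browser fetches the object directly from S3 with this URL."
  [bucket object-key]
  (str (s3/generate-presigned-url
        bucket
        object-key
        ;; expiration is an absolute java.util.Date
        (Date. (+ (System/currentTimeMillis) (* 15 60 1000))))))

;; Usage (hypothetical names):
;; (presigned-image-url "my-image-bucket" "images/cat-foobar.jpg")
```

The CORS pain lives on the bucket side: the bucket needs a CORS rule allowing GET from the app's origin before the browser will accept the cross-origin fetch.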

Joe Lane 15:10:39

@emccue You could also serve the images through the CloudFront CDN backed by S3. It can include auth and everything, and then your web servers aren't burdened with all that IO / memory usage.

emccue 19:10:47

What is the cost of this approach vs S3?

emccue 19:10:02

I've never used the CloudFront CDN before

Joe Lane 19:10:56

How do you want to measure it?

Joe Lane 19:10:12

Time? Dollars? Complexity? Performance?

Janne Sauvala 15:10:07

Sorry for hijacking the discussion above, but it is close to a problem I have been thinking about. How about the other way around, when you want the user to be able to save images to S3 and store information about that image in your DB (user abc112 has uploaded image cat-foobar.jpg)? One option is that the frontend client stores the image directly to S3 and then reports to the backend API that an image was stored with name X. But these two operations are not atomic: if the image upload succeeds and the backend API call fails, then I have an image in S3 that has no DB record. Any thoughts on how this could be solved?

isak 15:10:03

One way is to have them upload to an S3 bucket that gets auto-wiped after a day or so (there are lifecycle policies that enable this, IIRC). Then, once you get the acknowledgement that they successfully uploaded, move that object to your real storage bucket.
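The auto-wipe isak mentions is an S3 lifecycle rule on the staging bucket. A sketch of such a configuration (the rule ID is hypothetical), expiring every object one day after it is created:

```json
{
  "Rules": [
    {
      "ID": "expire-unconfirmed-uploads",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Expiration": { "Days": 1 }
    }
  ]
}
```

Anything the backend confirms gets copied to the real bucket before the day is up; anything orphaned simply expires.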

emccue 15:10:23

I am gonna be doing this too

emccue 15:10:42

My current plan to handle this is to somehow have the user's upload be key'ed under a uuid

emccue 15:10:49

and then who cares

✔️ 3
emccue 16:10:16

and I can run a background job to clean it up later, if my bill goes too high or something
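The UUID-keyed upload plan can be sketched like this, again assuming amazonica (the bucket, key prefix, and function names are hypothetical): the server mints the key and presigns a PUT URL for it, so client-chosen filenames never collide.

```clojure
(ns example.upload
  (:require [amazonica.aws.s3 :as s3])
  (:import [com.amazonaws HttpMethod]
           [java.util Date UUID]))

(defn presigned-upload
  "Mint a random object key and presign a PUT URL for it,
  valid for 15 minutes."
  [bucket]
  (let [object-key (str "uploads/" (UUID/randomUUID))]
    {:key object-key
     :url (str (s3/generate-presigned-url
                bucket
                object-key
                (Date. (+ (System/currentTimeMillis) (* 15 60 1000)))
                ;; four-argument form mirrors the Java SDK's
                ;; generatePresignedUrl(bucket, key, expiration, method)
                HttpMethod/PUT))}))
```

The client PUTs the file to :url and then tells the API which :key it uploaded; orphaned keys (upload succeeded, confirmation call failed) are what the background sweep or lifecycle rule cleans up.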

Joe Lane 16:10:16

You can use an S3 lambda trigger to call your API / put a message onto SQS that adds that record to your database.

👍 3
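The S3 ObjectCreated event such a trigger delivers carries the bucket and key; a sketch of pulling them out on the Lambda side (the function name is hypothetical, the event shape is S3's standard notification format):

```clojure
(ns example.s3-trigger)

(defn records->uploads
  "Extract bucket/key pairs from an S3 notification event, as a
  Lambda receives it, so each can be written to the database or
  pushed onto SQS for the API to consume."
  [event]
  (for [r (get event "Records")]
    {:bucket (get-in r ["s3" "bucket" "name"])
     :key    (get-in r ["s3" "object" "key"])}))
```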
Janne Sauvala 16:10:50

Good ideas, thanks!

abdullahibra 15:10:51

Hello everyone,

abdullahibra 15:10:46

I'm using vector-based routes. How can I attach interceptors that aren't shared between the GET and POST of the same route?

abdullahibra 15:10:43

for example: ["/company" {:get company/get-all :post [:create-company company/create!]} ^:interceptors [auth]]

abdullahibra 15:10:27

Now, how can I add a new interceptor for POST only, for example validate-new-company?

abdullahibra 15:10:17

I assume interceptors defined as metadata are common to both the GET and POST for /company

abdullahibra 15:10:37

but I only want to apply validate-new-company to the POST
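One way to get per-verb interceptor chains, sketched with Pedestal's table-route syntax instead of the terse vector syntax (assuming auth and validate-new-company are interceptors and company/get-all and company/create! are handlers): each verb becomes its own route, with its own interceptor vector ending in the handler.

```clojure
;; Table-route sketch: the POST carries validate-new-company,
;; while the GET only gets auth.
(def routes
  #{["/company" :get  [auth company/get-all]
     :route-name :get-companies]
    ["/company" :post [auth validate-new-company company/create!]
     :route-name :create-company]})
```

Splitting the verbs into separate routes is the trade-off: the shared path is repeated, but each verb's interceptor chain is explicit.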

emccue 16:10:57

(also maybe a good trigger point to have this not be empty)

emccue 16:10:26

(considering it's the third result on Google, right after example repos)

dangercoder 17:10:07

I found something when using Pedestal. The first request (when I boot up the web server) always takes a long time (~1 second), then it's 20-40ms 🙂 What could be the cause?

dangercoder 17:10:29

This is on localhost without TLS.

emccue 20:10:34

somewhat confused why json params would be a map of string->string

emccue 20:10:10

since [1, 2, 3] is valid json

emccue 20:10:16

and so is {"a": 10}

emccue 21:10:02

is this just a docs mistake?

souenzzo 18:10:21

It is. It should be any?, since JSON also allows values like 42, true, false, and "s"
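souenzzo's point, sketched with cheshire (the JSON library Pedestal uses for json-params): the top level of a JSON document can be any JSON value, not just an object.

```clojure
(require '[cheshire.core :as json])

;; Every one of these is a complete, valid JSON document:
(json/parse-string "{\"a\": 10}") ;; => {"a" 10}
(json/parse-string "[1, 2, 3]")   ;; => [1 2 3]
(json/parse-string "42")          ;; => 42
(json/parse-string "true")        ;; => true
```

So a spec or docstring describing the parsed body as a string->string map is too narrow; any? is the honest type.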