This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2020-10-01
Channels
- # announcements (8)
- # aws (8)
- # babashka (21)
- # beginners (125)
- # calva (12)
- # cider (10)
- # circleci (29)
- # clara (6)
- # clj-kondo (34)
- # cljdoc (3)
- # cljfx (65)
- # cljs-dev (18)
- # clojure (38)
- # clojure-australia (4)
- # clojure-berlin (5)
- # clojure-czech (2)
- # clojure-dev (15)
- # clojure-europe (22)
- # clojure-nl (3)
- # clojure-uk (31)
- # clojuredesign-podcast (7)
- # clojurescript (87)
- # code-reviews (1)
- # conjure (3)
- # cursive (2)
- # data-science (1)
- # datalog (1)
- # datomic (36)
- # emacs (12)
- # events (1)
- # fulcro (3)
- # graalvm (68)
- # instaparse (2)
- # jackdaw (2)
- # jobs (2)
- # leiningen (8)
- # luminus (2)
- # nrepl (31)
- # pedestal (44)
- # releases (1)
- # remote-jobs (6)
- # shadow-cljs (4)
- # spacemacs (4)
- # sql (13)
- # tools-deps (56)
- # uncomplicate (4)
- # xtdb (40)
- # yada (11)
Why are you proxying the image? Wouldn’t it be easier to just send a URL to the resource?
https://stackoverflow.com/a/33605888 that should be helpful.
@emccue You could also serve the images through the CloudFront CDN backed by S3. It can include auth and everything, so your webservers aren't burdened with all that I/O and memory usage.
Sorry for hijacking the above discussion, but it is close to a problem I have been thinking about. How about the other way around, where you want the user to be able to save images to S3 and store information about each image in your DB (user abc112 has uploaded image cat-foobar.jpg)? One option is that the frontend client stores the image directly to S3 and then reports to the backend API that the image was stored with a name X. But these two operations are not atomic, and if the image upload succeeds but the backend API call fails, then I have an image in S3 with no DB record. Any thoughts on how this could be solved?
One way is to have them upload to an S3 bucket that gets auto-wiped after a day or so (there are lifecycle policies that enable this, IIRC). Then, once you get the acknowledgement that they successfully uploaded, move that object to your real storage bucket.
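The auto-wipe step described above can be expressed as an S3 lifecycle rule. A minimal sketch (the `staging/` prefix is an assumption; adjust to wherever the client-side uploads land):

```json
{
  "Rules": [
    {
      "ID": "expire-staging-uploads",
      "Status": "Enabled",
      "Filter": { "Prefix": "staging/" },
      "Expiration": { "Days": 1 }
    }
  ]
}
```

This could be applied with, e.g., `aws s3api put-bucket-lifecycle-configuration --bucket <upload-bucket> --lifecycle-configuration file://lifecycle.json`; objects under `staging/` that are never confirmed and moved simply expire after a day.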
My current plan to handle this is to somehow have the user's upload be keyed under a UUID,
and I can run a background job to clean it up later, if my bill goes too high or something.
You can use an S3 Lambda trigger to call your API, or put a message onto SQS, which then adds that record to your database.
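The trigger mentioned above is configured as an S3 bucket notification. A sketch of the SQS variant (the queue ARN is a placeholder, and the `staging/` prefix filter is an assumption):

```json
{
  "QueueConfigurations": [
    {
      "Id": "notify-image-upload",
      "QueueArn": "arn:aws:sqs:us-east-1:123456789012:image-uploads",
      "Events": ["s3:ObjectCreated:*"],
      "Filter": {
        "Key": {
          "FilterRules": [{ "Name": "prefix", "Value": "staging/" }]
        }
      }
    }
  ]
}
```

With this in place, every successful upload produces a message on the queue, so a consumer can create the DB record even if the client never makes its confirmation call.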
Good ideas, thanks!
Hello everyone,
I'm using vector-based routes; how can I pass different (not common) interceptors for GET and POST on the same route?
For example: ["/company" {:get company/get-all :post [:create-company company/create!]} ^:interceptors [auth]]
Now how can I add a new interceptor only for POST, for example validate-new-company?
I assume interceptors defined as metadata are common to both the GET and POST handlers for /company,
but I only want to apply validate-new-company to POST.
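One way to get per-verb interceptors is Pedestal's table-route syntax, where each verb gets its own vector of interceptors ending in the handler. A sketch reusing the names from the question (`auth` and `validate-new-company` are assumed to be interceptors defined elsewhere):

```clojure
;; Table-route syntax: each entry is [path verb interceptor-vector & opts].
;; `auth` runs for both routes; `validate-new-company` only for the POST.
(def routes
  #{["/company" :get  [auth company/get-all]
     :route-name :get-companies]
    ["/company" :post [auth validate-new-company company/create!]
     :route-name :create-company]})
```

The duplication of the path is the trade-off for making each verb's interceptor chain fully explicit.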
I noticed something when using Pedestal. The first request (when I boot up the web server) always takes a long time (~1 second), then it's 20-40ms 🙂 What could be the cause?
This is on localhost without TLS.