This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2016-03-07
# yada
@imre I've committed/pushed the file upload code to master. It's in a demo called dev/upload.clj
["/post"
(resource
{:id ::index
:methods
{:post
{:consumes "application/octet-stream"
:consumer (fn [ctx _ body-stream]
(save-to-file
ctx body-stream
(java.io.File/createTempFile "yada" ".tmp" (io/file "/tmp"))))
:response (fn [ctx] (format "Thank you, saved upload content to file: %s\n" (:file ctx)))}}})]
The idea here is that you provide an optional :consumer function that will override the default processing of the request body. The consumer function returns the ctx, perhaps augmented with information about what it did with the incoming request body. In this way, you could build all sorts of things (e.g. direct S3 passthrough)
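For example, the consumer doesn't have to write to a file at all. Here's a minimal sketch of my own (counting-consumer and the :bytes-received key are just illustrations, not part of yada) that drains the body stream, counts the bytes, and records the total on the ctx:

```clojure
(require '[manifold.stream :as s]
         '[manifold.deferred :as d]
         '[byte-streams :as b])

;; Illustrative only: a :consumer that discards the body but
;; augments the ctx with the number of bytes received.
;; Argument order matches the upload demo: ctx, content-type, body-stream.
(defn counting-consumer [ctx _content-type body-stream]
  (d/chain
   ;; s/reduce returns a deferred, so the whole consumer is async
   (s/reduce (fn [total buf]
               (+ total (count (b/to-byte-array buf))))
             0
             body-stream)
   (fn [total] (assoc ctx :bytes-received total))))
```

The :response function could then read :bytes-received off the ctx, just as the upload demo reads :file.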
[yada.consume :refer [save-to-file]]
s is manifold.stream
In this example, we batch up 100 8k network buffers to optimize disk writes to the file. Note that s/reduce provides the asynchronous aspect of this, and b/to-byte-buffer takes the batch of 100 network buffers and creates a single NIO buffer which is written directly to the file channel.
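Roughly, the shape of that pipeline looks like this (a sketch of the idea, not yada's actual save-to-file source):

```clojure
(require '[manifold.stream :as s]
         '[manifold.deferred :as d]
         '[byte-streams :as b])
(import '(java.nio.channels FileChannel)
        '(java.nio.file OpenOption StandardOpenOption))

;; Sketch: batch incoming network buffers, coalesce each batch into
;; a single NIO ByteBuffer, and write it straight to the file channel.
(defn save-to-file-sketch [ctx body-stream ^java.io.File file]
  (let [fc (FileChannel/open
            (.toPath file)
            (into-array OpenOption [StandardOpenOption/CREATE
                                    StandardOpenOption/WRITE]))]
    (d/chain
     (s/reduce (fn [fc batch]
                 ;; b/to-byte-buffer coalesces the batch of buffers
                 ;; into one NIO buffer before the write
                 (.write fc (b/to-byte-buffer batch))
                 fc)
               fc
               (s/batch 100 body-stream)) ; up to 100 buffers per write
     (fn [fc]
       (.close fc)
       (assoc ctx :file file)))))
```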
I was experimenting with various parameters to tweak performance last night, testing with my 1Gbyte test file. Sometimes I managed to get that uploaded through yada into /tmp in ~1 second, which is extremely fast. However, I've been getting some OutOfMemory errors in Netty which indicate that there's something not right. I suspected the problem was that the Netty buffers weren't being released properly, but after quite a bit of investigation I'm not 100% sure now. There's no indication from Netty's resource leak detection system either.
This could be a remaining bug/issue in Netty, which is still a pre-release version (4.1.0 RC3).
maybe @ztellman could see something here I'm doing wrong
but the overall reliability of handling this level of throughput was disappointing.
so if I understand correctly you can now supply :consumer, which will be run instead of process-request-body, right?
yes - actually :consumer is run by process-request-body, if it exists
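conceptually it's just a dispatch inside process-request-body — something like this simplified sketch (my own approximation, not the real implementation):

```clojure
;; Simplified sketch: if the method declares a :consumer, hand the
;; body stream to it; otherwise fall back to the default processing.
;; The lookup path and default-process-request-body are assumptions.
(defn process-request-body [ctx content-type body-stream]
  (if-let [consumer (get-in ctx [:resource :methods (:method ctx) :consumer])]
    (consumer ctx content-type body-stream)
    (default-process-request-body ctx content-type body-stream)))
```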
@malcolmsparks: you are doing a 1Gbyte file in one second? that is very impressive indeed!!!