This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2016-03-04
Yes. There has been a bug that manifests itself intermittently. It is due to a missing conditional in one part of the multipart code, which means a part is mislabelled as a preamble
It's been fixed in the latest release this morning.
We've done some extensive property-based testing (which is how we found the bug in the first place)
It all looks good now, BUT...
You MUST set the raw-streams option to true when starting the aleph server. I can't stress this enough. If you don't enable raw streams, the data can arrive with some corruptions.
I have added a yada.yada/server convenience function on master which starts aleph for you with the correct setup
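For anyone starting aleph directly rather than via that convenience function, here is a minimal sketch. It assumes the option is spelled `:raw-stream?` in your aleph version, and `my-resource` is a placeholder for your own yada resource:

```clojure
(require '[aleph.http :as http]
         '[yada.yada :as yada])

;; my-resource is a placeholder for your own yada resource
(def server
  (http/start-server
   (yada/handler my-resource)
   {:port 3000
    :raw-stream? true}))  ; essential: deliver raw Netty ByteBufs to the handler
```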
@imre tl;dr upgrade to this morning's release. Thank you dear beta programme members!
@imre great!
on a different note just as that got fixed I realized multipart is probably not what I want to use in my case...
will yada support plain binary uploads like an "application/pdf" with binary contents in the body?
something like
POST /avatars HTTP/1.1
Host: localhost:3000
Content-Type: image/jpeg
Content-Length: 284
raw image content
It does already
(yada/resource
 {:consumes [{:media-type #{"application/txt"}}]
  :produces [{:media-type #{"text/html"}
              :charset #{"UTF-8"}}]
  :access-control {:allow-origin "*"
                   :allow-methods "POST"}
  :methods {:post {:parameters {:body schema.core/Any
                                ;; #_#_ discards both the key and the value
                                #_#_:header {(schema.core/optional-key "content-type") schema.core/Any}}
                   :response (fn [ctx]
                               (prn "raw:" (get-in ctx [:parameters]))
                               "Hello")}}
  :responses {415 {:produces [{:media-type #{"text/html"}
                               :charset #{"UTF-8"}}]
                   :response (fn [ctx]
                               (prn (:request ctx))
                               (:response ctx))}}})
I guess I should probably just go with octet-stream and get metadata like file extension in a header
I think I need to spend some time polishing off the upload feature - it's implemented and tested, but badly needs some finishing and docs - I'll see what I can do - in the meantime, let me know how you get on and if you manage to figure it out - I can always give pointers if you get stuck
I have a load of problems right now which really start from me being unsure about what API design approach would be best
API design is really hard
I keep hearing nightmarish stories about developers struggling with poor API designs - which of course can never be fixed because it's too late and it's all hardcoded and distributed in microservices and whatever
I think one of the best reasons to stay true to a RESTful approach is that it helps you avoid most of the really horrible pitfalls, because people have thought hard about them
not saying REST is the answer, just a good default
that's one reason I work really hard to keep all the http and domain stuff separate - if I mess up the domain interface I can refactor internally while keeping the same http api
and if I mess up the http api I can "easily" express the whole domain as a V2 in another http api
yes, I worry this generation is creating such a painfully huge legacy for the next
keeping all that separate is an art - not easy to do, especially for less experienced devs
I think I made up my mind: gonna go with application/octet-stream
and the Content-Disposition
header
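Following the shape of the earlier example, a request along those lines might look like this (the /avatars path, filename, and length are illustrative):

```http
POST /avatars HTTP/1.1
Host: localhost:3000
Content-Type: application/octet-stream
Content-Disposition: attachment; filename="avatar.pdf"
Content-Length: 284

raw file content
```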
@imre I think you're just saying 'I try not to complect the domain with the transport' - which is an excellent principle to hold to
yeah - good idea, not sure yada handles the Content-Disposition header for you there, but you can do all this by hand, it's not hard (and then you've built a reusable yada resource you can use in other situations!)
not currently 😞 but good feature request - there is an http status code for this: 413
a question about octet-stream requests: I haven't found any tests in yada that send an octet-stream in the body and I seem to be slow in figuring out where such a body would be available to the request handler
You don't specify it in parameters, you just process it and make it available in the request context. Let me look. Not on my laptop right now
will it be a stream? you mean a manifold stream right?
remember you're using raw-streams true, so you'll get a bunch of netty byte buffers
you don't ever want to do that - it'll cripple the hyperdrive
let me explain - it's really easy
you see that yada.interceptors/process-request-body ?
see
(rb/process-request-body
 ctx
 (stream/map bs/to-byte-array (bs/to-byte-buffers (:body request)))
 (:name content-type))
that stream/map thing is the most magical line in yada
it converts all the manifold stuff into normal byte-buffers, and handles all the netty reference counting for you
and gives you a (deferred) stream of byte buffers
so you just need to add a defmethod for your content-type
the body-stream here is a manifold stream of byte arrays
(defmethod process-request-body "application/octet-stream"
  [ctx body-stream media-type & args]
  (d/chain
   (s/reduce (fn [acc buf] (inc acc)) 0 body-stream)
   ;; 1. Get the body buffer receiver from the ctx - a default one will
   ;;    be configured for each method; it will be configurable in the
   ;;    service options, since it's an infrastructural concern.
   ;; 2. Send each buffer in the reduce to the body buffer receiver.
   ;; 3. At the end of the reduce, ask the body buffer receiver to
   ;;    provide the context's :body.
   (fn [acc]
     (infof ":default acc is %s" acc)
     ctx)))
it isn't implemented - it just counts the number of byte-arrays in the stream
ok I need to fix that
it's a placeholder - but the best example currently is in multipart.clj
ok, looks like i know what i'm doing this weekend 😉
it's not a big job because multipart.clj already works
the idea is that you have some reducing function declared in your resource model - the reducing function will get a stream of all the byte buffers (as they are realized)
so it's all still glorious async
this is a pull model for development - I leave gaps and wait for people to complain
s/reduce ftw.
I'll try to explain: the intention is that you declare some function in your resource model which is a reducing function. If all you want to do is delegate the stream to some /tmp storage, you'll get called every time there's a byte-array to deal with, which you could append to that /tmp file - or S3 - or anything you like. It's a reducing function, so you could just as well be calculating a hash or anything
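A sketch of that reducing-function idea, assuming manifold's `s/reduce` and `d/chain` as used above; `save-to-file` and the byte-array stream shape are illustrative, not part of yada's API:

```clojure
(require '[manifold.stream :as s]
         '[manifold.deferred :as d]
         '[clojure.java.io :as io])

;; save-to-file is a hypothetical helper: reduce a manifold stream of
;; byte arrays into a file, returning a deferred of the total byte count.
(defn save-to-file
  [body-stream file]
  (let [out (io/output-stream file)]
    (d/chain
     (s/reduce (fn [n ^bytes buf]
                 (.write out buf)        ; append each chunk as it arrives
                 (+ n (alength buf)))
               0
               body-stream)
     (fn [n]
       (.close out)
       n))))
```

Because the reduce runs over the stream as buffers are realized, nothing blocks while the upload is in flight.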
anyway, i'm glad you spotted this, I had something at the back of my mind that there was still something left to do with async uploads - we've been hard at work smoking out a bug in the multipart code this week
ok, please wait for monday, normal service will be resumed
can you tell me what you're trying to do ? is it a file upload to store to disk?
for now though no one in my org complained about the api being too sync so I guess I'll just go with a crude implementation:
(defmethod yada.request-body/process-request-body "application/octet-stream"
  [ctx body-stream & _]
  (assoc ctx :body (yadabs/to-byte-array body-stream)))
exactly - the transit code needs inputstreams
and as you know, you can substitute my stub defmethods for your proper ones
sorry for the excuses, I'm a bit embarrassed about that application/octet-stream stub now - I've had a mentally hard few weeks and forgot that it wasn't complete!
but having seen yada hammered hard this week, and standing up to some punishment, I'm confident all this netty/aleph/manifold stuff is fundamentally sound and that we'll be ok as we scale up
(so long as you keep the hyperdrive switched on)
yep, i actually understand what that means