This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2018-05-03
{"name" #{},
 "http" #{"method" "requestUri"},
 "input" #{"shape"},
 "output" #{"resultWrapper" "shape"},
 "errors" #{}}
and here are the keys that I found for the query protocol
errors contains some shapes representing errors
so really simplified compared to rest-json
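The key sets above could be collected with something like this sketch (assuming the description file's operations are maps keyed by name, with attribute values that are either maps or scalars; `attribute-keys` is a hypothetical helper, not part of portkey):

```clojure
;; hypothetical sketch: for each operation attribute, collect the set of
;; keys appearing under it across all operations of a description file
(defn attribute-keys [operations]
  (into {}
        (for [attr ["name" "http" "input" "output" "errors"]]
          [attr (into #{}
                      (mapcat (fn [op]
                                (let [v (get op attr)]
                                  ;; scalar or vector values contribute no keys
                                  (when (map? v) (keys v)))))
                      (vals operations))])))
```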
so my goal is to translate each shape (`map`, `list`, atomic ones) into query-string or form-params
if I have time, I could take a look at grabbing docs from separate files, maybe some evening
could you also check whether everything is fine for the LATEST thing?
In the next few weeks, it’s open-source, no deadline 🙂
food for thought http://www.serverlesscomputing.org/wosc2/presentations/s2-wosc-slides.pdf
>>> Takeaways
• The future is granular, interactive and massively parallel.
• Many applications can benefit from this "Laptop Extension" model.
• Better platforms need to be built to support "bursty" massively-parallel jobs.
it’s too easy to just throw users at a database and then wonder why stuff doesn’t work performantly
one story: loading a single large file can take really long if not parallelized. The lesson is to split the file by the number of slices (vCPUs) on the cluster. In one case, loading a 20GB dump file (4.3GB gzip-compressed) went from 1h to 7 minutes.
Tables can be distributed by a column (distkey), and queries whose joins are on the distkey column execute in parallel on each node. Joins that don't match the distkey can lead to redistributing a large amount of data during the query, which leads to headaches.
Similar things to watch out for when using Spark, but I haven't used a real distributed cluster. Would like to see a Spark cluster used with powderkeg 🙂
@cgrand is it a conference you attended?
are you working on distributed computing?
I really don't understand how you can split an encoding job among 5000 threads
on conferences, https://www.eventbrite.com/e/clojutre-goes-helsinki-2018-tickets-45337528769?aff=eand
I've heard very good feedback about ClojuTRE
I have to go this year
The attribute stuff is so uncool
https://docs.aws.amazon.com/sns/latest/api/API_SetPlatformApplicationAttributes.html
question: should we create different `spec/conformer`s based on the protocol to format the request, or use only specs globally and rework the input?
[{:topic-arn "2w",
:label "h",
:awsaccount-id ["8L" "" "" "" "ff"],
:action-name ["1" "m" "n" "ne" "" "GR" "" "56" "6"]}
{"TopicArn" "2w",
"Label" "h",
"AWSAccountId" ["8L" "" "" "" "ff"],
"ActionName" ["1" "m" "n" "ne" "" "GR" "" "56" "6"]}]
this exercise on `:portkey.aws.sns.-2010-03-31/add-permission-input` returns vectors for the `list` type
If I read the docs correctly, this translates to something like this =>
?TopicArn=arn%3Aaws%3Asns%3Aus-east-1%3A123456789012%3AMy-Test
&ActionName.member.1=Publish &ActionName.member.2=GetTopicAttributes
&Label=NewPermission &AWSAccountId.member.1=987654321000
&AWSAccountId.member.2=876543210000 &Action=AddPermission &SignatureVersion=2
&SignatureMethod=HmacSHA256 &Timestamp=2010-03-31T12%3A00%3A00.000Z
&AWSAccessKeyId=(AWS Access Key ID)
&Signature=k%2FAU%2FKp13pjndwJ7rr1sZszy6MZMlOhRBCHx1ZaZFiw%3D
with `ActionName.member.1=Publish&ActionName.member.2=GetTopicAttributes` as values
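Assembling such flattened pairs into a query string could look like this sketch (URL-encoding only; request signing is a separate concern, and `query-string` is a hypothetical helper, not part of portkey):

```clojure
(defn url-encode [s]
  (java.net.URLEncoder/encode (str s) "UTF-8"))

;; assemble flattened {"param" "value"} pairs into a query string
(defn query-string [params]
  (clojure.string/join "&"
    (for [[k v] (sort params)]
      (str (url-encode k) "=" (url-encode v)))))

(query-string {"Label" "NewPermission"
               "ActionName.member.1" "Publish"})
;; => "ActionName.member.1=Publish&Label=NewPermission"
```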
kind of the same pattern for maps
so `structure`, `map`, and `list` are the 3 compound types
but we can exclude `structure` as it's a global wrapper
I might be off but spec-tools has a way of changing a conformer: https://github.com/metosin/spec-tools/blob/master/README.md#spec-coercion
I wrote something like this =>
;; requires net.cgrand.xforms aliased as x
(defn conformed-input->query-protocol-input
  "Given a conformed input, transform the input to be compliant with
  the `query` protocol of AWS."
  [input]
  (into {} (for [[k v] input]
             (cond
               (string? v) [k v]
               ;; AWS query-protocol member indices are 1-based, hence (inc i)
               (vector? v) (into {} (map-indexed (fn [i v'] [(str k ".member." (inc i)) v'])) v)
               (map? v) (into {}
                              (comp
                               (x/for [[k' v'] %]
                                 [[(str k ".entry.") k'] [(str k ".entry.") v']])
                               (map-indexed (fn [i [[k1 v1] [k2 v2]]]
                                              [[(str k1 (inc i) ".key") v1]
                                               [(str k2 (inc i) ".value") v2]]))
                               cat)
                              v)))))
I am not sure it’s the best way but it works
you mean not using spec conformers?
what we know is that the description files are the same for the 5 protocols
so the specs work that way
but output/conforming has slight differences
by the way, the same goes for the output specs
does this remind you of something @cgrand =>
(concat (map #(str "ser-" %) inputs) (map #(str "req<-" %) input-roots)
(map #(str "deser-" %) outputs) (map #(str "resp->" %) output-roots))
Yes I categorize shapes according to their usage (one shape may have several usages) and I create one spec for each usage.
the `shape-seq` makes me really nervous
@cgrand when you have the time, it would be awesome if you could explain to me the overall architecture you were aiming for
from there I can try to draw some stuff and then go back to your code
Less conforming indeed.
Starting from a shape it returns shapes it depends on.
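Such a dependency walk could look like this sketch (a hypothetical `shape-deps`, assuming shapes reference each other via a "shape" key as in the AWS description files; this is not portkey's actual implementation):

```clojure
;; hypothetical sketch: direct shape names referenced by a shape description
(defn shape-deps [shape]
  (->> (tree-seq coll? seq shape) ; walk every nested collection
       (filter map?)
       (keep #(get % "shape"))    ; pick up "shape" references
       distinct))

(shape-deps {"type" "structure"
             "members" {"TopicArn" {"shape" "TopicARN"}
                        "Label" {"shape" "label"}}})
;; => ("TopicARN" "label")
```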