This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2018-12-09
Channels
- # adventofcode (229)
- # announcements (1)
- # beginners (9)
- # boot (1)
- # calva (11)
- # cider (14)
- # clojure (26)
- # clojure-kc (1)
- # clojurescript (46)
- # core-async (10)
- # cursive (6)
- # datomic (53)
- # figwheel-main (2)
- # fulcro (3)
- # hoplon (2)
- # hyperfiddle (1)
- # kaocha (2)
- # off-topic (11)
- # om (5)
- # quil (11)
- # re-frame (7)
- # reagent (6)
- # reitit (9)
- # shadow-cljs (9)
- # spacemacs (5)
- # vim (5)
Seeking your opinion: after following the Ions tutorials, where each fn is its own lambda, and then trying out a single request-handler fn/Ion, the single Ion seems much better. The main reason is cold starts: with a single Ion, users hit far fewer cold starts. It also means using less of the API Gateway machinery, which is actually a good thing if you want a local dev server. So that's two compelling reasons to use a single entry point. What am I missing in this assessment?
That's been my conclusion so far as well. There may be tasks around long-term API maintenance that the Gateway features help with, but I haven't reached that problem yet.
Security through Cognito is certainly one use case. It would allow you to disentangle biz logic from the auth(z) code.
If you can isolate all that stuff at the boundary it can simplify quite a lot. But that's kind of a design + biz tradeoff: whether you want to separate auth(z) from biz code.
On the one hand you could trust that the functions are only run by properly authorized roles if you have a mechanism to ensure all function invocations are piped through cognito.
On the other hand, what's the consequence of getting it wrong because of a typo if you decouple them? Does a user in a game get to do something they shouldn't? No big deal. Does your firm have a catastrophic HIPAA violation? The company ends with lawsuits burning it to the ground.
for what it's worth i started by deploying many "atomic" functions behind API Gateway routes, and then eventually folded them into one proxy resource to avoid cold starts. my reasoning was that some endpoints are very important but not used often, and the cold start on those endpoints resulted in a poor user experience.
i think the Ions tutorial leaves readers in a funny place - on one hand Ions advertise themselves as atomic functions in the cloud, yet the tutorial steers readers toward internal routing without demonstrating how to do it. you're left to choose one path or the other without knowing the consequences.
Thanks for the thoughts. Re Cognito, I am using it already, and I learned that with one interceptor I can replicate the checks done by API Gateway. However, I had to make an extra AWS call because the Cognito ID token doesn't contain the roles, yet it's the one used for decorating requests. Instead, the auth handler needs to extract the role from the Access Token, i.e. a bit of extra complexity at auth time. Not a high price to pay. At this point I'm pretty much ready to not use Cognito roles and implement it myself, because the local dev server can use that as well
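The interceptor mentioned above might look roughly like this. A sketch only, assuming a Pedestal-style interceptor; `verify-access-token` is a hypothetical helper that would validate the JWT against the Cognito JWKS and return its claims (Cognito access tokens carry group membership in the "cognito:groups" claim):

```clojure
;; Sketch of an auth interceptor replicating the API Gateway/Cognito check.
;; `verify-access-token` is hypothetical: it should validate the token's
;; signature against the user pool's JWKS and return the claims map.
(require '[clojure.string :as str]
         '[io.pedestal.interceptor :as i])

(defn verify-access-token [token]
  ;; hypothetical: JWKS signature check + claims extraction
  )

(def auth-interceptor
  (i/interceptor
   {:name ::auth
    :enter
    (fn [{{:keys [headers]} :request :as ctx}]
      (let [token  (some-> (get headers "authorization")
                           (str/replace #"^Bearer " ""))
            claims (some-> token verify-access-token)]
        (if-let [groups (get claims "cognito:groups")]
          ;; stash the roles on the request for downstream handlers
          (assoc-in ctx [:request ::roles] (set groups))
          ;; short-circuit: no valid token, no handler call
          (assoc ctx :response {:status 401 :body "unauthorized"}))))}))
```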
1/ if you call the datomic lambda and that fails (which happens frequently right after an ion deploy), you'll get back an internal server error, but api gw doesn't let you change the response on proxy methods. we would like to add some headers for CORS and set the response to e.g. 503, because a retry makes sense in these cases. you could solve that by adding another lambda in front, i guess
2/ if you have large requests (> 6 MB, the lambda payload limit), you have to find another way to get your data in/out. if you go the serverless way, that would mean something like using presigned S3 urls for both upload and download. Also, the max timeout for api gateway is 30s. maybe we are misusing all this, but file uploads/downloads are kind of crucial to our application
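The presigned-URL workaround mentioned above can be sketched like this, using the amazonica S3 wrapper (one option among several; the AWS Java SDK's `generatePresignedUrl` works the same way). Bucket and key names are hypothetical:

```clojure
;; Sketch: hand the client a time-limited S3 URL so the file bytes never
;; pass through API Gateway/lambda, sidestepping the 6 MB and 30 s limits.
(require '[amazonica.aws.s3 :as s3])

(defn download-url
  "Presigned GET url for `key` in `bucket`, valid for 15 minutes."
  [bucket key]
  (s3/generate-presigned-url
   bucket key
   (java.util.Date. (+ (System/currentTimeMillis) (* 15 60 1000)))))
```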
if you don't have any of these requirements I think api gateway is good, but i'd still use it with 1 proxy endpoint, 1 lambda and do the routing in the ion.
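The "1 proxy endpoint, 1 lambda, route in the ion" setup described in this thread can be sketched as follows, using the `datomic.ion.lambda.api-gateway/ionize` entry point from the Ions docs; the handler names and routes are hypothetical:

```clojure
;; Single web ion behind one API Gateway proxy resource; routing happens
;; inside the Ring-style handler instead of in API Gateway.
(ns example.ion
  (:require [datomic.ion.lambda.api-gateway :as apigw]))

(defn handle-users  [req] {:status 200 :body "users"})
(defn handle-orders [req] {:status 200 :body "orders"})

(defn router
  "Dispatch on the request path. Because this is a plain Ring handler,
   the same fn can also back a local dev server (e.g. ring-jetty)."
  [{:keys [uri] :as req}]
  (case uri
    "/users"  (handle-users req)
    "/orders" (handle-orders req)
    {:status 404 :body "not found"}))

;; the one entry point exposed to API Gateway
(def app (apigw/ionize router))
```

A routing library (reitit, compojure, pedestal) would replace the `case` in anything beyond a sketch; the point is only that one lambda stays warm for every route.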
in the context of datomic, what patterns do people tend to use to deal with “unknown values” (ie missing datoms) and “known unknown values” (ie explicit nils), given that datomic doesn’t support the latter?
Usually there is some code that expresses it in the same coding system as whatever expresses a positive value
In fields that have less extensive coding I’m not sure how to handle it without having two attributes
i’m curious why Datomic is the way it is, especially given the context of the recent Maybe Not talk
i wonder if there are some technical reasons having to do w/ indexing or the datalog implementation - or if it’s just an oversight, but i’d be skeptical of the latter
Nil is convenient for simple cases of “I know about this attr but I don’t know the value” but there are more dimensions of unknownness
ie string or keyword, so you could do ‘Brandon’ or :unknown, or :not_yet_named or whatever
although this brings up a related problem i’ve encountered a bunch: the “when” of classification
ie do i have just one nil value? or do i have 10 different keywords? i may need to distinguish, but i may also want to just treat them all the same
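The two approaches discussed above (sentinel values vs. a second attribute) can be sketched as Datomic schema; all attribute names here are hypothetical:

```clojure
;; 1. Sentinel: when the attribute's value space can carry it, reserve
;;    explicit "kinds of unknown" in the same coding system as real values
;;    (cf. 'Brandon' vs :unknown vs :not_yet_named above). With a keyword-
;;    typed attr that is direct:
(def schema-sentinel
  [{:db/ident       :person/name-status
    :db/valueType   :db.type/keyword   ;; :known, :unknown, :not-yet-named ...
    :db/cardinality :db.cardinality/one}])

;; 2. Two attributes: keep the value attr clean and add a flag that
;;    distinguishes "explicitly unknown" from "never asserted".
(def schema-two-attrs
  [{:db/ident       :person/name
    :db/valueType   :db.type/string
    :db/cardinality :db.cardinality/one}
   {:db/ident       :person/name-unknown?
    :db/valueType   :db.type/boolean
    :db/cardinality :db.cardinality/one}])
```

An absent `:person/name` datom then means "unknown unknown", while `:person/name-unknown? true` (or a sentinel keyword) records a known unknown, and distinct sentinels preserve the "10 different keywords" distinction when it matters.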
If anyone has time, I could use help structuring this query a little better 😕
The issue is it's an or-join situation, but each branch is conceptually joined to the branch that came before it. So…
What I'm trying to say is: get the event where
- the event is a link-issued event (has a link) that was sent to this recipient-address,
- OR the event is a session-creation event (has a session) that originated with (has a reference to) said link-issued event,
- OR the event is a session-joined event (has a session) that originated from the aforementioned session-creation event
phrased another way, given the following events with the following keys:
------------------
link/created: [link, recipient-address]
session/created: [session, link]
session/joined: [session]
I want all events that relate to that recipient address.
(patch/q
 '[:find [?e ...]
   :in $ ?email
   :where
   [?e ::tid ?tid]
   (or-join [?email ?tid]
     (and [?e :recipient_address ?email]
          [?e :magiclink ?magic]
          [?e ::tid ?tid])
     (and [?e :recipient_address ?email]
          [?e :magiclink ?magic]
          [?e2 :session ?session]
          [?e2 :magiclink ?magic]
          [?e2 ::tid ?tid])
     (and [?e :recipient_address ?email]
          [?e :magiclink ?magic]
          [?e2 :magiclink ?magic]
          [?e2 :session ?session]
          [?e3 :session ?session]
          [?e3 ::tid ?tid]))]
 db
 "")
so that's the rawest of the raw ways to do that, and doesn't at all take advantage of the fact that the steps are kind of an accumulation of the previous steps plus something else
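One way to capture that accumulation is Datalog rules, which Datomic supports and which can refer to each other (including recursively). A sketch only: the attribute names follow the query above, but the rule name `related?` and the exact factoring are hypothetical:

```clojure
;; Each rule case adds one "step" on top of events already known to be
;; related, instead of restating the whole chain in every or-join branch.
(def rules
  '[;; base: a link-issued event sent to the recipient address
    [(related? ?email ?e)
     [?e :recipient_address ?email]]
    ;; a session-creation event referencing a related link
    [(related? ?email ?e)
     [?link-ev :recipient_address ?email]
     [?link-ev :magiclink ?magic]
     [?e :magiclink ?magic]
     [?e :session _]]
    ;; any event sharing a session with an already-related event
    ;; (covers session-joined via recursion on the previous cases)
    [(related? ?email ?e)
     (related? ?email ?e2)
     [?e2 :session ?session]
     [?e :session ?session]]])

(patch/q '[:find [?e ...]
           :in $ % ?email
           :where (related? ?email ?e)]
         db rules "")
```

The `%` input slot is how Datomic queries receive a rule set; each branch of the original or-join collapses into one rule case that builds on the previous one.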