
Seeking your opinion: after following the Ions tutorials where each fn is its own lambda, then trying out a single request-handler fn/Ion, the single Ion seems much better. The main reason is cold starts: with a single Ion there are far fewer cold starts for users. It means using less of the API Gateway machinery, but that's actually a good thing if you want a local dev server. So that's two compelling reasons to use a single entry point. What am I missing in this assessment?


That's been my conclusion so far as well. There may be tasks around long-term API maintenance that the Gateway features help with, but I haven't reached that problem yet.

Joe Lane 17:12:29

Security through Cognito is certainly one use case. It would allow you to disentangle biz logic from the auth(z) code.

Joe Lane 17:12:37

If you can isolate all that stuff at the boundary it can simplify quite a lot. But that's kind of a design + biz tradeoff: whether you want to separate auth(z) from biz code.

Joe Lane 17:12:28

On the one hand, you could trust that the functions are only run by properly authorized roles, if you have a mechanism to ensure all function invocations are piped through Cognito.

Joe Lane 17:12:40

On the other hand, what's the consequence of getting it wrong because of a typo if you decouple them? Does a user in a game get to do something they shouldn't? nbd. Does your firm have a catastrophic HIPAA violation? Company ends with lawsuits burning it to the ground.


in that case, perhaps you'd complect on purpose


for what it's worth, i started by deploying many "atomic" functions behind API Gateway routes, and then eventually folded them into one proxy resource to avoid cold starts. my reasoning was that some endpoints are very important but not used often, and the cold start of those endpoints resulted in a poor user experience.


i think the Ions tutorial leaves readers in a funny place - on one hand Ions advertises itself as atomic functions in the cloud, yet the tutorial steers readers toward internal routing without demonstrating how to do it. you're left to choose one path or the other without knowing the consequences.


Thanks for the thoughts. Re Cognito, I am using it already, and I learned that with one interceptor I can replicate the checks that are done by API GW. However, I had to make an extra AWS call because the Cognito ID token doesn't contain the roles, yet it's the one used for decorating requests. Instead, the auth handler needs to extract the role from the access token, i.e. a bit of extra complexity at auth time. Not a high price to pay. At this point I'm pretty much ready to not use Cognito roles and implement them myself, because the local dev server can use that as well.
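A minimal sketch of that interceptor idea, with illustrative names throughout — `verify-access-token` is a hypothetical helper that would validate the JWT against the user pool's JWKS and return its claims (Cognito carries group membership in the access token's `cognito:groups` claim, not in the ID token):

```clojure
(def cognito-auth-interceptor
  {:name ::cognito-auth
   :enter (fn [ctx]
            (let [token  (get-in ctx [:request :headers "authorization"])
                  claims (verify-access-token token) ; hypothetical helper
                  ;; roles live in the *access* token's "cognito:groups" claim
                  roles  (set (get claims "cognito:groups"))]
              (if (seq claims)
                (assoc-in ctx [:request ::roles] roles)
                ;; short-circuit with a 401 when verification fails
                (assoc ctx :response {:status 401 :body "unauthorized"}))))})
```

Downstream handlers can then do plain role checks against `::roles`, keeping the auth(z) concern at the boundary as discussed above.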


I'll give my 2 cents: we have even given up on API GW as a proxy. There are two reasons.


1/ if you call the datomic lambda and it fails (e.g. right after an ion deploy, which happens frequently), you'll get back an internal server error, but API GW doesn't let you change the response on proxy methods. we would like to add some headers for CORS and set the response to e.g. 503, because a retry makes sense in these cases. you could solve that by adding another lambda in front, i guess


2/ if you have large requests (> 6MB, the lambda payload limit), you have to find another way to get your data in/out. if you go the serverless way, that would mean something like using presigned S3 URLs for both upload and download. also, the max timeout for API Gateway is 30s. maybe we are misusing all this, but file uploads/downloads are kind of crucial to our application
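A sketch of the presigned-URL workaround for oversized payloads, assuming the AWS Java SDK v1 is on the classpath (bucket and key names are illustrative). The client PUTs the file directly to S3 using the returned URL, so the lambda only ever handles the short URL, never the file body:

```clojure
(import '(com.amazonaws.services.s3 AmazonS3ClientBuilder)
        '(com.amazonaws HttpMethod))

(defn presigned-put-url
  "Return a time-limited URL the caller can PUT a file to directly."
  [bucket key minutes]
  (let [client (AmazonS3ClientBuilder/defaultClient)
        expiry (java.util.Date. (+ (System/currentTimeMillis)
                                   (* minutes 60 1000)))]
    (str (.generatePresignedUrl client bucket key expiry HttpMethod/PUT))))
```

Downloads work the same way with `HttpMethod/GET`; the ion endpoint just returns the URL, sidestepping both the 6MB payload limit and the 30s gateway timeout.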


if you don't have any of these requirements I think API Gateway is good, but I'd still use it with one proxy endpoint and one lambda, and do the routing in the ion.
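A sketch of that single-proxy-endpoint setup: one API GW proxy route, one lambda, and routing done inside the ion with a plain ring-style handler (the route set here is illustrative; `ionize` is the Datomic ion adapter for API Gateway proxy integrations):

```clojure
(require '[datomic.ion.lambda.api-gateway :as apigw])

(defn handler
  "All requests arrive here via the single proxy resource;
   dispatch on method + path inside the ion."
  [{:keys [uri request-method] :as req}]
  (case [request-method uri]
    [:get  "/health"] {:status 200 :body "ok"}
    [:post "/items"]  {:status 201 :body "created"}
    {:status 404 :body "not found"}))

;; referenced from :http-direct / :lambdas in ion-config.edn
(def app (apigw/ionize handler))
```

Because the handler is just a function of a ring-shaped request map, the same `handler` can also be mounted on a local Jetty server for the local dev workflow mentioned earlier.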


in the context of Datomic, what patterns do people tend to use to deal with "unknown values" (i.e. missing datoms) and "known unknown values" (i.e. explicit nils), given that Datomic doesn't support the latter?


Known unknowns are common in healthcare


yeah, anything with a form that permits an “N/A” - which i’ve dealt with a lot


Usually there is some code that expresses it in the same coding system as whatever expresses a positive value


In fields that have less extensive coding I’m not sure how to handle it without having two attributes


(Because the type will be different)


like foo and foo_known?


i’ve done that a bunch, but it tends to be fiddly


Yeah and then a constraint that you have one or the other not both


Yes it is fiddly
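For concreteness, a sketch of that two-attribute pattern with the exclusivity constraint as a transaction function (attribute names are illustrative; assumes the client API as `d`):

```clojure
(require '[datomic.client.api :as d])

(def schema
  [{:db/ident       :patient/weight
    :db/valueType   :db.type/double
    :db/cardinality :db.cardinality/one}
   {:db/ident       :patient/weight-unknown ; why the value is missing
    :db/valueType   :db.type/keyword        ; e.g. :not-asked, :refused, :n-a
    :db/cardinality :db.cardinality/one}])

(defn check-one-of
  "Transaction fn: reject an entity that carries both the value
   attr and the unknown-reason attr."
  [db eid]
  (let [e (d/pull db [:patient/weight :patient/weight-unknown] eid)]
    (when (and (:patient/weight e) (:patient/weight-unknown e))
      (throw (ex-info "weight and weight-unknown are mutually exclusive"
                      {:eid eid})))
    []))
```

The fiddliness mentioned above shows up in exactly this kind of per-attribute ceremony: every "maybe" field needs its twin attribute plus a guard.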


i’m curious why Datomic is the way it is, especially given the context of the recent Maybe Not talk


i wonder if there are some technical reasons having to do w/ indexing or the datalog implementation - or if it’s just an oversight, but i’d be skeptical of the latter


Another pattern that I use for polymorphic attrs in general in datomic is this


`{:attr/base :attr/baseTYPE, :attr/baseTYPE VAL}`


I’ve never thought of using this to express known unknowns but it seems possible


:attr/nameUnknown, and then the value is an enumeration of the kind of unknown


just a generalized tagged union? yeah seems like that’s perfectly reasonable


Essentially, but in a way that cooperates well with datalog and Datomic's model


`[?e :attr/base ?a] [?e ?a ?v]`
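An end-to-end sketch of that `:attr/base` indirection (attribute names are illustrative): the entity records *which* attribute carries its value, and the query joins through that attribute, so known values and known unknowns come back through the same clauses:

```clojure
(def schema
  [{:db/ident       :measurement/base         ; points at the concrete attr
    :db/valueType   :db.type/ref
    :db/cardinality :db.cardinality/one}
   {:db/ident       :measurement/value-double
    :db/valueType   :db.type/double
    :db/cardinality :db.cardinality/one}
   {:db/ident       :measurement/unknown      ; the known-unknown variant
    :db/valueType   :db.type/keyword
    :db/cardinality :db.cardinality/one}])

;; Two example entities, one of each shape:
;; {:measurement/base :measurement/value-double
;;  :measurement/value-double 98.6}
;; {:measurement/base :measurement/unknown
;;  :measurement/unknown :patient/refused}

;; One query covers both shapes:
(def poly-query
  '[:find ?e ?a ?v
    :where [?e :measurement/base ?a]
           [?e ?a ?v]])
```

Consumers branch on `?a` to decide whether `?v` is a real value or an unknown-kind keyword, which is the tagged-union reading mentioned above.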


ah - clever


seems like maybe it’s intentional for known/unknown to be encoded “one level up”


Nil is convenient for simple cases of “I know about this attr but I don’t know the value” but there are more dimensions of unknownness


for sure, i’ve encountered many different variants of nil 🙂


Nil can blur those the same way using a Boolean vs. an enum can


but a variant type might be nice


For fun google “hl7 nullflavor”


ie string or keyword, so you could do ‘Brandon’ or :unknown, or :not_yet_named or whatever


Extreme example of this


heh, i’ve seen this 🙂 fuuuun times


although this brings up a related problem i’ve encountered a bunch: the “when” of classification


ie do i have just one nil value? or do i have 10 different keywords? i may need to distinguish, but i may also want to just treat them all the same


and can’t put metadata on nil 😉


If anyone has time, I could use help structuring this query a little better 😕 The issue is it's an or-join situation, but each branch is conceptually joined to the branch that came before it. What I'm trying to say is: get the event where the event is a link issued (has a link) that was sent to this recipient-address, OR the event is a session creation (has a session) that originated with (has a reference to) said link-issued event, OR the event is a session joined (has a session) that originated from the aforementioned session-creation event. Phrased another way, given the following events with the following keys:

link/created: [link, recipient-address]
session/created: [session, link]
session/joined: [session]
I want all events that relate to that recipient address.


My best effort so far has produced this:


    '[:find [?e ...]
      :in $ ?email
      :where
      [?e ::tid ?tid]
      (or-join [?email ?tid]
               (and [?e :recipient_address ?email]
                    [?e :magiclink ?magic]
                    [?e ::tid ?tid])
               (and [?e :recipient_address ?email]
                    [?e :magiclink ?magic]
                    [?e2 :session ?session]
                    [?e2 :magiclink ?magic]
                    [?e2 ::tid ?tid])
               (and [?e :recipient_address ?email]
                    [?e :magiclink ?magic]
                    [?e2 :magiclink ?magic]
                    [?e2 :session ?session]
                    [?e3 :session ?session]
                    [?e3 ::tid ?tid]))]


okay, I've gotten it running... but this can't possibly be the best way to do it


editing the above^^^


so that's the rawest of the raw ways to do that, and doesn't at all take advantage of the fact that the steps are kind of an accumulation of the previous steps plus something else
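One way to capture that "each step accumulates the previous steps" shape is a recursive rule: an event is related to an email if it mentions it directly, or if it shares a link/session with an already-related event. A sketch using the attribute names from the query above (the `::tid` indirection is dropped here for clarity, so adjust if events are joined through tids rather than found directly):

```clojure
(def rules
  '[;; base case: the link-issued event carries the address itself
    [(related ?email ?e)
     [?e :recipient_address ?email]
     [?e :magiclink _]]
    ;; session creation, reached through the link of a related event
    [(related ?email ?e)
     (related ?email ?prev)
     [?prev :magiclink ?magic]
     [?e :magiclink ?magic]
     [?e :session _]]
    ;; session join, reached through the session of a related event
    [(related ?email ?e)
     (related ?email ?prev)
     [?prev :session ?session]
     [?e :session ?session]]])

(def query
  '[:find [?e ...]
    :in $ % ?email
    :where (related ?email ?e)])
```

Each branch of the original or-join collapses into one rule clause that builds on `(related ?email ?prev)`, so adding a fourth event type means adding one rule instead of re-stating the whole chain.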