2018-07-09
I'm thinking through the right place to do schema migrations. The key issue is that a deployment is unstable until schema migrations are run. I've seen examples of doing the schema migration as part of a memoized get-connection helper. Are there other approaches I'm missing? Perhaps there should be a hook to run schema migrations as part of the code deploy step.
Ref: https://github.com/Datomic/ion-starter/blob/master/src/datomic/ion/starter.clj#L69
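For reference, a minimal sketch of that memoized get-connection approach, assuming the Datomic Cloud client API; the schema attributes and argument names here are placeholders:

(require '[datomic.client.api :as d])

;; placeholder: grow-only schema tx-data
(def schema
  [{:db/ident       :user/email
    :db/valueType   :db.type/string
    :db/cardinality :db.cardinality/one
    :db/unique      :db.unique/identity}])

(def get-connection
  ;; memoized so the schema is transacted at most once per process,
  ;; the first time a connection is requested
  (memoize
   (fn [client db-name]
     (let [conn (d/connect client {:db-name db-name})]
       (d/transact conn {:tx-data schema})
       conn))))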
yeah, some sort of :init-method in the config would be nice for that kind of stuff. But I'm just doing the memoization for now as well.
@eoliphant the downside I see regarding memoization is the risk that the schema fails at "run time". I'd prefer the deploy to fail. You could argue it's unlikely given schemas should grow, not break, but human error is a thing, e.g. adding a unique constraint when values in the db aren't unique.
You could write your own deploy code that transacts the schema before running the ions deploy op, perhaps
Hi @U0DHVSBHA, yes, that would work. With a commit-based deploy there is little chance of the local environment (running the migration) being inconsistent, but that's the risk I see: deployment becomes coupled to the local dev env as well as the code pushed. More moving parts, more complexity.
yeah, I totally agree, but I think that's going to have to be something they build in. If, say, the :init-function returns falsy, then kill the deploy. Though even that gets interesting, since you could successfully transact some schema but then have something else in the code fail such that it returns false. So you'd roll back the deployment but still have made changes.
True enough, but happily we're talking about an unlikely case given the "grow, don't break" schema approach. It does require some coding practices to reduce the risk of surprises. Ref: https://docs.datomic.com/cloud/best.html#plan-for-accretion
yep, that’s what I do with all my datomic stuff. And similarly one would just have to be disciplined about not doing anything too hinky in this proposed init-method.
Crazy idea would be a schema migration transaction function which runs a suite of tests before committing (if that's even possible).
ok, I'm pulling my hair out right now... my ionized gateway function is base64-encoding my responses lol
I use conformity for my on-prem stuff. It's like a Flyway/Liquibase lite, but provides a little structure around the process.
Thanks, I'll check it out.
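For reference, conformity usage looks roughly like this (a sketch assuming the io.rkn/conformity library with on-prem Datomic, and a hypothetical migrations.edn resource):

(require '[io.rkn.conformity :as c])

;; migrations.edn holds named, idempotent norms, e.g.
;; {:my-app/add-users {:txes [[{:db/ident :user/email ...}]]}}
(def norms (c/read-resource "migrations.edn"))

(defn migrate!
  ;; ensure-conforms only transacts norms that haven't already been applied
  [conn]
  (c/ensure-conforms conn norms))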
I took a stab at this using a small gist. Haven't discussed a PR or anything though. https://gist.github.com/cjsauer/4dc258cb812024b49fb7f18ebd1fa6b5
What I do for migrations is test that all of the queued migrations work using d/with. If nothing throws an exception, then I commit the transactions.
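A sketch of that dry-run idea, assuming the Datomic client API and a seq of pending migration tx-data (the names here are placeholders):

(require '[datomic.client.api :as d])

(defn apply-migrations!
  "Speculatively apply every pending migration with d/with;
  only transact for real if none of them throw."
  [conn migrations]
  ;; dry run: thread each migration's :db-after into the next d/with
  (reduce (fn [db tx-data]
            (:db-after (d/with db {:tx-data tx-data})))
          (d/with-db conn)
          migrations)
  ;; nothing threw, so commit the migrations in order
  (doseq [tx-data migrations]
    (d/transact conn {:tx-data tx-data})))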
Anyone had this problem? I had this function go a bit wonky on me; I've stripped it down to this:
(defn handle-request*
  "Handle API Requests"
  [{:keys [headers body]}]
  (log/debug "here's something")
  {:body    "body here" #_(json/generate-string {:its "ok"})
   :headers {"Content-Type" "application/json"}
   :status  200}
  #_{:status 200})
but in the gateway log, I’m seeing the following (and the encoded value is returned to my client)
(c562e562-8313-11e8-8b30-f3eb1fd30d3f) Endpoint response body before transformations:
{
  "body": "Ym9keSBoZXJl",
  "headers": {
    "Content-Type": "application/json"
  },
  "statusCode": 200,
  "isBase64Encoded": true
}
@eoliphant Have you added */* as a Binary Media Type?
Been pulling my hair out all day trying to get logging working via a Logback appender for Loggly, and just realized that most of the typical Java ecosystem stuff will probably never work, since most of it depends on classpath scanning, etc., which has already taken place by the time your ions deploy. Ugh. Maybe a good use case for modules 🙂
Hi, is there a preferred strategy to manage data locally in a standalone application that is ordinarily online and accesses several other Datomic databases? When offline, the app should still be able to work with whatever information it has cached. It should be able to store some of the work locally and then try to update the remote databases as applicable. Of course, the issue of conflicts needs to be addressed in a sane way. I was thinking of maybe having a local Datomic instance serve as a cache for multiple remote Datomic instances. Or is there something better and simpler?
@pradyumna take a look at the AWS AppSync JavaScript lib. It handles all of these requirements for you, including conflict resolution. I'm helping out on a project which hopes to expose Ions using AppSync, so it should fit pretty well.
Thanks @steveb8n. I checked this; unfortunately it's not exactly a fit. It's Clojure (JVM, not JavaScript).
You'd probably have to implement this yourself @pradyumna; like most DBs, there's no explicit support for that use case AFAIK. You could potentially use something like Onyx for moving the updates between databases, but you'd be on the hook for conflict resolution, etc.
Hi, I'm still trying to get some form of logging working, and in the course of this I've run into another issue. Given what I mentioned previously about commons/slf4j/etc. probably never working, I tried creating a custom logger with Timbre that just fires entries via REST into Loggly. I'm using cljs-ajax for this, and it works fine in local dev, but when I call it now I'm getting a ClassNotFoundException for org.apache.http.HttpResponse, so there are presumably some classloader conflicts there. I noticed that the ion-event-example uses some Cognitect http-client lib, but I can't seem to find it in any of the repos.
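For what it's worth, a Timbre appender that posts to Loggly can be sketched with nothing but JDK classes, which sidesteps the Apache HTTP classpath issue (a sketch; the Loggly token/URL is a placeholder):

(require '[taoensso.timbre :as timbre])
(import '(java.net URL HttpURLConnection))

(defn post-to-loggly!
  ;; plain HttpURLConnection, so no extra HTTP client is needed on the classpath
  [^String url ^String line]
  (let [^HttpURLConnection conn (.openConnection (URL. url))]
    (doto conn
      (.setRequestMethod "POST")
      (.setRequestProperty "Content-Type" "text/plain")
      (.setDoOutput true))
    (with-open [out (.getOutputStream conn)]
      (.write out (.getBytes line "UTF-8")))
    (.getResponseCode conn)
    (.disconnect conn)))

(def loggly-appender
  ;; minimal Timbre appender; :output_ is a delay of the formatted log line
  {:enabled? true
   :async?   true
   :fn (fn [{:keys [output_]}]
         (post-to-loggly!
           "https://logs-01.loggly.com/inputs/YOUR-TOKEN/tag/ion/"
           (force output_)))})

(timbre/merge-config! {:appenders {:loggly loggly-appender}})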
that's great to hear, thanks!
When I instantiate a Client, the following is printed:
Reflection warning, cognitect/hmac_authn.clj:80:12 - call to static method encodeHex on org.apache.commons.codec.binary.Hex can't be resolved (argument types: unknown, java.lang.Boolean).
Reflection warning, cognitect/hmac_authn.clj:80:3 - call to java.lang.String ctor can't be resolved.
It returns the Client, though, and it seems like there aren't any issues. I just want to know if this will be a problem.
Hi @U0LSQU69Z, what version of Clojure and Java are you running?
that won't harm anything, but I will squelch it in a future build
Hi everybody, I could have sworn I saw a project on here that you could point at a Datomic database to get a GraphViz diagram of the schema, but now I can't seem to find it. Anyone remember it?
Think I played around with this a while back. https://github.com/felixflores/datomic_schema_grapher
Hey @stuarthalloway, I think there may be an issue with ionized lambdas' handling of OPTIONS requests. For this "echo" ion:
(defn api-request*
  "lambda entry point"
  [{:keys [headers body request-method]}]
  (try
    {:status 200
     :body   (json/generate-string request-method)}
    ....
I get "post", "get", etc. just fine, but a "message": "Internal server error" for an OPTIONS request. I need that to work, since API Gateway expects the lambda to respond to the CORS preflight stuff.
Something similar was happening to me because I hit "Enable API Gateway CORS". If you did the same, delete the OPTIONS method and handle it in your Ion. This is happening because AWS matches your OPTIONS request content-type to */* and base64-encodes it. The "Mock Integration" that the preconfigured CORS handler generates expects JSON and throws when it can't parse it.
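For reference, a sketch of answering the preflight in the ion itself (a Ring-style handler; the handler name and allowed origins/headers are placeholders, and cheshire is assumed for json/generate-string):

(require '[cheshire.core :as json])

(defn handler
  ;; respond to the CORS preflight in the ion,
  ;; instead of relying on API Gateway's mock integration
  [{:keys [request-method] :as request}]
  (if (= :options request-method)
    {:status  204
     :headers {"Access-Control-Allow-Origin"  "*"
               "Access-Control-Allow-Methods" "GET, POST, OPTIONS"
               "Access-Control-Allow-Headers" "Content-Type, Authorization"}}
    {:status  200
     :headers {"Content-Type" "application/json"}
     :body    (json/generate-string {:its "ok"})}))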