This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2018-12-11
Channels
- # adventofcode (116)
- # aleph (10)
- # announcements (2)
- # beginners (67)
- # boot (3)
- # calva (17)
- # cider (8)
- # cljdoc (27)
- # cljsrn (6)
- # clojure (144)
- # clojure-austin (3)
- # clojure-boston (1)
- # clojure-dev (25)
- # clojure-europe (4)
- # clojure-italy (26)
- # clojure-losangeles (4)
- # clojure-nl (28)
- # clojure-russia (1)
- # clojure-uk (34)
- # clojurescript (130)
- # cursive (20)
- # datomic (69)
- # emacs (14)
- # figwheel-main (2)
- # fulcro (31)
- # graphql (3)
- # hyperfiddle (3)
- # jobs (1)
- # jobs-discuss (1)
- # kaocha (1)
- # leiningen (2)
- # lumo (2)
- # nrepl (1)
- # off-topic (182)
- # onyx (5)
- # re-frame (88)
- # reagent (12)
- # reitit (2)
- # ring-swagger (13)
- # shadow-cljs (136)
- # tools-deps (28)
- # vim (4)
wondering if someone from cognitect is around to help? yesterday we upgraded from solo to production after our ec2 instance tipped over. the upgrade went well and we can connect to our cloud instance, but we can't deploy our existing ion functions.
I’m from Cognitect, but probably not qualified to help. But if I were, I would ask what “can’t deploy” means
whoops. i solved the problem shortly after asking. that's how it works right? 😉 the upgrade path to production failed so we had to delete the existing compute stack and install as-new (using the previous app-name). our bastion cloud connections came back, however the already-deployed ions were throwing an internal server error, and re-deploying them resulted in {:deploy-status "FAILED", :code-deploy-status "FAILED"}. i tested the lambdas via the AWS console, which worked as expected, and a third code push finally succeeded. i still had to edit our API gateway, reselect the proxy resource and authorizer functions, then deploy the API.
just a wild guess, but i'm assuming that the new stack with the old app name crossed some wires. problem solved.
Is there a common practice for purposefully triggering rollback of the CodeDeploy from within the app? Eg if a Datomic schema migration fails in production or another startup condition isn't met. What condition is CodeDeploy polling to determine that "the service is up"?
@robert.mather.rmm The Datomic process not starting is the most common cause of rollback, usually caused by a bug or deps conflict in an ion that throws when the ns is loaded
@robert.mather.rmm ^^ Do you think this covers your use case?
Probably. I'll have to try it out. I generally set an uncaught exception handler (ala https://stuartsierra.com/2015/05/27/clojure-uncaught-exceptions), but maybe I can do that last in my startup process.
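The uncaught-exception-handler pattern from Stuart Sierra's post linked above can be sketched like this; a minimal, generic example, not specific to Ions (the logging side effect is illustrative, in an Ion you might report via datomic.ion.cast instead):

```clojure
;; Minimal sketch of a JVM-wide uncaught exception handler,
;; following the pattern from Stuart Sierra's post linked above.
(Thread/setDefaultUncaughtExceptionHandler
 (reify Thread$UncaughtExceptionHandler
   (uncaughtException [_ thread ex]
     ;; Replace println with your logging/alerting of choice.
     (println "Uncaught exception on thread" (.getName thread))
     (println (.getMessage ex)))))
```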
@robert.mather.rmm so, you are loading a bunch of stuff at compile time instead of at first request time?
Initially yes. The Code Deploy was timing out and rolling back because it only gave 2 minutes for startup. I switched to doing everything lazily at first request, but I was hoping to switch back. Is the time to establish the Datomic connection proportional to data size or something?
To me it's quite desirable to be able to do schema transaction/migration and check everything worked before exposing the new instances to the world.
yes, I think it would benefit our use case too, just checking if it is possible. i'll try it out
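Since an exception thrown during ion startup is what triggers the rollback described above, one way to trigger it deliberately is to run the migration check during initialization and rethrow on failure. A hypothetical sketch (the ensure-schema! name and its arguments are made up for illustration; whether initialization may run at ns-load time depends on your ion version, per the discussion below):

```clojure
(ns my-app.init
  (:require [datomic.client.api :as d]))

;; Hypothetical schema-migration check: transact the schema and
;; rethrow on any failure, so the instance fails to start and
;; CodeDeploy rolls the deployment back.
(defn ensure-schema!
  [conn schema-tx]
  (try
    (d/transact conn {:tx-data schema-tx})
    (catch Exception e
      (throw (ex-info "Schema migration failed; aborting startup"
                      {:cause (.getMessage e)}
                      e)))))
```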
Failing to deploy on Ion because of a mysterious error thrown when calling d/client: "Assert failed: cfg". Does anyone know what this could mean?
The error is thrown by:
(d/client {:server-type :ion
:region "eu-central-1"
:system "linnaeus"
:query-group "linnaeus"
:endpoint " "})
Here's what the error looks like in Cloudwatch (reported via cast/alert):
Running on com.datomic/ion {:mvn/version "0.9.26"} and org.clojure/clojure {:mvn/version "1.9.0"} on a freshly-updated stack.
@val_waeselynck Can you go to the latest ion? (0.9.28) and also, what version of ion-dev are you using?
@marshall running on com.datomic/ion-dev {:mvn/version "0.9.176"}
Let me try updating ion
com.datomic/client-cloud {:mvn/version "0.8.71"}, but that's a :dev dep
deploying, stay tuned...
Nope, still same error 😕
deploy
Note that I'm making this call at ns-load time
Don't know if that's OK
as in (def client (d/client ...))
well, i wouldn't expect it to be an issue; could you put it in a memoized fn instead?
i.e. https://github.com/Datomic/ion-starter/blob/master/src/datomic/ion/starter.clj#L12
if that fixes the issue i can explore the reasons further and then either fix or at least document
I could, but it could take some time for me to be able to reproduce the issue then. I'll try that tomorrow. Is there a particular rationale for this memoization anyway?
OK you just answered
I think (and this is totally not rigorous reporting) that it used to work on a previous deploy - I'm suspecting our stack is in a pathological state, will re-create it tomorrow
By the time our timezones meet, I will either have succeeded or be in need of more help 🙂
@val_waeselynck I’ve confirmed that a change in the latest version loads user namespaces earlier in the process to reduce instance cycle time; one consequence is that you can’t connect to a database as a side effect of ns loading. I would recommend following the pattern shown in the example I provided (delay connection until first invoke). I will also look into making this more evident in the documentation
@marshall thanks, initializing the connection lazily did work. I suggest adding a comment with that link in the tutorial's repo as well.
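For reference, the memoized-client pattern from the ion-starter link above looks roughly like this, reusing the config map from the earlier messages (endpoint left blank as in the original; assumes datomic.client.api is required as d):

```clojure
;; Roughly the pattern from datomic/ion-starter: create the client
;; on first invoke instead of at ns-load time, and reuse it after.
(def get-client
  "Returns a shared client, created on first call."
  (memoize
   (fn []
     (d/client {:server-type :ion
                :region "eu-central-1"
                :system "linnaeus"
                :query-group "linnaeus"
                :endpoint " "}))))
```

Code that previously referenced a top-level (def client (d/client ...)) would call (get-client) instead.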
the link for A running Datomic System is broken -> https://docs.datomic.com/cloud/getting-started/connecting.html
the link for datomic-cloud repository is broken -> https://docs.datomic.com/cloud/releases.html
@timeyyy_da_man thanks, i’ll fix it