This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2019-08-14
Channels
- # aleph (1)
- # announcements (1)
- # beginners (59)
- # boot (2)
- # calva (5)
- # cider (8)
- # clj-kondo (6)
- # cljdoc (5)
- # cljsrn (11)
- # clojure (123)
- # clojure-dusseldorf (1)
- # clojure-europe (4)
- # clojure-italy (22)
- # clojure-losangeles (4)
- # clojure-nl (10)
- # clojure-spec (18)
- # clojure-uk (22)
- # clojurescript (103)
- # cursive (32)
- # data-science (1)
- # datomic (21)
- # events (2)
- # figwheel (1)
- # fulcro (12)
- # graalvm (3)
- # graphql (8)
- # jobs (2)
- # kaocha (4)
- # klipse (2)
- # lein-figwheel (4)
- # leiningen (23)
- # off-topic (11)
- # planck (11)
- # re-frame (8)
- # reagent (2)
- # reitit (3)
- # rewrite-clj (1)
- # ring (1)
- # ring-swagger (31)
- # schema (2)
- # shadow-cljs (66)
- # spacemacs (3)
- # specter (16)
- # sql (9)
- # tools-deps (16)
- # vim (26)
trying to set up ion
looks like something errored:
clojure -A:dev -m datomic.ion.dev '{:op :deploy-status :execution-arn arn:aws:states:us-east-2:101416954809:execution:datomic-dev-Compute-784GREJAJTLX:dev-Compute-784GREJAJTLX-bd6deb15afeee59dd2dd16943cf3c0313f534c34-1565755664290}'
{:deploy-status "FAILED", :code-deploy-status "FAILED"}
{...
  "status": "Failed",
  "errorInformation": {
    "code": "HEALTH_CONSTRAINTS",
    "message": "The overall deployment failed because too many individual instances failed deployment, too few healthy instances are available for deployment, or some instances in your deployment group are experiencing problems."
  }
}
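One way to dig into a `HEALTH_CONSTRAINTS` failure is to pull the deployment record from CodeDeploy itself; a sketch with the AWS CLI (the deployment ID below is a placeholder, not a value from this deployment):

```shell
# List recent deployments to find the one that failed, then fetch its
# error details. d-EXAMPLE123 is a placeholder deployment ID.
aws deploy list-deployments --max-items 5 --output text
aws deploy get-deployment \
  --deployment-id d-EXAMPLE123 \
  --query 'deploymentInfo.errorInformation'
```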
any idea how to troubleshoot?
all my CloudWatch Logs look like:
START RequestId: a012efad-3c01-4a75-8b08-615978c5f177 Version: $LATEST
2019-08-14T04:11:53.124Z a012efad-3c01-4a75-8b08-615978c5f177 { event:
{ codeDeploy: { deployment: [Object] },
lambda: { cI: 4, c: [Array], uI: -1, u: [], dI: -1, d: [], common: [Object] } } }
END RequestId: a012efad-3c01-4a75-8b08-615978c5f177
made an issue for this: https://github.com/Datomic/ion-starter/issues/5
Did you examine your Datomic system logs? https://docs.datomic.com/cloud/operation/monitoring.html#searching-cloudwatch-logs
you need to determine why the instances are not starting up - usually caused by an error in the ion code that is preventing it from loading
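The Datomic system logs mentioned in those docs can also be searched from the command line; a sketch with the AWS CLI, assuming a system named `datomic-dev` (substitute your own system's log group name):

```shell
# Datomic Cloud writes system logs to a CloudWatch log group named after
# the system ("datomic-dev" here is an assumption). Search the last hour
# for alert-level entries:
aws logs filter-log-events \
  --log-group-name datomic-dev \
  --filter-pattern Alert \
  --start-time "$(( ($(date +%s) - 3600) * 1000 ))" \
  --query 'events[].message' \
  --output text
```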
I was using the starter ion code
I posted the messages from cloudwatch logs above
Take a look at the link to the docs I posted. It includes details about finding the Datomic stack logs
we’ve been running with a shared valcache for about a week in production now. When deploying, valcache is briefly accessed by two instances, which we heard from @jaret is not officially supported but should work. We’re now seeing an EOFException pop up every now and then, originating in datomic.index/reify. Is this a setup you plan to support, or should we stop running like this?
this is the stacktrace if it helps. Also happy to report this elsewhere if that’s better.
Another issue we’ve seen is this stack overflow error. It occurred only once, and due to recursion we don’t have the full stacktrace, but there’s datomic.query/fn
in the stacktrace. We’re thinking of increasing the number of frames printed in our bug tracker. Any other advice on how to track this down? Thanks!
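If it is the JVM truncating the trace rather than the bug tracker, one knob worth knowing (generic JVM advice, not Datomic-specific) is HotSpot’s stack-trace depth limit:

```shell
# HotSpot keeps at most -XX:MaxJavaStackTraceDepth frames per throwable
# (default 1024). Raising it preserves more of a deep recursive trace.
# "your-app.jar" is a placeholder for your node's actual entry point.
java -XX:MaxJavaStackTraceDepth=10000 -jar your-app.jar
```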
@mkvlr we do not currently have plans to specifically support valcache being accessed by two instances. I theorized that it should work based on seeing multiple separate services share valcache, but it appears to affect indexing with that EOFException. Re: your other error, I’d be happy to look at your Datomic logs to see the error in query. If you’d like to open a case with support ([email protected]), we can use that to share files and we won’t lose our communication to Slack archiving. In general, I think it would be useful to look at the entire Datomic log for both errors.
@jaret thanks. Talking to my colleagues, we believe the EOFException did occur before we were running with two nodes. I guess we’ll reconfigure our nodes to use different valcaches and let you know if it happens again. We’ll get in touch with support about the query error, thanks again!