2020-02-18
Channels
- # announcements (43)
- # aws (28)
- # babashka (32)
- # beginners (80)
- # calva (13)
- # chlorine-clover (2)
- # cider (11)
- # clj-kondo (15)
- # cljs-dev (1)
- # clojure (151)
- # clojure-dev (11)
- # clojure-europe (11)
- # clojure-italy (3)
- # clojure-losangeles (3)
- # clojure-nl (4)
- # clojure-spec (20)
- # clojure-uk (58)
- # clojured (3)
- # clojuredesign-podcast (2)
- # clojurescript (37)
- # core-async (4)
- # core-typed (1)
- # cursive (53)
- # datascript (5)
- # datomic (26)
- # duct (23)
- # emacs (3)
- # fulcro (22)
- # graalvm (1)
- # jobs (2)
- # joker (11)
- # juxt (24)
- # lumo (1)
- # mid-cities-meetup (2)
- # nyc (1)
- # off-topic (54)
- # parinfer (1)
- # reagent (13)
- # shadow-cljs (16)
- # sql (9)
- # tree-sitter (9)
- # vim (9)
Datomic Cloud instance suddenly giving
{
  "errorMessage": "Connection refused",
  "errorType": "datomic.ion.lambda.handler.exceptions.Unavailable",
  "stackTrace": [
    "datomic.ion.lambda.handler$throw_anomaly.invokeStatic(handler.clj:24)",
    "datomic.ion.lambda.handler$throw_anomaly.invoke(handler.clj:20)",
    "datomic.ion.lambda.handler.Handler.on_anomaly(handler.clj:171)",
    "datomic.ion.lambda.handler.Handler.handle_request(handler.clj:196)",
    "datomic.ion.lambda.handler$fn__3841$G__3766__3846.invoke(handler.clj:67)",
    "datomic.ion.lambda.handler$fn__3841$G__3765__3852.invoke(handler.clj:67)",
    "clojure.lang.Var.invoke(Var.java:399)",
    "datomic.ion.lambda.handler.Thunk.handleRequest(Thunk.java:35)"
  ]
}
I've tried from lambda as well as from the bastion. The EC2 instance is up and running. It's been a while since I've touched this. If I redeploy master with no changes, will I lose the handles I have connecting API Gateway and my lambda ions? I just need to get this service back up and running.
@brian.rogers can you log a case by emailing <mailto:[email protected]|[email protected]>? I would like to gather more information on this failing service before offering concrete next steps, and it would be best to share that information over a case. Useful starting info:
- CFT version
- solo or prod?
- are other services working?
- did you deploy before the error? Did anything change before you saw this error?
@jaret I just redeployed master and it's fixed itself. Would it be useful for me to still submit a case to Cognitect?
If it helps: CFT version, I don't know (what is CFT?); solo topology; we're only running that one Datomic service, so I couldn't test anything else; the last deployment was in the summer and it had been running ever since, until a day or two ago.
Yes. I am very interested in tracking this down and would like to provide you with potential steps to gather a thread dump should this issue occur again.
@brian.rogers obviously no urgency on the case, but if you get a chance please do log one. If we have a bug here I’d like to address it.
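For context, here is a minimal sketch (not taken from the conversation above) of how a client typically obtains a Datomic Cloud connection via datomic.client.api; the system, region, endpoint, and db names are placeholders, and an Unavailable / "Connection refused" anomaly like the one above would normally surface at d/connect or on the first request.

;; Hedged sketch: connecting to a hypothetical solo Datomic Cloud system.
;; System, region, endpoint, and db names are placeholders, not real values.
(require '[datomic.client.api :as d])

(def client
  (d/client {:server-type :cloud          ;; ions running inside the system use :ion instead
             :region      "us-east-1"
             :system      "my-datomic-system"
             :endpoint    "http://entry.my-datomic-system.us-east-1.datomic.net:8182/"
             :proxy-port  8182}))         ;; SOCKS proxy through the bastion on solo topology

;; A "Connection refused" style anomaly typically shows up here or on the first query.
(def conn (d/connect client {:db-name "my-db"}))

(d/db conn) ;; grab a database value to confirm the connection is live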
in that semantically they are too different for datascript to act as a cache that way?
the reason I’m asking is that I’ve been thinking about building a datalog API in front of our microservices (a la GraphQL), and started thinking that it might make sense to use datascript as a client-side cache to reduce the queries that have to actually hit the backend. my thinking was that a query could respond with not only the result, but also the datoms that were resolved while processing the query. that way the client side could transact those into a client-side cache, and future queries could potentially query the local db instead of sending a request to the server. however, populating the cache for a query could accidentally end up being quite a lot of datoms that need to be sent over the wire, even if the query result is small. so I was wondering if this same idea had already been solved by some datomic <-> datascript integration.
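As a rough illustration of that idea (no existing integration is implied here), a sketch of caching server-returned datoms in DataScript and answering the same query locally; the schema, attribute names, and the shape of the server payload are all hypothetical.

;; Hedged sketch: DataScript as a client-side cache of datoms a server query touched.
;; Schema, attributes, and the payload shape are made up for illustration.
(require '[datascript.core :as ds])

(def cache (ds/create-conn {:user/orders {:db/valueType   :db.type/ref
                                          :db/cardinality :db.cardinality/many}}))

;; Suppose the server answers a query and also ships the datoms it resolved,
;; e.g. [[e attr value] ...]. Turn them into tx-data and transact them locally.
(defn cache-datoms! [conn datoms]
  (ds/transact! conn (for [[e a v] datoms] [:db/add e a v])))

(cache-datoms! cache [[1 :user/name "Ada"]
                      [1 :user/orders 2]
                      [2 :order/total 42]])

;; Later, the same query can be answered from the local db instead of the backend.
(ds/q '[:find ?total
        :where
        [?u :user/name "Ada"]
        [?u :user/orders ?o]
        [?o :order/total ?total]]
      @cache)
;; => #{[42]}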
This sounds a lot like Meteor's minimongo! (along with all the hard problems that came with it)
IME the problems were 1) determining dependencies efficiently (i.e., do I have the thing I need to query or not), 2) expressing those things at the right granularity, or even at overlapping granularities, and 3) updating those things efficiently
a datomic peer can be really sloppy with this by just having lots of RAM, a fast network, and very large granularity (i.e. “giant seqs of sorted datoms”)
you need to send a lot less, and you need to make sure they can’t see “nearby” data which may not be theirs
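Picking up on that last point, a hedged sketch of one way a server could restrict which datoms it ships to a client-side cache, so “nearby” data belonging to other users never goes over the wire; the allowed-eids set and the example datoms are invented for illustration.

;; Hedged sketch: server-side filtering of touched datoms before sending them
;; to a client cache. How allowed-eids is computed (ownership, ACLs, ...) is
;; application-specific and not shown here.
(defn datoms-for-client
  "Keep only datoms whose entity id the requesting user is allowed to see."
  [allowed-eids touched-datoms]
  (filter (fn [[e _a _v]] (contains? allowed-eids e)) touched-datoms))

(datoms-for-client #{1 2}
                   [[1 :user/name "Ada"]
                    [2 :order/total 42]
                    [3 :order/total 99]])   ;; entity 3 belongs to someone else
;; => ([1 :user/name "Ada"] [2 :order/total 42])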