I just updated Datomic from 1.0.6165 to 1.0.6202, both the transactor, and the peer library in my program. Now, "nothing works anymore". Interestingly, Datomic Console can still connect fine. But my program cannot anymore, giving me `AMQ212037: Connection failure has been detected: AMQ119011: Did not receive data from server for org.a[email protected]3145e5fe[local= /127.0.0.1:36065, remote=localhost/127.0.0.1:4334 ] [code=CONNECTION_TIMEDOUT]. AMQ119010: Connection is destroyed` https://termbin.com/wnhzu
Hi, I need some clarification about Datomic Cloud vs. OnPrem setup.
Difference 1 in the cloud guide (https://docs.datomic.com/on-prem/moving-to-cloud.html#aws-integration) states:
> Datomic apps that do not run on AWS must target On-Prem
Is this for technical reasons, or a licensing thing?
Our clients are all-in on Azure, but we need a Datomic database. Should I convince them to bite the bullet and let us deploy on AWS, or do we have to deploy Datomic On-Prem in an Azure environment?
I suppose that is a bit draconian. You certainly could run Datomic Cloud in AWS and run your application in Azure
That's what I thought. Just plug in the client config pointing to AWS, but the main app runs on Azure. So it must be a licensing issue then
but if you're OK with the cross-cloud configs/tradeoffs, there is no reason you can't do that
Yeah it would be like a datomic onPrem installation targeting a postgres DB on azure, but even typing this sounds a bit stupid
i mean, it's not that bad; I've definitely talked to several customers using Cloud and hosting their apps elsewhere
When Datomic reports an anomaly :cognitect.anomalies/busy with category :cluster.error/db-not-ready, what exactly is the problem Datomic is having (CPU load?), and how could I go about mitigating it in the short term? Or is it just that my peers are severely overloaded and I need to add more 😛?
the set of "active" databases on each node (query group or primary compute group instance) is dynamic. Datomic 'unloads' inactive databases after a period of time
if you issue a request to connect to a db that's not currently 'active', the serving node has to load that DB's current memory index/context/etc
if you have only a few DBs in your system, you can use the preload db parameter in your compute group (or query group) CloudFormation to automatically load those DBs on any node at startup
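The preload mechanism described above can be sketched as a CloudFormation parameter on the compute/query group stack. This is only an illustrative fragment: the parameter key `PreloadDatabase` and the database name are assumptions; check your own compute or query group template for the exact name.

```yaml
# Hypothetical fragment of a Datomic Cloud query-group template.
# Parameter key and default value are illustrative, not authoritative.
Parameters:
  PreloadDatabase:
    Type: String
    Default: "my-production-db"   # DB to load on every node at startup,
                                  # avoiding db-not-ready on first connect
```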
(This is an on-prem peer server, btw.) But is that unique databases, or also database values at different points in time?
Is there a way we can deal with this? We have around 12 databases in total, with only one (production) db being heavily hit (and two small production dbs). Why would the active database change at all?
We did plug in a second peer server today, so we have two now, load-balanced by haproxy with no sticky sessions
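The setup just described (two peer servers behind haproxy, round-robin, no sticky sessions) could look something like this sketch. Hostnames are hypothetical, and the port 8998 is the value commonly used in the peer-server docs; adjust to your deployment.

```
# haproxy sketch for two Datomic on-prem peer servers (hosts hypothetical)
backend datomic_peer_servers
    balance roundrobin                     # no sticky sessions
    server peer1 peer-1.internal:8998 check
    server peer2 peer-2.internal:8998 check
```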
Predominantly, yeah. We did see an ops limit reached exception before, but I can’t confirm right now when I last saw that
Just for posterity/googlers: we ended up severing Datomic's connection to a badly provisioned memcached, which reduced these errors significantly. Can't say for sure that's the problem, though
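For anyone landing here: in on-prem, memcached is wired in via the `memcached=` setting in the transactor properties file (and the `datomic.memcachedServers` JVM system property on peers). A sketch of "severing" that connection, with hypothetical hosts and otherwise-illustrative values:

```ini
# transactor.properties (sketch; host names and values are hypothetical)
protocol=ddb
host=transactor-1.internal
port=4334
# Commenting this out disconnects the transactor from memcached;
# remove -Ddatomic.memcachedServers=... from peer JVMs as well:
# memcached=badly-provisioned-host:11211
```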
Running datomic on-prem+dynamodb with a very large database (>6 billion datoms). I’m noticing large amounts of data (3-5GB) written to the data directory that appear to be lucene fulltext indexes. Is this a scratch space for the transactor’s fulltext indexing?
I ask because I see three items with old timestamps and I’m wondering if I can delete them.