
I just updated Datomic from 1.0.6165 to 1.0.6202, both the transactor and the peer library in my program. Now, "nothing works anymore". Interestingly, Datomic Console can still connect fine, but my program cannot anymore, giving me `AMQ212037: Connection failure has been detected: AMQ119011: Did not receive data from server for org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnection@3145e5fe[local= /, remote=localhost/ ] [code=CONNECTION_TIMEDOUT]. AMQ119010: Connection is destroyed`


Any ideas what could be causing this?


What Java version?


Hi, I need some clarification about Datomic Cloud vs. On-Prem setup. Difference 1 in the cloud guide states: > Datomic apps that do not run on AWS must target On-Prem Is this for technical reasons or a licensing thing? Our clients are all in on Azure, but we need a Datomic database. Should I convince them to bite the bullet and let us deploy on AWS, or do we have to deploy Datomic On-Prem in an Azure environment?


I suppose that is a bit draconian. You certainly could run Datomic Cloud in AWS and run your application in Azure


you'd have to handle the network stuff to make sure it was secure


and you'd be paying the cross-cloud latency


That's what I thought. Just plug in the client config pointing to AWS, but the main app runs on Azure. So it must be a licensing issue then


it's not a licensing issue


it's a "we need to write that sentence better" issue


you are definitely free to do that ^


Ahh okay got it, thank you 🙂


there is no way to run Datomic Cloud in Azure


but if you're OK with the cross-cloud configs/tradeoffs, there is no reason you can't do that
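For reference, an app running outside AWS talks to a Datomic Cloud system through an ordinary client config pointing at the system's endpoint; a minimal sketch using the Datomic client API (the `:system`, `:region`, `:endpoint`, and db name here are all placeholders for your own deployment):

```clojure
(require '[datomic.client.api :as d])

;; Illustrative config only: :system, :region, and :endpoint must
;; match your actual Datomic Cloud deployment in AWS.
(def client
  (d/client {:server-type :cloud
             :region      "us-east-1"
             :system      "my-system"
             :endpoint    "https://entry.my-system.us-east-1.datomic.net:8182/"}))

(def conn (d/connect client {:db-name "my-db"}))
```

As noted above, the app in Azure would still need a secure network path (e.g. a VPN or peering arrangement) into the AWS VPC hosting the system, and would pay cross-cloud latency on every request.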


Yeah, it would be like a Datomic On-Prem installation targeting a Postgres DB on Azure, but even typing this sounds a bit stupid


I mean, it's not that bad; I've definitely talked to several customers using Cloud and hosting their apps elsewhere


I see, fair enough. I'll talk to my client then. Thanks again!

Lennart Buit 14:09:14

When Datomic reports an anomaly :cognitect.anomalies/busy with category :cluster.error/db-not-ready, what exactly is the problem that Datomic is having (CPU load?), and how could I go about mitigating this in the short term? Or is it just that my peers are severely overloaded and I need to add more 😛?


the set of "active" databases on each node (query group or primary compute group instance) is dynamic. Datomic 'unloads' inactive databases after a period of time


if you issue a request to connect to a db that's not currently 'active', the serving node has to load that DB's current memory index/context/etc


that's what the anomaly you're seeing indicates
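Since the anomaly just means "the node is still loading this db", one short-term mitigation is to treat it as retryable and back off briefly. A hedged sketch using the Datomic client API; the retry policy and the way the error code is detected are my assumptions, not Datomic's prescription:

```clojure
(require '[datomic.client.api :as d])

;; Hypothetical helper: retries d/connect while the serving node is
;; still loading the database into memory. The exact ex-data key
;; carrying the error code can vary, so we scan the values here.
(defn connect-with-retry
  [client db-name {:keys [max-tries sleep-ms]
                   :or   {max-tries 5 sleep-ms 500}}]
  (loop [attempt 1]
    (let [result (try
                   (d/connect client {:db-name db-name})
                   (catch clojure.lang.ExceptionInfo e
                     (if (and (< attempt max-tries)
                              (some #{:cluster.error/db-not-ready}
                                    (vals (ex-data e))))
                       ::retry
                       (throw e))))]
      (if (= ::retry result)
        (do (Thread/sleep (* attempt sleep-ms)) ; linear backoff
            (recur (inc attempt)))
        result))))
```

This only papers over the load delay; if the db keeps getting unloaded, the real fix is keeping it warm (preload, fewer idle dbs, or more capacity).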


if you have only a few DBs in your system, you can use the preload db parameter in your compute group (or query group) CloudFormation template to automatically load those DBs on any node at startup

Lennart Buit 14:09:24

(This is an on-prem peer server, btw.) But is that unique databases, or also database values at different t?


unique databases


Is there a way we can deal with this? We have around 12 databases in total, with only 1 (production) db being largely hit (and 2 little production dbs). Why does the set of active databases change at all?


does this only occur on starting up a new peer server?

Lennart Buit 14:09:40

No, it appears to occur randomly every few seconds or so


is it always db-not-ready?


how many peer servers do you have running?

Lennart Buit 14:09:14

We did add a second peer server today, so we have 2 now. Load-balanced by haproxy, no sticky sessions
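For context on a setup like this: each on-prem peer server instance is launched independently and serves a fixed list of databases, so both instances behind the load balancer should be started with the same db list. The standard launch looks roughly like this (host, port, access key/secret, and the storage URI are all placeholders):

```shell
# Illustrative peer server launch; -a is accesskey,secret and
# -d is dbname,storage-uri. Repeat -d for each served database.
bin/run -m datomic.peer-server \
  -h peer-1.internal -p 8998 \
  -a my-access-key,my-secret \
  -d mydb,datomic:dev://transactor-host:4334/mydb
```

With no sticky sessions, a request for any of the ~12 dbs can land on either instance, so each instance may end up loading (and later unloading) all of them.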

Lennart Buit 14:09:54

Predominantly, yeah. We did see an ops-limit-reached exception before, but I can’t confirm right now when I last saw that


do you have cpu and memory metrics from the peer server?

Lennart Buit 15:09:10

Just for posterity/googlers: We ended up severing Datomic’s connection to a badly provisioned memcached, which reduced these errors significantly. Can’t say for sure that’s the problem, though


Running Datomic On-Prem + DynamoDB with a very large database (>6 billion datoms). I’m noticing large amounts of data (3–5 GB) written to the data directory that appear to be Lucene fulltext indexes. Is this scratch space for the transactor’s fulltext indexing?


I ask because I see three items with old timestamps and I’m wondering if I can delete them.


Also, that seems really big, is this normal?


should I be provisioning a separate or faster disk for this?


To be clear, this is 3–5 GB per directory under `fulltext`, and I currently have 3 of them
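For anyone else hitting this: the directory in question is the transactor's local data directory, whose location is set by the `data-dir` property in `transactor.properties` (it defaults to `data` relative to the install). If these fulltext files do turn out to be transactor working space, pointing `data-dir` at a larger or faster volume is the available lever; a sketch with an illustrative path:

```properties
# transactor.properties (excerpt); the path is illustrative.
protocol=ddb
# Local working directory used by the transactor; the fulltext
# subdirectories described above live under this path.
data-dir=/mnt/fast-ssd/datomic/data
```

Whether old-timestamped subdirectories can be safely deleted is not answered in this thread; checking with Datomic support before removing anything would be prudent.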