#datomic
2020-09-18
zilti09:09:52

I just updated Datomic from 1.0.6165 to 1.0.6202, both the transactor, and the peer library in my program. Now, "nothing works anymore". Interestingly, Datomic Console can still connect fine. But my program cannot anymore, giving me `AMQ212037: Connection failure has been detected: AMQ119011: Did not receive data from server for org.a[email protected]3145e5fe[local= /127.0.0.1:36065, remote=localhost/127.0.0.1:4334 ] [code=CONNECTION_TIMEDOUT]. AMQ119010: Connection is destroyed` https://termbin.com/wnhzu
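For context, this is the kind of peer connection call that surfaces an Artemis timeout like the one above; a minimal sketch, assuming a dev-mode transactor on port 4334 (the storage protocol, host, and db name are placeholders, not taken from the log):

```clojure
;; Hedged sketch: a peer connecting to a local dev transactor.
;; If the transactor's host/port settings changed between versions,
;; this call is where the AMQ connection failure would show up.
(require '[datomic.api :as d])

(def conn (d/connect "datomic:dev://localhost:4334/my-db"))
```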

zilti09:09:00

Any ideas what could be causing this?

Nassin15:09:25

What Java version?

xceno14:09:53

Hi, I need some clarification about Datomic Cloud vs. On-Prem setup. Difference 1 in the cloud guide (https://docs.datomic.com/on-prem/moving-to-cloud.html#aws-integration) states: > Datomic apps that do not run on AWS must target On-Prem Is this because of technical reasons or a license thing? Our clients are all in on Azure, but we need a Datomic database. Should I convince them to bite the bullet and let us deploy on AWS, or do we have to deploy Datomic On-Prem in an Azure environment?

marshall14:09:42

I suppose that is a bit draconian. You certainly could run Datomic Cloud in AWS and run your application in Azure

marshall14:09:52

you'd have to handle the network stuff to make sure it was secure

marshall14:09:59

and you'd be paying the cross-cloud latency

xceno14:09:03

That's what I thought. Just plug in the client config pointing to AWS, but the main app runs on Azure. So it must be a licensing issue then

marshall14:09:11

it's not a licensing issue

marshall14:09:23

it's a 'we need to write that sentence better' issue

marshall14:09:35

you are definitely free to do that ^

xceno14:09:39

Ahh okay got it, thank you 🙂

marshall14:09:41

there is no way to run Datomic Cloud in Azure

marshall14:09:01

but if you're OK with the cross-cloud configs/tradeoffs, there is no reason you can't do that
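Concretely, the cross-cloud setup described here is just the normal Datomic Cloud client config, run from the Azure side; a sketch, where the region, system name, and endpoint are all placeholders:

```clojure
(require '[datomic.client.api :as d])

;; Runs in the Azure-hosted app; talks to Datomic Cloud in AWS.
;; All values below are placeholders for your own system.
(def client
  (d/client {:server-type :cloud
             :region      "us-east-1"
             :system      "my-system"
             :endpoint    "https://entry.my-system.us-east-1.datomic.net:8182"}))

(def conn (d/connect client {:db-name "my-db"}))
```

The network-security and latency caveats mentioned above apply to every call made through this client.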

xceno14:09:17

Yeah, it would be like a Datomic On-Prem installation targeting a Postgres DB on Azure, but even typing this sounds a bit stupid

marshall14:09:50

I mean, it's not that bad; I've definitely talked to several customers using Cloud and hosting their apps elsewhere

xceno14:09:27

I see, fair enough. I'll talk to my client then. Thanks again!

Lennart Buit14:09:14

When Datomic reports an anomaly :cognitect.anomalies/busy with category :cluster.error/db-not-ready, what exactly is the problem that Datomic is having (cpu load?) and how could I go about mitigating this in the short term? Or is it just that my peers are severely overloaded and I need to add more 😛?

marshall14:09:57

the set of "active" databases on each node (query group or primary compute group instance) is dynamic. Datomic 'unloads' inactive databases after a period of time

marshall14:09:19

if you issue a request to connect to a db that's not currently 'active', the serving node has to load that DB's current memory index/context/etc

marshall14:09:26

that's what the anomaly you're seeing indicates
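One short-term mitigation, assuming the busy anomalies are transient while a node loads the db: retry the failing call with a small backoff. A sketch (the function name, retry counts, and sleep interval are assumptions, not anything from Datomic itself):

```clojure
;; Hedged sketch: retry a thunk when it throws an ExceptionInfo whose
;; ex-data carries a :cognitect.anomalies/busy category; rethrow
;; everything else unchanged.
(defn with-busy-retry
  [f {:keys [retries sleep-ms] :or {retries 5 sleep-ms 500}}]
  (loop [n retries]
    (let [result (try
                   {:value (f)}
                   (catch clojure.lang.ExceptionInfo e
                     (if (and (pos? n)
                              (= :cognitect.anomalies/busy
                                 (:cognitect.anomalies/category (ex-data e))))
                       ::busy
                       (throw e))))]
      (if (= result ::busy)
        (do (Thread/sleep sleep-ms)
            (recur (dec n)))
        (:value result)))))

;; Usage sketch:
;; (with-busy-retry #(d/q query (d/db conn)) {})
```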

marshall14:09:58

if you have only a few DBs in your system, you can use the preload db parameter in your compute group (or query group) cloudformation to automatically load those DBs on any node at startup
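If the CloudFormation preload parameter isn't an option, a workaround sketch is to touch each important db once at process startup so the serving node loads it before real traffic arrives (the `client` var and db names below are placeholders):

```clojure
;; Force-load the hot databases at startup so the first real request
;; doesn't pay the db-not-ready cost.
(doseq [db-name ["prod" "prod-small-1" "prod-small-2"]]
  (let [conn (d/connect client {:db-name db-name})]
    ;; d/db pulls the current database value, warming the node.
    (d/db conn)))
```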

Lennart Buit14:09:24

(This is an on-prem peer server btw). But is that unique databases, or also database values at different t?

marshall14:09:08

unique databases

Twan14:09:43

Is there a way we can deal with this? We have around 12 databases in total, with only 1 (production) db being hit heavily (and 2 small production dbs). Why does the set of active databases change at all?

marshall14:09:10

does this only occur on starting up a new peer server?

Lennart Buit14:09:40

No, it appears to occur randomly every few seconds or so

marshall14:09:00

is it always db-not-ready?

marshall14:09:08

how many peer servers do you have running?

Lennart Buit14:09:14

We did add a second peer server today, so we have 2 now. Load-balanced by haproxy, no sticky sessions

Lennart Buit14:09:54

Predominantly, yeah. We did see an ops limit reached exception before, but I can’t confirm right now when I last saw that

marshall14:09:21

do you have cpu and memory metrics from the peer server?

Lennart Buit15:09:10

Just for posterity/googlers: We ended up severing Datomic's connection to a badly provisioned memcached, which reduced these errors significantly. Can't say for sure that's the problem, though
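For anyone else landing here: memcached is wired in on the transactor via its properties file, and on peers via a JVM system property. A sketch, where the host:port values are placeholders:

```properties
# transactor.properties (sketch; address is a placeholder)
memcached=10.0.0.5:11211
```

Peers pick up the same cache via `-Ddatomic.memcachedServers=10.0.0.5:11211`; removing these settings (as described above) disconnects Datomic from memcached entirely.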

favila16:09:16

Running datomic on-prem+dynamodb with a very large database (>6 billion datoms). I’m noticing large amounts of data (3-5GB) written to the data directory that appear to be lucene fulltext indexes. Is this a scratch space for the transactor’s fulltext indexing?

favila16:09:26

I ask because I see three items with old timestamps and I’m wondering if I can delete them.

favila16:09:39

Also, that seems really big, is this normal?

favila16:09:19

should I be provisioning a separate or faster disk for this?

favila16:09:23

To be clear, this is 3-5 GB per directory under fulltext, and I currently have 3 of them
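If that scratch space does become a disk bottleneck, the transactor's data directory is configurable, so pointing it at a faster or larger disk is a one-line properties change (the path is a placeholder):

```properties
# transactor.properties (sketch)
data-dir=/mnt/fast-disk/datomic/data
```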