#datomic
2023-02-02
jdkealy19:02:40

My prod database is about 5GB, but when I look at memcached, it appears to only be using 67MB. Would this alone indicate that memcached is misconfigured? How would I best verify it's correct?

favila20:02:06

memcached is only populated by reads (peer, transactor) or new segments (transactor while it indexes)

favila20:02:12

you can look at logs and metrics, but maybe the easiest way is to just do a (count (d/datoms db :eavt)) (read the entire :eavt) and see if memcached usage goes up
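A minimal sketch of that check, assuming a connected peer (the connection URI is hypothetical):

(require '[datomic.api :as d])

;; hypothetical connection; use your real storage URI
(def conn (d/connect "datomic:sql://my-db?jdbc:postgresql://localhost:5432/datomic?user=datomic&password=datomic"))

;; walk the entire :eavt index; this pulls every segment through the
;; cache, so memcached usage should climb if it is wired up
(count (d/datoms (d/db conn) :eavt))

Comparing memcached's stats output (e.g. echo stats | nc memcached-host 11211) before and after should show bytes climbing.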

jdkealy21:02:55

hmm it appears to not have budged

jdkealy22:02:00

Can I set memcached on only the peer and ignore the transactor, or do I need to do both for either to work?

favila22:02:30

they are completely independent

favila22:02:42

so you can set on peer only
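As a reference sketch (host and port are placeholders), per the on-prem caching setup: the peer reads its memcached servers from a JVM system property, while the transactor has its own, independent setting in the transactor properties file:

# peer JVM flag -- enough on its own for peer-side caching
-Ddatomic.memcachedServers=memcached-host:11211

# transactor properties file, only if you also want the transactor cached
memcached=memcached-host:11211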

jdkealy22:02:01

cool thanks

jdkealy12:02:02

It turns out that the max cache was only set to 67MB. I bumped it higher and it seems to have peaked around 80MB. Still low, but it does appear to be working, and when I turn memcached off, I see DynamoDB getting slammed.
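If that ceiling came from memcached itself, it lines up with memcached's 64MB default memory limit; the -m flag (in megabytes) raises it, e.g.:

memcached -m 1024   # allow the cache to grow to ~1GB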

jasonjckn20:02:34

I have a periodic batch process currently doing 77 transactions every 15 minutes with d/transact; every 15 minutes our transactor fails fast and restarts with

System started
Terminating process - Indexing retry limit exceeded.
I realize for batch you're supposed to use d/transactAsync, but even if that works, there seems to be an underlying issue that needs investigating; we can't have our transactor failing so easily when a peer sends 77 transactions. Anyone seen this? My understanding of this error from reading online is that there's a throughput problem with the storage backend, but it's also a brand-new DB with zero traffic. Unusual, but looking into it.

favila20:02:51

Have you looked at transactor metrics? It will tell you if storage puts are failing or taking a really long time.

favila20:02:54

What is the storage backend?

favila20:02:32

also, how big are your transactions? some storage backends can’t handle enormous blobs

favila21:02:30

(it's not good to write very large transactions to Datomic anyway; a rule of thumb is ~1000 datoms, a few KB in size)

👍 2
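A minimal sketch of chunking along those lines, assuming tx-maps is the full batch of transaction maps and conn an open connection (both placeholders); derefing each future before sending the next keeps the transactor from being flooded:

(require '[datomic.api :as d])

;; send the batch in small chunks, one at a time; tune the chunk
;; size so each transaction stays around ~1000 datoms
(doseq [chunk (partition-all 200 tx-maps)]
  @(d/transact-async conn chunk))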
jasonjckn21:02:27

postgresql backend

jasonjckn21:02:38

we're doing 1000 maps per transaction

jasonjckn21:02:00

maybe that's ~5000 datoms, not sure
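One way to check, using d/with to apply a transaction speculatively (nothing is written) and count the datoms it expands to; tx-maps here stands for one of the batch's transactions:

(require '[datomic.api :as d])

;; :tx-data of the speculative result is the seq of datoms the
;; transaction would actually assert/retract
(count (:tx-data (d/with (d/db conn) tx-maps)))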

jasonjckn21:02:57

I haven't looked at CloudWatch metrics. Is that the only way? I'll see what's involved in hooking that up on-prem.

jasonjckn21:02:08

are the metrics handlers registered on peers covering the transactor metrics too?

favila21:02:29

no, those are the metrics for the peer, which will only cover reads, not writes

👍 2
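If I recall the on-prem monitoring setup correctly, the transactor reports through its own callback, configured in the transactor properties file (the namespace and handler name below are placeholders):

# transactor properties file; the var must be on the transactor's classpath
metrics-callback=my.metrics/handler

;; the handler is called periodically with a map of metric names to values
(ns my.metrics)

(defn handler [metrics]
  ;; e.g. watch StoragePutMsec here for slow or failing storage puts
  (println metrics))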
favila21:02:47

but if it’s that serious it may be obvious from the logs

favila21:02:08

the default logback.xml will include datomic.process-monitor, which has the important ones
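For reference, the logger in question looks roughly like this in a stock logback.xml (exact level and appenders vary by install):

<!-- datomic.process-monitor logs the periodic metric summaries -->
<logger name="datomic.process-monitor" level="INFO"/>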