This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2023-02-02
Channels
- # announcements (3)
- # asami (29)
- # babashka (62)
- # beginners (131)
- # biff (7)
- # calva (31)
- # cider (5)
- # clerk (14)
- # clj-kondo (3)
- # cljsrn (12)
- # clojars (18)
- # clojure (72)
- # clojure-austin (17)
- # clojure-dev (6)
- # clojure-europe (31)
- # clojure-indonesia (1)
- # clojure-nl (1)
- # clojure-norway (18)
- # clojure-sweden (11)
- # clojure-uk (6)
- # clr (47)
- # conjure (42)
- # cursive (88)
- # datalevin (2)
- # datomic (25)
- # emacs (42)
- # exercism (1)
- # fulcro (10)
- # funcool (8)
- # gratitude (2)
- # honeysql (16)
- # introduce-yourself (5)
- # jobs-discuss (26)
- # leiningen (5)
- # lsp (31)
- # malli (21)
- # matcher-combinators (14)
- # missionary (2)
- # nbb (1)
- # off-topic (40)
- # pathom (38)
- # portal (2)
- # re-frame (7)
- # reagent (18)
- # reitit (1)
- # releases (5)
- # shadow-cljs (62)
- # sql (12)
- # testing (4)
- # xtdb (37)
My prod database is about 5GB, but when I look at Memcached it appears to only be using 67MB. Would this alone indicate that Memcached is misconfigured? How would I best verify it's correct?
memcached is only populated by reads (peer, transactor) or new segments (transactor while it indexes)
you can look at logs and metrics, but maybe the easiest way is to just do a (count (d/datoms db :eavt))
(read the entire :eavt) and see if memcached usage goes up
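The suggestion above can be sketched as follows. This is a minimal sketch, assuming the peer library is on the classpath; the connection URI is a placeholder to substitute with your own:

```clojure
(require '[datomic.api :as d])

;; Hypothetical connection URI -- substitute your own storage/db.
(def conn (d/connect "datomic:ddb://us-east-1/my-table/my-db"))
(def db (d/db conn))

;; Counting the :eavt index walks every segment of the database,
;; which populates memcached as a side effect. Watch memcached's
;; memory usage (e.g. `stats` over the memcached text protocol)
;; while this runs.
(count (d/datoms db :eavt))
```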
Can I set memcached on only the peer and ignore the transactor, or do I need to do both for either to work?
It turns out that the max cache was only set to 67MB. I bumped it higher and it seems to have peaked around 80MB. Still low, but it does appear to be working, and when I turn memcached off, I see DynamoDB getting slammed.
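For reference, memcached is configured separately on each side: the transactor reads a `memcached=host:port` line from its properties file, while peers use the `datomic.memcachedServers` system property. A minimal peer-side sketch, assuming a memcached instance on localhost's default port and a placeholder connection URI:

```clojure
;; Peer side: point the peer at memcached *before* connecting.
;; (Assumes memcached is listening on localhost:11211.)
(System/setProperty "datomic.memcachedServers" "localhost:11211")

(require '[datomic.api :as d])

;; Hypothetical URI -- substitute your own. Reads through this
;; connection can now be served from memcached instead of storage.
(def conn (d/connect "datomic:ddb://us-east-1/my-table/my-db"))
```

The cap the message above ran into is memcached's own memory limit (its `-m` flag), not a Datomic setting.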
I have a periodic batch process currently doing 77 transactions every 15 mins with d/transact; every 15 mins our transactor fails fast and restarts with:
System started
Terminating process - Indexing retry limit exceeded.
I realize for batch you’re supposed to use d/transact-async, but even if it works, it seems there’s an underlying investigation that needs to happen; we can’t have our transactor failing so easily when a peer sends 77 transactions. Anyone seen this? My understanding of this error from reading online is that there’s a throughput problem with the storage backend, but it’s also a brand new DB with zero traffic. Unusual, but looking into it…
Have you looked at transactor metrics? It will tell you if storage puts are failing or taking a really long time.
also, how big are your transactions? some storage backends can’t handle enormous blobs
(it’s not good to write very large transactions to datomic anyway. a rule of thumb is ~1000 datoms, a few kb in size)
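The rule of thumb above can be sketched as a simple batching helper. This is a hedged sketch, not the author's code: `transact-in-batches` is a hypothetical name, and it assumes `tx-data` is one large seq of transaction maps or datoms:

```clojure
(require '[datomic.api :as d])

;; Split a large body of tx-data into ~1000-datom chunks and submit
;; each with d/transact-async. Derefing the returned future before
;; sending the next chunk applies backpressure instead of flooding
;; the transactor's queue.
(defn transact-in-batches [conn tx-data]
  (doseq [batch (partition-all 1000 tx-data)]
    @(d/transact-async conn batch)))
```

Derefing each future serializes the batches; a real pipeline might keep a bounded number of transactions in flight instead.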
i haven’t looked at CloudWatch metrics, is that the only way… I’ll see what’s involved in hooking that up on-prem
the default logback.xml will include datomic.process-monitor, which has the important ones
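For orientation, the relevant piece of the transactor's logback.xml looks roughly like the fragment below. This is a sketch: the stock config already enables this logger, so it only matters if you've customized logging:

```xml
<!-- Keep Datomic's periodic metric reports (storage put latency,
     indexing stats, etc.) visible in the transactor log. -->
<logger name="datomic.process-monitor" level="INFO"/>
```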