This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2021-12-16
this feels like a very long shot, but has anyone found very strange behaviour related to string deduplication with jdk versions over 11? short story: we were running on jdk 11 with the G1 garbage collector and string deduplication, all was fine. upgraded to jdk 17 and our processes kept being killed for using too much memory. the heap was fine, but it turned out that the string deduplication table (off heap) was growing perpetually. i think it's related to httpkit usage but can't confirm. when we turn off string deduplication the heap remains stable (slightly higher than before, but stable, i.e. no leaks)
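For readers hitting the same thing: the setup described above corresponds to the standard HotSpot flags below. On JDK 9+ the deduplication table's activity can be watched through unified logging; `app.jar` is a placeholder for your own application:

```shell
# G1 + string deduplication, the combination described above
# (app.jar is a hypothetical stand-in for the real service).
java -XX:+UseG1GC -XX:+UseStringDeduplication -jar app.jar

# Same flags, plus unified-logging output for the deduplication
# machinery, which reports table growth and cleanup activity.
java -XX:+UseG1GC -XX:+UseStringDeduplication \
     -Xlog:stringdedup*=debug -jar app.jar
```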
I haven't seen this phenomenon, but I can say that having looked at string deduplication I came to the conclusion I shouldn't use it. Did you do it to save on memory?
Yeah it seems like a bit of a slam dunk to save memory. We have observed it reducing a heap that was roughly 2250mb down to about 1750mb
If it saved 80-90% of your heap I'd say go ahead, for ~20% I wouldn't bother, not worth hitting a synchronized cache for every new string
Did you measure the perf, or find an article that did? All I've seen is 'there is a cost' but as our app's GC time is so small anyway I didn't think it could be much
I thought it just adjusted some char array pointers at GC time if it found a duplicate, and that string creation in the young gen is unaffected (deduplication only happens in the old gen)
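To illustrate the mechanism being discussed, here is a minimal, hypothetical Java sketch of the kind of workload that produces deduplication candidates (the repeated "Content-Type" header is an invented example, not taken from the app in question). The copies are equal but not identical; G1's deduplication thread only rewrites the hidden backing arrays, never the references themselves:

```java
import java.util.ArrayList;
import java.util.List;

public class DedupDemo {
    public static void main(String[] args) {
        // Many logically-equal but distinct String instances, as an HTTP
        // server might create when parsing the same header name repeatedly.
        List<String> headers = new ArrayList<>();
        for (int i = 0; i < 100_000; i++) {
            headers.add(new String("Content-Type"));
        }
        // equals() is true, reference equality stays false: deduplication
        // shares the backing array between equal strings but does not
        // canonicalize references the way String.intern() does.
        System.out.println(headers.get(0).equals(headers.get(1))); // true
        System.out.println(headers.get(0) == headers.get(1));      // false
    }
}
```

Run with `java -XX:+UseG1GC -XX:+UseStringDeduplication -Xlog:stringdedup*=debug DedupDemo` and the log should show the deduplication thread picking these strings up once they survive enough collections.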
It does. I can try digging up those articles, but it's worth comparing the throughput and not just the heap space
Interesting thank you, I'll do a bit more reading. At this point having stared at this problem for two days without making much progress I'm tempted to just turn it off to get our app stable again
You should turn on GC logs or use jfr in any case if you want to get to the root of the problem
Yeah been doing that, NMT and GC logs and heap analysers and all the things, made my head spin a bit. The answer is probably there but I'm quite inexperienced at reading it all
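For anyone following along, the tools mentioned (NMT, GC logs, JFR) boil down to commands like these; `app.jar` and `<pid>` are placeholders, and the exact NMT category the deduplication table lands in varies by JDK version:

```shell
# Native Memory Tracking: start the JVM with NMT enabled...
java -XX:NativeMemoryTracking=summary -jar app.jar

# ...then query a running process; off-heap growth such as the
# deduplication table shows up in the per-category summary.
jcmd <pid> VM.native_memory summary

# GC activity plus deduplication details via unified logging:
java -Xlog:gc*,stringdedup*=debug:file=gc.log -jar app.jar

# Flight recording for offline analysis in JDK Mission Control:
jcmd <pid> JFR.start name=leak settings=profile duration=10m filename=leak.jfr
```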
Would it be possible to temporarily replace httpkit with clj-http or something to confirm your hypothesis? I would prioritise stability over memory in this case.