This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2020-07-28
Channels
- # announcements (33)
- # aws (2)
- # babashka (14)
- # beginners (128)
- # calva (34)
- # cestmeetup (3)
- # clj-kondo (12)
- # cljdoc (3)
- # clojure (114)
- # clojure-europe (31)
- # clojure-italy (3)
- # clojure-nl (7)
- # clojure-uk (6)
- # clojurescript (35)
- # conjure (20)
- # cursive (3)
- # data-science (3)
- # datomic (16)
- # docker (13)
- # events (1)
- # figwheel-main (22)
- # fulcro (109)
- # jobs (1)
- # kaocha (8)
- # keechma (1)
- # lambdaisland (5)
- # malli (1)
- # meander (8)
- # mid-cities-meetup (1)
- # off-topic (6)
- # overtone (7)
- # pathom (6)
- # re-frame (2)
- # reitit (9)
- # ring (1)
- # shadow-cljs (92)
- # specter (1)
- # tools-deps (311)
- # xtdb (76)
CMD ["java", "-XX:InitialRAMPercentage=70", "-XX:MaxRAMPercentage=70", "-jar", "api.jar"]
But docker stats shows MEM USAGE much lower: 113MiB / 3.848GiB.
It looks like these flags are ignored, even though -XX:+PrintFlagsFinal shows them. This is on Java 14. I tried it with Java 11 as well.
What is happening here?
-Xmx1024m -Xms1024m don’t change this either.
I have the newest Docker on OS X.
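A quick way to double-check what the JVM actually computes inside the container is to run -XX:+PrintFlagsFinal in the same image and grep the heap flags. This is just a sketch: the openjdk:14 image name and the 2g limit are placeholders for whatever image and memory limit api.jar actually runs with.
docker run --rm -m 2g openjdk:14 java -XX:InitialRAMPercentage=70 -XX:MaxRAMPercentage=70 -XX:+PrintFlagsFinal -version | grep -iE 'RAMPercentage|HeapSize'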
Can you try adding a runner script, rather than passing flags in the Dockerfile? I found that sometimes it munges the args and the final command is not quite what you need. YMMV, of course.
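Something like this, as a rough sketch - the run.sh name and the /app paths are made up, adjust to wherever api.jar actually lives:
#!/bin/sh
# run.sh - exec so the JVM replaces the shell and receives signals directly
exec java -XX:InitialRAMPercentage=70 -XX:MaxRAMPercentage=70 -jar /app/api.jar
and in the Dockerfile:
COPY run.sh /app/run.sh
RUN chmod +x /app/run.sh
ENTRYPOINT ["/app/run.sh"]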
size_t InitialHeapSize = 2894069760 {product} {ergonomic}
Everything looks fine, but in practice this memory is not consumed, at least from the point of view of docker stats and top.
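If there is a full JDK in the container, you can also ask the JVM directly what it has reserved versus what it is actually using. The container name is a placeholder, and PID 1 assumes java is the main process in the container:
# reserved/committed heap vs. used, straight from the JVM
docker exec <container> jcmd 1 GC.heap_info
# the flag values the JVM actually ended up with
docker exec <container> jcmd 1 VM.flags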
I don't think it works like the old Xms/Xmx settings where you statically allocate the memory. I have a process running in a container on an 8GB host with 10% of all available RAM allocated - heap min and max size are reported to be ~700M (10% of 8GB), but current usage is around 400M. I'd say this works as expected.

Here's a graph from our production dashboard: https://www.dropbox.com/s/iy8zv2f49qwfmnb/cleanshot%202020-07-28%20at%2009.00.27%402x.png?dl=0 - heap % is set to 70% of the available Fargate instance memory. The actual % of RAM used is higher because we have a couple of monitoring sidecars running. Reported memory usage for that service container is around 80%, as the JVM uses more RAM than just the heap. If you look at what happens during a deployment, some of the instances take a while to get to full utilization of the set heap size: https://www.dropbox.com/s/m1c0qig4s5l7mq2/cleanshot%202020-07-28%20at%2009.02.03%402x.png?dl=0 (red markers are deployments). We've had this sort of setup running for a while now and it is stable (assuming we don't do dumb stuff like buffer an 8GB stream in memory because of bad code reading data from S3, etc.).
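If you want to watch that ramp-up on a single instance, jstat can print heap stats periodically - the PID here is a placeholder:
# prints heap capacities and usage every 5 seconds; used space creeps toward the configured max over time
jstat -gc <pid> 5s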
No idea, https://pl.wikipedia.org/wiki/Standardowa_odpowiedź_administratora (the "standard sysadmin answer") ;-) - my only guess here would be that there's still something up with the flags, or that the assumption is wrong: despite setting the min/max, the memory usage will be variable, but guaranteed not to go over the max. Keep in mind this is only about the heap size; the JVM uses memory beyond just the heap.
> JVM uses memory beyond just the heap
yes, but my usage is too low, not too high, which is even stranger
I'm failing to see how that is a concern - the best strategy for production deployments is to always over-provision, monitor usage, and adjust as necessary.
yes, but I have this memory usage issue only in one specific environment, and so far nobody has solved it
wow, it also doesn’t work even when running directly on my system
ps auwx|egrep "MEM|70964"|grep -v grep
USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND
kwladyka 70964 0.0 0.5 24216860 133284 s005 S+ 4:56PM 0:05.99 /usr/bin/java -XX:+PrintFlagsFinal -XX:InitialRAMPercentage=70 -XX:MaxRAMPercentage=70 -XX:MinRAMPercentage=70 -jar api.jar
It doesn’t make sense. Does it?
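One thing to keep in mind: RSS (the %MEM/RSS columns in ps, and what docker stats and top report) only counts pages the process has actually touched, so a big heap that was reserved but never written to barely shows up there. You can compare the JVM's view with the OS's view, for example using the PID from the ps output above:
# JVM view: heap capacity vs. used per region
jcmd 70964 GC.heap_info
# OS view: resident set size in KB
ps -o rss= -p 70964
If you want the committed heap to be resident from the start, -XX:+AlwaysPreTouch makes the JVM touch every heap page at startup, at the cost of a slower start.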