#docker
2020-07-28
kwladyka10:07:31

CMD ["java", "-XX:InitialRAMPercentage=70", "-XX:MaxRAMPercentage=70", "-jar", "api.jar"] But docker stats show MEM USAGE much lower 113MiB / 3.848GiB. It looks like this flags are ignored while -XX:+PrintFlagsFinal show them. It is Java 14. I tried this also with Java 11. What is happening here? -Xmx1024m -Xms1024m don’t change this too. I have the newest docker on OS X.

lukasz13:07:34

Can you try adding a runner script, rather than passing flags in the Dockerfile? I found that sometimes it munges the args and the final command is not quite what you need. YMMV, of course.
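
For reference, a minimal sketch of what such a runner script might look like (file name and path are made up); exec replaces the shell so the JVM stays PID 1 and receives signals directly:

#!/bin/sh
# run.sh (hypothetical): keep the JVM flags in one place instead of the CMD array
exec java -XX:InitialRAMPercentage=70 -XX:MaxRAMPercentage=70 -jar /app/api.jar "$@"

In the Dockerfile you would then COPY the script in, mark it executable, and point CMD (or ENTRYPOINT) at it instead of the inline java command.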

kwladyka14:07:38

I also tried bashing into the image and running the app manually. The effect is the same.

kwladyka14:07:26

And, as above, -XX:+PrintFlagsFinal shows them, so I can see the flags are passed correctly.

kwladyka14:07:40

size_t InitialHeapSize = 2894069760 {product} {ergonomic}
Everything looks fine, but in practice this memory is not consumed, at least from the point of view of docker stats and top.
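
Worth noting: 2894069760 bytes is about 2.7 GiB, i.e. roughly 70% of the 3.848 GiB Docker VM, so the flag clearly took effect; docker stats and top, on the other hand, report resident memory, i.e. pages the process has actually touched. A sketch of how one might compare the JVM's own heap accounting with the OS view (the PID is a placeholder):

# JVM's view: heap capacity vs. how much of it is in use right now
jcmd <pid> GC.heap_info

# OS view: resident set size, which is what top and docker stats report
ps -o rss=,vsz= -p <pid>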

lukasz16:07:04

I don't think it works like the old Xms/Xmx settings where you statically allocate the memory. I have a process running in a container on an 8GB host with 10% of all available RAM allocated: heap min and max size is reported to be ~700M (10% of 8GB), but current usage is around 400M. I'd say this works as expected. Here's a graph from our production dashboard: https://www.dropbox.com/s/iy8zv2f49qwfmnb/cleanshot%202020-07-28%20at%2009.00.27%402x.png?dl=0 - the heap percentage is set to 70% of the available Fargate instance. The actual percentage of RAM used is higher because we have a couple of monitoring sidecars running. Reported memory usage for that service container is around 80%, as the JVM uses more RAM than just the heap. If you look at what happens during a deployment, some of the instances take a while to reach full utilization of the configured heap size: https://www.dropbox.com/s/m1c0qig4s5l7mq2/cleanshot%202020-07-28%20at%2009.02.03%402x.png?dl=0 (red markers are deployments). We have had this sort of setup running for a while now and it is stable (assuming we don't do dumb stuff like buffering an 8GB stream in memory because of bad code reading data from S3, etc.).

kwladyka16:07:25

How does it work then?

kwladyka16:07:49

I found that Xmx / Xms also don't work.

lukasz16:07:47

No idea, https://pl.wikipedia.org/wiki/Standardowa_odpowiedź_administratora ;-) (Polish: "the standard administrator's answer"). My only guess here would be that there's still something up with the flags, or the assumption is wrong: despite setting the min/max, the memory usage will be variable but guaranteed not to go over the max. Keep in mind this is only about the heap size; the JVM uses memory beyond just the heap.
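
One way to see those non-heap parts (metaspace, code cache, thread stacks, GC bookkeeping) is the JVM's Native Memory Tracking; a sketch, assuming you can restart the process with the extra flag (the PID is a placeholder):

# Start the JVM with native memory tracking enabled (small runtime overhead)
java -XX:NativeMemoryTracking=summary -XX:InitialRAMPercentage=70 -XX:MaxRAMPercentage=70 -jar api.jar

# From another shell, dump the per-category breakdown
jcmd <pid> VM.native_memory summary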

kwladyka16:07:33

> JVM uses memory beyond just the heap
Yes, but my usage is too low, not too high, which is even stranger.

lukasz17:07:01

I'm failing to see how that is a concern. The best strategy for production deployments is to always over-provision, monitor usage, and adjust as necessary.

kwladyka17:07:01

Yes, but I have this memory usage issue only in one specific environment, and so far nobody has solved it.

kwladyka15:07:59

Wow, it also doesn't work even directly on my system:

ps auwx|egrep "MEM|70964"|grep -v grep
USER               PID  %CPU %MEM      VSZ    RSS   TT  STAT STARTED      TIME COMMAND
kwladyka         70964   0.0  `0.5` 24216860 133284 s005  S+    4:56PM   0:05.99 /usr/bin/java -XX:+PrintFlagsFinal -XX:InitialRAMPercentage=70 -XX:MaxRAMPercentage=70 -XX:MinRAMPercentage=70 -jar api.jar
It doesn’t make sense. Does it?
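
One experiment that might separate "the flags are ignored" from "the heap pages just have not been touched yet" is -XX:+AlwaysPreTouch, which makes the JVM write to every heap page at startup; if InitialRAMPercentage is being honoured, RSS should then jump to roughly the initial heap size. A sketch (the PID is a placeholder):

java -XX:InitialRAMPercentage=70 -XX:MaxRAMPercentage=70 -XX:+AlwaysPreTouch -jar api.jar
ps -o rss=,vsz= -p <pid>   # RSS is reported in KB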
