# Why does MaxRAMPercentage stop working at heaps of ~30 GB?

The other day at work, a Clojure program on a machine with 100 GB of RAM ran into OutOfMemoryError crashes. A quick look at the metrics showed that the heap was only at about 30 GB, though. Checking the JVM flags, I found a configuration that is commonplace:

```
-XX:MaxRAMPercentage=80
```

So one would assume that the JVM could have sized the heap much larger than it actually did. Thanks to Docker, this behavior can be reproduced quite easily with a few tests like this one:

```shell
docker run -m 1GB adoptopenjdk/openjdk11:ubuntu-slim \
    java -XX:MaxRAMPercentage=80 -XshowSettings:vm -version \
    2>&1 >/dev/null | grep Heap
```

This returns:

```
Max. Heap Size (Estimated): 792.69M
```

So quite the expected outcome: 80 percent of 1 GB is about 800 MB. But what if we had a hundred times more memory to spare for the JVM?

```shell
docker run -m 100GB adoptopenjdk/openjdk11:ubuntu-slim \
    java -XX:MaxRAMPercentage=80 -XshowSettings:vm -version \
    2>&1 >/dev/null | grep Heap
```

This returns:

```
Max. Heap Size (Estimated): 29.97G
```

80 percent of 100 GB is... 30 GB?!
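As an aside, the same capped figure can be checked from inside the application itself, not just via `-XshowSettings` or external metrics. A minimal sketch (the class name is hypothetical; `Runtime.maxMemory()` reports the effective maximum heap the JVM will attempt to use):

```java
// Prints the maximum heap size the JVM will use, as derived from
// -Xmx or -XX:MaxRAMPercentage. Run inside the container above,
// this would report roughly 30 GB rather than the expected 80 GB.
public class MaxHeap {
    public static void main(String[] args) {
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.printf("Max heap: %.2f GB%n", maxBytes / (1024.0 * 1024 * 1024));
    }
}
```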

Why does this happen? It turns out that, at least with OpenJDK 11, -XX:MaxRAMPercentage does not deactivate compressed ordinary object pointers ("compressed oops"). Compressed oops only work with heaps of up to roughly 32 GB, so as long as they stay enabled, the JVM caps the overall heap size at about 30 GB, as stated in this ticket.
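You can observe this directly by dumping the JVM's final flag values inside the same container. A sketch, assuming the same JDK 11 image as above and a local Docker daemon:

```shell
# UseCompressedOops should still read "true" despite the 100 GB limit,
# and MaxHeapSize should sit just below the ~32 GB compressed-oops ceiling.
docker run -m 100GB adoptopenjdk/openjdk11:ubuntu-slim \
    java -XX:MaxRAMPercentage=80 -XX:+PrintFlagsFinal -version \
    2>/dev/null | grep -E 'UseCompressedOops|MaxHeapSize'
```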

There is one obvious solution to this problem: deactivating the compression by adding the -XX:-UseCompressedOops flag. But as that compression is a useful performance optimization, it should not be turned off without a good reason (e.g. a heap much larger than 30 GB). It turns out that this is exactly what the current LTS version (17) of OpenJDK does, disabling the compression automatically when the computed heap exceeds the limit:

```shell
docker run -m 100GB openjdk:17-slim \
    java -XX:MaxRAMPercentage=80 -XshowSettings:vm \
    -version 2>&1 >/dev/null | grep Heap
```

This returns:

```
Max. Heap Size (Estimated): 80.00G
```
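Until an upgrade to JDK 17 is possible, the workaround has to be applied by hand on JDK 11. A sketch using the same image as in the earlier tests (the exact estimate may differ slightly on your machine):

```shell
# Assumption: local Docker daemon, same JDK 11 image as above.
# With compressed oops disabled, MaxRAMPercentage can size the heap past 32 GB.
docker run -m 100GB adoptopenjdk/openjdk11:ubuntu-slim \
    java -XX:MaxRAMPercentage=80 -XX:-UseCompressedOops \
    -XshowSettings:vm -version 2>&1 >/dev/null | grep Heap
```

The estimated max heap should now land around 80 GB instead of being capped near 30 GB.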