WebSphere Application Server & Liberty


Lessons from the Field #21: Sizing container memory limits

By Kevin Grigorenko posted Tue September 20, 2022 10:00 AM

Kubernetes and OpenShift have an optional feature to limit the memory usage of a container with spec.containers[].resources.limits.memory. The limit applies to physical memory actually in use (the resident set size, or RSS). If the container tries to exceed its memory limit, it is forcibly killed (and the pod may then be automatically re-created). Container logs may include the message "Killed", and the worker node will have the following message (as seen with oc debug node/$NODE -t followed by chroot /host journalctl):

kernel: Memory cgroup out of memory: Killed process
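As a concrete illustration, a memory limit might be set on a container along these lines (the pod and container names here are hypothetical, and the 2Gi value is only an example):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liberty-app        # hypothetical pod name
spec:
  containers:
    - name: app            # hypothetical container name
      image: icr.io/appcafe/websphere-liberty   # example image
      resources:
        limits:
          memory: 2Gi      # container is OOM-killed if its RSS exceeds this
```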

This post will discuss various aspects of sizing a container memory limit especially as applied to Java- and WebSphere-based workloads.

Java maximum heap size

Classically, the Java maximum heap size is specified with -Xmx. However, in container environments, it's useful to size the Java maximum heap size based on the container memory limit instead. This is done with the option -XX:MaxRAMPercentage which takes a percentage of available physical memory rather than a fixed number of bytes. By default, Java detects that it's in a container environment, and the available physical memory is calculated as the container memory limit. By specifying a percentage of the memory limit, the maximum heap size may be tuned by changing the container memory limit instead of re-building the image.
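For example, with WebSphere Liberty this option can be placed in the server's jvm.options file (the 75.0 value here is only illustrative):

```
-XX:MaxRAMPercentage=75.0
```

With this baked into the image, the maximum heap size becomes 75% of whatever spec.containers[].resources.limits.memory is set to, so the heap can be resized by editing the deployment rather than rebuilding the image.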

Java native memory usage

A natural question after the previous section is: why would -XX:MaxRAMPercentage be any less than 100%? The answer is that the Java process needs memory beyond the Java heap. The Java heap is where the application allocates its objects, but the Java Virtual Machine also uses memory outside the heap to implement its many functions; this is called "native" memory. These native memory areas include:

  • Just-In-Time (JIT) compiler caches, including the code cache, which can reach up to 256MB by default
  • Native backing of classes and classloaders which can run into hundreds of megabytes for large applications
  • Native backing of thread stacks (stack size controlled by -Xss)
  • Native backing of DirectByteBuffers
  • JIT compilation threads' scratch spaces
  • And more

Therefore, a safe rule of thumb is to reserve at least about 512MB for Java native memory in addition to your Java heap size. Actual needs will depend on your application.

Understanding actual Java native memory usage

To better understand how much Java native memory is being used, IBM Java and IBM Semeru Runtimes provide rich details in the javacore.txt thread dump file. First, exercise your application as realistically as possible, and for as long as a process is expected to run without restarting. Next, take a javacore. This is most easily done by logging into a container terminal through the OpenShift web console or with oc rsh, finding the PID of the Java process (most often 1), and then requesting a thread dump with kill -3 $PID. The javacore is written to the current working directory, which defaults to /opt/ibm/wlp/output/defaultServer for WebSphere Liberty and /opt/ol/wlp/output/defaultServer for Open Liberty. Download the javacore with oc cp and open it in a text editor. Scroll down to the NATIVEMEMINFO section. For example:

0SECTION       NATIVEMEMINFO subcomponent dump routine
NULL           =================================
1MEMUSER       JRE: 1,042,516,240 bytes / 21954 allocations
1MEMUSER       |
2MEMUSER       +--VM: 679,357,912 bytes / 15018 allocations
2MEMUSER       |  |
3MEMUSER       |  +--Classes: 191,497,712 bytes / 6531 allocations
3MEMUSER       |  |  |
4MEMUSER       |  |  +--Shared Class Cache: 94,371,936 bytes / 2 allocations
3MEMUSER       |  |  |
4MEMUSER       |  |  +--Other: 97,125,776 bytes / 6529 allocations
2MEMUSER       |  |
3MEMUSER       |  +--Memory Manager (GC): 280,170,296 bytes / 2419 allocations
3MEMUSER       |  |  |
4MEMUSER       |  |  +--Java Heap: 268,496,896 bytes / 1 allocation
3MEMUSER       |  |  |
4MEMUSER       |  |  +--Other: 11,673,400 bytes / 2418 allocations
2MEMUSER       |  |
3MEMUSER       |  +--Threads: 57,024,000 bytes / 945 allocations
3MEMUSER       |  |  |
4MEMUSER       |  |  +--Java Stack: 3,288,776 bytes / 174 allocations
3MEMUSER       |  |  |
4MEMUSER       |  |  +--Native Stack: 52,035,584 bytes / 175 allocations
3MEMUSER       |  |  |
4MEMUSER       |  |  +--Other: 1,699,640 bytes / 596 allocations
2MEMUSER       |  |
3MEMUSER       |  +--Trace: 1,534,704 bytes / 742 allocations
2MEMUSER       |  |
3MEMUSER       |  +--JVMTI: 17,776 bytes / 13 allocations
2MEMUSER       |  |
3MEMUSER       |  +--JNI: 1,032,928 bytes / 2925 allocations
2MEMUSER       |  |
3MEMUSER       |  +--Port Library: 145,665,512 bytes / 138 allocations
3MEMUSER       |  |  |
4MEMUSER       |  |  +--Unused <32bit allocation regions: 145,644,400 bytes / 1 allocation
3MEMUSER       |  |  |
4MEMUSER       |  |  +--Other: 21,112 bytes / 137 allocations
2MEMUSER       |  |
3MEMUSER       |  +--Other: 2,414,984 bytes / 1305 allocations
1MEMUSER       |
2MEMUSER       +--JIT: 354,816,608 bytes / 5006 allocations
2MEMUSER       |  |
3MEMUSER       |  +--JIT Code Cache: 268,435,456 bytes / 1 allocation
2MEMUSER       |  |
3MEMUSER       |  +--JIT Data Cache: 25,166,592 bytes / 12 allocations
2MEMUSER       |  |
3MEMUSER       |  +--Other: 61,214,560 bytes / 4993 allocations
1MEMUSER       |
2MEMUSER       +--Class Libraries: 8,341,720 bytes / 1930 allocations
2MEMUSER       |  |
3MEMUSER       |  +--VM Class Libraries: 8,341,720 bytes / 1930 allocations
3MEMUSER       |  |  |
4MEMUSER       |  |  +--sun.misc.Unsafe: 8,199,672 bytes / 1855 allocations
4MEMUSER       |  |  |  |
5MEMUSER       |  |  |  +--Direct Byte Buffers: 8,095,064 bytes / 1853 allocations
4MEMUSER       |  |  |  |
5MEMUSER       |  |  |  +--Other: 104,608 bytes / 2 allocations
3MEMUSER       |  |  |
4MEMUSER       |  |  +--Other: 142,048 bytes / 75 allocations

The native memory allocations that are most often the largest are:
  • Classes: Native memory backing of classes and classloaders
  • Native Stack: Native memory backing of threads
  • JIT: The JIT code and data caches
  • Direct Byte Buffers: Native memory backing of DirectByteBuffers
Note that all of the values here are virtual sizes rather than resident sizes, but these are a useful approximation to better understand native memory requirements.
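As a rough sketch of how to read this section programmatically, the top-level category totals can be pulled out of a javacore with standard text tools. The sample below is abbreviated from the output above; a real javacore has a name like javacore.YYYYMMDD.HHMMSS.PID.NNNN.txt.

```shell
# Create an abbreviated sample of the NATIVEMEMINFO top-level lines
# (in practice, point the grep at your downloaded javacore file instead).
cat > javacore-sample.txt <<'EOF'
2MEMUSER       +--VM: 679,357,912 bytes / 15018 allocations
2MEMUSER       +--JIT: 354,816,608 bytes / 5006 allocations
2MEMUSER       +--Class Libraries: 8,341,720 bytes / 1930 allocations
EOF

# Strip the tree prefix to list each top-level category and its total.
grep '^2MEMUSER' javacore-sample.txt | sed 's/^2MEMUSER *+--//'
```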

Java heap size defaults

After all of that, you might be surprised to discover that you may not need to apply any of it directly. By default in containers, if you specify neither -Xmx nor -XX:MaxRAMPercentage, Java chooses very good defaults that you might consider using instead:

  • If the container memory limit is less than 1 GB, set the maximum Java heap size to 50% of the container memory limit. This lines up with the observation above that you'll usually need at least about 512MB for native memory to be safe.
  • If the container memory limit is between 1-2GB, set the maximum Java heap size to the container memory limit minus 512MB. Again, this gives a generally safe amount of breathing room for native memory.
  • Otherwise, if the container memory limit is greater than 2GB, set the maximum Java heap size to 75% of the container memory limit.
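The three rules above can be sketched as a small calculation; sizes here are in MB, and default_heap_mb is just an illustrative name, not a real JVM option. Note that the rules agree at the boundaries: at a 1024MB limit, 50% and "limit minus 512MB" both give 512MB, and at 2048MB, "limit minus 512MB" and 75% both give 1536MB.

```shell
# Sketch of the default maximum-heap-size rules above (all sizes in MB).
default_heap_mb() {
  limit=$1
  if [ "$limit" -lt 1024 ]; then
    echo $(( limit / 2 ))       # < 1GB: 50% of the limit
  elif [ "$limit" -le 2048 ]; then
    echo $(( limit - 512 ))     # 1-2GB: limit minus 512MB
  else
    echo $(( limit * 3 / 4 ))   # > 2GB: 75% of the limit
  fi
}

default_heap_mb 512    # prints 256
default_heap_mb 1536   # prints 1024
default_heap_mb 4096   # prints 3072
```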

JIT Server

If you are tight on memory and would rather not reserve at least 512MB of native memory outside the Java heap for each container, consider the Semeru JIT Server technology, which performs most JIT compilation work in a remote container and thus eliminates some of the JIT-related native memory in each application container.
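For example, an application container can be pointed at a JITServer instance with OpenJ9 options along these lines in jvm.options (the hostname here is a placeholder for your own JITServer service; 38400 is the default port):

```
-XX:+UseJITServer
-XX:JITServerAddress=jitserver.example.svc
-XX:JITServerPort=38400
```

If the JITServer is unreachable, the JVM falls back to compiling locally, so this is a performance optimization rather than a hard dependency.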


In summary, if you are using container memory limits in Kubernetes or OpenShift:

  1. Consider simply specifying neither -Xmx nor -XX:MaxRAMPercentage, and sizing the Java heap indirectly through the container memory limit.
  2. If you do specify a maximum heap size:
    1. Consider using -XX:MaxRAMPercentage instead of -Xmx so that the heap size can be changed using the memory limit rather than re-building the image.
    2. Take care to give the container enough space for native memory needed outside of the Java heap. To be safe, this should be at least 512MB. To fine-tune this amount, take thread dumps after realistic exercising of the application and review the NATIVEMEMINFO section.
  3. If available physical memory is low, consider the Semeru JIT Server technology to reduce per-container JIT native memory demands when many pods are stacked on each node.
  4. If you experience the Linux OOM Killer terminating your pods, you'll most often need to increase your container memory limit.