WebSphere Application Server & Liberty


Lessons from the field #8: Liberty in containers part 1: Java performance

By Brent Daniel posted Wed August 25, 2021 01:21 PM

  
As you begin to containerize Liberty applications, it's important to keep in mind how running in a container environment can affect performance. Performance tuning a Liberty container begins with tuning the JVM. The standard mechanisms that Java uses to detect system resources such as memory and CPU do not take into account that these resources may be limited by the container environment. For example, the JVM will use the amount of physical memory on a system to determine the default heap limit, but that value may be much higher than the memory limit imposed by the container. In that situation, the JVM would be killed when it tries to allocate memory beyond the container limit.

To address this limitation, you could specify the minimum and maximum heap values with the -Xms and -Xmx options. However, this is not the recommended approach: the values are static and would need to be changed whenever the container memory limit changes. Instead, you can use options that Java provides to set the initial and maximum heap sizes as a percentage of the memory available to the container. The -XX:InitialRAMPercentage option replaces -Xms and sets the initial heap as a percentage of container memory, and the -XX:MaxRAMPercentage option replaces -Xmx with a percentage-based equivalent.
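As a sketch, these options can be set in the server's jvm.options file, which the Liberty container image reads from /config/jvm.options (the percentages below are illustrative, not a recommendation):

```properties
# Size the heap relative to the container's memory limit
# rather than the host's physical memory (values are illustrative)
-XX:InitialRAMPercentage=25.0
-XX:MaxRAMPercentage=75.0
```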

In other situations, a JVM tuned for a bare metal or hypervisor environment may be too conservative. In a traditional environment, the Java process is likely sharing resources with many other processes, but in a container it is likely to be the only process using significant resources. If the physical memory on a system is above 2 GB, the default heap size will be 1/4 of the physical memory, which will likely result in underutilization of memory resources.

Fortunately, the JVM has ways to address these limitations. The -XX:+UseContainerSupport option tells the JVM that it is running in a container environment and should use default values appropriate for that environment. This option is the default when running in a container on recent versions of both HotSpot and OpenJ9, but you should verify that it is in effect (you can check using the options -XX:+UnlockDiagnosticVMOptions -XX:+PrintFlagsFinal). On OpenJ9 JVMs with UseContainerSupport enabled, the default heap size when physical memory is greater than 2 GB is 3/4 of physical memory rather than 1/4.
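As a quick sketch of that check, you can dump the JVM's final flag values and filter for the container-related settings (the flag names shown are HotSpot's; OpenJ9's output differs slightly):

```shell
# Print the final flag values and filter for container-related settings.
# On HotSpot, UseContainerSupport and the RAMPercentage flags appear here.
java -XX:+UnlockDiagnosticVMOptions -XX:+PrintFlagsFinal -version 2>/dev/null \
  | grep -E 'UseContainerSupport|RAMPercentage'
```

If UseContainerSupport shows as true, the percentage-based heap options are computed against the container's memory limit rather than the host's physical memory.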

Java shared class cache
Java makes use of a feature called Class Data Sharing to improve performance. This feature allows information about loaded classes to be used across several JVMs.

In a containerized environment there will most likely be only one JVM per container instance, so you will not automatically reap the benefits of the cache across multiple JVMs. You can configure your environment to take advantage of this improved performance by using shared storage. The WebSphere Liberty image looks for the shared class cache in the directory /opt/ibm/wlp/output/defaultServer/.classCache, so you can take advantage of the cache across containers by using common storage for that location. Here are a few ways to accomplish that:

Example 1 - Share a volume on the host
If you are running multiple docker container instances on a host machine, you can run the container image using a volume mount. The following command will run the container image "app" with the shared class cache directory mapped to the host directory /tmp/websphere-liberty/classCache:

docker run -d -p 80:9080 -p 443:9443 \
-v /tmp/websphere-liberty/classCache:/opt/ibm/wlp/output/defaultServer/.classCache app

Example 2 - Create a named volume container
You can also create a container that exposes a volume to other containers. Use the following command to create a volume named "classcache":

docker run -e LICENSE=accept -v /opt/ibm/wlp/output/defaultServer/.classCache \
--name classcache websphere-liberty true

You can then make use of the volume in your application container "app" using the following command:

docker run -d -p 80:9080 -p 443:9443 --volumes-from classcache app

Note that the shared class cache cannot be stored on a networked file system, so it is not possible to use the cache from an NFS persistent volume in a Kubernetes environment.

In cases where the class cache will not be shared across containers, it's important to make sure the cache is pre-populated with your application classes when the container image is built, by running the "configure.sh" script as the last step in your Dockerfile.
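A minimal sketch of that ordering (the application and configuration file names here are hypothetical):

```dockerfile
# Hypothetical Dockerfile based on the WebSphere Liberty image
FROM websphere-liberty:latest
COPY --chown=1001:0 server.xml /config/
COPY --chown=1001:0 myapp.war /config/dropins/
# Run configure.sh last so the shared class cache is populated
# with the application's classes at image build time
RUN configure.sh
```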

Some of the content in the shared class cache is sensitive to the heap geometry. If the server is started with heap options that differ significantly from those used when the cache was created, it's possible that the cache will not be used. If you notice fluctuations in container startup performance, you may want to pin the maximum heap size using the -Xmx option. Specifying smaller values for the maximum heap size can help to ensure that the shared class cache remains compatible.
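For example, fixing the maximum heap in jvm.options (the value below is purely illustrative) keeps the heap geometry, and therefore the cache, consistent across restarts:

```properties
# A fixed maximum heap avoids geometry changes that can
# prevent the shared class cache from being reused
-Xmx1024m
```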

Keep an eye out for future blog posts on Liberty in containers, and check out our team's previous entry in the "Lessons from the field" series: Logging WebSphere Application Server traditional performance statistics


#app-platform-swat
#automation-portfolio-specialists-app-platform
#Java
#WebSphere
#WebSphereApplicationServer(WAS)
#WebSphereLiberty


Comments

Fri September 03, 2021 02:04 PM

Great article Brent. The only way it might have been better <for me> is with pictures. I am trying to understand the scope of a JVM to the "system" vs container instance when it comes to determining the heap size it has. I'll come find you...very interesting!

Thu August 26, 2021 03:44 PM

Excellent and very relevant information!  Thanks Brent!