In previous blog posts we explored optimizations for running Liberty in containers. Part 1
covered optimizations for the JVM and Part 2
covered Liberty containers basics. In this post, we will cover deployment options and performance optimizations for Liberty containers in Kubernetes environments.
Deployment
There are many choices for deploying a Liberty container into a Kubernetes environment. In this post we will cover a few of them: the Open Liberty Operator, direct configuration from YAML files, and deploying to OpenShift using the new-app command.
Open Liberty Operator
The Open Liberty Operator can be used to deploy Open Liberty applications in a Kubernetes environment. It creates several Custom Resource Definitions (CRDs) for deploying applications, managing logs and trace, and troubleshooting.
The OpenLibertyApplication (OLA) custom resource (CR) can be used to create an Open Liberty application from an image stream. The following is a simple example using an application image stream from quay.io/my-repo/my-app:1.0:
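A minimal manifest along those lines might look like the following sketch. The `apiVersion` assumes a recent operator release (`apps.openliberty.io/v1`), and the resource name `my-app` is illustrative; adjust both to match your environment:

```yaml
apiVersion: apps.openliberty.io/v1
kind: OpenLibertyApplication
metadata:
  name: my-app
spec:
  # Container image (or image stream) containing the application
  applicationImage: quay.io/my-repo/my-app:1.0
  replicas: 1
```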
To expose an OpenLibertyApplication using HTTPS, set service.port to 9443 and set route.termination to reencrypt and provide an SSL certificate to the pod. For example:
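A sketch of such a definition is shown below. The secret name `my-app-tls` is illustrative; it would hold the SSL certificate provided to the pod:

```yaml
apiVersion: apps.openliberty.io/v1
kind: OpenLibertyApplication
metadata:
  name: my-app
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  service:
    port: 9443
    # Secret containing the pod's SSL certificate
    certificateSecretRef: my-app-tls
  route:
    termination: reencrypt
```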
The certificate will be managed and re-created periodically; however, the pods must be restarted with the new certificate to reload and recreate the keystore.
The Open Liberty Operator contains many customization options that are beyond the scope of this post. For more information, see the operator's user guide.
Logging and Troubleshooting with the Open Liberty Operator
The Open Liberty Operator can be used to gather trace and server diagnostics using the OpenLibertyTrace and OpenLibertyDump custom resource definitions (CRDs). To use either CRD, you first need to enable serviceability storage on the OpenLibertyApplication (OLA). For example:
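A minimal sketch of enabling serviceability storage on an OLA (the resource name is illustrative):

```yaml
apiVersion: apps.openliberty.io/v1
kind: OpenLibertyApplication
metadata:
  name: my-app
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  serviceability:
    # The operator creates a PersistentVolumeClaim of this size
    size: 1Gi
```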
The above definition will cause the Open Liberty Operator to define a PersistentVolumeClaim (PVC) with the specified size (1Gi) and the access modes ReadWriteMany and ReadWriteOnce. The claim will be mounted in the `/serviceability` directory of every pod of the OpenLibertyApplication instance. Alternatively, you can specify an existing PersistentVolumeClaim using the `serviceability.volumeClaimName` field.
Once storage has been enabled on the OpenLibertyApplication, you can create an OpenLibertyTrace custom resource (CR) to gather trace from an existing Open Liberty server. Note that this can only be used to gather trace from an Open Liberty server that was created using the operator. Trace will be dynamically enabled on the Open Liberty pod that you specify without the need to update the configuration on the pod or restart it.
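For example, the following OpenLibertyTrace sketch enables a trace specification on a named pod. The resource name, trace specification string, and file limits are illustrative:

```yaml
apiVersion: apps.openliberty.io/v1
kind: OpenLibertyTrace
metadata:
  name: example-trace
spec:
  # Name of the Liberty pod to trace (must be operator-managed)
  podName: Specify_Pod_Name_Here
  # Detailed web container trace
  traceSpecification: "*=info:com.ibm.ws.webcontainer*=all"
  # Maximum size of each trace file, in MB
  maxFileSize: 20
  # Maximum number of trace files to keep
  maxFiles: 5
```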
The above definition would enable detailed web container trace for the pod `Specify_Pod_Name_Here` and set the maximum number of trace files and the maximum size of the files.
In some cases, Liberty support may request more advanced debugging information. In a normal Liberty environment, you would generate a server dump that contains detailed information on the current java stack and system internals using the `wlp/bin/server dump` command. With the Open Liberty Operator, you can generate the same diagnostics using the OpenLibertyDump CR:
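A sketch of such an OpenLibertyDump definition (the resource name is illustrative) requesting thread and heap information from a pod:

```yaml
apiVersion: apps.openliberty.io/v1
kind: OpenLibertyDump
metadata:
  name: example-dump
spec:
  # Name of the operator-managed Liberty pod to dump
  podName: Specify_Pod_Name_Here
  # Types of diagnostics to gather
  include:
    - thread
    - heap
```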
For more information on the Open Liberty Operator, see its documentation on OperatorHub.
Deployment through YAML Configuration
A second approach for deploying Liberty containers into a Kubernetes environment is manual deployment. This approach is a good choice when you are automating deployment through Tekton pipelines.
The following YAML definition can be used to deploy a simple Open Liberty instance to a Kubernetes cluster. This is a minimal example using the stock openliberty/open-liberty image:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: example-container
          image: openliberty/open-liberty
          ports:
            - containerPort: 9080
Deployment in OpenShift
OpenShift allows you to simplify some steps of deployment. To create a new application from a container image, you can use the `oc new-app` command. For example:
oc new-app openliberty/open-liberty
The above command will create a new deployment of Open Liberty and a corresponding Service. You can then create a route using the following command:
oc expose svc/open-liberty
These commands are exposing a Liberty image, but you can use a similar process to deploy a container that you have built.
Deployment in OpenShift using Source to Image (S2I)
Source to Image (S2I) is a toolkit that allows you to deploy a container directly from a source code repository. Liberty provides both Open Liberty and WebSphere Liberty S2I images that can be used to build a source repository using Apache Maven and then create a Liberty container with that application deployed. For example, to build a container from a GitHub repository and run it using Docker, you would use the following commands:
$ s2i build https://github.com/WASdev/sample.ferret.git ibmcom/websphere-liberty-s2i:<version>-java11 websphere-liberty-test
$ docker run -p 9080:9080 websphere-liberty-test
In OpenShift, you can again simplify this process using the new-app command:
oc new-app openliberty/open-liberty-s2i:<version>-java11~https://github.com/WASdev/sample.ferret.git
This will use the Open Liberty S2I builder to build the code in the git repository. It will then create a Deployment from the resulting container image and create a Service. You can again expose the application by creating a route using the command:
oc expose svc/sampleferret
There are many advanced options for using S2I for application deployment that are beyond the scope of this blog. For more information, see the OpenShift new-app command documentation, the OpenShift S2I documentation, or the Open Liberty S2I documentation.
Performance in a Kubernetes Environment
The JITServer is an interesting new capability in OpenJ9 JVMs that can provide potential performance and cost improvements in a Kubernetes environment. The traditional JIT (Just In Time) compiler improves performance by dynamically compiling Java bytecode to machine code at runtime. However, this performance improvement involves some tradeoffs.
The JIT compiler can require large, variable amounts of memory and processing resources, which means container sizing will have to take those resources into account. The JITServer allows the dynamic compilation to be moved to a separate container so that the JVM for the Liberty container has smaller and more predictable CPU and memory needs. This allows you to save resources while maintaining throughput.
Using a separate JITServer container could also provide more consistent quality of service and application robustness. With less "spiky" resource demands, it's possible that your application can provide more consistent throughput, and thus provide a more consistent experience for end users. Also, although it is somewhat rare, we have seen cases in the field where the JIT can lead to problems including native out of memory errors or JVM crashes. With a separate JITServer, your application is isolated from such failures. In a Kubernetes environment, the JITServer container will be automatically restarted after a crash and the application container will continue to run.
When considering whether to use the JITServer function, it's important to consider that the JITServer will perform best in an environment with low network latency. If your network is constrained, or if your application has plenty of CPU and memory resources relative to its compilation needs, the traditional JIT may be a better choice.
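As a sketch of how a JITServer is wired up with OpenJ9 (IBM Semeru Runtimes): a separate container runs the jitserver process, and client JVMs opt in via command-line options. The image tag and hostname below are illustrative; 38400 is the default JITServer port, and the server must be at the same OpenJ9/Semeru level as its clients:

```
# Run a JITServer instance in its own container
docker run -d --name jitserver -p 38400:38400 \
    ibm-semeru-runtimes:open-17-jre jitserver

# Point the Liberty JVM at it, e.g. in jvm.options:
-XX:+UseJITServer
-XX:JITServerAddress=jitserver
-XX:JITServerPort=38400
```

If the JITServer is unreachable, the client JVM falls back to compiling locally, so enabling these options does not make the server a hard dependency.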
For a deep dive on the potential cost savings from the JITServer, see the OpenJ9 blog.
The Open Liberty Guides site (https://openliberty.io/guides) contains several great resources for deploying Liberty microservices using Docker containers, Kubernetes environments, IBM Cloud, Azure, Google Cloud Platform, and Amazon Web Services.
IBM has provided a workshop for transitioning from traditional WebSphere environments to OpenShift in a GitHub repository here: https://github.com/IBM/openshift-workshop-was
Microsoft has a guide for deploying both Open Liberty and WebSphere Liberty applications on Azure here: https://docs.microsoft.com/en-us/azure/aks/howto-deploy-java-liberty-app