WebSphere Application Server & Liberty

Lessons from the field #10: Liberty in Containers Part 3: Kubernetes

By Brent Daniel posted Wed October 27, 2021 10:23 AM

In previous blog posts we explored optimizations for running Liberty in containers. Part 1 covered optimizations for the JVM, and Part 2 covered Liberty container basics. In this post, we will cover deployment options and performance optimizations for Liberty containers in Kubernetes environments.

Deployment
There are many choices for deploying a Liberty container into a Kubernetes environment. In this post we will cover a few of them: the Open Liberty Operator, direct configuration from YAML files, and deploying to OpenShift using the new-app command. 

Open Liberty Operator


The Open Liberty Operator can be used to deploy Open Liberty applications in a Kubernetes environment. It provides several Custom Resource Definitions (CRDs) for deploying applications, managing logs and trace, and troubleshooting.

The OpenLibertyApplication (OLA) custom resource (CR) can be used to create an Open Liberty application from a container image. The following is a simple example using an application image from quay.io/my-repo/my-app:1.0:

apiVersion: openliberty.io/v1beta1
kind: OpenLibertyApplication
metadata:
  name: my-liberty-app
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  service:
    type: ClusterIP
    port: 9080
  expose: true
  storage:
    size: 2Gi
    mountPath: "/logs"
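
Once the definition is saved to a file, it can be applied like any other Kubernetes resource. A minimal sketch, assuming the YAML above is saved as my-liberty-app.yaml:

kubectl apply -f my-liberty-app.yaml
kubectl get openlibertyapplications

The operator reacts to the new resource by creating the underlying Deployment and Service (and, because expose is set, an externally accessible route or ingress).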

To expose an OpenLibertyApplication using HTTPS, set service.port to 9443, set route.termination to reencrypt, and provide a TLS certificate to the pod. For example:

apiVersion: openliberty.io/v1beta1
kind: OpenLibertyApplication
metadata:
  name: demo-app
spec:
  expose: true
  route:
    termination: reencrypt
  applicationImage: '$IMAGE'
  service:
    annotations:
      service.beta.openshift.io/serving-cert-secret-name: demo-app-svc-tls
    certificateSecretRef: demo-app-svc-tls
    port: 9443

The certificate will be managed and re-created periodically; however, the pods must be restarted to pick up the new certificate and recreate the keystore.
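
For example, a rolling restart picks up the regenerated certificate. A sketch, assuming the operator-created Deployment shares the name of the OpenLibertyApplication above:

oc rollout restart deployment/demo-app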

The Open Liberty Operator offers many customization options that are beyond the scope of this post. For more information, see the operator's user guide.


Logging and Troubleshooting with the Open Liberty Operator

The Open Liberty Operator can be used to gather trace and server diagnostics using the OpenLibertyTrace and OpenLibertyDump custom resource definitions (CRDs). To use either CRD, you will first need to enable serviceability storage on the OpenLibertyApplication (OLA). For example:

apiVersion: openliberty.io/v1beta1
kind: OpenLibertyApplication
metadata:
  name: my-liberty-app
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  serviceability:
    size: 1Gi
    storageClassName: nfs

The above definition will cause the Open Liberty Operator to define a PersistentVolumeClaim (PVC) with the specified size (1Gi) and access modes ReadWriteMany and ReadWriteOnce. The claim will be mounted in the `/serviceability` directory in every pod of the OpenLibertyApplication instance. Alternatively, you can specify an existing PersistentVolumeClaim using the serviceability.volumeClaimName parameter.
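
If you would rather reuse an existing claim, a minimal sketch, assuming a pre-created claim with the hypothetical name existing-serviceability-pvc:

apiVersion: openliberty.io/v1beta1
kind: OpenLibertyApplication
metadata:
  name: my-liberty-app
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  serviceability:
    volumeClaimName: existing-serviceability-pvc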

Once storage has been enabled on the OpenLibertyApplication, you can create an OpenLibertyTrace custom resource (CR) to gather trace from an existing Open Liberty server. Note that this can only be used to gather trace from a server that was created using the operator. Trace will be dynamically enabled on the Open Liberty pod that you specify, without the need to update the configuration on the pod or restart it. For example:

apiVersion: openliberty.io/v1beta1
kind: OpenLibertyTrace
metadata:
  name: example-trace
spec:
  podName: Specify_Pod_Name_Here
  traceSpecification: "*=info:com.ibm.ws.webcontainer*=all"
  maxFileSize: 20
  maxFiles: 5

The above definition would enable detailed web container trace for the pod `Specify_Pod_Name_Here` and limit the trace output to at most five files of 20 MB each.
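
The trace output is written to the shared serviceability volume rather than the pod's local filesystem, so it can be copied off the cluster once tracing is complete. A sketch, assuming the pod name from the example above:

oc exec Specify_Pod_Name_Here -- ls -R /serviceability
oc cp Specify_Pod_Name_Here:/serviceability ./trace-output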

In some cases, Liberty support may request more advanced debugging information. In a normal Liberty environment, you would generate a server dump containing detailed information on the current Java threads and system internals using the `wlp/bin/server dump` command. With the Open Liberty Operator, you can generate the same diagnostics using the OpenLibertyDump CR:

apiVersion: openliberty.io/v1beta1
kind: OpenLibertyDump
metadata:
  name: example-dump
spec:
  podName: Specify_Pod_Name_Here
  include:
    - thread
    - heap
    - system
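
The generated dump file also lands on the serviceability volume, and the CR's status should record the location of the file once the dump completes. A sketch for checking the result and retrieving it, reusing the oc cp pattern shown for trace:

oc get openlibertydump example-dump -o yaml
oc cp Specify_Pod_Name_Here:/serviceability ./dump-output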


For more information on the Open Liberty Operator, see the documentation on OperatorHub.

Deployment through YAML Configuration


A second approach for deploying Liberty containers into a Kubernetes environment is to write the resource definitions yourself. This approach is a good choice when you are automating deployment, for example through Tekton pipelines.

The following YAML definition can be used to deploy a simple Open Liberty instance to a Kubernetes cluster:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
  labels:
    app: example
spec:
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: example-container
        image: openliberty/open-liberty
        ports:
        - containerPort: 9080
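
A Deployment alone does not give other pods or external clients a stable endpoint, so you would typically pair it with a Service. A minimal sketch that matches the labels and container port above:

apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  ports:
  - port: 9080
    targetPort: 9080

Both definitions can then be applied with kubectl apply -f.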

Deployment in OpenShift


OpenShift allows you to simplify some steps of deployment. To create a new application from a container image, you can use the new-app command. For example:

oc new-app openliberty/open-liberty

The above command will create a new deployment of Open Liberty and a corresponding Service. You can then create a route using the following command:

oc expose svc/open-liberty

These commands deploy and expose the stock Open Liberty image, but you can use the same process for a container image that you have built.
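
For example, to deploy the hypothetical application image used earlier in this post (assuming your cluster can pull it from the registry):

oc new-app quay.io/my-repo/my-app:1.0
oc expose svc/my-app

Note that new-app derives the names of the resources it creates from the image name.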

Deployment in OpenShift using Source to Image (S2I) 


Source to Image (S2I) is a toolkit that allows you to build and deploy a container directly from a source code repository. Liberty provides both Open Liberty and WebSphere Liberty S2I images that can be used to build a source repository using Apache Maven and then create a Liberty container with that application deployed. For example, to build a container from a GitHub repository and run it using Docker, you would use the following commands:

$ s2i build https://github.com/WASdev/sample.ferret.git ibmcom/websphere-liberty-s2i:21.0.0.8-java11 websphere-liberty-test
$ docker run -p 9080:9080 websphere-liberty-test

In OpenShift, you can again simplify this process using the new-app command:

oc new-app openliberty/open-liberty-s2i:21.0.0.8-java11~https://github.com/WASdev/sample.ferret.git

This will use the Open Liberty S2I builder image to build the code in the Git repository. It will then create a Deployment from the resulting container image and create a Service. You can again expose the application by creating a route:

oc expose svc/sampleferret

There are many advanced options for using S2I for application deployment that are beyond the scope of this blog. For more information, see the OpenShift new-app command documentation, the OpenShift S2I documentation, or the Open Liberty S2I documentation.

Performance in a Kubernetes Environment

JITServer


The JITServer is an interesting new capability in OpenJ9 JVMs that can provide performance and cost improvements in a Kubernetes environment. The traditional Just-In-Time (JIT) compiler improves performance by dynamically compiling Java bytecode to machine code at runtime. However, this performance improvement involves some tradeoffs.

The JIT compiler can require large, variable amounts of memory and processing resources, which means container sizing will have to take those resources into account. The JITServer allows the dynamic compilation to be moved to a separate container so that the JVM for the Liberty container has smaller and more predictable CPU and memory needs. This allows you to save resources while maintaining throughput.
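
On OpenJ9-based JVMs (such as IBM Semeru Runtimes), the client side is enabled through JVM options. A minimal sketch for a Liberty container's jvm.options file, assuming a JITServer instance is reachable in the cluster as jitserver-service on the default port 38400 (the service name is illustrative):

-XX:+UseJITServer
-XX:JITServerAddress=jitserver-service
-XX:JITServerPort=38400

The JITServer itself runs as a separate container started with the jitserver launcher from a matching OpenJ9 JDK level.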

Using a separate JITServer container can also provide more consistent quality of service and application robustness. With less "spiky" resource demands, your application can deliver more consistent throughput, and thus a more consistent experience for end users. Also, although it is somewhat rare, we have seen cases in the field where the JIT can lead to problems including native out-of-memory errors or JVM crashes. With a separate JITServer, your application is isolated from such failures: in a Kubernetes environment, the JITServer container will be automatically restarted after a crash and the application container will continue to run.

When deciding whether to use JITServer, keep in mind that it performs best in an environment with low network latency. If your network is constrained, or if your application has ample CPU and memory resources relative to its compilation needs, the traditional in-process JIT may be a better choice.

For a deep dive on the potential cost savings from JITServer, see the OpenJ9 blog.

Further Information

The Open Liberty Guides site (https://openliberty.io/guides) contains several great resources for deploying Liberty microservices using Docker containers, Kubernetes environments, IBM Cloud, Azure, Google Cloud Platform, and Amazon Web Services.

IBM has provided a workshop for transitioning from traditional WebSphere environments to OpenShift in a GitHub repository here: https://github.com/IBM/openshift-workshop-was

Microsoft has a guide for deploying both Open Liberty and WebSphere Liberty applications on Azure here: https://docs.microsoft.com/en-us/azure/aks/howto-deploy-java-liberty-app


#app-platform-swat
#automation-portfolio-specialists-app-platform
#Java
#WebSphere
#WebSphereApplicationServer(WAS)
#WebSphereLiberty