Introduction
With version 9.7.1, HATS applications can transition seamlessly to Docker and Kubernetes platforms, allowing users to package their code into a single container image for efficient deployment. As organizations embark on mainframe modernization journeys, a cloud strategy emerges as a cornerstone, and offering HATS support on the cloud lets them leverage cloud advantages while advancing their modernization efforts. In this context, this article explores deploying HATS in a local Kubernetes environment and addresses key challenges, such as license management and data synchronization, that accompany this move toward cloud-powered innovation.
Steps to set up a HATS project
- Create the project: create one or more HATS projects.
Figure 1. Create a project.
- Add the required JAR files (listed below) to enable cloud support for the project. These JARs are part of the product installation and can be found at the installation path below:
IBM\IMShared\plugins\com.ibm.hats.core_9.7.1000.XXXXXXXXX\lib\adminCloud
Figure 2. Jars List
- Then, open the project's manifest file and enable the JARs.
- Perform the necessary project customization and export the project as an EAR (Enterprise Archive) file.
Figure 3. Export as EAR Project
Steps to set up the configuration files from the HATS documentation:
- Download the configuration files from the IBM HATS documentation. These files help to set up the HATS deployment in a Kubernetes environment.
- Place the exported EAR files into the Docker folder.
- Update the EAR file names to match the project.
- Make changes to the server.xml, jvm.options, runtime.properties, and copyruntime.sh files as needed.
- Change the service type to LoadBalancer and ensure that ingress and JMX (Java Management Extensions) are enabled in values.yaml.
Below is a description of each configuration file.
· Dockerfile
Figure 4. Dockerfile
The Dockerfile performs the following steps:
- Specifies a base image and switches to the root user for installation tasks.
- Installs essential utilities such as iputils-ping and jq to support network diagnostics and JSON processing.
- Copies the HATS project EAR files to the Liberty server's dropins folder, making them available for deployment.
- Copies key configuration files such as server.xml and jvm.options to their respective locations to configure the server.
- Creates a dedicated directory to store the runtime.properties file, and sets up additional directories with appropriate permissions for project-related files.
- Copies the script files needed at runtime into the container and grants them executable permissions.
- For security, switches to a non-root user before setting the default command to run the wrapper.sh script on container startup.
This Dockerfile ensures that HATS applications are properly containerized with all required configurations, ready to be deployed in a Kubernetes environment using Helm charts.
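A minimal sketch of such a Dockerfile, assuming a WebSphere Liberty base image and the script and project names used in this article (the exact base image tag, user ID, and script paths are illustrative):

```dockerfile
# Illustrative base image; the actual image and tag may differ
FROM ibmcom/websphere-liberty:latest

# Switch to root for installation tasks
USER root

# Install network and JSON utilities (assumes an Ubuntu-based Liberty image)
RUN apt-get update && apt-get install -y iputils-ping jq && rm -rf /var/lib/apt/lists/*

# Copy the HATS project EAR files to the Liberty dropins folder
COPY HATSProj1.ear HATSProj2.ear /config/dropins/

# Copy server configuration files to their Liberty locations
COPY server.xml /config/server.xml
COPY jvm.options /config/jvm.options

# Directories for runtime.properties and project-related files (paths from this article)
RUN mkdir -p /home/conn/HATSProj1 /home/conn/HATSProj2 && \
    chown -R 1001:0 /home/conn && chmod -R g+rw /home/conn

# Copy the runtime scripts and make them executable (script folder is illustrative)
COPY copyruntime.sh updatepod.sh wrapper.sh /opt/scripts/
RUN chmod +x /opt/scripts/*.sh

# Run as a non-root user for security
USER 1001

# Start via the wrapper script on container startup
CMD ["/opt/scripts/wrapper.sh"]
```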
· copyruntime.sh: Script for Copying Runtime Properties
In our Docker setup, the runtime.properties file must be available in the appropriate directory for each project. This is achieved using a shell script named copyruntime.sh.
Figure 5. Copyruntime.sh
These commands copy the runtime.properties file from the source directory (here, somepath) to the paths held in the environment variables (/home/conn/HATSProj1 and /home/conn/HATSProj2).
The -R option makes the copy recursive, preserving the directory structure.
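A minimal sketch of what copyruntime.sh may contain; the environment variable names are hypothetical placeholders, and somepath is the source-directory placeholder from the original figure:

```sh
#!/bin/sh
# Copy runtime.properties recursively into each project's directory.
# HATS_PROJ1_PATH and HATS_PROJ2_PATH are hypothetical variable names that
# would hold /home/conn/HATSProj1 and /home/conn/HATSProj2 respectively.
cp -R /somepath/runtime.properties "$HATS_PROJ1_PATH"
cp -R /somepath/runtime.properties "$HATS_PROJ2_PATH"
```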
· JVM Options for JMX Remote Monitoring
Figure 6. jvm.options
Our Docker setup uses the jvm.options file to specify JVM arguments that enable and configure Java Management Extensions (JMX) remote monitoring. This is crucial for monitoring and managing the Java application inside the container (using the HATS Admin Console).
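A representative set of options, assuming the JMX port 8888 configured later in values.yaml; the hostname placeholder is rewritten with the pod's IP by updatepod.sh at startup, and disabling authentication and SSL is shown only for simplicity:

```
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=8888
-Dcom.sun.management.jmxremote.rmi.port=8888
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
-Djava.rmi.server.hostname=POD_IP_PLACEHOLDER
```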
· updatepod.sh: Script for Updating JVM Options with Pod IP
Figure 7. updatepod.sh
The updatepod.sh script should be executed as part of the container’s startup process to dynamically configure the JVM options with the pod’s IP address.
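A minimal sketch of such a startup step, assuming jvm.options sits at Liberty's default /config/jvm.options location and uses the hostname placeholder shown above:

```sh
#!/bin/sh
# Resolve this pod's IP and patch it into jvm.options so that remote JMX
# clients (e.g. the HATS Admin Console) can reach the JVM in this pod.
POD_IP=$(hostname -i)
sed -i "s|-Djava.rmi.server.hostname=.*|-Djava.rmi.server.hostname=${POD_IP}|" /config/jvm.options
```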
· wrapper.sh: Wrapper Script for Initializing and Running the Server
Figure 8. wrapper.sh
In our Docker setup for HATS applications, the wrapper.sh script performs the initial configuration tasks and then starts the server, ensuring that all necessary setup steps complete before the server begins running.
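A sketch of the wrapper's structure, assuming the script locations from the Dockerfile sketch above and Liberty's default server name:

```sh
#!/bin/sh
# Run the one-time setup steps, then hand over to the Liberty server.
/opt/scripts/copyruntime.sh   # place runtime.properties in each project directory
/opt/scripts/updatepod.sh     # inject this pod's IP into jvm.options for JMX
exec /opt/ibm/wlp/bin/server run defaultServer
```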
· values.yaml
Figure 9. replicaCount from Values.yaml
The replicaCount parameter is crucial for ensuring the application's high availability and load balancing. With replicaCount set to 2, Kubernetes keeps two instances of the pod running at all times.
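In values.yaml this is a single top-level setting:

```yaml
replicaCount: 2
```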
Figure 10. Docker image section from Values.yaml
repository: Specifies the Docker image repository from which the application image is pulled. In this case, it is set to adminsupport.
pullPolicy: Determines when the image should be pulled from the repository. The value IfNotPresent means the image will only be pulled if it is not already present on the node.
tag: Specifies the tag of the Docker image to be used. The default tag here is ‘latest’.
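Put together, the image section looks like this:

```yaml
image:
  repository: adminsupport
  pullPolicy: IfNotPresent
  tag: "latest"
```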
Figure 11. service section from Values.yaml
type: Specifies the type of Kubernetes service to create. LoadBalancer creates an external load balancer that routes traffic to the exposed service.
port: The port on which the service is exposed. Here, it is set to 9080.
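The corresponding service section:

```yaml
service:
  type: LoadBalancer
  port: 9080
```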
Figure 12. jmx section from Values.yaml
enabled: When set to true, this enables JMX for monitoring and management.
name: The name assigned to the JMX service.
port: The port number on which the JMX service will be exposed externally. Here, it is set to 8888.
targetPort: The port number on which the application listens for JMX connections inside the pod. Here, it is set to 8888.
type: Specifies the type of service for JMX. ClusterIP exposes the service on a cluster-internal IP.
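Put together, with an illustrative service name:

```yaml
jmx:
  enabled: true
  name: hats-jmx      # illustrative name
  port: 8888
  targetPort: 8888
  type: ClusterIP
```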
Figure 13. Ingress section from Values.yaml
enabled: When set to true, this enables the Ingress resource.
className: Specifies the Ingress class to use. It is set to nginx, indicating that an NGINX Ingress controller is used.
annotations: Customizes the behavior of the Ingress controller using specific annotations.
hosts: Defines the host and path rules for routing traffic.
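A sketch of the ingress section, following the standard Helm chart layout (the host name is illustrative, and the annotations block is left empty as a placeholder for controller-specific entries):

```yaml
ingress:
  enabled: true
  className: nginx
  annotations: {}               # NGINX-specific annotations go here
  hosts:
    - host: hats.example.com    # illustrative host
      paths:
        - path: /
          pathType: Prefix
```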
Deploy the HATS project in a Kubernetes environment:
Build the Docker image:
| Command | Description |
| --- | --- |
| docker build -t [name:tag] . | Builds an image from a Dockerfile located in the current folder; the '.' denotes the build context (the current folder). |
| docker build -t [name:tag] -f [filename] . | Builds an image from a Dockerfile located in a different folder, specified with -f; the '.' still supplies the build context. |
Figure 14. Docker Build cmd
Figure 15. Docker image
Figure 16. Docker Image viewed from Docker Desktop
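As an illustrative example, the image name and tag should match the repository and tag configured in values.yaml (adminsupport and latest in this article):

```sh
# Build the image so that image.repository:image.tag in values.yaml resolves to it
docker build -t adminsupport:latest .
```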
Install the Helm Chart using the following command:
helm install <name> <chart> # Install the chart with a name
Figure 17. helm chart install cmd
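For example, assuming the chart directory is named hats-chart (a hypothetical name) and the release is called hats-admin, the deployment can be installed and verified as follows:

```sh
helm install hats-admin ./hats-chart   # install the chart as release "hats-admin"
helm list                              # confirm the release is deployed
kubectl get pods                       # with replicaCount: 2, two pods should be running
```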
Figure 18. Containers started by Helm.
Figure 19. Helm Chart List
Figure 20. Deployment details
Figure 21. Server Log from the container in Docker Desktop
Figure 22. HATSProj1
Figure 23. HATSProj2
Admin Console with management scope enabled:
Figure 24. Admin console view
In Figure 24, the admin console displays two applications deployed in separate pods. Users can manage and access details for all applications across the various pods.