WebSphere Application Server & Liberty


GitOps SVT Series #1: What is GitOps and how are we using it in SVT?

By Monica Tamboli posted Fri September 09, 2022 11:49 AM

What comes to your mind when you hear the term GitOps? For me, it is the ability to manage your infrastructure resources the same way you manage your application code in Git. Any change to the cluster is made by creating a pull request (PR), which is reviewed and merged. With this, your resources get created on the cluster without anyone even needing to log on to it. This allows version control and rollback of any configuration updates, in addition to many other benefits.

I am the System Verification Test (SVT) lead for WebSphere Hybrid Edition (WSHE) and would like to share how our team got started with GitOps and is benefiting from it. We will have a series of these blogs to show how we gradually extended this framework. A similar approach can benefit anyone looking to automate continuous deployment in their CI/CD pipeline.

Our team is always looking to mimic customers and automate as much as possible to stay efficient. That was the motivation to explore GitOps as it gained popularity. We have many Java enterprise applications that we want to deploy to Kubernetes clusters to test our runtimes (Open Liberty, WebSphere Liberty, and traditional WAS) and operators (WebSphere Liberty Operator, Open Liberty Operator, Runtime Component Operator). We also want to deploy prerequisites like Db2 and JMeter using this framework.

We started our investigation with the OpenShift GitOps operator, which provides Argo CD as a controller. Argo CD is responsible for watching a specified Git repository (the desired state) and keeping the cluster (the actual state) synchronized with it. The Git repository holds all the infrastructure-as-code (e.g. YAML files) for configuring your cluster resources. Ideally, you want your entire cluster to be configured from the YAML files in the Git repository, but you can also try this approach for only a few projects.
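To make the desired-state idea concrete, here is a minimal sketch of an Argo CD `Application` manifest that tells the controller which Git repository and path to watch and which cluster namespace to keep in sync. The repository URL, application name, and namespaces below are hypothetical placeholders for illustration, not our actual configuration:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: svt-apps                    # hypothetical name
  namespace: openshift-gitops       # where OpenShift GitOps runs Argo CD by default
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/gitops-config  # hypothetical repo
    targetRevision: main
    path: apps                      # directory in the repo holding the YAML files
  destination:
    server: https://kubernetes.default.svc   # the local cluster
    namespace: svt                  # hypothetical target namespace
  syncPolicy:
    automated:
      prune: true                   # delete resources removed from Git
      selfHeal: true                # revert manual changes made on the cluster
```

With `automated` sync enabled, merging a PR into the watched path is all it takes for the change to reach the cluster.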

This framework works with the following two scenarios, which can be compared to development and production scenarios in a typical environment:

Scenario 1 (Development): Configure short-lived Kubernetes clusters to deploy a few applications as needed. Application owners have created declarative artifacts for their applications so they can be deployed to any cluster. Anyone interested in deploying an application can have their Argo CD synchronize with the Git repository, and the application gets deployed with all its prerequisites. This has reduced application deployment time tremendously, as users don't have to know about application prerequisites, setup, and deployment details.
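As an illustration of such a declarative artifact, a Liberty application can be described with an `OpenLibertyApplication` custom resource handled by the Open Liberty Operator. The name below is a hypothetical placeholder, and the image shown is a public Open Liberty getting-started sample rather than one of our SVT applications:

```yaml
apiVersion: apps.openliberty.io/v1
kind: OpenLibertyApplication
metadata:
  name: sample-app                  # hypothetical application name
spec:
  # public Open Liberty sample image, used here as a stand-in
  applicationImage: icr.io/appcafe/open-liberty/samples/getting-started
  replicas: 1
  expose: true                      # create a Route/Ingress for external access
```

Once a file like this is in the watched repository, anyone's Argo CD instance can deploy the application without knowing its setup details.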

Scenario 2 (Production): Long-running Kubernetes clusters where applications run continuously. The long-running clusters have their own repository with any customizations for long running. If a cluster needs to be recreated due to failures, all the resources (defined declaratively in Git and configured with Argo CD) are created automatically.

It is easy to get started, but it gets complicated as the scope of the project grows: there are many design decisions around managing YAML files across repositories, how to structure them, and how to avoid duplication. Success depends on being able to convert your manual processes into a declarative language like YAML.
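One possible repository layout for such a setup is sketched below. The directory names are hypothetical, and the exact structure (per-cluster overlays, shared base definitions, an "app of apps" bootstrap) is itself one of the design decisions mentioned above:

```
gitops-config/                # hypothetical repository
├── bootstrap/                # Argo CD Application definitions (app-of-apps pattern)
├── apps/
│   ├── app-one/              # declarative artifacts per application
│   └── app-two/
├── prereqs/
│   ├── db2/                  # prerequisite deployments
│   └── jmeter/
└── clusters/
    └── long-running/         # customizations for long-running clusters
```

Separating shared application definitions from per-cluster customizations is one way to avoid duplicating YAML as the number of clusters grows.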

It is not magic, but your organization can definitely see improved productivity, repeatability, stability, consistency, and standardization if you invest time in this approach. Please look out for future blogs in this series to get started with this GitOps approach.