Originally written by William Federkiel.
Containerization is everywhere these days, and technologists are scrambling to adopt it in their organizations. But what exactly is it? Is it actually beneficial, or just a fad? What’s the best way to leverage it? And most importantly, how does it relate to IBM UrbanCode Velocity?
Containers: Not Your Parents’ Tupperware
In a way, a container can be thought of as similar to a virtual machine: an isolated environment in which software can be run without affecting, and without being affected by, anything external to its environment. Traditional VMs, though, have a number of disadvantages, primarily stemming from the overhead of running a complete operating system in each VM; this duplication consumes resources and requires configuration and support systems to be managed individually in each one. Containers, by contrast, share the kernel (almost always Linux) of the host machine, i.e., the machine running the containerization platform. This minimizes the aforementioned duplication and is therefore considerably more efficient in its use of resources.
More specifically, though, what benefits does containerization bring? There are many ways to answer this, but the most fundamental advantage is that the environment in which an application is deployed becomes predictable and absolutely consistent. Application developers specify everything that is needed to run their software, from dependencies to system configuration, and all of it is built into the container image. When it comes time for a user to deploy the application, all they need to do is deploy the image, and everything needed is taken care of automatically. There is no more need to configure application servers, set up classpaths, or worry about port mapping. Another benefit, not to be underestimated, is that containers prevent different applications from interfering with each other. Each is isolated in its own container, running on its own tailored set of dependencies, eliminating conflicts arising from differing requirements.
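To make that concrete, here is a sketch of what "everything built into the image" looks like in practice: a hypothetical Dockerfile for a small Node.js service. The file names, base image, and port are illustrative assumptions, not anything specific to IBM UrbanCode Velocity.

```dockerfile
# Hypothetical Node.js service: every dependency and setting is declared
# here, so the resulting image behaves the same wherever it is deployed.
FROM node:18-alpine           # pinned base image: runtime and OS layer
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev         # dependencies resolved at build time, not deploy time
COPY . .
ENV PORT=8080                 # configuration baked in (overridable at run time)
EXPOSE 8080
CMD ["node", "server.js"]
```

Once built (`docker build -t my-service .`), the image is the unit of deployment: anyone who can run containers can run this application without installing Node.js or its dependencies themselves.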
Container Orchestration: Bringing Order to Chaos
So, we know about the benefits that containerized application deployment brings. But if a more complex application requires multiple containers, or if you want to deploy multiple containerized applications at once, how do you manage them all? The answer lies in container orchestration platforms. Put simply, such platforms manage the deployment, monitoring, and interaction of multiple containers, abstracting the complexity of these tasks so that administrators can think in terms of applications, rather than individual containers.
Docker Compose: The Developer’s Playground
Some of you who have previous experience with containerized applications may be thinking: isn't this what Docker Compose does? The answer, unfortunately, is no. Docker Compose does allow a developer to specify the basic skeleton of an application, enough to run it in a basic capacity, but it is intended only for limited development, prototyping, and automated testing scenarios. It provides none of the robust deployment, monitoring, security, configuration, or advanced networking tools that characterize container orchestration platforms. Compose has no concept of authentication or authorization, and it is not capable of efficient resource management.
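A minimal Compose file illustrates the point; the service names and images below are hypothetical. Notice what is present, a simple skeleton of services and ports, and what is absent: no replicas, no health-driven recovery, no access control, no resource scheduling.

```yaml
# Hypothetical docker-compose.yml: enough to wire an app to its database
# for local development, and not much more.
services:
  web:
    build: .
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example   # plain-text configuration; no secrets management
```

This is genuinely convenient on a developer's laptop, which is exactly the scope Compose was designed for.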
Kubernetes: Getting Everyone Rowing Together
Kubernetes, an English transliteration of the Greek κυβερνήτης, literally means helmsman. Just as a helmsman steers a ship, navigating a massive assembly of wooden planks, canvas sails, and a crew of sailors to its destination, so too does Kubernetes marshal its resources to run and manage the applications deployed on it. A powerful authorization framework governs access to the platform and regulates any configuration changes. Advanced resource allocation tools allow developers and administrators alike to ensure optimal distribution of computational loads. Abstractions like deployments, pods, and replica sets allow fine-grained specification of behavior and reproducible deployments. And robust concepts of liveness and readiness allow the platform to detect, and in many cases recover from, aberrant states.
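The abstractions named above fit in a single short manifest. The sketch below is a hypothetical Deployment (the service name, image, port, and health endpoint are illustrative assumptions): the replica count drives a replica set that keeps three pods running, the resource requests inform the scheduler, and the liveness probe lets the platform restart a container that stops responding.

```yaml
# Hypothetical Kubernetes Deployment showing replicas, resource
# requests, and a liveness probe in one manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 3                      # the replica set keeps three pods running
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: registry.example.com/my-service:1.0.0
          resources:
            requests:              # requests guide the scheduler's placement decisions
              cpu: 250m
              memory: 256Mi
          livenessProbe:           # a failing probe triggers an automatic restart
            httpGet:
              path: /healthz
              port: 8080
```

Applying this with `kubectl apply -f deployment.yaml` is a reproducible operation: the cluster continually reconciles reality against the declared state.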
Accelerate Your Value Stream with Kubernetes
The product at the core of this discussion, IBM UrbanCode Velocity, ties together data from a huge array of different sources. From issue-tracking tools like Jira, to source repositories like GitHub, and even CI infrastructure like Jenkins or IBM UrbanCode Deploy, UrbanCode Velocity ingests, and digests, information from your entire delivery pipeline to give you insight into your development lifecycle. Doing all of this work, however, requires considerable resources. This is where Kubernetes has particularly notable advantages as a deployment platform.
Kubernetes doesn’t just allocate resources on a single compute node; it can orchestrate the deployment of an application across multiple nodes, allowing every container in an application to get the resources it needs to work with optimal efficiency. Our documentation suggests allocating four nodes for a production installation of IBM UrbanCode Velocity. The first node gives inter-service communication infrastructure, such as RabbitMQ, its own dedicated space to keep up with all of your data. The second handles complex value stream calculations, such as cycle time, lead time, throughput, and deployment frequency. The third runs the plugins that fetch new data and keep IBM UrbanCode Velocity in sync. Finally, the fourth hosts the MongoDB deployment (provided and managed by the user), which is shared by all services and should likewise be allocated its own node to best support its heavy workload.

This four-node distribution ensures that resource-intensive operations do not impact the UI and API, which would otherwise degrade the user experience. It is made possible by applying the “workload-class” labels (background, transactional, and external), each to at least one node. On startup, Kubernetes automatically schedules each service across the available nodes in a way that minimizes resource competition and maximizes performance. Results may vary, but internal testing has shown that proper node allocation yields a 30% increase in performance and stability across the value stream, pipeline, and release functional areas.
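As an illustrative sketch, the labeling step looks like the following. The node names here are hypothetical, and you should consult the product documentation for the exact label values required by your installation.

```shell
# Apply one workload-class label per node; each class must appear on at least one node.
kubectl label node worker-1 workload-class=background     # e.g., messaging such as RabbitMQ
kubectl label node worker-2 workload-class=transactional  # e.g., value stream calculations
kubectl label node worker-3 workload-class=external       # e.g., plugin/integration services
```

Once the labels are in place, the scheduler does the rest: services are placed onto matching nodes without any further manual assignment.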
The Right Choice for Production
Robustness, security, and performance: these are virtues that any IT manager wants from their enterprise tooling. Kubernetes provides them, and Docker Compose does not. It is therefore not surprising that Kubernetes is the de facto standard for automated container orchestration, and its dominant position in the space will only grow. If you want the best possible experience with IBM UrbanCode Velocity, it’s not just the right choice, it’s really the only choice.