
Containers on z/OS and why you should care

  
Containers


About 18 months ago, IBM provided a statement of direction as part of software announcement 220-033, expressing the intent to deliver containers and Kubernetes orchestration support for IBM z/OS. When I saw this statement of direction, I realized once again how unique this platform is, and it motivated me to learn more about containers and container orchestration. As I dug deeper, I got very excited about the technologies that are out there, and I get even more excited when I imagine them being used on z/OS. So let me take you along and share what I have learned on this journey and what this technology means for z/OS.

There are plenty of resources available on the internet where you can read about containers and container orchestration: how containers are built, which tools help you adopt containers in your application development processes to build what is called a continuous integration and continuous delivery (CI/CD) pipeline, and more. I am not going to repeat all of that here and will leave the detailed research to the reader, based on your specific interests.

However, I do want to focus on the aspect of operating a containerized IT environment. In particular, I would like to focus on z/OS, of course, because containers are new on this platform and because well-established processes for how clients operate this environment already exist today. Hence, this blog is written for those of you who are not application development experts, who are not that familiar with CI/CD and how containers are orchestrated, but who do take care of running their z/OS environments 24x7x365 and who ensure that the systems and sysplexes are up and running to serve the mission-critical business applications.

So, in a series of blogs, I would like to talk about the impact that containers will have on the people, the tools, and the processes in existing z/OS shops once all of this is available and they start to introduce containers and Kubernetes into their environments. As needed, I will explain the concepts at a high level, from mainframer to mainframer, so to speak, so that you don't have to be a Linux/Kubernetes expert to follow along. Also note that I do not mean "impact" in a negative sense, intended to cause fear, but rather as points of thought for being well prepared to adopt what has become a common software delivery method and industry standard over the last couple of years.

So why should you care?

In my first blog, let me talk about why containers have become so popular and have been adopted by so many industries today. To better understand this, it helps to look back at the roots of cloud computing. Back then, the focus was on providing "unlimited" compute power by leveraging virtualization, and the acronym IaaS (Infrastructure-as-a-Service) was coined. Your application needs more compute resources? Well, just get another virtual machine (VM) and you have them. Virtual machines can be provisioned in seconds, and clients only pay for the resources that they actually consume.

While virtual machines run isolated from each other, one big disadvantage comes from the fact that each virtual machine needs to provide its own operating system (OS) instance. This leads to overhead, both from a system resource point of view and from a management point of view. Think of disk space, processor cycles, multiple OS instances to patch, etc.

In the Linux world, which has always been the predominant OS in cloud compute environments, a form of OS virtualization has emerged that we refer to as containerization. This technology leverages specific features of the OS to isolate processes from each other. While all processes share the same OS instance, and hence produce less overhead, each process has its own view of processor, memory, and file system and can be controlled individually.
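
If you are curious about the underlying mechanics, the following is a tiny, purely illustrative sketch (not from the text) of the kind of Linux kernel feature that containerization builds on; the unshare utility ships with util-linux and typically requires root privileges.

# Start a process in its own PID namespace: it sees only itself,
# even though it shares the one OS instance with all other processes.
sudo unshare --fork --pid --mount-proc ps -ef
# Inside the new namespace, ps reports only itself as PID 1,
# not the host's full process list.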

The pictures below compare virtualization using virtual machines and containers:

Other important enablers of containers are the container runtime and the concept of a container format, the so-called image. The company Docker pioneered both and was the first to make this technology accessible to a broader community. Together with other companies, Docker launched the Open Container Initiative (OCI) in 2015 and donated its container format and its runtime, runC, to the community.

The OCI maintains two specifications:

  1. The Runtime Specification, which deals with how a downloaded image, once unpacked into a runtime filesystem bundle, is run by any OCI-compliant container runtime.
  2. The Image Specification, which describes how the content is structured: what immutable layers exist, where to find them and how to unpack them, how the application is started, and what arguments and environment variables are used, among other things.

Any container runtime that follows these specifications can deal with any OCI-compliant image and hence can run containers based on these images. In one case this can be the developer's PC using, for instance, rkt (pronounced "rocket") as the container runtime; in another it can be the production environment that leverages Red Hat's OpenShift, using runC under the covers.
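
To illustrate this portability, here is a small, hypothetical sketch: the same OCI-compliant image (the name my-app:1.0 is just a placeholder) can be run by different OCI-compliant engines without being changed. Podman is used here purely as a second example engine; it is not mentioned in the text above.

# Run the image with Docker, which uses runC under the covers ...
docker run --rm my-app:1.0
# ... or run the very same image with Podman, another OCI-compliant engine.
podman run --rm my-app:1.0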

Unless you are in the container technology development business, container runtimes are probably not of much interest to you. You can take them for granted and accept that they are delivered with whatever solution you have picked. As said before, you just want to ensure that you select an OCI-compliant solution. On the other hand, you do care about images, because that is what your application developers, or rather the CI/CD pipeline, produce at the end of the day and what you have to deal with when deploying containers. Therefore, let me describe briefly what an image is.

The application developer decides what the image contains. When using Docker, they write a so-called Dockerfile. It describes the base software that the application requires, e.g. a very slim Linux base image with its runtime libraries. The developer adds the application on top and finally specifies how to run the application. So, the Dockerfile might look like this:

# Start from a very small Alpine Linux base image
FROM alpine:3.7
# Copy the application into the root of the image's file system
ADD hello-world /
# Define the command that runs the application when a container is started
CMD /bin/sh /hello-world

As part of the build process (e.g. using the docker build command), the instructions in the Dockerfile are executed and an image like the one shown in the picture below is built:

Every instruction in the Dockerfile results in a new layer being added to the resulting image. If a layer, for instance alpine:3.7, is not cached locally, it will be downloaded from the image registry and put into the cache. Other images may use the very same layer, too, and during deployment this layer physically resides on disk only once, regardless of how many images refer to it. This is possible because these layers are immutable.
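
If you have Docker installed, you can inspect these layers yourself. The commands below are a small sketch; the actual layer IDs, digests, and sizes will of course differ on your system.

# Show which Dockerfile instruction produced which layer of the cached image
docker history alpine:3.7
# Show the digests of the read-only layers that make up the image's file system
docker image inspect --format '{{json .RootFS.Layers}}' alpine:3.7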

At the end of the build process, an image exists that consists of several immutable layers stacked one on top of the other. Because each layer is immutable, the resulting image is immutable as well. It can now be pushed into a private or public image registry, from where it can be pulled whenever needed.
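
To make this a bit more concrete, here is a minimal sketch of building and publishing the image described above. The tag hello-world:1.0 and the registry host registry.example.com are illustrative placeholders, not real names.

# Build the image from the Dockerfile in the current directory; each
# instruction (FROM, ADD, CMD) produces or reuses an immutable layer
docker build -t hello-world:1.0 .
# Tag the image for a (hypothetical) private registry and push it there,
# so it can be pulled later wherever a container should run
docker tag hello-world:1.0 registry.example.com/test/hello-world:1.0
docker push registry.example.com/test/hello-world:1.0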

At deployment time, when you want to run a container, the container runtime determines from the image manifest how to unpack the image and how to run the contained application as a process. Before the container is run, an individual read/write layer is added on top of all the immutable layers coming from the image. This is where the container might write application logs or other output produced by the application. With the help of OS virtualization, each container can only see the layers that come from its own image.
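
Continuing the sketch from above, deployment time might then look like this; again, the registry path is a placeholder.

# Pull the image; only layers that are not already cached locally are downloaded
docker pull registry.example.com/test/hello-world:1.0
# Run a container from the image. The image layers remain read-only; anything
# the application writes goes into the container's individual read/write layer,
# which is discarded when the container exits (--rm)
docker run --rm registry.example.com/test/hello-world:1.0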

And so?

Using these technologies, containerization and standardized container runtimes that can handle OCI-compliant images, containers become portable across environments and clouds, and you can now start to see why this technology is so appealing to the software industry. It basically enables software or application developers to focus on just the application and its necessary prerequisites, in the form of stacked layers in the image, while not having to worry about where and how the application is deployed. The latter task is delegated to the CI/CD pipeline and to the IT operations team supervising this pipeline, which ensures that application releases are promoted in a very controlled manner from one stage to the next, after passing all the quality assurance tests in the respective stage, until the release is available in the production environment. All this happens without changing the application, i.e. the image, at all.
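
As a hedged sketch of what such a controlled promotion could look like at the image level: the very same image that passed the tests in one stage is simply re-tagged and pushed for the next stage, without being rebuilt or modified. The registry paths are again placeholders; real pipelines typically do this through their own tooling rather than by hand.

# The image that passed all quality gates in the test stage ...
docker pull registry.example.com/test/hello-world:1.0
# ... is re-tagged for the production repository and pushed there unchanged
docker tag registry.example.com/test/hello-world:1.0 registry.example.com/prod/hello-world:1.0
docker push registry.example.com/prod/hello-world:1.0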

Rolling out applications on z/OS, however, is not done this way today, at least not by the majority of the clients I know. z/OS customers still install applications following traditional procedures:

  1. Performing an SMP/E install of the software package.
  2. Distributing the software to the various runtime environments.
  3. Performing individual adaptations to fit the software to the target runtime environment.
  4. Following thorough test procedures to ensure that the software doesn't break and meets the criteria for production.
  5. Handing over the software to the mainframe infrastructure team that finally implements it in production.

These are time-consuming steps that often require a good deal of manual effort to ensure everything runs nice and smooth in the end. They also limit how frequently software can be released into the production environment.

Containers are looked at as the solution to the dilemma in which the lines of business strive for agility and frequent software releases to stay competitive, while the operations teams strive for stability and as little change as possible to ensure that the service level objectives demanded by their clients are met. With containers, stability comes more or less out of the box, as the image isn't changed along the way from development to production.

This trend doesn't stop in front of the mainframe shop. In fact, I believe that every mainframe shop should embrace this technology and take it as an opportunity to stop the erosion of applications to other platforms, when there are so many synergy effects to be realized by running them on the platform that sits close to the data, i.e. z/OS.

So, this is why I think the statement of direction referred to at the beginning is so important and why you should start to explore this technology. Containerization is not hype anymore; it has already gained a lot of acceptance in the open, distributed world. And because z/OS is not Linux and has roots going back to the mid-1960s, when its great-great-grandparent OS/360 became available, I consider this statement even more important, as it underlines how much IBM values this platform, for itself, of course, but also for all the customers that rely on it. It is a long and hard journey, no doubt. But at the same time, it is also a strong sign that IBM is very serious about pursuing its hybrid cloud strategy, with z/OS being a first-class citizen next to other platforms.

In my next blog, I'll dive a bit into Kubernetes and how containers are orchestrated. If you like this series, or if you have comments and would like to discuss, please leave a comment below.

So far, I've published the following articles in this z/OS container blog series. There are more to come as I go through this journey, so please stay tuned and come back occasionally if you are interested in traveling along with me.

  1. Containers on z/OS and why you should care (link)
  2. What you should know about Kubernetes (link)
  3. Understanding z/OS Automation (link)
  4. Kubernetes meets System Automation (link)