IBM Cloud Pak for Applications

How easy does Red Hat OpenShift on IBM Cloud, born of IBM's acquisition of Red Hat, make it to build a complex multi-zone OpenShift cluster?

By Daisuke Hiraoka posted Fri November 01, 2019 03:16 AM

This article is an English translation of my article posted on Japan’s most famous IT news site ITmedia on October 18, 2019, translated with the permission of ITmedia.


On July 9th, 2019, IBM made the historic announcement that it had completed its acquisition of Red Hat.  On August 1st, just three weeks later, IBM announced that it would renew its software portfolio to be more cloud-native and optimized to run on Red Hat OpenShift.

This new cloud-native product family, known as IBM Cloud Paks, containerizes and packages IBM's middleware to run on OpenShift.  This shows that IBM is focused on simplifying how companies update their applications to the latest versions, and in particular on cloud migration for the mission-critical systems that have so far been a barrier to moving to the cloud.

Red Hat OpenShift, which has become IBM's main container platform, was originally designed to run flexibly on-premises as well as on public or private clouds.  Therefore, once an enterprise builds an application on OpenShift, that application can run in any environment that runs OpenShift: all major public clouds (AWS, Microsoft Azure, Google Cloud, etc.), private clouds, and on-premises.

On this occasion, I evaluated multi-zone clusters using the three data centers in the Tokyo region (TOK02, TOK04 and TOK05) on "Red Hat OpenShift on IBM Cloud", IBM Cloud's managed OpenShift service that was announced and launched on August 1st.

IBM now considers its integrated services with Red Hat to be of the utmost importance, and evaluating them was an exciting and interesting experience that I want to share with everyone.

As an aside, Red Hat OpenShift on IBM Cloud was originally included in the service group of the managed Kubernetes service "IBM Cloud Kubernetes Service (IKS)".  If a Kubernetes-native environment is needed, a Kubernetes cluster can be configured; if an OpenShift environment is needed, an OpenShift cluster can be configured instead, so it is convenient to use whichever type of cluster the situation calls for.

What are Multi-zone Clusters?

This refers to a configuration in which the worker nodes of an OpenShift cluster are placed in multiple data centers, so that even a data-center-level failure has minimal impact on the system. Worker nodes are managed on the IBM Cloud side, and the user can specify how fault-tolerant the worker node layout should be.

Creating OpenShift Clusters

 I will explain the process to create OpenShift clusters.


 Log in to the IBM Cloud Portal and from the dashboard screen, go to “Catalog” and then “Container” and click “Red Hat OpenShift Cluster”.


The “Red Hat OpenShift Cluster” service explanation will be displayed.  Click “Create” and proceed.

The “Create a New OpenShift Cluster” screen will be displayed.

  • Cluster Name : jp-tokcluster

  • Resource group : Default

  • Geography : Asia Pacific

  • Availability : Multizone

  • Metro : Tokyo

  • Worker zones : Tokyo 02, Tokyo 04, Tokyo 05


"VLAN Spanning" needs to be enabled only when using multiple zones.

This time, multiple zones in Tokyo are selected.  However, IBM Cloud has multi-zone regions around the world, and highly available clusters can be configured using multiple data centers within the same region.  It's a good idea to configure clusters in the region closest to the end users.  It is also possible to configure a cluster using only one data center (for example, Tokyo 02) by selecting a single zone, although this provides no zone-level availability.

IBM Cloud Docs - Red Hat OpenShift on IBM Cloud - Locations

  • OpenShift version : 3.11.135 (2019/09/02)

  • Flavor : 4 vCPUs 16GB RAM

Depending on how the cluster will be used, the flavors "Virtual - Shared", "Virtual - Dedicated" and "Bare metal (physical server)" can be selected.  Various combinations of CPU, memory and disk space are available for particular use cases.  These combinations are convenient because they can be changed, added and deleted according to the load situation after the cluster is created.  Being able to select different worker-node flavors, for example "use bare metal for worker nodes doing especially heavy processing", is a strong point of IBM Cloud, since this choice is not available on other clouds.

  • Encrypt local disk : selected

  • Worker nodes : 1

 If the number of worker nodes is set to “1”, a total of 3 worker nodes, 1 in each zone (TOK02: 1 node, TOK04: 1 node, TOK05: 1 node) will be deployed.
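For reference, the same cluster can also be created from the IBM Cloud CLI instead of the portal. The following is a minimal sketch; the subcommand and flag names follow the 2019-era container-service plug-in and may differ in newer plug-in versions, and the version string is illustrative.

```shell
# Sketch: create the multi-zone cluster from the CLI
# (assumes the IBM Cloud CLI with the container-service plug-in;
#  subcommand/flag names may vary by plug-in version)
ibmcloud login
ibmcloud oc cluster-create \
  --name jp-tokcluster \
  --zone tok02 \
  --machine-type b3c.4x16 \
  --workers 1 \
  --kube-version 3.11_openshift

# Add the remaining Tokyo zones to the default worker pool
# so that one worker node is deployed per zone
ibmcloud oc zone-add --cluster jp-tokcluster --zone tok04 --worker-pool default
ibmcloud oc zone-add --cluster jp-tokcluster --zone tok05 --worker-pool default
```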

If the infrastructure access permission checker shows that the access permission requirements and recommendations are not met, review the "IBM Cloud IAM Access Policy".  If you are not the contracted IBM Cloud administrator, confirm with your administrator.

Then, when the "Create cluster" button is clicked, cluster provisioning starts.  In about 30 minutes, the cluster and its worker nodes are created and ready to use.

When the provisioning completes successfully, the cluster and worker node status will change to green in the “outline” tab.

On the "worker node" tab, you can see that 3 worker nodes have been deployed in the 3 data centers (TOK02, TOK04, TOK05) in the Tokyo region.

If you click the “OpenShift Web Console”, the OpenShift Web Console will open and you can confirm that OpenShift is operating normally.

An OpenShift Cluster can be created with 1 click

As we have seen, "Red Hat OpenShift on IBM Cloud" lets you start a multi-zone cluster across the 3 data centers in the Tokyo region (TOK02, TOK04, TOK05) with a simple operation: enter the required information and click the "Create cluster" button.  On other clouds you need to set up multi-zone clusters yourself, so I was very surprised that on IBM Cloud this could be done simply by entering some information and clicking a button.

An on-premises environment, in particular, can take several months of learning, configuration and construction before setup is complete.  By using IBM Cloud's managed service, companies are freed from building the platform themselves.

Easily Build and Deploy Applications in a Browser

In OpenShift, you can easily build and deploy applications in a browser, even without mastering complicated CLI operations.  Service catalogs (templates, hereinafter referred to as "catalogs") for various languages, middleware and CI/CD can be used to accelerate application development.

For example, in the case of PHP development, if you prepare the application source code and use the PHP service catalog, the application is automatically deployed using the OpenShift build strategy known as "s2i (source-to-image)", which builds a container image directly from source code.
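The same s2i flow can also be driven from the CLI for those who prefer it. This is a minimal sketch; the sample repository is OpenShift's public CakePHP example, and any PHP source repository would work the same way.

```shell
# Sketch: s2i deployment of a PHP application from the CLI
oc new-project phptest

# "php:7.1~<repo>" tells OpenShift to build the repository's source
# with the PHP 7.1 s2i builder image
oc new-app php:7.1~https://github.com/sclorg/cakephp-ex --name=helloworld

oc logs -f bc/helloworld    # follow the s2i build
oc expose svc/helloworld    # publish a route to the application
```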

Let’s try deploying an application!

Choose “PHP” from the catalog.

The PHP catalog information is displayed.  Click the “Next>” button to continue.

The Settings screen is displayed.  Click the "Create" button after entering the required information to create an application.

  • Add to Project : Select [ Create Project ]

  • Project Name : phptest

  • Version : Select [ 7.1 – latest ] (PHP Version)

  • Application Name : helloworld

  • Git Repository : ( Enter Source Code Repository )


The application creation results are displayed.  As the application (helloworld) has been created successfully, click the "Close" button.

The newly created application (helloworld) appears in OpenShift's Web console.

When the URL that is displayed is accessed, the execution results are output.

So far, we have been able to build and deploy an application just by preparing the source code and entering the required items in the browser.  To do the same with Docker, in addition to preparing the source code, you would need the following procedure:

  1. Create a Dockerfile

  2. Create a container image using the “docker build” command

  3. Boot the container image using the “docker run” command
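For comparison, the three steps above look roughly like this as shell commands. This is a minimal sketch: the Dockerfile contents are illustrative and depend entirely on the application.

```shell
# 1. Create a Dockerfile (minimal PHP example; contents are illustrative)
cat > Dockerfile <<'EOF'
FROM php:7.1-apache
COPY . /var/www/html/
EOF

# 2. Build a container image from the source code
docker build -t helloworld:latest .

# 3. Boot the container image
docker run -d -p 8080:80 helloworld:latest
```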

However, knowledge of Docker is required for this procedure. 

In OpenShift, the developer only needs to understand a minimum about containers and their security; OpenShift itself provides the machinery to build and deploy applications.

In large-scale application development in particular, roles are often limited so that "programmers only write programs".  When OpenShift is the development environment, there is no need to install Docker on the computers of newly joined members, nor to teach them how to use Docker, so they can start writing programs immediately.

Scaling Applications Up and Down

Scaling the application's Pods (containers) up and down can also be done with the click of a button.
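The same scaling can be done from the CLI. A minimal sketch, assuming the helloworld application created above (OpenShift 3.x manages Pods through DeploymentConfigs, abbreviated "dc"):

```shell
# Scale the helloworld application up to 3 Pods
oc scale dc/helloworld --replicas=3

# Scale back down to a single Pod
oc scale dc/helloworld --replicas=1
```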

Making Application Changes

When making a change to an application, all the developer needs to do is update the source code, push it to the Git repository and press the “Start Build” button in the OpenShift Web console. 

Clicking the “Start Build” button will automatically build the application by obtaining the new source code from the Git repository and then deploy it. 

It is also convenient to use the webhook function.  When the developer pushes source code to the Git repository, OpenShift receives a change notification from the repository and automatically builds and deploys the application.  With this webhook function, the developer can verify source code changes without having to touch OpenShift at all.
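Both the manual build and the webhook setup can be sketched from the CLI. The commands below assume the helloworld application from earlier; `oc describe` shows the webhook URLs that can be registered in the Git repository.

```shell
# Trigger a new build manually after pushing updated source code,
# and follow the build log until it finishes
oc start-build helloworld --follow

# Show the build configuration, including the GitHub/generic webhook
# URLs that the Git repository can call to trigger builds automatically
oc describe bc/helloworld
```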

The Pod (container) release strategy uses rolling updates by default.  This mechanism allows updating to a new version, or rolling back to a previous one, without the application ever becoming inaccessible.
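The strategy in effect can be checked, and a release rolled back, from the CLI. A minimal sketch against the helloworld DeploymentConfig:

```shell
# Inspect the deployment strategy (Rolling by default in OpenShift 3.x)
oc get dc/helloworld -o jsonpath='{.spec.strategy.type}'

# Roll back to the previous version if a new release misbehaves
oc rollout undo dc/helloworld
```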

Application Deployment to Multi-Zones

In IBM Cloud, when a multi-zone configuration is used, clusters are configured per region and Pods (containers) are distributed to worker nodes with available resources, regardless of zone (data center).

With this design, when an application is deployed, Pods (containers) are distributed to worker nodes with available resources across the three zones (data centers) deployed in this case: TOK02, TOK04 and TOK05.  This means zone-level (data-center) disaster countermeasures come as standard, without the user having to think about zones at all.  I was very surprised, as I had assumed that "users needed to specify zones (TOK02, TOK04 and TOK05) and deploy Pods (containers) themselves to achieve disaster countermeasures".


Pods (containers) can also be placed in specific zones (data centers) if desired.  Strictly speaking, the OpenShift scheduler places Pods on worker nodes that have available resources (CPU, memory).

In actual operations, you can expect cases where nodes run out of resources or fail.  Since Pods (containers) cannot be scheduled onto worker nodes that are short of resources or have failed, operations need to be designed to avoid these situations.

Now, let's check whether the 3 Pods (containers) of the application created this time were actually distributed across the zones (data centers).  The list of Pods (containers) is easiest to understand when output from the CLI, so we run the "oc get pod" command to output the Pod (container) list.
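A minimal sketch of the commands involved; the zone label name is the one used by Kubernetes 1.11 / OpenShift 3.11-era clusters.

```shell
# List the application Pods together with their IPs and the node
# each one runs on
oc get pods -o wide -n phptest

# List the worker nodes; in a multi-zone cluster each node carries a
# zone label showing which data center it lives in
oc get nodes -L failure-domain.beta.kubernetes.io/zone
```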

Comparing the IP Addresses in the list of worker nodes and the list of Pods (containers) above, we can see that they have been distributed to TOK02, TOK04 and TOK05. 

Next, let’s actually access an application.  This application is designed to output the host name so that the Pod (container) can be identified.

We will try accessing the application with the curl command three times.  Comparing the host names in the boxes in the figure below with the Pods (containers) in the figure above, you can see that the load is being distributed.
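The access test can be sketched as a short loop; the route host below is a placeholder for the route generated in your own cluster (shown by `oc get route`).

```shell
# Call the application three times; the responding Pod's host name
# changes between calls, showing that requests are load-balanced
# (replace the host with your cluster's actual route host name)
for i in 1 2 3; do
  curl -s http://helloworld-phptest.example.cloud/
done
```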

Until now, when operating with virtual machines (VMs), additional load balancer settings were required whenever the number of load-balanced VMs had to be increased, for example during busy periods or a sudden surge in access.  Depending on the company, the lead time could be long, with the network administrator needing anywhere from several days to two weeks to change the settings.

OpenShift is designed to automate network settings when an application is exposed externally, and it automatically distributes the load when there are multiple Pods (containers).  Using OpenShift can therefore shorten the lead time for publishing applications, even where network settings are concerned.


You may be wondering why the IP address output by the curl command is not the same as the Pods' (containers') IP addresses.  The address shown is that of the Network Load Balancer (NLB) in the "ibm-system" project.  Load balancing goes from the NLB to the endpoints (the Pods' IP addresses) via the OpenShift Route.

OpenShift's Route and Router correspond to "Ingress" and "Ingress Controller" in Kubernetes, but are implemented independently.  The Router uses "HAProxy" as its front-end proxy, and traffic is load-balanced from the Router to the endpoints (the Pods' IP addresses) according to the HAProxy settings.
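A minimal sketch of how a Route is created and inspected; the router Pod label is assumed from the default router deployment in OpenShift 3.x.

```shell
# A Route publishes a Service externally, playing the role that an
# Ingress plays in plain Kubernetes
oc expose svc/helloworld     # create a Route for the Service
oc get route helloworld      # show the generated external host name

# The Router itself runs as an HAProxy Pod in the "default" project
# (label assumed from the default router deployment)
oc get pods -n default -l deploymentconfig=router
```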

Those who are interested can view the HAProxy settings by using the rsh command on the Router's Pod in the "default" project.
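A sketch of that rsh step; the router Pod label and the `haproxy.config` file name are assumptions based on the OpenShift 3.x router image and may differ in other versions.

```shell
# Find the router Pod in the "default" project
ROUTER_POD=$(oc get pods -n default -l deploymentconfig=router \
             -o jsonpath='{.items[0].metadata.name}')

# Open a remote shell into it and print the generated HAProxy settings
oc rsh -n default "$ROUTER_POD" cat haproxy.config
```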


With "Red Hat OpenShift on IBM Cloud", even a complex multi-zone OpenShift cluster can be created in approximately 30 minutes, simply by entering the required information and clicking the "Create cluster" button. This frees companies from building platforms themselves.

Also, by selecting the required language from the catalogs (templates) prepared by IBM and Red Hat and saving the source code to OpenShift once, the application developer can simply run a build (Start Build) to verify source code changes. Furthermore, by using the webhook function, changes can be verified without even touching OpenShift. Application developers can focus on writing source code, leaving everything outside application development to IBM, Red Hat, or a system administrator.

In this way, OpenShift's strength is that it provides an environment in which developers and operators can each perform at their best in their own specialty.

All of this runs on multi-zone clusters spanning the three data centers (TOK02, TOK04, and TOK05) in the Tokyo region.

The following topics are planned for upcoming articles:

  1. OpenShift’s modernization of Java EE applications

  2. Container monitoring using IBM Cloud’s monitoring system (“SysDig” and “LogDNA”)

  3. A tutorial on the practice of DevOps using OpenShift


Daisuke Hiraoka

IBM Champion for Cloud 2019

I have worked as an engineer since 2002 in various roles spanning business and enterprise core systems, as well as product design, development, and operations.  In the summer of 2016, I built an OpenShift environment on IBM Cloud's bare metal servers in two months; it went into production in January 2017, making me the first commercial operator of Red Hat OpenShift in the Asia-Pacific region.  Since then I have gained much further experience with containers and OpenShift.

In 2018, I co-authored the first book on Container Orchestration (Kubernetes) in Japan. I was in charge of the OpenShift chapter. Book title [ Container-based Orchestration: Building System Foundation in the Age of Cloud using Docker/Kubernetes (Shoeisha) ]