Overview of IBM Integration Bus and Kubernetes 

Fri July 10, 2020 07:29 AM

Kubernetes is an open source solution for orchestrating application containers across a cluster of physical or virtual machines. The cluster provides compute, networking and storage through a set of organized worker nodes, which keep applications running in a highly available manner. This post describes some basic Kubernetes concepts and, in particular, explains their relevance for readers coming from an IBM Integration Bus background.

Originally based on technology created at Google, Kubernetes has now been donated to the open source community under the stewardship of the Cloud Native Computing Foundation (of which IBM is a contributing member). IBM also provides its own forms of the Kubernetes technology supported for production workloads, both in the cloud and on-premises.

IBM Integration Bus has traditionally been marketed as a product which provides users with an Enterprise Service Bus integration pattern. This pattern is typically characterized as a centralised component, running on a range of platforms in a private data center behind a corporate firewall, which sits between applications and Systems of Record. Increasingly, however, the product’s versatility has seen our users deploy IIB as a lightweight integration engine, running inside one or more containers as part of a microservices architecture. In this guise, IIB still plays to its traditional functional strengths, providing routing and transformation capabilities which can expose Systems of Record as convenient REST APIs and messaging streams. Under some circumstances, IIB can also be used to implement a microservice itself. If you are planning to use an integration engine like IIB inside a microservices architecture, you will probably be interested in how it functions as a compliant 12-factor application. Further background is available in this article, which expands on how IIB can be used as a lightweight integration engine and shows how integrations built using IBM Integration Bus (IIB) can be compliant with the 12-factor approach.

In this post, we will dive deeper into this area and discuss how IIB can be run inside one or more Docker containers which are orchestrated using the open source Kubernetes framework. In future posts we will show worked examples of IIB operating in conjunction with Kubernetes; as further articles in this series are published, we will come back and update the article list below.

Kubernetes (v1 of which was announced and released in 2015) provides an open source system for deploying, running and automating the management of applications in container systems such as Docker. It helps you package applications and manage them through the “Kubernetes Control Plane”, a collection of processes running in the Kubernetes cluster whose job is to make the cluster’s current state match a desired state expressed by the user. As a user of the system, and particularly as an IIB user who is entirely new to Kubernetes clusters, you will typically interact with objects defined to Kubernetes either using the command-line interface, kubectl, or using the Kubernetes dashboard. These interfaces use the Kubernetes API to exchange information with the cluster and configure it for your purposes. In this regard, you can treat large parts of the Kubernetes system as a black box.
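
As a sketch of what “desired state” means in practice, the manifest below (which you could apply with the command kubectl apply -f deployment.yaml) asks for three replicas of a hypothetical application image; the Control Plane then starts or stops containers until the cluster’s actual state matches:

# A minimal Deployment manifest expressing a desired state of three
# running replicas. The names and image here are illustrative only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: example/app:1.0   # hypothetical image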


Kubernetes Master
The Master node provides management services for centrally controlling and monitoring the rest of the Kubernetes cluster. Specifically, the Master is responsible for orchestrating Worker nodes and making choices about how to run the cluster. This includes scheduling your containerized applications so that they are suitably provisioned to take advantage of the available compute resources in the cluster and to meet the desired state of the deployed applications.
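
For example, the scheduler uses the CPU and memory requested by each container to choose a worker node with enough free capacity. A sketch of a Pod specification declaring such requirements (the names and values are purely illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: example-app            # hypothetical name
spec:
  containers:
  - name: example-app
    image: example/app:1.0     # hypothetical image
    resources:
      requests:                # used by the Master's scheduler to pick a worker
        cpu: "250m"            # a quarter of one CPU core
        memory: "512Mi"
      limits:                  # hard caps enforced on the running container
        cpu: "1"
        memory: "1Gi"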

Kubernetes Worker
The role of a Worker node is to run containerized applications. A worker could be a virtual machine or a physical machine, and will have a defined maximum CPU and memory capacity. Typically a Kubernetes cluster has multiple workers, and you can always add further worker nodes as the need for more compute resources arises. Workers will:

  • Manage the networking between the running containers (running inside “pods” – see below)
  • Communicate with the Master node
  • Provide the container runtime (e.g. Docker) which is responsible for downloading images and running the associated containers.

Worker nodes run a kubelet service which talks to the Master (including interacting with the etcd store of configuration information) and is responsible for starting and stopping containers on the worker. Worker nodes can be marked as “unschedulable” to prevent new pods from being placed on the worker in question; the kubectl command (the CLI for managing Kubernetes clusters) has a cordon option for this purpose.
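
Running kubectl cordon <node-name> is equivalent to setting the unschedulable field in the Node’s specification, sketched below with a hypothetical node name:

# Marking a worker as unschedulable: kubectl cordon sets this field,
# so no new pods are scheduled here while existing pods keep running.
apiVersion: v1
kind: Node
metadata:
  name: worker-node-1          # hypothetical worker node name
spec:
  unschedulable: true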

Kubernetes Pods
Pods are defined to be a group of one or more containers which have access to the same shared storage and network. Applications which in a traditional IT architecture were required to be co-located on the same physical host will typically be placed in the same Kubernetes pod, because there they share an IP address and port space, and can communicate using inter-process communication methods such as semaphores and shared memory. Just like Docker containers, pods in cloud-native applications should be considered relatively short-lived and will typically operate in a stateless fashion: a pod can be stopped and destroyed, and a new one started in its place, very quickly and with a minimum of configuration. In this regard, IIB users should think very carefully before departing from the most common IIB architectural model of each Kubernetes pod containing one IIB integration node which is configured to own one IIB integration server.
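
A sketch of this model as a Pod manifest is shown below; the image name is hypothetical and stands in for an image built from something like the ot4i/iib-docker Dockerfile:

# One pod, one IIB container, one integration node owning one
# integration server -- the simple model suggested above.
apiVersion: v1
kind: Pod
metadata:
  name: iibv10
  labels:
    app: iibv10
spec:
  containers:                  # exactly one IIB container per pod
  - name: iib
    image: my-registry/iib:10.0.0.11   # hypothetical IIB image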

In cases where you are trying to make IIB behave in a cloud-native fashion for running in Kubernetes, we encourage you to follow the suggested best practice of a single IIB node definition owning a single IIB integration server inside each container, and a single IIB container in each Kubernetes pod. This keeps your architecture simple, and encourages you to follow the well-known mantra of treating your IIB integration servers as “cattle, not pets”. To scale IIB in a Kubernetes environment, rely on Kubernetes’ auto-scaling capabilities to deploy extra pods rather than adding extra integration server processes into the existing container(s), which is the way you might first think of scaling IIB if you come from a background of treating IIB as an on-premises ESB implementation pattern. With this advice in mind, let’s next consider how to describe IIB when deploying to Kubernetes…
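
For instance, a HorizontalPodAutoscaler resource can ask Kubernetes to add or remove whole IIB pods in response to CPU load; in this sketch the Deployment name and thresholds are illustrative:

# Scale out by adding whole IIB pods rather than adding integration
# servers inside an existing container. This keeps between 2 and 10
# replicas, targeting 75% average CPU utilisation.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: iibv10
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: iibv10               # hypothetical Deployment of IIB pods to scale
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 75   # add pods when average CPU exceeds 75%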

IIB can be administered remotely by sending instructions over the public IIB administrative REST API, connecting through the web administration port (by default, port 4414). Typically you will also want to expose a port for sending HTTP traffic into an IIB integration server (by default, port 7800). You might expose other ports for other transport protocols, but these two are the most common ports to expose when deploying IIB to a Kubernetes cluster. So even a basic IIB Kubernetes service definition will probably require more than one port to be exposed, even if you choose to follow our suggested best practice of a single IIB node owning a single IIB integration server.
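
As a sketch, the container side of such a deployment might declare both ports, named to match the Service definition shown later (the image is again hypothetical):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: iibv10
spec:
  replicas: 1
  selector:
    matchLabels:
      app: iibv10
  template:
    metadata:
      labels:
        app: iibv10
    spec:
      containers:
      - name: iib
        image: my-registry/iib:10.0.0.11   # hypothetical IIB image
        ports:
        - containerPort: 4414    # IIB web administration and REST API
          name: webui
        - containerPort: 7800    # HTTP traffic into the integration server
          name: serverlistener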

Kubernetes describes the concept of a service as “an abstraction which defines a logical set of Pods and a policy by which to access them – sometimes called a micro-service”. Many Kubernetes services need to expose more than one port, so Kubernetes supports defining multiple port definitions on a Service object. When using multiple ports, they are given names so that endpoints can be disambiguated. The easiest way to define a Kubernetes service is to use a tool called Helm and a packaging format called a Helm chart. A chart is a collection of files that describe a related set of Kubernetes resources. The following description is taken from the Kubernetes Helm readme:

  • Helm has two parts: a client component (commonly referred to as helm) and a server component (commonly referred to as tiller)
  • Tiller runs inside of your Kubernetes cluster, and manages releases (installations) of your charts.
  • Helm can run on your laptop, or as part of a Continuous Integration / Continuous Delivery pipeline
  • Charts are Helm packages that contain at least two things:
    • A description of the package (Chart.yaml; a minimal example follows this list)
    • One or more templates, which contain Kubernetes manifest files
  • Charts can be stored on disk, or fetched from remote chart repositories
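
As a sketch, the Chart.yaml description file mentioned above might look like the following (the name, version and description are illustrative, in the Helm v2 chart format of the time):

name: iibv10                   # hypothetical chart name
version: 0.1.0                 # version of the chart itself
description: IBM Integration Bus v10 with one integration node and one integration server per container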

Below is a small snippet from an example IIB Helm chart. As you can see, it includes the specifications for ports 4414 and 7800, which an IIB system would expect to be enabled:

apiVersion: v1
kind: Service
metadata:
  name: iibv10
  labels:
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
spec:
  type: LoadBalancer 
  ports:
  - port: 4414
    targetPort: 4414    
    name: webui
  - port: 7800
    targetPort: 7800
    name: serverlistener 
  selector:
    app: iibv10 

In future posts we will dive deeper into an example IIB Helm chart and demonstrate how it can be used to control Kubernetes deployments.

7 comments on "Overview of IBM Integration Bus and Kubernetes"

  1. Vj July 03, 2018

    How do we handle configurable services/policies in containers if we use the IIB v10 image?

    • BenThompsonIBM August 07, 2018

      Hi Vj,
      Generally speaking, it is good practice to build into your container image all of the settings which are required by IIB, including associated configuration such as configurable services. When building your image with IIBv10, you can script the definition of your configurable services; a good place to do this is your Dockerfile or one of the associated scripts which it executes. If you’re using our published examples, a logical place to add your configurable service definitions would be the iib_manage script (https://github.com/ot4i/iib-docker/blob/master/10.0.0.11/iib/iib_manage.sh).

      Since March of this year, we have also released App Connect Enterprise v11, which changes this technology area significantly. ACEv11 allows you to run standalone integration servers which are not affiliated with a node process, and this would be the logical thing to use in a container-based architecture (there is no need for a node to look after your servers, because the integration server process’s lifecycle is tied to its owning container’s lifecycle, and your orchestration framework, Kubernetes for example, monitors the containers). We have published an example Dockerfile showing how to build ACEv11 (https://github.com/ot4i/ace-docker). In ACEv11, configurable services have become “policies”, which can be created in policy projects in the Toolkit and deployed in a BAR file. This makes it much easier to configure policy definitions into your Docker container than in IIBv10: you no longer need to run a script and use the mqsiconfigurableservice command. Instead, define a policy within your BAR file and use the mqsibar command to “unzip and go”, an approach demonstrated in the repo here: https://github.com/ot4i/ace-docker/blob/master/11.0.0.0/ace/ubuntu-1604/demo/Dockerfile
      Cheers,
      Ben

  2. Mohamed Abed March 22, 2018

    Hello,

    In the above architecture, which IIB edition can be installed in the Docker container?
    Can we install the Advanced edition of IIB in such a case, or is there a limitation in the integration protocols that can be used?

    • BenThompsonIBM March 26, 2018

      Hi, yes you can use Advanced Edition. The example Dockerfile and Helm charts which we provide are available at github.com/ot4i/iib-docker and github.com/ot4i/iib-helm. Here we utilise the Developer edition of IIB (this is free to download and accessible to all), but we would expect production users to create their own image using Advanced Edition.

  3. PRichelle_IBM September 12, 2017

    Hello,
    you can deploy multiple applications within the same Integration Server.
    I think that you would create a pod with one Integration Node and one Integration Server hosting multiple applications that are related to the same business solution/application.

  4. Guptap2 September 01, 2017

    “suggested best practice of using a single IIB node, owning a single IIB integration server”

    Ben – are you suggesting we have 200+ pods, each running one IIB node with one integration server, if we have 200+ apps in our landscape?

    • BenThompsonIBM September 12, 2017

      In the world of cloud and 12-factor applications, containers are treated as ephemeral, highly disposable instances, like cattle: if one dies you just start another with a minimum of configuration. With this in mind, I think it is wrong to start building pod definitions which include multiple nodes/servers with different characteristics that must all be running to provide the overall “application”. This was the thinking behind the suggested best practice of using a single IIB node owning a single IIB integration server.

      If you have a large number of applications, you could choose to deploy each app in a separate integration server, or group applications together in a single integration server, so long as you are happy with the idea that if the container is brought down and restarted, everything within that server goes away and comes back together. As Pierre’s comment above suggests, if you have multiple IIB applications which are related to the same business solution/application then it becomes logical to group them all in one integration server.

      Looking to the future, IIB is taking a serious look at the role of the node and the role of the integration server in the context of cloud frameworks like Kubernetes. With this in mind, pushing some IIB node features down into the integration server level of the IIB product hierarchy is a likely future step.

      We also plan further articles in this series which will discuss in more detail possible build pipelines for IIB deployments in the context of Kubernetes. This will include consideration of the kind of configuration which should be built into the Docker image itself versus applied at start-up time.

