This is the first in a series of posts looking at the adoption of App Connect Enterprise (ACE) from previous versions such as IBM Integration Bus. In this post we will consider the differences between the traditional topology and the alternative cloud-native deployment.
IBM® App Connect Enterprise V11 combines the existing, industry-trusted IBM Integration Bus (IIB) software with new cloud-based composition capabilities, including connectors to a host of well-known SaaS applications. A more fundamental change, however, is the continued focus on enabling container-based deployment of the on-premises software runtime. That said, V11 does not mandate a move to containers; customers can continue deploying workloads in the more centralized ESB pattern if that is their preference. In this post, we focus on deployment options for the on-premises software, comparing the traditional centralized ESB topology with the alternative containerized deployment in order to understand the pros and cons of each path.
Agile Integration Architecture
In other material we have discussed agile integration architecture (AIA): an approach to modernizing an integration landscape based on modern techniques and technology. The diagram below shows the progressive phases of integration modernization defined by AIA.
AIA has the potential to dramatically improve the velocity at which connectivity is delivered, along with the discrete resilience and elastic scalability of individual integrations. Much of this is achieved initially by moving to a more fine-grained deployment of the integrations themselves, making it possible to change integrations more independently. Once integrations can be deployed independently, this lays the foundation for decentralized ownership, whereby ownership of the integrations moves from a central integration specialist team out to the application teams. Enterprises vary in how far down this road they need to travel and at what pace.
Adoption path options
So, what does this mean for users who have been running workloads on previous versions of IBM Integration Bus? When we upgrade to App Connect Enterprise, how can we best prepare ourselves to leverage the benefits of AIA? Some amount of fine-grained deployment can be achieved using existing capabilities, but gaining the most effective isolation between integrations implies a move to container-based technology. However, as we’ll see, this is more than just a re-platforming exercise. To gain the most significant benefits we need to move to a truly cloud-native style of deployment, with implications for everything from build and deployment to administration, monitoring, and more. Some enterprises will want to take more gradual steps, staging their way to cloud-native rather than jumping in with both feet.
We’ll start with the conceptually simplest upgrade we could do, raising the level of the runtime to App Connect Enterprise v11, but still on our existing traditional topology (Path A). We’ll then see how much we can push that toward the benefits of AIA, and at what point we need to make the more significant changes toward a true cloud-native deployment on containers (Path B).
Path A: Runtime upgrade only (preserve existing topology)
On this path we simply upgrade the runtimes of the Integration Server and Integration Node, and the developer Toolkit. The core topology remains the same.
ACE v11 does not mandate a move to container infrastructure. It can still be deployed in this traditional topology using Integration Nodes to administer the Integration Servers just as we did in prior versions of IBM Integration Bus. Let’s also assume that at least some of the workloads require a local MQ server, so that also needs to be present in the topology.
Even this simple upgrade path will bring core runtime and tooling enhancements. Examples depend on what version you are moving from, but might include:
- Simpler file system-based installation (since v10)
- Removed hard dependency on local MQ server (since v10)
- New capabilities such as the Group node
- Toolkit now supported on macOS
- New admin console
- New web UI for Record and Replay functionality
- Consolidated configuration through properties files: server.conf.yaml and policies (see the configuration sketch after this list)
- Access to a wide range of cloud connectors for well-known software-as-a-service applications
… and many more.
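To illustrate the consolidated configuration mentioned in the list above, the fragment below sketches the kind of properties an Integration Server’s server.conf.yaml can carry. This is a minimal sketch: the property names shown are representative of ACE v11, but the authoritative, fully commented file is generated for you by the mqsicreateworkdir command and varies by version and fix pack.

```yaml
# Minimal sketch of a server.conf.yaml fragment for an ACE v11 Integration Server.
# Property names are representative; consult the generated file for your level.
serverConfVersion: 1

defaultQueueManager: ''        # optional: only set if flows need a local MQ server

trace: 'none'                  # service trace level, e.g. none|service|diagnostic

RestAdminListener:
  port: 7600                   # port used by the admin REST API and web UI

Statistics:
  Snapshot:
    publicationOn: 'active'    # publish snapshot statistics for monitoring
```

Because the configuration lives in a plain text file, it can be version-controlled alongside the integration flows themselves, something that becomes particularly valuable on the container path discussed later.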
Path A therefore has the following benefits in comparison to Path B:
- A runtime-only upgrade
- Minimal learning curve on new technology
- Minimal mandatory changes to the build and deploy pipeline
A traditional integration topology looks something like the following. Interestingly, a similar diagram could be drawn for just about any product deployed in a non-cloud-native way, such as an application server, for instance:
Perhaps the first thing to notice is just how much of the diagram is dark blue (labeled “Product components” in the key). These are the elements that require product-specific skill to install and administer. It’s also worth noting that this is a fixed topology: it defines a specific high-availability pair.
These traditional topologies have some limitations in comparison to their cloud-native equivalents. Let’s have a look at some key aspects:
- Isolation: Any changes that affect the running server, whether fix pack upgrades, introduction of new integrations, changes to existing integrations, or configuration changes to the server, carry risk because they affect all integrations running on the shared servers. We either have to carry that risk or mitigate it with potentially significant amounts of regression testing across our integrations.
- Scalability: The topology can only be extended through manual configuration. More CPU could be added to servers HA1 and HA2, but doing so would likely still require careful scheduling, and there is clearly a physical limit to the amount of CPU that can be added. Adding a “server HA3” would typically be a manual exercise, as would removing it when the extra load is no longer present.
- High availability: There is a significant amount of the topology to be built beyond the installation of the runtimes, and much of it relates to high availability. Consider how much of the above diagram is dark blue (labeled “Product components” in the key), meaning it needs to be explicitly installed and configured. This includes setting up load balancing within and across nodes, and enabling high availability and disaster recovery. All of this must be repeated for each environment: development, test, production, and so on. Not only is it significant custom work to create each environment, but there is also a genuine risk of environment configurations drifting out of sync with one another. For sure, there are ways to automate installations and capture them as patterns, but that is in itself additional work.
It is worth recognizing that even with this traditional topology it is possible to make some moves toward the benefits of AIA. For example, it is already possible to split a large installation into a number of separate Integration Servers, each containing a subset of the integrations and administered via a single Integration Node. Many large installations are likely to be using this method of grouping already. It provides some level of isolation between sets of integrations, but not the deep decoupling offered by the containers described later in Path B. Separate servers also enable some degree of independent scaling of integrations, but not to the extent of the automatic, elastic provisioning of new resources that you would get from a container orchestration framework.
Perhaps the most striking point is that in this traditional installation, administration and deployment all require specialist skills. Container-based environments aim to standardize those skills and make them transferable across technologies, such that the only specialist skill required is the one that matters: how to build the artifacts. In our case that means building integration flows. Everything else should be done using tools and capabilities that are common across all the technologies.
We’ll now turn to the cloud-native path and take a deeper look at the benefits that a more fine-grained deployment on containers can bring if we adhere to a true cloud-native deployment style.
Path B: Cloud-native deployment (including runtime upgrade)
On this path we switch completely to a container platform, but more than that, we embrace a true cloud-native style with its associated benefits. Below is a simplified example of the key elements of a cloud-native topology:
When comparing this with the diagram for the traditional topology from Path A, the first thing to notice is that there are far fewer of the dark blue product-specific components, because much of their role is now performed in a standardized way by the container orchestration platform. Indeed, the remaining dark blue boxes are simply references to standardized container image templates (and Helm Chart templates) from which they were built. This demonstrates the level of consistency provided by a containerized approach.
Let’s begin by comparing those same three aspects we discussed at the end of the previous section, then we’ll look more broadly at other benefits of a cloud-native approach.
- Isolation (fine grained deployment): Containers are truly isolated from one another, almost as if they were separate operating system instances. The integrations can be split across multiple containers so that changes to the integration flows, or indeed changes to fix pack versions of the ACE runtime, only affect a very small number of integrations within a given container.
- Scalability (policy-based elastic auto-scaling): Container orchestration frameworks provide elastic scaling capabilities out of the box in a standardized way. They enable automated, dynamic changes to the number of replicas of a container based on defined workload policies (see the autoscaling sketch after this list).
- High availability (auto re-instatement): In a containerized world there are standardized ways to declaratively define an HA topology (Helm Charts). Furthermore, the components that enable the high availability, such as load balancers and service registries, do not need to be installed or configured since they are a fundamental part of the platform. Kubernetes has high availability policies built in, and these can be customized through standard configuration (see the deployment sketch after this list).
- Visibility (platform-based monitoring): With fine-grained deployment comes a larger number of deployed components. Container orchestration systems offer a single way to view the health of components across all types of runtime deployed (i.e. not just integration). Furthermore, common standards such as ELK stacks and Prometheus are emerging as effective ways to add deeper monitoring capabilities. These are often built in to commercial container orchestration offerings such as IBM Cloud Private.
- Repeatable, rapid topology creation (infrastructure as code): In a container orchestration environment you don’t build topologies yourself. The topology requirements are defined declaratively in files that can be stored alongside the code. This ensures that the integrations are always deployed onto a topology that suits their needs. Helm Charts are currently the most common mechanism for providing the logical definition of the requirements of the deployment topology (see the values-file sketch after this list). They define, for example, how a deployment should respond to changes in workload, and leave the orchestration framework to work out how to build that topology and keep it running.
- Cross-environment consistency (image-based deployment, declarative configuration): Containers enable us to draw together the operating system, product binaries, configuration, and the (integration) code into a single immutable image. Furthermore, we can combine that with the infrastructure-as-code definition of the topology. We are then assured that we are deploying exactly the same thing to every environment, from development right through to production.
- Pipeline automation (filesystem-based runtime installation and artifact deployment): The ACE integration runtime can be installed, and integration flows deployed, simply by laying their files on the filesystem. This significantly reduces the specialist knowledge required to create and maintain an automated build pipeline that produces container images, and it shortens image preparation time considerably.
- Operational consistency (common platform administration): A broader benefit of container platforms is that the skills required to operate the environment are the same across all product and language runtimes, not just integration runtimes such as ACE. One of the aims of containers is encapsulation: they should all look essentially the same from the outside. This allows their deployment, scaling, monitoring, and administration in general to be done using standard container platform capabilities, with little need for knowledge of what’s inside the container. This encourages consistent operational practices across the landscape and reduces the number of skill sets that need to be maintained.
- Portability (standards-based container technology): It could be argued that the level of standardization around container technology will make it simpler to move components between platforms, and indeed between your own infrastructure and other cloud infrastructure. However, it should be noted that in this rapidly changing technology space, careful choices would need to be made to ensure that components remain portable.
- Decentralization (technical autonomy for business teams): A truly cloud-native approach allows developers to focus on the creation of artifacts by simplifying build and deployment and standardizing the surrounding administration and operational needs. This in turn makes it possible for teams to get up to speed on new languages and technology more quickly. In the past, we tended to have centralized teams specialized in specific technologies. With greater standardization of the surrounding platform, it is now reasonable to allow teams to become multi-skilled across a range of build technologies rather than relying solely on centralized teams.
- Shift-left (automated, production-aligned testing): Immutable containers and the consistency of the topology build enable developers to create tests and environments that more closely resemble the non-functional aspects of the production environment, such that these qualities of service are built in and tested right from the start.
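To make the high availability point above concrete, here is a minimal sketch of the declarative approach, assuming a plain Kubernetes deployment; the names, image, and health endpoint are hypothetical, and in practice a Helm Chart would typically generate resources like these. The desired number of replicas is declared in a single line, the platform replaces any container that fails its probe, and the Service provides the load balancing that previously had to be installed and configured explicitly.

```yaml
# Minimal sketch: an HA pair of ACE servers declared rather than hand-built.
# Names and image are hypothetical; 7800 is the default ACE HTTP listener port.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ace-orders
spec:
  replicas: 2                    # the "HA pair" of Path A, declared in one line
  selector:
    matchLabels:
      app: ace-orders
  template:
    metadata:
      labels:
        app: ace-orders
    spec:
      containers:
        - name: ace-server
          image: registry.example.com/ace/orders:1.0.0   # immutable image
          ports:
            - containerPort: 7800
          livenessProbe:         # failed containers are re-instated automatically
            httpGet:
              path: /healthz     # hypothetical health endpoint exposed by a flow
              port: 7800
            initialDelaySeconds: 60
            periodSeconds: 10
---
# The Service load-balances across whichever replicas are currently healthy.
apiVersion: v1
kind: Service
metadata:
  name: ace-orders
spec:
  selector:
    app: ace-orders
  ports:
    - port: 7800
      targetPort: 7800
```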
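Similarly, the elastic scaling described above is a declarative policy rather than a manual exercise. The sketch below, again using the hypothetical names from the previous example, asks Kubernetes to keep between two and ten replicas of the deployment, scaling on CPU utilization. The “server HA3” that was a manual task in Path A is now created, and later removed, automatically.

```yaml
# Minimal autoscaling policy for the hypothetical deployment above.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: ace-orders
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ace-orders
  minReplicas: 2                      # never fewer than an HA pair
  maxReplicas: 10                     # hard ceiling on elastic growth
  targetCPUUtilizationPercentage: 70  # add replicas when average CPU exceeds 70%
```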
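Finally, to illustrate the infrastructure-as-code and cross-environment consistency points, a Helm Chart is typically parameterized by a values file stored alongside the code. The keys below are hypothetical, since each chart defines its own, but the pattern is common: one base file captures the topology requirements in version control.

```yaml
# values.yaml — base topology requirements (hypothetical keys for a sketch chart)
replicaCount: 2
image:
  repository: registry.example.com/ace/orders
  tag: 1.0.0                 # the same immutable image in every environment
resources:
  requests:
    cpu: 500m
    memory: 512Mi
autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 10
```

A small per-environment override file would then change only what genuinely differs, for example larger resource requests in production, which keeps environments demonstrably in sync rather than hoping manually built topologies have not drifted apart.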
The list above really just provides a taster; we will expand on these benefits in future posts with deeper practical insight into how they are achieved.
Conclusion
Containerization is not mandatory for moving to v11 of App Connect Enterprise. You can upgrade the runtime only, retaining your existing Integration Node/Integration Server topology, and benefit from many fundamental enhancements in the new version.
However, containerization and the associated move to a more cloud-native approach have many advantages, including a simpler build and deployment pipeline, isolation and decoupling between integrations, consistency across environments, portability, standardized administration and monitoring, and common capabilities to enable non-functional characteristics such as scaling and high availability.
For more information on migration paths from specific versions of the product, see the App Connect Enterprise V11 migration approach.
Acknowledgements
Many thanks to Tony Curcio, Andy Garratt, Ben Thompson, and Len Thornton for their various inputs into this post.