Sterling Managed File Transfer


The IBM Sterling B2B/MFT Portfolio for Hybrid Cloud – Containerization of IBM Sterling B2B Integrator/File Gateway

By Vince Tkac posted Mon October 12, 2020 10:42 AM

  
Cross posting from https://www.ibm.com/cloud/architecture/files/IBM-Sterling-B2B-Containerization.pdf for visibility.

 

 

The IBM Sterling B2B/MFT Portfolio for Hybrid Cloud – Containerization of IBM Sterling B2B Integrator/File Gateway

 

An IBM Solution Brief

 

Vince Tkac

IBM Senior Technical Staff Member and Architect

IBM Sterling

 

 

 

 

 

September, 2020

Moving from on-premises enterprise software to hybrid cloud

Through the years, software distribution, specifically how application vendors get application features to users, has evolved and expanded with the overarching intent of giving users options: preloaded hardware, installable software packages, software-as-a-service, virtual machine images, Docker repositories, Cloud Paks and now app stores and marketplaces. You need the flexibility to run the application where you want and to update infrastructure as you need.

That flexibility can result in many different run-time combinations of environment, Java runtime, operating system (OS) and libraries or other dependencies installed at the OS level. Containers allow us to isolate the application and reduce the number of runtime combinations, and thus the errors that result from environment differences and inconsistencies. Containers also make installation and patching significantly faster.

Lift and Shift

Many enterprises already have, or are in the process of creating, a container and cloud strategy. “Lift and shift” is often the first step in that journey, allowing you to take existing enterprise software and run it in a cloud virtual machine (VM) or container. Below [see Figure 1] is an example of a “lift and shift” deployment using traditional software installs and VMs. This picture would look very similar whether it is deployed in an on-premises data center or in a public cloud, making “lift and shift” an easy first step.


Figure 1. “Lift and shift” deployment example using traditional software installs and VMs.


Custom Containers and Vendor Supplied Containers

Containers are the next refinement beyond VMs and provide many more benefits. Container and cloud strategy is driven by the need to reduce both human and infrastructure costs while increasing agility. You are likely already using containers somewhere in your business. Custom containers are containers you build yourself using standard software install practices. Vendor-supplied containers come from the vendor with the software pre-installed.

You could be using custom containers with IBM Sterling B2B Integrator or IBM Sterling File Gateway. If you are, the process you follow is to get the latest base OS, install the software, upgrade to the latest fix level and then push that image. You might even be using VMs in the cloud (like above) where you have to rebuild the entire VM. There is value in both of these options, but the process isn’t trivial. IBM Sterling B2B Integrator Certified Containers and IBM Sterling File Gateway Certified Containers provide a better option for achieving your container and cloud goals. 

Certified Containers and Container Management

The Sterling B2B Integrator/File Gateway 6.1 Certified Containers release comes with Red Hat certified container images and IBM certified helm charts that help to install or upgrade the application faster with greater ease and flexibility.

 

Helm charts are used for package management to help define, install and upgrade all the components in a Sterling B2B Integrator/File Gateway Certified Containers release. Helm is a tool used to generate and deploy manifests for the objects used in Kubernetes. The manifests themselves describe the expected final state of the cluster (e.g., “3 replicas”) and Kubernetes does what is necessary to make that a reality.
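Helm’s role can be pictured with a minimal sketch. The Deployment below is illustrative only (the names, labels and image reference are hypothetical, not taken from the certified helm charts); it shows the kind of declarative manifest Helm renders and Kubernetes reconciles:

```yaml
# Hypothetical rendered manifest: declares the desired end state
# ("3 replicas of the ASI container"); Kubernetes creates, replaces
# or removes pods until reality matches this declaration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: b2bi-asi
spec:
  replicas: 3
  selector:
    matchLabels:
      app: b2bi-asi
  template:
    metadata:
      labels:
        app: b2bi-asi
    spec:
      containers:
        - name: asi
          image: example.registry.io/b2bi/asi:6.1   # placeholder image
```

If a pod crashes or a worker node is drained, Kubernetes notices that only two replicas remain and schedules a replacement to restore the declared state.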

 

IBM Certified Containers are enterprise-grade, secure product editions running on Red Hat® OpenShift®. They are built and packaged following IBM and Red Hat’s recommended best practices to meet security and support requirements. IBM Certified Containers allow customers to quickly obtain software from a catalog, walk through a simple installation experience guided by logical defaults and helper text, and easily deploy production-ready enterprise software. You can choose to run on private, hybrid, or public infrastructure and will see improved efficiencies and flexibility.

 

The Red Hat OpenShift Container Platform (RHOCP) is a platform for developing and running containerized applications. Through its console and tools [see Figure 2], it allows for:

 

  • Simplified application deployment at scale
  • Maintenance with zero-downtime upgrades
  • Auto-scaling with standardized deployment across all environments
  • Faster startup and new instance creation
  • Log aggregation and monitoring
  • Improved resiliency through monitoring and restarting failed instances
  • Infrastructure optimization through capacity scaling and reduced compute resources
  • Support for various cloud providers by acting as an abstraction layer between the applications and the cloud providers

 

 

 

Figure 2. The RHOCP console.

 

Container Architecture

As we move to container deployments, we change the architecture to be more in line with microservices. There are a number of requirements to call something a microservice, but the important ones in this context are:

  • Independence – fully independent services from a code and data store perspective. Interactions are done only over well-defined APIs (REST or message queue). We aren’t there yet, but more on that later.
  • Scalability – the service must be able to seamlessly scale up to meet demand.
  • Observability – the service must provide a view into its health, readiness and availability.

The first step is to take an inventory of the processes running inside a traditional install and break those processes out. Processes become containers, which we replicate for high availability via Kubernetes deployments and jobs. Deployments have many replicated instances to cover failure and scaling scenarios. An individual container in a deployment needs to be small and start up fast. This is a shift from the traditional install world, where you had to vertically scale all of the processes in a full cluster node up and down together.

In the Sterling B2B Integrator 6.1 Certified Containers release, we are breaking processes out into containers for Adapter Container (AC), ASI, purge, Perimeter Server, REST and myFileGateway. All of these existed prior to 6.1 and could be used outside of certified containers. The 6.1 architecture is leveraging them to better take advantage of Kubernetes and Red Hat OpenShift.

A traditional software install comprises one or more cluster nodes containing processes for ASI (where the engine, UI, translation, services and adapters run), Ops Server, AC, Command Line Adapter, External Purge, REST APIs and myFileGateway. If you found yourself needing another AC, you had to add an additional cluster node with duplicates of many of these same processes. You may not have needed more engine or translation capacity, but that was the only way to scale up the AC.

In contrast, a microservice deployment comprises loosely coupled deployments that can be scaled up and down independently. If you need additional SFTP capacity, for example, you scale up only the AC. This reduces your costs, simplifies your deployment topology and increases your ability to react quickly.
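As a sketch of what independent scaling looks like in practice (the value keys below are illustrative, not the actual chart schema), a helm values override could raise only the AC replica count:

```yaml
# Hypothetical helm values override: add protocol-handling capacity
# by scaling the Adapter Container alone, leaving ASI untouched.
asi:
  replicaCount: 2   # engine/translation capacity unchanged
ac:
  replicaCount: 5   # extra SFTP endpoint capacity
```

Applied with `helm upgrade`, only the AC deployment changes; no new cluster node and no duplicated engine processes.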

Install, patch, update, security patches, security updates, emergency fixes…repeat

So, how does this relate to software distribution? Whether you are doing infrastructure and security patches or software updates to Sterling B2B Integrator, the container model helps you get the latest releases and fixes faster and with less downtime. Staying current with security fixes is a critical activity for any enterprise.

 

With containers, software is pre-installed in the image. There are no install or patch steps. When a new version of a container is ready (for example, with monthly security updates), you simply update the deployment to the new container image/version and start it up. If there is an issue, you can shut down the pod pointing to the new version of the image and start one pointing to the previous version. Updates and patches are easier, faster and generally occur without interruption. Keeping the system up to date with monthly security patches won’t require significant effort. OS and infrastructure patches can be done in a rolling manner with the cluster live.
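As an illustrative sketch (the image tag and deployment name are hypothetical), rolling to a patched image is a one-line change in the Deployment spec:

```yaml
# Hypothetical fragment of a Deployment spec: pointing at the patched
# image triggers a rolling update, replacing pods one at a time.
spec:
  template:
    spec:
      containers:
        - name: asi
          image: example.registry.io/b2bi/asi:6.1.0.1   # was 6.1.0.0
```

If the new image misbehaves, `kubectl rollout undo deployment/b2bi-asi` returns the deployment to the previous image without a reinstall.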

Scaling and resiliency


Figure 3. Deployment architecture example.

Another benefit of the container architecture is the ability to scale and handle outages. The previous section talked about having multiple instances of a container through the use of replicas. Those replicas are spread across multiple worker nodes [see Figure 3] to allow for failures or maintenance on individual worker nodes without causing an outage. This is a function of the Kubernetes container management system and not something you have to consciously deal with.

Additionally, with cloud providers like IBM Cloud, workers can be spread across multiple datacenters (called availability zones). Each zone has fully independent network, power and disk infrastructure, again allowing for seamless maintenance without interruption. A resilient cloud deployment would include multiple workers per zone and multiple zones per region.
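One way to express that zone spreading is a topology spread constraint in the pod template; the sketch below uses the standard Kubernetes zone label, but the app name is hypothetical:

```yaml
# Hypothetical pod-template fragment: spread replicas evenly across
# availability zones so a single-zone outage leaves replicas running.
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app: b2bi-ac
```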

The Sterling B2B Integrator 6.1 reference architecture uses a single region to keep latency down and allow database connectivity to be fast.

Services and routes

Figure 4. Deployment architecture with service layers highlighted in green.

We use Kubernetes services and RHOCP on IBM Cloud to achieve scaling and resiliency, though these patterns can be applied to Red Hat OpenShift running on any cloud platform. Deployments are scaled through the use of a replica count. Replica configuration is done through min and max settings, allowing the container platform to adjust as needed. If a replica count is increased, a new pod is created. If a replica count is decreased, a pod is shut down. As we increase or decrease replicas to scale, or even during maintenance, the container management platform is responsible for keeping track of which pods are up and ready. The platform only sends requests to pods that are healthy and ready.

The green boxes [see Figure 4] highlight the service layers set up in the Software Defined Network (SDN). A service layer groups together all the pods serving as replicas for a given deployment. Anyone needing to connect to that deployment is load balanced to any of the available pods.
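A Kubernetes Service expressing that grouping might look like the following sketch (the names and port are hypothetical):

```yaml
# Hypothetical Service: one stable address in the SDN that load
# balances across every ready AC replica pod matching the selector.
apiVersion: v1
kind: Service
metadata:
  name: b2bi-ac
spec:
  selector:
    app: b2bi-ac      # matches all AC replica pods
  ports:
    - name: sftp
      port: 2222
      targetPort: 2222
```

Pods are added to the Service’s endpoint list only once their readiness probes pass, which is how the platform avoids sending traffic to pods that are still starting.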

For example, if as a result of load, the container management platform recognizes the need to scale up the AC, it would create a new pod with the AC image. When that pod starts up and reports ready, the pod would be added to the AC service. Anyone needing to connect to an AC pod would seamlessly start connecting to that pod via the load balancer rotation. 

Each deployment in the system that needs to scale follows this same pattern. By relying on this model in the container management platform, we eliminate the need for manual firewall setup and load balancer configuration as the system scales. 

The cluster network is closed and isolated except for specific entry points configured to allow access. These entry points are called ingress or routes. Load balancing for external connections is done at the container manager level via an ingress or route. An external firewall and load balancer can also be used. 
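An OpenShift Route is one such entry point; a minimal sketch (names hypothetical) might look like:

```yaml
# Hypothetical Route: an externally reachable path into the otherwise
# closed cluster network, forwarding to a backing Service.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myfilegateway
spec:
  to:
    kind: Service
    name: b2bi-myfg
  tls:
    termination: passthrough   # TLS terminated by the backing pod
```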

Component Connectivity
Figure 5. Component Connectivity



Architecture changes to note

If you are familiar with IBM Sterling B2B software, you likely noticed a couple of things in the reference architecture:

  1. Addition of the adapter container (AC)
  2. Removal of the remote perimeter server (PS)

More use of Adapter Container (AC)

The AC existed prior to the 6.1 certified containers and can be used outside of a certified container. The certified container architecture leverages ACs to better take advantage of Kubernetes and RHOCP. Because we want many replicas, we need to reduce the size of containers. One way we are doing that is to split any server adapters off into the adapter container. This change provides a separate lifecycle for the protocol endpoints, allows them to start and be available faster than the full ASI node and reduces ASI node startup time.

The AC talks directly to the database for most activity and depends on the ASI node only to execute business processes in the engine. Business process (BP) execution is considered a separate stage of the overall process, and the interactions between AC and ASI are done asynchronously via a message queue. The AC, and any protocol endpoints running in the container, will keep running even if the ASI service is unavailable or slow.

Removal of Perimeter Server (PS)

Historically, remote PSs provided three things:

  1. the ability to fan in or multiplex connections,
  2. the ability to create new listening adapters without new firewall rules, and
  3. the ability to initiate connections internally from the secure zone to the DMZ.

Each of these is covered by the microservice architecture and container management system.

Fan-in of connections came from the need to have a single node handle thousands of connections. In a traditional architecture, a single node struggled to do this, as it would run out of open connections. Multiple PS nodes were set up to handle those connections and multiplex them down over a tunnel to an ASI process. In the microservice architecture, we already have multiple ASI and AC nodes ready to deal with the connection load. PS functionality is replaced by having multiple instances of AC and through RHOCP self-service network configuration.

The need for new adapters without firewall updates was driven by the separation of the Sterling B2B Integrator/File Gateway admin from the infrastructure network team. In a traditional deployment, different teams, and often different parts of the enterprise, managed the infrastructure layer, so responses could be slow. In the container model, with a container management platform like RHOCP, this setup is in the hands of the Sterling B2B Integrator/File Gateway deployer through the use of the Software Defined Network (SDN) via routes. Inbound ports can be opened through the SDN without involving the network team.

Similarly, the need for separate physical DMZ hardware and strict connection controls can be handled by the SDN using network policies to control which pods can connect to which other pods at a granular level in the container network.
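A NetworkPolicy sketch of that DMZ-style control (the pod labels are hypothetical) could look like:

```yaml
# Hypothetical NetworkPolicy: only Secure Proxy pods may open inbound
# connections to AC pods; all other pod-to-AC traffic is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ssp-to-ac
spec:
  podSelector:
    matchLabels:
      app: b2bi-ac        # policy applies to AC pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: ssp     # only Secure Proxy pods may connect
```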

As a result, the reference architecture removes remote PS from both IBM Sterling Secure Proxy for inbound traffic and ASI for outbound traffic. By removing the remote perimeter server requirement, we greatly simplify the deployment and enable simpler auto-scaling.

If you are deploying on-premises or in a network architecture that requires multiple layers, a remote PS is still fully supported and can be used. PS is available as a container. Some additional work is needed to scale in this environment as the remote PS needs to be known to the system via registration.

Pods are Immutable

Unlike traditional installs, the runtime in the container environment should be considered immutable. Changes are not made to pods directly. You can no longer directly update a property/classpath or add a third-party jar (as examples). Those changes must be done in such a way that the container management system can apply them to new containers as those new containers are needed. As a result, all configuration is done either through the helm chart or via configuration stored in the shared database. 

Much of the Sterling B2B Integrator configuration was already stored in the database. That has been expanded to include all property files, third-party jars and Sterling B2B Integrator installable packages that should be applied to a pod as it starts up. A new customization UI and REST APIs are available for this configuration.  Performance tuning configuration and settings are provided in the helm charts.
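As an illustrative sketch only (these keys are not the actual chart schema), tuning supplied through helm values flows identically into every replica:

```yaml
# Hypothetical values fragment: configuration lives in the chart, not
# in any individual pod, so each new replica starts identically tuned.
asi:
  jvmOptions: "-Xmx4g"      # illustrative JVM tuning knob
  replicaCount: 2
  resources:
    requests:
      cpu: "2"
      memory: 8Gi
```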

IBM Cloud Deployment

Sterling B2B Integrator and Sterling File Gateway Certified Containers can be utilized as standalone containers or on top of the Red Hat OpenShift Container Platform in any cloud environment. Red Hat OpenShift Kubernetes Service (ROKS) on IBM Cloud has all of the components required to deploy Sterling B2B Integrator for high performance and resiliency.

 

Future Architecture

Future enhancements planned for Sterling B2B Integrator and Sterling File Gateway Certified Containers include:

  • RHOCP Operators in addition to the helm charts that are supported today
  • Decomposition of the system into more microservices
  • Fully independent microservices, allowing for more resiliency in the overall system
  • Continued reduction of container sizes and startup times
  • A move from expensive block storage to cheaper, easily replicated object storage

Licensing Considerations

Licensing is done via Virtual Process Core (VPC) and there is a conversion rate from Processor Value Unit (PVU) to VPC. Work with your account team to get specifics.

You can use RHOCP not only to limit VPC usage, allowing for scale-out, but also to work within your VPC license limit. Some containers, like purge and REST, are not chargeable VPCs.

Additional notes when sizing for IBM Cloud

When sizing for an IBM Cloud deployment, you must size for the entire infrastructure need, including non-chargeable components like purge, REST and SEAS, as well as external components like MQ, the database and Portworx (if running containers) or MQ as a service, Db2 as a service and COS (if running as a service).

Infrastructure Sizing




Conclusion

As organizations move from on-premises enterprise software to hybrid cloud, flexibility is a must. If you are a new customer to Sterling B2B Integrator or Sterling File Gateway and don’t have scaling and high availability needs, start simple with two workers in a single zone. Get value quickly and you will have the ability to scale up later. 

If you are an existing enterprise customer with high scalability needs, a multi-zone deployment is right for you and provides room to grow. 

Wherever you are in your cloud and container journey, Sterling B2B Integrator Certified Containers and Sterling File Gateway Certified Containers can help you on your next steps. Keep up to date on infrastructure and application patches while improving scaling and resiliency. Talk to your sales representative or business partner today.

 

Contributors

Special thanks to Ryan Wood, Steve McDuff and Nikesh Midha for contribution and review of this document.


 

 


© Copyright IBM Corporation 2020.

 

IBM, the IBM logo, and ibm.com are trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at https://www.ibm.com/legal/us/en/copytrade.shtml, and select third party trademarks that might be referenced in this document are available at https://www.ibm.com/legal/us/en/copytrade.shtml#section_4.

 

This document contains information pertaining to the following IBM products, which are trademarks and/or registered trademarks of IBM Corporation:

  • IBM® Sterling B2B Integrator
  • IBM Sterling File Gateway
  • IBM Cloud
  • IBM DB2
  • IBM MQ
  • Red Hat OpenShift Container Platform

 

 


 

All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.


#DataExchange
#IBMSterlingB2BIntegratorandIBMSterlingFileGatewayDevelopers

Comments

Mon February 15, 2021 07:16 PM

Hi @chad petrie, as a reference architecture, this would look the same for AWS. As we have specific deployment docs for different cloud infrastructures, we will publish additional articles with more details. I expect an IBM Cloud detailed doc this quarter.

Mon February 15, 2021 06:19 PM

Is it possible to modify this to an Azure deployment? Specifically with AKS? If not, where would one find the reference architecture for an Azure deployment?

Mon November 23, 2020 05:17 AM

is it possible to increase the quality of the image?

text is not always readable