B2B Integration


Sterling OpenShift Certified Containers in the Cloud series: Part 3 - Autoscaling (Second Part)

By Carlos Chivardi posted Wed December 23, 2020 04:04 PM

  

Co-authored by Adolfo Loustaunau and Brian Hall

Make your business more agile, efficient and resilient with IBM Sterling OpenShift Certified Containers

Autoscaling – Part 2

As mentioned in the previous blog posts, this is the third part of a series that describes the different use cases and setup of the Sterling Red Hat OpenShift (RHOS) Certified Containers. The series provides an overview of the different technology aspects of these containers and shows how you can take advantage of the new capabilities that the RHOS platform provides for Sterling B2B Integrator (B2Bi) and Sterling File Gateway (SFG).

This is the second of two posts that focus on autoscaling, one of the major advantages of running Sterling solutions in a Kubernetes environment. The previous post covered how to set up OpenShift to auto-scale B2Bi/SFG. This post covers how to configure B2Bi/SFG to enable autoscaling, describes the different methods to test the autoscaling functionality with Kubernetes, and summarizes the business value of using this functionality.

Let’s start this blog post by configuring B2Bi to enable auto-scaling, which is very easy to do.

Sterling B2B Integrator configuration 

OpenShift’s Horizontal Pod Autoscaler (HPA) allows us to auto-scale the B2Bi containers. In addition, Sterling B2B Integrator version 6.1.0.0 adds autoscaling support for the server adapters, including the FTP, FTPS, SFTP, and C:D adapters. To pre-configure autoscaling on the communication protocol server adapters, do the following:

 

  1. Go to the server adapter (for example, the SFTP Server Adapter) under Deployment – Services – Configuration.
  2. Check the new “All AC” option to run this server adapter on all of the Adapter Containers in the OpenShift cluster.
  3. Ensure you have a Host Identity Key.
  4. Specify the SFTP Server Listen Port. This port will be used by the OpenShift AC Service.
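
For reference, the HPA itself (covered in the previous post) can also be created or verified from the command line. The following is a minimal sketch, assuming the ASI servers run as a StatefulSet named my-release-b2bi-asi-server (the resource name from our example release; verify yours with oc get statefulset) and a 40% CPU target:

# Create an HPA targeting 40% average CPU, scaling between 1 and 3 pods
oc autoscale statefulset/my-release-b2bi-asi-server --min=1 --max=3 --cpu-percent=40

# Confirm the HPA exists and check its current vs. target utilization
oc get hpa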

 

Simulating High-Volume Usage Scenarios

The main driver of this autoscaling exercise is the ability to apply a stimulus to the CPU usage of the B2Bi RHOS cluster and observe the conditioned response (pods scaling up or down).

Sending this controlled stimulus can be achieved in different ways. This document describes four different approaches to demonstrate OpenShift’s autoscaling functionality with B2Bi containers. The autoscaling method we’re using is based on CPU utilization, as it is the easiest to simulate. The four approaches we used to simulate high volumes are the following:

  • Satya’s endless loop method
  • JMeter stress test method
  • Arif’s autoscaling method
  • Landa’s SFTP script

Here’s a general description of how to implement each of these methods:

 

1. Satya’s Endless Loop Method

This is a very simple and effective way to increase CPU usage: just access the terminal in one of the B2Bi pods and issue the following endless loop command:

 

yes > /dev/null &

 

Usually, issuing this command 3 or 4 times, combined with lowering the threshold in the Horizontal Pod Autoscaler configuration to 40% of CPU time, will cause the pods to scale up within a couple of minutes. Make sure to configure a reasonable maximum number of pods in your OpenShift HPA.
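
If you prefer to do this from your workstation instead of the OpenShift web console, a minimal sketch (assuming a pod named my-release-b2bi-asi-server-0, as in the Results section below) looks like this:

# Terminal 1: open a shell in one of the B2Bi pods (list pod names with "oc get pods")
oc rsh my-release-b2bi-asi-server-0
yes > /dev/null &
yes > /dev/null &
yes > /dev/null &

# Terminal 2: watch the HPA react as average CPU utilization climbs past the threshold
oc get hpa -w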

 

To allow your system to scale down, you can use the following commands to get the PIDs of the endless loop processes and then stop them. Once the CPU utilization goes down, OpenShift will reduce the number of pods back to the original number within a few minutes.

 

ps -ef | grep yes

kill -9 <process ID>

 

Note: Be very cautious while using this method, as it will result in 100% CPU utilization and could potentially affect your whole cluster, so make sure that you terminate these processes or pods.

 

 

2. JMeter stress testing method

Using a stress testing tool like Apache’s open-source JMeter helps simulate the calls for both file transfer and EDI development processes, generating high volumes of data so you can observe how the system reallocates resources internally. This simulation helps establish benchmarks for autoscaling with data and processes similar to what the current system is processing. You can download JMeter from the Apache JMeter website. For this method, you will ideally create an SFTP Server adapter along with a corresponding user (SFG partner) and mailbox. If you need to replicate more intensive CPU usage, you can configure a business process that is triggered as files are uploaded into the SFTP mailbox (Figure 1). This business process can be a simple passthrough, or it can re-route or transform a file.

Figure 1 JMeter SFTP Stress Test UI
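
Once the test plan is ready, you can run it repeatedly in JMeter’s non-GUI mode. The sketch below assumes a hypothetical test plan file named sftp_stress.jmx that uploads files through an SSH/SFTP sampler plugin and reads its thread count from a JMeter property:

# Run the test plan headless: -n non-GUI mode, -t test plan, -l results log, -J set a property
jmeter -n -t sftp_stress.jmx -l results.jtl -Jthreads=50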

3. Arif’s Autoscaling setting method

This is the simplest method for a quick view of the autoscaling functionality. You can configure the Horizontal Pod Autoscaler with a CPU threshold that is below the current utilization rate. This triggers OpenShift to immediately scale up the number of pods until the average utilization rate drops below the configured CPU utilization threshold (Figure 2).

Figure 2 HPA Settings to change manually
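
A minimal command-line sketch of this change, assuming an HPA named my-release-b2bi-asi-server created with the autoscaling/v1 API, would be:

# Lower the CPU target below the current average utilization to force an immediate scale-up
oc patch hpa my-release-b2bi-asi-server -p '{"spec":{"targetCPUUtilizationPercentage":10}}'

# Restore the original target (for example 40%) once the extra pods have started
oc patch hpa my-release-b2bi-asi-server -p '{"spec":{"targetCPUUtilizationPercentage":40}}'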

4. Landa’s SFTP script

This is another effective method to stress a B2Bi OpenShift cluster. You need to set up an SFTP Server adapter, a user, and a mailbox to use this method. Figure 3 shows the setup in a Microsoft Windows 10 Professional environment, but you can use this method on other OS platforms as well.

Figure 3 Landa SFTP Script to perform Stress Test

The master script (launchBig.cmd) sets up a loop based on the number of files written to the “Hugefiles” folder. Within this loop, another script (sendFile) is executed. This script uploads a file into the SI SFTP Server mailbox, causing an increase in CPU usage. As more files are uploaded, the cluster HPA eventually starts another pod.
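
The original scripts are Windows CMD files, but the same idea can be approximated on any platform with a small shell sketch using sftp batch mode. The host name, port, user, and file size below are placeholders, not the values from the original script, and the partner user is assumed to authenticate with an SSH key (batch mode is non-interactive):

# Create a reasonably large test file and a batch file that uploads it
dd if=/dev/urandom of=hugefile.bin bs=1M count=50
echo "put hugefile.bin" > batch.txt

# Upload the file in a loop to the SFTP Server adapter mailbox to drive up CPU usage
for i in $(seq 1 100); do
  sftp -P 10022 -b batch.txt partner_user@b2bi.example.com
done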

Results

Once you execute any of the stress methods listed above, you can see the autoscaling results on the Grafana or native RHOS dashboards. Figure 4 shows the average CPU usage of the “my-release-b2bi-asi-server-0” pod. The green area on the graph shows the CPU usage while only the initial pod is running. As the stress test progresses, a second pod’s CPU utilization appears in yellow, and the workload is now shared by the two pods. After the stress test finishes, there is a grace period where the second pod is still up and running; eventually, the HPA returns to the initial state (equal to the configured minimum pod replicas).

Figure 4 Autoscaling CPU Usage during a Stress Test
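
If you don’t have the Grafana dashboards at hand, the same behavior can be observed from the command line; a quick sketch:

# Watch the replica count and current vs. target CPU utilization as the HPA reacts
oc get hpa -w

# Check per-pod CPU and memory consumption (requires cluster metrics to be available)
oc adm top pods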

Conclusion

In conclusion, the Sterling B2Bi/SFG OpenShift Certified Containers open the door to a whole new set of possibilities by providing ways to improve and expand their current functionality to behave like a cloud-native solution. They enable the adoption of a hybrid cloud model supported by the OpenShift platform.

 

In the last two posts, we covered the autoscaling functionality, the most obvious advantage of running in a Kubernetes environment that is smart enough to allocate resources dynamically and efficiently, without the limitations of manual processes and static configuration setups.

Autoscaling not only provides a better allocation of the underlying infrastructure resources, but also reduces slowdowns in the system due to unexpected peak loads, placing resources where they are needed most.

 

As a result of containerizing the solution, the individual pods are more targeted to specific types of loads and processes, which makes them more efficient to run and manage. Containerization also reduces the size of the software components and shortens the time it takes for them to load and become ready and available.

 

To allow autoscaling, the B2Bi/SFG OpenShift containers need to be more flexible, so static remote perimeter servers can now be replaced with more dynamic OpenShift services. This makes the system more efficient, and it no longer requires manual intervention to expand the available resources to meet high-throughput demand.

 

So, in short, by demonstrating autoscaling you can show the following business value to your company:

 

  1. More efficient utilization of resources and infrastructure, which reduces costs.
  2. Better ability to scale the available resources to meet demand dynamically, which reduces potential bottlenecks or failed processes/transfers, along with the corresponding missed SLAs or lost revenue.
  3. Less human intervention to adapt to utilization, which reduces costs and time to market.

 

 




#SupplyChain
#B2BIntegration