Sterling Managed File Transfer

Cloud native and Microservice Journey for B2Bi, SFG, Global Mailbox and B2B Advanced Comms Functionality

By Vince Tkac posted Mon June 07, 2021 03:01 PM

  

Disclaimer

  • IBM’s statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at IBM’s sole discretion.
  • Information regarding potential future products is intended to outline our general product direction and it should not be relied on in making a purchasing decision.
  • The information mentioned regarding potential future products is not a commitment, promise, or legal obligation to deliver any material, code or functionality. Information about potential future products may not be incorporated into any contract. The development, release, and timing of any future features or functionality described for our products remains at our sole discretion.

Introduction

This document lays out a rough direction and steps for the Sterling B2B portfolio to progress to a cloud native / hybrid cloud architecture.  This will happen across multiple releases and provide customer value at each release. 

There will be two major tracks of work.  First, enablement and adoption of various cloud technologies (Object Storage, Operator, EFK, Vault…).  Second, progression to a microservice based architecture. 

The goal is a scalable, resilient architecture that is easy to maintain and upgrade, and that utilizes the best cloud infrastructure options.

The intent is not to rebuild the portfolio from the ground up.  B2Bi will not be "microservice first".  We are following a "monolith first" pattern (see https://www.martinfowler.com/bliki/MonolithFirst.html).  B2Bi falls under the moniker of a "monolith that [has gotten] too big" and needs to progress towards microservices for individual components.  Over time, the majority of B2Bi as we know it today will be replaced with microservices.  The remaining monolith components will be greatly reduced and can be fully replaced at that point.  All of this will reduce existing customer pain points around resilience, scaling, patching, monitoring, and management.

Enablement and Adoption of Cloud Technologies
Certified Containers and Red Hat Open Shift Container Platform provide us a foundation for our cloud technology adoption.  Container management, docker and Kubernetes are all implied and supported.  Additional cloud technology must be purposefully adopted by the application.  These technologies can make a huge difference is how successful an application is in the cloud.  A great example is the ability to use object storage instead of a mounted file system.  Object storage is less expensive, more resilient, and easier to setup.  B2Bi/SFG will be providing this as a payload storage alternative to database and filesystem.  It will also be a storage option in Global Mailbox.

 

Technologies on the roadmap:

  • EFK (Elasticsearch, Fluentd and Kibana) stack for logging
    EFK will provide cluster-wide consolidated logging across all the containers in the cluster, a UI for log searching/viewing as well as visualizing through dashboards.
  • Object Storage (IBM Cloud Object Storage or Amazon S3 or others)
    (mentioned above)
  • Helm for install and upgrade in any Kubernetes platform
  • Operators for install, upgrade, maintenance, and monitoring in Red Hat OCP
    (see below)
  • Kafka for queueing of visibility data and workflow jobs
    Horizontal scaling, multi-zone, producer and consumer scaling, queue persistence outside of the workflow node.
  • NoSQL / Doc Centric DBs
    Horizontal scaling, multi-zone, multi-region replication options and faster performance for non-relational patterns.
  • Vault for key, password and cert management
  • CI/CD capabilities for resource deployment and promotion post install
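To make the Kafka item concrete, here is a minimal Python sketch of how visibility events might be keyed so that all events for one workflow stay ordered on a single partition.  The partition count, event names, and hashing scheme are illustrative assumptions; a real deployment would simply pass the workflow ID as the message key and let the Kafka producer do the partitioning.

```python
import hashlib

NUM_PARTITIONS = 12  # hypothetical partition count for a visibility topic

def partition_for(workflow_id: str, num_partitions: int = NUM_PARTITIONS) -> int:
    """Map a workflow ID to a partition.  A stable hash keeps every event
    for one workflow on the same partition, preserving per-workflow order
    while still allowing consumers to scale across partitions."""
    digest = hashlib.sha256(workflow_id.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# All events for the same workflow land on the same partition:
events = [("wf-1001", "WF_STARTED"),
          ("wf-1001", "STEP_DONE"),
          ("wf-1001", "WF_COMPLETED")]
partitions = {partition_for(wf_id) for wf_id, _ in events}
```

Keying by workflow ID is what makes horizontal consumer scaling safe: each consumer sees a complete, ordered history for the workflows it owns.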

 

Operators
Operators are an important item on our technology roadmap.  A Kubernetes operator is an application-specific controller that extends the functionality of the Kubernetes API to create, configure, and manage instances of the application.

An Operator allows us to provide a standard programming interface for managing our application suite.  That programming interface is then used by the OCP Operator Console to help users easily manage and maintain our application.  This is done through a Kubernetes Custom Resource (CR). 
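As an illustration only, a B2Bi Custom Resource might carry fields along the lines below, expressed here as the Python dict you would hand to the Kubernetes API.  The API group, kind, and spec fields are hypothetical, not the product's published CRD schema.

```python
# A hypothetical Custom Resource for a B2Bi deployment.  Field names are
# illustrative; the actual CRD schema ships with the certified operator.
b2bi_cr = {
    "apiVersion": "example.ibm.com/v1",   # placeholder API group/version
    "kind": "B2BIntegrator",
    "metadata": {"name": "b2bi-prod", "namespace": "sterling"},
    "spec": {
        "version": "6.1.0",
        "replicas": {"asi": 3, "ac": 2},          # engine / adapter nodes
        "storage": {"payload": "objectStorage"},  # vs "database" or "filesystem"
        "database": {"host": "db.sterling.svc", "port": 5432},
    },
}
```

The operator watches resources of this kind and reconciles the cluster toward the declared spec, which is what enables the install, upgrade, and lifecycle levels described next.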

The Operator interface provides a number of functions that are grouped into levels (source: https://operatorframework.io/operator-capabilities/).

 

Level 1. Install and Configure - provisioning and configuration management

  1. Install and configure the application
  2. Apply the DB schema
  3. Create default accounts and certificates
  4. Change application configuration settings

Level 2. Seamless Upgrade - patch and minor upgrades supported

  1. Upgrade the application to a new container version
  2. Upgrade individual components without outage where possible (rollout)

Level 3. Lifecycle - Adoption lifecycle (backup, failure, recovery)

  1. Backup/restore (export/import) of config data
  2. Automatically adjust visibility, workflow or adapter threads
  3. Automatically adjust database connections
  4. Implement failover or failback for DR in B2B/SFG and GM and SSP
  5. Drain a processing/proxy node or entire zone to take offline
  6. Enable new product features (dark launch)
  7. Add custom packages
  8. Adjust application workflow balancing across nodes
  9. Automatically increase/decrease replicas based on protocol or translation load

Level 4. Insights - Metrics, alerts, log processing and workload analysis

  1. Metrics and health statistics for B2Bi engine and queue depth
  2. Metrics and health statistics for SSP engine and adapters
  3. Metrics and health of visibility data
  4. Metrics for inbound and outbound transfers
  5. Alerts for failed processes and transfers
  6. Alerts for DB connection failures
  7. Health check warnings for common/known issues
  8. Metrics of processes by state at a given time
  9. Metrics of transfer by state and protocol at a given time
  10. Metrics and warnings on file transfer failures

Level 5. Auto Pilot - Horizontal / vertical scaling, auto config tuning, anomaly detection, scheduling tuning

  1. Increase or decrease the number of nodes based on workflow queue depth or delivery backlog.
  2. Alerts for certs or other components nearing expiry. 
  3. Automatically take adapters offline if the DB connection is lost for a period of time.
  4. Automatically tune for translation workloads vs file transfer workloads.
  5. Autocorrect issues raised as health check warnings.
  6. Automatically retry batches of file transfers when a partner that had failed comes back online.
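As a sketch of the Level 5 scaling idea, the replica-sizing decision from item 1 could look like the Python function below.  The per-node capacity and replica bounds are illustrative assumptions, not product defaults.

```python
def desired_replicas(queue_depth: int,
                     per_node_capacity: int = 500,
                     min_replicas: int = 2,
                     max_replicas: int = 10) -> int:
    """Size the workflow node pool from queue depth: one node per
    `per_node_capacity` queued jobs, clamped to an allowed range so the
    autoscaler never drops below a floor or runs away under backlog."""
    needed = -(-queue_depth // per_node_capacity)  # ceiling division
    return max(min_replicas, min(max_replicas, needed))
```

In an operator, a reconcile loop would read the queue depth metric, call logic like this, and patch the replica count in the Custom Resource status rather than scale pods directly.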

 

Microservices Journey
Before we talk about how we will get there, let’s take a moment to review what microservices are and what goals they serve.  Microservices aren’t a strict standard but do follow a set of best practices.  

  1. Independent code base
  2. Independent & isolated data storage (no sharing)
  3. Stable boundaries (REST API interaction only) with strong contract and versioning
  4. Observable/monitoring
  5. Reusable
  6. Scalable and Stateless
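Practice 3 (stable boundaries with strong contracts and versioning) can be sketched as follows.  The service name, URL prefix, and fields are hypothetical; the point is that once a versioned contract is published, its fields are only added to, never changed or removed, and breaking changes move to a new version prefix.

```python
from dataclasses import dataclass, asdict

# Illustrative v1 contract for a hypothetical mailbox service.  Breaking
# changes would ship under /mailbox/api/v2, leaving v1 consumers intact.
API_PREFIX = "/mailbox/api/v1"

@dataclass(frozen=True)
class MessageSummaryV1:
    """The wire shape v1 consumers depend on; frozen to signal immutability."""
    message_id: str
    mailbox_path: str
    size_bytes: int

def to_wire(msg: MessageSummaryV1) -> dict:
    """Serialize to the versioned JSON shape exposed over REST."""
    return asdict(msg)
```

Pinning the contract in an explicit type like this is what lets services evolve their internals, and their private data stores, without coordinating releases with every consumer.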

 

The current state of the portfolio
A couple of large code bases and data stores.  No separation.  Brittle systems.

 

Current State - monolithic code and data
Figure 1: Current state - monolithic code and data storage.

 

Our goal:

 

Future State - Microservice Architecture
Figure 2: Future state - microservice code with strong contracts and data separation

How do we get to our goal?

Each release of B2Bi/SFG will make incremental progress towards this microservice architecture, providing value and new functionality along the way.  We will start with some coarse-grained services like Global Mailbox, Storage and AS4.  These won’t be the ideal of microservice design but will set us on the path.  We will then peel functionality off the monolith in chunks and implement it as new, independent microservices.

Below are examples of the first couple of steps that could be adopted.  These steps focus on some of the foundational services which will allow us and partners to build other services.  This shows how the system would look and where data would live. 

In this first step, a document service is introduced that allows B2Bi/SFG and GM to store payload data in an object storage system such as IBM Cloud Object Storage or Amazon S3. Object storage is less expensive, more resilient, and easier to set up than a mounted persistent file system.  B2Bi/SFG will be providing this as a payload storage alternative to database and filesystem.  It will also be a storage option in Global Mailbox. 

Step 1 - Mailbox and Payload
Figure 3: Step 1 - mailbox and payload are separated out
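A minimal sketch of the document service idea follows.  An in-memory dict stands in for a real object storage bucket, and the key layout and method names are illustrative; a real implementation would make the same put/get calls against an S3-compatible client.

```python
import hashlib

class DocumentService:
    """Sketch of a payload document service fronting object storage.
    The dict below is a stand-in for a bucket in IBM Cloud Object
    Storage or Amazon S3."""

    def __init__(self):
        self._bucket = {}  # stand-in for an object storage bucket

    def put_payload(self, mailbox: str, message_id: str, data: bytes) -> str:
        """Store a payload and return the object key callers keep as a
        reference, instead of a DB blob or a path on a mounted volume."""
        key = f"{mailbox}/{message_id}"
        self._bucket[key] = data
        return key

    def get_payload(self, key: str) -> bytes:
        return self._bucket[key]

    def checksum(self, key: str) -> str:
        """Integrity check a caller might run after a transfer completes."""
        return hashlib.sha256(self._bucket[key]).hexdigest()
```

Because B2Bi/SFG and GM would both go through this interface, the payload store becomes swappable (database, filesystem, or object storage) without touching the callers.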

 

In this second step, a fully independent visibility and tracking service is introduced to receive/manage event tracking data and isolate that data from the existing B2Bi/SFG relational database.  This reduces the load on the relational DB, provides for longer storage of tracking data, and allows for the use of NoSQL or doc-centric databases for higher performance and scaling.  

Step 2 - Visibility
Figure 4: Step 2 - visibility separated out
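To illustrate the doc-centric approach, a tracking event might be stored as a self-contained document like the one built below.  Field names and the retention default are assumptions for illustration, not the product's event schema.

```python
from datetime import datetime, timezone, timedelta

def make_transfer_event(transfer_id: str, state: str, protocol: str,
                        retention_days: int = 90) -> dict:
    """Build a denormalized tracking document: everything needed to render
    visibility dashboards travels with the event, so no joins against the
    engine's relational DB are required.  An expiry field supports
    long-but-bounded retention (e.g. a document-store TTL index)."""
    now = datetime.now(timezone.utc)
    return {
        "_id": f"{transfer_id}:{state}",
        "transferId": transfer_id,
        "state": state,            # e.g. STARTED, DELIVERED, FAILED
        "protocol": protocol,      # e.g. SFTP, AS2
        "timestamp": now.isoformat(),
        "expireAt": (now + timedelta(days=retention_days)).isoformat(),
    }
```

Denormalizing per event trades storage for read speed, which suits visibility data: it is written once, queried often, and never joined back into workflow processing.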



Contributions and Acknowledgements
Thanks to Ryan Wood, Mark Murnighan and Vijay Chougule for review and edit.

Special thanks to the Precisely engineering team for all the time spent discussing, reviewing and refining this architecture: Sreedhar Janaswamy, Scott Guminy, Nikesh Midha.







#DataExchange
#IBMSterlingB2BIntegratorandIBMSterlingFileGatewayDevelopers