App Connect


Software to SaaS Best Practices

By Martin Ross posted Sat December 09, 2023 11:46 AM

  

Rules of the road for any migration to SaaS

There are many reasons why teams using self-managed software may look to SaaS for new or existing workloads. Common reasons include:

  1. Taking advantage of greater infrastructure flexibility

  2. The availability of skills within the organisation

  3. Making savings on infrastructure maintenance and costs

  4. Simply providing more time for the team to focus on new features for competitive differentiation

Whatever the reason, it is key to understand more about SaaS and how exactly it differs from self-managed software. The main things I recommend understanding first are:

  • Responsibilities: What are your responsibilities, and what are the service provider's?

  • Solution Architecture: The managed service will typically be an opinionated deployment of the software, so it is key to understand this alongside the responsibilities.

Starting with responsibilities, the first step is to understand the options available and the differences between them - see the illustration below for IBM App Connect Enterprise.

Reading from left to right:

  1. The first column shows that for running IBM App Connect Enterprise Certified Containers in your own data centers, you are responsible for everything from the infrastructure availability and the OpenShift or Kubernetes platform through to the applications layer.

  2. The second column details the responsibilities if you are leveraging a managed Kubernetes platform (such as ROSA on AWS) and installing IBM App Connect Enterprise Certified Containers to manage yourselves.

  3. Lastly, IBM App Connect Enterprise as a Service is shown in the third column, where the service provider (IBM in this example) is responsible for everything from infrastructure provisioning and availability up to and including the installation and maintenance of IBM App Connect Enterprise Certified Containers. However, some responsibilities for the runtimes remain shared.

Responsibilities for IBM and customers for IBM App Connect Enterprise Certified Containers and IBM App Connect Enterprise as a Service.

 

The solution architecture for IBM App Connect Enterprise as a Service below shows a high-level view of the SaaS solution. It covers the components, the networking setup, how applications interact with the service, and the tenancy model, and it outlines that the service has data planes for the runtimes that are provisioned into an AWS region spanning multiple availability zones. More information is available in the managed-service documentation.

The diagram represents the IBM App Connect Enterprise as a Service architecture.

 

For what use cases might this make sense

As discussed, there are many reasons that teams adopt SaaS. For IBM App Connect Enterprise as a Service, a common theme is modernisation. Many enterprises have traditionally deployed integration solutions using a large, centrally managed “enterprise service bus” (ESB) pattern that may contain hundreds or even thousands of integrations. The modern approach to integration takes advantage of modern DevOps practices and runtimes that are optimised to run on container orchestration platforms such as Kubernetes. We call this agile integration, and you can learn more about the approach here. Often, teams lack skills relating to containers and take the opportunity to move workloads to the cloud to save on infrastructure and maintenance costs and take advantage of more flexible compute options. SaaS is a great option for moving much of the responsibility for your application technology stack to the service provider, so that the team can focus on what they need to be competitive in their respective market.

What are the options for taking a hybrid approach

Although organisations are looking to move to the cloud and adopt SaaS, there are still many workloads and applications that need to remain in customer data centers. Reasons for this range from data residency, regulation, and security requirements to legacy systems that are not suitable to move to the cloud but still need to be integrated as part of a broader solution that is moving there.

Such hybrid patterns are supported within IBM App Connect Enterprise as a Service through our Secure Agent and Callable Flows technologies.

Port forwarding

The solution architecture diagram in the previous section details a “Switch” component, of which each tenant of the service has a dedicated copy. This communicates with runtimes hosted on IBM App Connect Enterprise as a Service. The Secure Agent is a component that you download and run in your private network. It creates an outbound connection, secured with mutual TLS, to the Switch associated with your service instance, enabling bi-directional traffic over that connection without the need to open any inbound ports in your firewall. Leveraging this capability, you can configure port forwarding to applications and systems within your private network to enable secure connectivity within your solution.

Common use-cases for port-forwarding include enabling integration flows running within data planes hosted on SaaS to integrate with application endpoints within private networks, where it is not desirable to open inbound ports in your firewalls. Most commonly we see this for interactions with on-premise databases or IBM MQ systems, but the capability can also be used to integrate with applications and systems in other networks, such as business applications offered as SaaS or cloud-native applications hosted on public clouds.
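
As a purely illustrative sketch of the concept (this is not the product's actual agent or service configuration format - see the managed-service documentation for how port forwarding is actually set up), a port-forward can be thought of as a named mapping that the Secure Agent relays over its outbound, mutual-TLS connection to the Switch:

  # Illustrative only: a conceptual view of a Secure Agent port-forward.
  # The names and structure below are hypothetical, not the real configuration format.
  portForwards:
    - name: orders-db                               # hypothetical forward name
      target: db01.internal.example.com:1521        # reachable only inside the private network
      protocol: tcp                                 # traffic is relayed over the agent's outbound mTLS link

Flows running on the managed data planes then address the forwarded endpoint exposed by the service, and the agent relays the traffic to the private target without any inbound firewall ports being opened.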

Callable Flows

Callable Flows leverage the same Switch component in a similar way, but the difference is that they allow you to run IBM App Connect Enterprise flows in your private network and call those flows directly from flows hosted in our SaaS offering (and vice-versa). When you use the Callable Flow Input node within a flow and deploy the flow to a runtime with connection details for the Switch, the flow is registered and made available for use by other flows and tools connected to the same Switch component.

Common use-cases for Callable Flows include SAP integration (where SAP licensing requires the client libraries to run in the same network) and more complex integrations with applications on the private network, where transactional requirements, for example, make it the more effective choice. Callable Flows also offer flexibility in where a flow is deployed. For example, with a REST API that has a defined endpoint, if the application is moved to the cloud or re-hosted in another location, the client applications typically need to be updated; the registration and usage pattern for Callable Flows, however, means that a callable flow can be deployed anywhere, and moved, without needing to update any client applications.

Both the Designer and Toolkit authoring tools have callable flow nodes and connectors, enabling these common patterns independent of your choice of authoring tool. This also enables Designer flows to call Toolkit flows (or vice-versa), where more complex integration may have been implemented in Toolkit and made available to users building their flows in our no-code authoring experience, Designer.

It should also be noted that we publish the IP addresses that the managed service uses for outbound connectivity - you can use these to configure firewalls to enable connectivity into private networks (if you are not using the Secure Agent) or to applications that use IP allowlisting to restrict access: https://www.ibm.com/docs/en/app-connect/saas?topic=information-ip-addresses

What steps should I take when rehosting

It is key to understand the differences between hosting and managing IBM App Connect Enterprise software and the associated infrastructure yourself versus running your flows in IBM App Connect Enterprise as a Service. Some of these differences are outlined earlier in this article, and note that terminology can differ between the two.

Firstly, on IBM App Connect Enterprise software you would likely have created integration nodes with associated integration servers (or brokers and execution groups in previous versions of the product). Within SaaS there is no concept of the integration node; instead you just create integration runtimes, which are essentially standalone integration servers (or execution groups), but under the covers it is the same runtime. Conceptually, the main thing to understand is that the runtimes are declarative: you declare the runtime and its associated configuration, and if you want to change something you do not dynamically change a property, you change the definition. The service then creates a new version of that runtime to match what has been declared before removing the previous version, performing the update in a blue/green process to avoid any disruption.
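
As a sketch of what this declarative model looks like in practice (the field names below follow the style of the IBM App Connect Enterprise Certified Containers custom resources and are indicative rather than the exact schema the managed service exposes through its UI and API):

  # Indicative declarative runtime definition, in the style of the Certified
  # Containers resources; the managed service exposes equivalent settings
  # through its dashboard and public API.
  kind: IntegrationRuntime
  metadata:
    name: orders-api                 # hypothetical runtime name
  spec:
    version: '12.0'                  # runtime version choice (see "Upgrade options" below)
    barURL:
      - orders-api.bar               # the BAR file(s) this runtime should run
    configurations:                  # declarative configuration objects (see below)
      - orders-serverconf
      - orders-setdbparm

To change anything about the runtime you edit this definition (or the configurations it references), and the service rolls the change out blue/green as described above.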

Typically you should not need to update your flows (although some changes may be required for specific circumstances), but there are a few things to understand and steps to take when rehosting.

Containerisation considerations

As mentioned previously, IBM App Connect Enterprise as a Service is based on IBM App Connect Enterprise Certified Containers, which (as the name suggests) is a container-based deployment. Container-based deployments have many notable differences from a traditional deployment, such as:

  • Deployment isolation: In container deployments we see groups of integrations being independently deployed into their own container runtime, as opposed to a more traditional singular centralized ‘broker’ deployment. This has advantages, but also some considerations that we will cover in the next step.

  • Runtime versioning: In more traditional deployments all integrations were typically running on the same version of the product and all had to be upgraded at the same time. In a containerised deployment, each container includes the product runtime, which allows you to run a different version for each runtime and provides more control and flexibility around when and how to upgrade.

  • Memory and CPU: Container platforms enable explicit / declarative ways to specify how much CPU and memory each (IBM App Connect Enterprise) container requires, ensuring the specific needs of the integrations can be taken into account.

  • Declarative configuration: Newer versions of IBM App Connect Enterprise support configuring integration nodes and servers using a configuration file (server.conf.yaml) rather than requiring post-deployment commands to be run, but many existing deployments will still be running mqsi commands, for example, to modify an integration server configuration. For the managed service, all configuration is declared up front; if you want to change the state of the runtime, you modify the runtime definition or the configurations that define the desired state.

Ensure that you understand the differences between a containerised deployment and a more traditional deployment and how this may impact your solution, then make a conscious decision on how you want to configure, deploy and manage your integrations and runtimes.

Deployment patterns / integration groupings

If you have an existing, more traditional deployment of IBM App Connect Enterprise, you may have a handful of integrations as part of your solution, or you may have hundreds. Either way, you need to decide how these are going to be grouped and deployed to the integration runtimes in the managed service. A simple approach would be to mirror your existing deployment and deploy all the BAR files or integrations that were grouped on an integration server or execution group to the same integration runtime in the managed service. You have typically already made a choice about how to group them in your current deployment, such as grouping logically by use-case or project, or by workload pattern (batch workloads vs. real-time integrations); however, in the managed service there are additional factors to consider.

As the managed service is a container-based deployment, each runtime comes with additional isolation: runtimes can be managed independently, allowing you to pick different runtime versions and upgrade at different times. Additionally, you define the CPU and memory available to each runtime independently, allowing you to control resource usage and cost for each runtime and the integrations hosted there. Although deploying fewer integrations to each runtime has advantages around management and lifecycle, there is a cost implication, as each runtime carries an overhead.

Typically, a good starting point is to mirror your current deployment: you have likely made your deployment choices for a reason, and you can readily determine how much resource each integration server or execution group uses today, which makes capacity planning and runtime configuration easier. You can then monitor and decide whether to make any changes as a second phase. That said, migrating to the managed service is a good point to reflect on these choices and decide whether they still hold or whether you should modify your deployment pattern as part of the move.

Understand appropriate configuration mappings

When running IBM App Connect Enterprise as software, properties of the runtimes may have been set using commands like mqsichangeproperties, and the runtime or flows may depend on files that exist or have been configured on the host system, such as odbc.ini files or keystores and truststores. Moving to SaaS, these are provided declaratively by creating configurations and associating them with the runtimes that you create (a list of configuration types can be found here: https://www.ibm.com/docs/en/app-connect/saas?topic=information-configuration-types). Commonly required configuration types are:

  • server.conf.yaml: To declaratively configure most of the things that would have been configured using commands such as mqsichangeproperties.

  • setdbparm.txt: Enables you to provide details of mqsisetdbparms commands to run for the integration runtime.

  • Policy project: Create policies within a policy project to control the behavior of the message flows and message flow nodes at run time.

  • Keystore / Truststore: Provide keystores and truststores that are used by the runtime or message flows.

  • Generic files: General files that may be referenced by the runtime or message flows.
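
For example, a server.conf.yaml configuration can carry many of the properties you would previously have set with mqsichangeproperties. A minimal sketch (property names should be verified against the sample server.conf.yaml shipped with your version of IBM App Connect Enterprise) might look like:

  # Illustrative server.conf.yaml fragment replacing post-deployment
  # mqsichangeproperties commands; verify property names against the
  # sample file shipped with your ACE version.
  ResourceManagers:
    JVM:
      jvmMinHeapSize: 268435456      # bytes
      jvmMaxHeapSize: 536870912      # bytes
  Monitoring:
    MessageFlow:
      publicationOn: active          # emit monitoring events for message flows

You create this as a configuration in the managed service and attach it to the integration runtimes that need it.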

Availability requirements

The managed service is deployed across multiple availability zones in each region, providing high availability for runtimes and supporting features; if an availability zone within a region becomes unavailable for any reason, workload automatically moves to the healthy availability zones. For higher availability you can configure multiple replicas of your integration runtimes, which the service will look to distribute across the availability zones. For example, if you configured two replicas of an integration runtime hosting a set of message flows, the service would look to schedule them onto separate availability zones, so that replica 1 was on availability zone 1 and replica 2 on availability zone 2. If availability zone 2 became unavailable for any reason, the service would automatically re-schedule that replica onto availability zone 3 - and while that was happening, replica 1 would continue to run and service the workload, providing higher availability for the integrations than if only a single replica was configured.
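
In terms of the indicative runtime definition sketched earlier (field name indicative), requesting two replicas is all that is needed for the service to spread the runtime across availability zones:

  # Indicative: two replicas, which the service will look to schedule
  # into separate availability zones.
  spec:
    replicas: 2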

You should understand the high availability requirements of your integrations, and configure associated integration runtimes appropriately to meet those requirements.

Upgrade options

From the responsibility table shown previously in this article it can be seen that the managed service is based on the IBM App Connect Enterprise Certified Containers technology, and that IBM is responsible for the installation, maintenance and upgrade of the operator. The runtimes are listed as a “shared responsibility” for creation and upgrades. The reason for this is that when you create your runtimes you have several choices, and one of those choices relates to the version of the runtime that you want. Within the service we support the current and previous versions of the runtime, so if the current version were 12.0.9, for example, then we would support the 12.0.9 and 12.0.8 runtime versions. When you create a runtime you have three choices for the version:

  • Current mod release (for example, 12.0.9)

  • Previous mod release (for example, 12.0.8)

  • Current major release (for example, 12.0)

The supported runtime versions are controlled by the version of the operator that is installed. If we update the operator and a new current mod release becomes available, let’s say 12.0.10, then we will remove support for 12.0.8 and support 12.0.9 and 12.0.10. At this point, any runtimes that had been created with the 12.0.8 version chosen will be automatically upgraded to the latest runtime version. If your runtime had been created to use version 12.0.9, it would stay at 12.0.9, allowing you to test at 12.0.10 before updating your production runtimes to the new level. Alternatively, if you had chosen the current major release (for example, 12.0) as your runtime version, then when a new 12.0 mod release becomes available you are automatically updated to keep current. More information can be found in the documentation for updating the version of your integration runtime; note that whatever version you choose, we will automatically keep it updated with fixes within the chosen mod release.
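
In terms of the indicative runtime definition sketched earlier, this choice is simply the value given for the runtime version:

  # Indicative version choices for a runtime definition:
  spec:
    version: '12.0.9'    # pin to the current mod release; stays put while 12.0.9 remains supported
  # or
  #  version: '12.0.8'   # previous mod release; auto-upgraded once it drops out of support
  # or
  #  version: '12.0'     # track the current 12.0 mod release automatically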

You should decide if you are happy to be kept up to date on the latest level or whether you want some control to perform any testing you want before updating. If choosing the latter, ensure you have an appropriate process for testing in place.

CI/CD pipelines

When moving to the managed service you need to understand how to build your CI/CD pipeline so that you have a process from build and test through to deployment onto your production runtimes, and plan how to achieve this technically and which technologies to use. You may already have a CI/CD pipeline for IBM App Connect Enterprise software, perhaps utilising some of the unit test capabilities within the offering, or you may be looking to build something new; either way, the public API provides all the capabilities you need to deploy and manage your integrations and runtimes in the managed service. There are also several additional articles here that introduce the App Connect public API and show how you can use it to build a pipeline using AWS CodeDeploy.
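
As a sketch of the shape such a pipeline can take (the base URL, endpoint paths, and authentication below are placeholders rather than the real public API contract - consult the public API documentation and the articles referenced above for the actual calls), a deployment stage could upload a freshly built BAR file and then roll the runtime onto it:

  # Sketch of a CI/CD deployment stage calling the App Connect public API.
  # The API base URL, paths, and credentials are placeholders; use the values
  # documented for your instance.
  name: deploy-orders-api
  on:
    push:
      branches: [main]
  jobs:
    deploy:
      runs-on: ubuntu-latest
      steps:
        - uses: actions/checkout@v4
        - name: Deploy BAR to managed runtime
          env:
            API_BASE: ${{ secrets.APPCONNECT_API_BASE }}     # instance API endpoint (placeholder)
            API_TOKEN: ${{ secrets.APPCONNECT_API_TOKEN }}   # credential for the instance (placeholder)
          run: |
            # Upload the built BAR file (endpoint path is a placeholder)
            curl -sf -X PUT "$API_BASE/bar-files/orders-api.bar" \
              -H "Authorization: Bearer $API_TOKEN" \
              --data-binary @build/orders-api.bar
            # Ask the service to roll the runtime onto the new BAR (placeholder path)
            curl -sf -X POST "$API_BASE/integration-runtimes/orders-api/deploy" \
              -H "Authorization: Bearer $API_TOKEN"

The same calls can equally be issued from AWS CodeDeploy, Jenkins, or any other pipeline technology.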

What precautions or hiccups might I face that are ACE-specific

This section looks to outline some of the main things to be aware of that may work differently or are not supported in the managed service.

With regards to product capabilities, more information can be found in the documentation (https://www.ibm.com/docs/en/app-connect/saas?topic=integrations-supported-resources-in-imported-bar-files and https://www.ibm.com/docs/en/app-connect/saas?topic=known-limitations) but the main things for awareness are:

  • Default local queue manager: Certain nodes within Toolkit have a dependency on a default queue manager for certain capabilities. On the managed service there is no default queue manager configured, so if your integrations have this requirement you will need to configure a remote default queue manager (see the sketch after this list). More details can be found in the documentation: https://www.ibm.com/docs/en/app-connect/12.0?topic=mq-using-remote-default-queue-manager

  • TCPIP Server Nodes: Although the use of TCPIP Client nodes is supported, you cannot currently run flows that use TCPIP Server nodes.

  • Global Cache: The global cache capability in IBM App Connect Enterprise is underpinned by WebSphere eXtreme Scale technology - this is not currently supported in containerised deployments and so this feature is not currently available in IBM App Connect Enterprise Certified Containers or IBM App Connect Enterprise as a Service.
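
For the remote default queue manager mentioned in the first bullet above, the configuration typically comes down to a server.conf.yaml property that points at an MQEndpoint policy (the policy project and policy names below are hypothetical; check the linked documentation for the exact format):

  # Illustrative server.conf.yaml fragment: point the runtime at a remote
  # default queue manager via an MQEndpoint policy (names are hypothetical).
  remoteDefaultQueueManager: '{MyPolicies}:RemoteQMPolicy'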

Conclusion

There are multiple reasons for moving to SaaS, and every team will have different motivations and a different migration depending on their own specific solutions and decisions. Some integration solutions and workloads are well-suited to a move to the cloud, whilst others are better suited to remaining in customer-managed environments. Even once you have identified those that you are going to move to IBM App Connect Enterprise as a Service, it will often make sense to take a phased approach to the migration. Either way, a hybrid approach is often the best path, and there are multiple options available in support of this with port forwarding, Callable Flows, and more to come.
