Migrating highly-available configurations to IIB v10

Tue July 14, 2020 01:31 PM

Introduction

There is an existing Integration community article on migrating to v10, which covers the main considerations that apply to any configuration when adopting the new version. There are some new concepts to understand, particularly around security and remote administration, and there are some options for how to conduct the migration. See that existing article here:
Migrating to Integration Bus v10

If you have built an IIB configuration using an active-passive topology for high availability, you can migrate it to v10, but there are some additional steps compared to a configuration with independent integration nodes. Those steps are covered in this article, in more depth than in the original migration post.

It is not necessary to upgrade your developer environments to v10 before upgrading integration nodes, as v10 nodes can still run broker applications and accept BAR files created in previous versions. You will not be able to develop using any new function until you migrate to the current development environment, though.

What sort of HA configuration do you have?

Support for active-passive highly available configurations exists in all versions that support migration to v10 (back to v7). You may use a system-provided HA manager, such as Microsoft Cluster Service or IBM’s PowerHA, or you might take advantage of IBM MQ’s support for active-standby queue managers and have your integration nodes follow their default queue manager between systems.

In both cases, you need to identify each highly-available integration node in both its possible locations, and migrate it as a unit, because the data associated with an integration node is version-specific. Depending on your setup, you may have to migrate multiple integration nodes at the same time if they reside on the same operating system image.
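
As a quick way to build that inventory, you can run the mqsilist command (with no arguments) on every machine in the cluster; it lists the integration nodes defined on that operating system image:

    # Run with the relevant mqsiprofile sourced; lists the integration
    # nodes defined on this machine
    mqsilist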

First decision – update in place, or build alongside?

The first decision to make is whether you want to:
1. upgrade your existing integration nodes in-place to run the new product version
or
2. create an additional integration node active-passive pair alongside your existing setup, and migrate applications over to it gradually

A version upgrade in-place always requires a small amount of downtime while the configuration data is switched over to the new-version format, but it keeps all your existing configuration. Creating additional integration nodes can avoid this downtime altogether, but you need a way of redirecting application traffic into the new system, and you need a very clear list of configuration steps for the new nodes, including any user/password configuration, configurable services, and mqsichangeproperties commands, as well as the BAR files.
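
If you take the second option, commands such as mqsireportbroker and mqsireportproperties can help you assemble that list of configuration steps from your existing nodes. A minimal sketch, where MYNODE and the integration server name default are placeholders for your own names:

    # Capture node-level settings, then recursively report the properties
    # of one integration server
    mqsireportbroker MYNODE
    mqsireportproperties MYNODE -e default -o ExecutionGroup -r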

Building an additional pair of integration nodes may be desirable if you already planned to rearrange your workloads, but it is more like provisioning a new system than migration. Given that, the rest of this article will concentrate on upgrading existing integration nodes in place (option 1 above). There are already some detailed steps in the Migration section of the Knowledge Center, but this article puts them in context.

Migrating with an external HA manager

In this case, you have one or more integration nodes which can run on one of several nodes in a cluster managed by your HA program. The start and stop of those integration nodes is controlled by definitions in your HA manager, which use a set of scripts named hamqsi_* provided in the product documentation (although you may have customised these yourself).

These scripts have not changed in v10 and you can continue to use the scripts from your previous version after you migrate. If you have a configuration established prior to v7 using the IC91 support pack, you can still migrate it using the instructions below.

Note that the v10 product installation has a slightly different directory structure to previous versions, so you will need to update the profiles for any users that run integration nodes to find mqsiprofile in its new location (server/bin/mqsiprofile). This is noted in the instructions below.
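
For example, the integration-node user’s profile might gain a line like the following, where the installation path is a placeholder for your own:

    # v10 moves mqsiprofile under server/bin within the installation directory
    . /opt/ibm/iib-10.0.0.x/server/bin/mqsiprofile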

On Windows, there is only one way to do it: install the v10 product alongside your existing installation, and migrate each integration node. The Windows product version enables each integration node to be associated with a different set of installed code (via the Service definition for each node). The Knowledge Center also contains some Windows-specific migration documentation.

Steps to migrate on Windows using an HA manager (e.g. a Windows failover cluster with FCM):

  1. On each node in the cluster, install v10 code alongside your existing product version (in the same location on each node).
  2. For each active integration node in this cluster, repeat the following steps:
  3. On the primary node in the cluster, take the integration node resource offline, keeping the shared disk and MQ resources online.
  4. Open a v10 Command Console on that primary node and run the mqsimigratecomponents command to upgrade the integration node to v10 (see the example after this list).
  5. Still with the integration node resource offline, move the integration node cluster group to the secondary node.
  6. Start a v10 Command Console on the secondary node and run mqsimigratecomponents on the integration node again.
  7. Bring the cluster disk and MQ resources offline.
  8. Restart the integration node cluster group on the primary node.
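
A minimal sketch of the command used in steps 4 and 6, where BROKER1 is a placeholder integration node name; run it from the v10 Command Console on each cluster node in turn:

    rem Upgrades the integration node's configuration data to the v10 format
    mqsimigratecomponents BROKER1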

On Linux or UNIX, you have a decision to make at this point:
1. Keep the product installation in the same place, and update all integration nodes in a cluster at the same time.
2. Add another product installation location for v10, alongside the existing version, and modify existing integration nodes to run the new version one at a time (changing their user ID at the same time)

Option 1 is simpler to perform, but involves more downtime and a longer backout procedure.
Option 2 is more complex to perform, with more chance of mis-typing, but it allows you to migrate integration nodes one at a time within a given HA cluster, and provides a faster restore to the previous version in case of problems.

Steps for Option 1 on Linux/UNIX (keep product install in its current location):

  1. Take the passive nodes out of the cluster.
  2. Uninstall the previous-version IIB product code on the passive nodes.
  3. Install v10 into the same location where the previous-version code was located on the passive nodes.
  4. Update the profile for integration-node users to source mqsiprofile in its new v10 location.
  5. Accept the license for the new installation using the iib command.
  6. If using databases, back up the existing ODBC file and create a new file from the v10 template, in the existing location. Ensure that all existing data sources are defined in the new file, following the template; the new version uses new libraries. Ensure that the ODBCINI environment variable points to the updated file if it is not already set in the profile.
  7. Take the active node offline, including all of its integration nodes.
  8. Uninstall the previous-version code on the active node.
  9. Install v10 into the same location as the previous-version code on the active node.
  10. Update the profile for integration-node users to source mqsiprofile in its new v10 location.
  11. Accept the license for the new installation using the iib command.
  12. Migrate the integration nodes on the active node as per the documentation (run the mqsimigratecomponents command from a v10 Command Console; see the example after this list).
  13. Bring the active node back online.
  14. Add the passive nodes back into the cluster.
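
A rough sketch of steps 5, 6 and 12 on one machine, where the installation path, ODBC file locations, and the node name BROKER1 are all placeholders (the license command syntax and template location are assumptions; check the v10 installation documentation for your fix pack):

    # Step 5: accept the v10 licence once per installation
    /opt/ibm/iib-10.0.0.x/iib make registry global accept license

    # Step 6: back up the old ODBC definitions and rebuild from the v10 template
    cp $ODBCINI $ODBCINI.pre-v10
    cp /opt/ibm/iib-10.0.0.x/server/ODBC/unixodbc/odbc.ini $ODBCINI
    # ...then re-create your existing data source stanzas in the new file

    # Step 12: migrate each integration node from a v10 environment
    . /opt/ibm/iib-10.0.0.x/server/bin/mqsiprofile
    mqsimigratecomponents BROKER1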

Steps for Option 2 on Linux/UNIX (install v10 in a new location):

  1. Install v10 product into a new location on all machines in the cluster (must be the same location on all)
  2. Duplicate the user ID that the integration nodes run under: for example, for brkusr, create brkusr10 (see the sketch after this list).
  3. Update the brkusr10 profile to source the mqsiprofile located in the new installation.
  4. Create a new ODBC.ini file based on the v10 templates, in a different location from the existing file, but defining the same data sources. Update the new profile for settings such as ODBCINI, as per the migration documentation.
  5. For each active integration node in this cluster, repeat the following steps:
  6. Bring down the integration node and take it out of the cluster.
  7. Migrate the integration node as per the documentation.
  8. Update the HA configuration to use the new user ID (brkusr10 in this example). For example, if you have a start script set up to run as

    hamqsi_monitor_broker_as BROKER1 BRKQM1 DB1 brkusr

    update to be

    hamqsi_monitor_broker_as BROKER1 BRKQM1 DB1 brkusr10

    This must be done for all the scripts controlling the broker.

  9. Synchronise the cluster configurations to the various nodes.
  10. Start the integration node and reintroduce it into the cluster.
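
A minimal sketch of steps 2 and 3, assuming a Linux system, the standard mqbrkrs group, and a placeholder v10 installation path:

    # Step 2: create the duplicate service user
    useradd -m -g mqbrkrs brkusr10

    # Step 3: have the new user's profile source the v10 mqsiprofile
    echo ". /opt/ibm/iib-10.0.0.x/server/bin/mqsiprofile" >> /home/brkusr10/.profile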

Migrating with MQ active-standby queue managers

In this case, your configuration does not have an external HA manager, but your integration nodes reside on queue managers which can run on two different systems managed by MQ. If you have an MQ failure or a system failure, processing will move onto the standby system. You can see more about this configuration in the Knowledge Center under multi-instance integration nodes.

There are two variations of this configuration:
1. You start your integration nodes manually using the mqsistart command
2. You configure your integration node to be started as an MQ Service when your queue manager starts
The instructions are basically the same, but if you have a type 2 configuration there is one extra step to carry out, which is indicated below.
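
If you are unsure which variation you have, the queue manager’s service definitions will show it. A minimal sketch for Linux/UNIX, where QMNAME is a placeholder queue manager name:

    # An integration node started as an MQ service appears here, along with
    # its start command
    echo "DISPLAY SERVICE(*) ALL" | runmqsc QMNAME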

You also have a choice about how much to change at a time, just as with an external HA manager. You can do one of the following:
1. Replace your current-version installation with v10, and upgrade all integration nodes at once.
2. Install v10 alongside your current version, and migrate your integration nodes one at a time.

Option 1 involves less system administration, but backout is slower and you will incur more downtime. Option 2 gives more flexibility, less downtime, and the chance of faster backout, but you have to be very careful to ensure that each integration node is started from the correct mqsiprofile environment before and after migration, which may involve some extra scripting.

For option 1 (all integration nodes at once), do the following:

  1. Ensure all active queue managers and integration nodes are on one image of the pair. Call this one the “active machine” and the other one the “standby machine”.
  2. Stop all integration node instances on the standby machine.
  3. Uninstall current-version IIB code on the standby machine.
  4. Install the v10 code on the standby machine in the original location.
  5. (Linux/UNIX only) Ensure that the v10 license has been accepted with the iib command.
  6. (Linux/UNIX only) Back up the existing odbc.ini file. Create a new file with the old name, based on the v10 template, and ensure all existing data sources are defined in there with the new libraries from v10.
  7. Stop all active integration node instances.
  8. Uninstall current-version IIB code on the active machine.
  9. Install the v10 code on the active machine in the original location.
  10. (Linux/UNIX only) Ensure that the v10 license has been accepted with the iib command.
  11. (Linux/UNIX only) Repeat step 6 to create a new odbc.ini file for the existing data sources, backing up the existing file first. Ensure that the ODBCINI environment variable is set to this file name.
  12. Start a new IIB console sourcing the new v10 mqsiprofile on the active machine.
  13. Run the mqsimigratecomponents command for each integration node on the active machine.
  14. (Only if your integration node runs as an MQ service) Update the MQ Service definition for your integration node on its queue manager. Use RUNMQSC “DISPLAY SERVICE” to locate the entry to change, and update all paths to point to the new installation location (see the example after this list).
  15. Restart each integration node on the active machine.
  16. On the standby machine, start a new command console with mqsiprofile. (Ensure ODBCINI is set if required).
  17. (Windows only) Restart the standby instance for each integration node.
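
For step 14, a sketch of the MQSC changes, where the service name IIB.BROKER1, the queue manager QMNAME, and all paths are placeholders; check the DISPLAY SERVICE output for your real names and start command before altering anything:

    # Inspect the current definition, then repoint the start command at the
    # v10 installation (attribute values here are assumptions)
    echo "DISPLAY SERVICE('IIB.BROKER1') ALL" | runmqsc QMNAME
    echo "ALTER SERVICE('IIB.BROKER1') STARTCMD('/opt/ibm/iib-10.0.0.x/server/bin/mqsistart') STARTARG('BROKER1')" | runmqsc QMNAME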

For option 2 (migrate one at a time), do the following:

  1. Install v10 code into a new location on the active and standby machines.
  2. (Linux/UNIX only) Create a new odbc.ini file for your ODBC data sources, based on the existing file from your current version, but using the template in the v10 product installation. Update your environment so that the ODBCINI variable points to the new file when using v10, and to the old file when using your existing version. You can use conditional code in a profile extension script to do this (see the sketch after this list), as described in the Knowledge Center under establishing a command environment.
  3. For each integration node you want to migrate:
  4. Stop its standby instance.
  5. Stop the active instance.
  6. Source a new command console from the new v10 installation on one machine.
  7. Issue the mqsimigratecomponents command on that machine, as per the migration documentation.
  8. (Only if your integration node runs as an MQ service) Update the MQ Service definition for your integration node on its queue manager. Use RUNMQSC “DISPLAY SERVICE” to locate the entry to change, and update all paths to point to the new installation location, as in the earlier RUNMQSC example.
  9. Restart the integration node on that machine.
  10. On the other machine, establish a new command console having sourced the new v10 mqsiprofile.
  11. (Windows only) Restart the standby instance on the other machine.
  12. Repeat steps 4 to 11 for the remaining integration nodes.
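
For step 2, a sketch of the conditional profile logic, assuming that mqsiprofile sets the MQSI_VERSION variable and that both ODBC files live under /var/mqsi/odbc (placeholder paths):

    # Pick the ODBC definitions that match whichever product version this
    # environment has sourced
    case "$MQSI_VERSION" in
      10.*) export ODBCINI=/var/mqsi/odbc/odbc10.ini ;;
      *)    export ODBCINI=/var/mqsi/odbc/odbc.ini ;;
    esac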

Deciding if your migration was successful (and backout plans if not)

Before starting on any of the numbered lists above, you should have an idea of how to revert the changes should you encounter a problem during the procedure. You would normally consider a migration to be successful only once you have observed data being processed correctly by the upgraded integration node, but that depends on your chosen test procedures.

Should you encounter a problem during migration or testing immediately afterwards, you can revert an integration node to its previous code version. Exact steps depend on which of the variants of the steps above you performed, but you should consider the following steps for your rollback checklist:

  • An integration node needs to be migrated back so it is associated with your previous version. Use the mqsimigratecomponents command from the current product version to do this, with the -t flag. See how to back out a migration command in the Knowledge Center for details.
  • If you replaced the previous-version installation with v10 in the same location, you need to restore the previous product installation after running mqsimigratecomponents (but before restarting with the old code).
  • You need to restore the command environment used to start your integration node to the previous version. That means updating user profiles (especially if using an external HA manager).
  • If you have changed your ODBC.INI file in place, you need to revert that file to its previous version which you backed up during the procedure.
  • If your integration node was started by an MQ queue manager, you will need to manually revert the MQ service definition to use the old installation path.
  • After migrating back, if you are on Windows, ensure that each integration node is started on each operating system image, so that the Windows service definition is updated with the correct path and profile. Ensure that it is started with your previous version of the product code.
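
A minimal sketch of the first item on that checklist, where BROKER1 and the target version string are placeholders; run it from the current (v10) command environment:

    # Convert the integration node's data back to the previous version's format
    mqsimigratecomponents BROKER1 -t 9.0.0.5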

Summary

This article has given you some detailed steps on how to upgrade existing highly-available configurations built using previous versions of Integration Bus and Message Broker to run v10. It covers configurations built using external HA managers as well as those using MQ to trigger failover.


#IntegrationBus(IIB)
#IIBV10
#migration