This article explains how to configure the NFS settings for multi-instance brokers running on a Linux or UNIX platform. The instructions are valid for IBM Integration Bus v9, WebSphere Message Broker v8, and WebSphere Message Broker v7.
You will need a shared file system on networked storage, such as a NAS, or a cluster file system, such as IBM’s General Parallel File System (GPFS). You can also use a SAN as the storage infrastructure for the shared file system.
You may be wondering why you need a shared file system for a multi-instance broker. The reason is that the work path for a multi-instance broker is split between the parts that are specific to the local instance and the parts that must be shared between all instances of the broker. The local parts include broker logging, error reporting, and some registry settings. The remote shared parts include the deployed message flows, their associated artifacts, and the common registry settings.
The most common configuration for a multi-instance broker requires a minimum of three nodes (machines):
- One machine that acts as the NFS server and hosts the shared directories (N1 in this example).
- An NFS client machine that hosts the active broker instance (N2).
- Another NFS client machine that hosts the standby broker instance (N3).
The support statement for the shared file system used by IIB multi-instance brokers matches that for the WebSphere MQ multi-instance queue managers.
Before defining the NFS mounts and NFS file systems, you should first follow the instructions for creating the shared directories. Pay particular attention to the directory ownership and permissions that are needed. Also make sure that the user ID (UID) that administers the broker, and its accompanying group ID (GID), are the same on all of the systems where the broker instances will run.
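A simple way to check this is to run the id command for the broker administration user on each node and compare the uid and gid values (mqadmin here is a hypothetical user name; substitute the user that administers your broker):

```shell
# Run on N1, N2, and N3; the uid and gid values must match on every node.
# "mqadmin" is a hypothetical user name for this example.
id mqadmin
```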
The following steps demonstrate how to configure the NFS mount points for a multi-instance broker running on Linux and UNIX platforms.
Creating the mount for the shared directory on the NFS server:
In this example the shared directory on the NFS Server N1 is called IIBSHARED. The shared directory should be located beneath the system directory called /exports. This directory is then made available to the multi-instance broker on host nodes N2 and N3 by editing a system file called /etc/exports which contains a list of directories that can be exported to NFS clients. You will need root permission to modify the file using a suitable editor such as vi.
Add the following stanza to /etc/exports:
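Based on the options described below, the stanza takes a form similar to this. This is a sketch using the directory name from this example; the first line assumes an AIX-style /etc/exports format, which matches the option syntax described below, and the commented alternative shows the equivalent Linux server syntax:

```
# AIX-style /etc/exports entry:
/exports/IIBSHARED -vers=4,sec=sys,rw=N2:N3

# Equivalent entry on a Linux NFS server would be:
# /exports/IIBSHARED N2(rw,sec=sys) N3(rw,sec=sys)
```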
N2 and N3 are two NFS client nodes that host a multi-instance broker.
-vers=4 means NFSv4 protocol is used to contact the NFS daemon.
sec=sys sets the security type to “sys” which uses local UIDs and GIDs (AUTH_SYS) to authenticate NFS operations.
rw=N2:N3 indicates that the mount point has read/write access from nodes N2 and N3.
When the /etc/exports file has been updated, you can make the mount point available by issuing the following command:
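A common way to do this is with the exportfs command, run as root on N1. The -a flag exports all of the directories listed in /etc/exports (use exportfs -r to re-export after later edits to the file):

```shell
exportfs -a
```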
Checking the shared mount exists:
On each of the multi-instance broker nodes N2 and N3 issue the showmount command:
showmount -e N1
Verify that output includes the exported mount IIBSHARED.
Creating the mount points on the NFS clients:
The first step is to create a local directory. For this example I am calling the directory /MQHA/IIBSHARED. There are no strict rules regarding where the local directory is located, and it does not have to be under root. However, you should conform to the requirements specified in the IBM Knowledge Center for creating the shared directories.
Also you need to be aware that a multi-instance broker writes its logs, errors, and some registry information to the local broker workpath, which is typically /var/mqsi on Linux and UNIX machines. For administration purposes you would be advised to make sure that your naming conventions for the directories make it easy to distinguish between multi-instance and single instance brokers.
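As a sketch, the local directory can be created like this on both N2 and N3 (run as root). The mqm:mqm ownership and 775 permissions are assumptions for illustration; use the UID and GID that administer your broker and queue manager, and follow the ownership and permission requirements in the IBM Knowledge Center:

```shell
# Create the local mount point on each NFS client node (N2 and N3).
mkdir -p /MQHA/IIBSHARED
# Assumed ownership and permissions; adjust to your broker administration user.
chown -R mqm:mqm /MQHA
chmod -R 775 /MQHA
```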
Now create the mount point to access the shared directory hosted on server N1. Ideally make the mounts permanent across a reboot of the system. The way to achieve this depends on the type of Linux or UNIX platform. Here are some examples:
On AIX use either the smit tool (smit nfs -> Network File System (NFS)), or the mknfsmnt command, to define the mount. The resulting stanza in /etc/filesystems contains the following fields:
dev is the local name of the shared directory on node N1.
vfs indicates that the virtual file system being mounted will use the nfs protocol.
nodename specifies N1 as the name of the node that hosts the shared directory.
mount set to “true” means the file system will be mounted when the system boots.
account set to “false” means that disk usage accounting is not used for this mount.
The options are:
rw means the mount has read write permissions.
sec matches the sys security model on the shared directory on host N1.
hard is used when an application needs to write to the mounted directory. NFS requests are retried indefinitely rather than timing out. It is combined with intr to allow termination of a process waiting for the NFS communication.
intr allows NFS requests to be interrupted if the server goes down or cannot be reached.
vers specifies the NFS protocol version to use; vers=4 matches the NFSv4 protocol configured on the server export.
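Putting the fields above together, the resulting /etc/filesystems stanza looks similar to the following. This is a sketch based on this article's directory and node names; adjust the paths for your environment:

```
/MQHA/IIBSHARED:
        dev             = "/exports/IIBSHARED"
        vfs             = nfs
        nodename        = N1
        mount           = true
        options         = rw,sec=sys,hard,intr,vers=4
        account         = false
```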
To check that the stanza has been created correctly use a file editor such as vi to examine a file called /etc/filesystems.
On a Linux platform add the following entry to the system file /etc/fstab (which contains static information about the Linux file systems) using an editor such as vi (again you will need root permissions):
N1:/IIBSHARED /MQHA/IIBSHARED nfs4 sec=sys,hard,intr 0 0
N1:/IIBSHARED is the remote shared directory exported by the NFS server N1.
/MQHA/IIBSHARED is the local directory for the mount.
nfs4 is the file system type, indicating NFS version 4.
The options sec=sys,hard,intr are the same as those described in the AIX section above.
Mount the NFS directory structure:
Use the mount command as follows to mount the shared directory:
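Because the mount is already defined in /etc/fstab (or in /etc/filesystems on AIX), you can mount it by naming just the local mount point. Run this as root on both N2 and N3:

```shell
mount /MQHA/IIBSHARED

# On Linux you can alternatively mount it explicitly, without an fstab entry:
# mount -t nfs4 -o sec=sys,hard,intr N1:/IIBSHARED /MQHA/IIBSHARED
```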
Test the mount by issuing the mount command with no options. You should see a line similar to this in the command output:
N1 /export/IIBSHARED /MQHA/IIBSHARED nfs4
Verify the mounts:
You can also check the integrity of a shared mount using a WebSphere MQ utility called amqmfsck. This tool verifies that a shared file system is compliant with POSIX standards and is capable of sharing data to support multi-instance queue managers (and, by extension, multi-instance brokers):
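A typical sequence of checks looks like this; run the -c and -w tests simultaneously on both client nodes, as described in the WebSphere MQ documentation:

```shell
# Basic test of file locking and POSIX file-system behaviour:
amqmfsck /MQHA/IIBSHARED

# Test concurrent writes to a shared file (run at the same time on N2 and N3):
amqmfsck -c /MQHA/IIBSHARED

# Test waiting for, and releasing, file locks (run at the same time on N2 and N3):
amqmfsck -w /MQHA/IIBSHARED
```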
What next?
You are now ready to create your multi-instance broker (and its multi-instance queue manager) on the NFS client nodes N2 and N3. The instructions for this step can be found under the IBM Knowledge Center topic: Using multi-instance brokers.