Configuring the MQ Console and REST API for high availability

By Gwydion Tudur posted 3 days ago


The MQ Console and REST API are an increasingly popular way to administer your MQ estate and allow applications to access MQ to process messages without the need for an MQ client. With that in mind, it’s worth considering how the MQ Console and REST API can be set up in a way that provides high availability of these services.

Decoupling the web server from the MQ installation

The MQ Console and the REST API both run in the mqweb server, which is a Liberty server that's supplied with MQ. When the MQ Console and REST API were first introduced, they could only be used with local queue managers that ran in the same installation as the mqweb server. A few developments since then have decoupled the mqweb server from the queue managers. The most important enhancements in this area are:

  • The stand-alone IBM MQ Web Server. As you might guess from the name, the stand-alone IBM MQ Web Server is a separately installed component, which does not need to be installed on the same system as the MQ server component. This means that you can now install and run as many instances of the mqweb server as you like, at no additional cost.
  • The MQ Console and the messaging REST API can now connect to remote queue managers. This means that a single instance of the MQ Console or the messaging REST API can be used to interact with any queue manager in your MQ estate, regardless of where it runs. It also means that the MQ Console and REST API are now backwards compatible with queue managers that run earlier versions of MQ.

These enhancements give you much more flexibility in terms of where you run the mqweb server. They also make it easier to run several instances of the mqweb server to increase the availability of the MQ Console and REST API, and add capacity to serve more users.

Before we go any further, note that there are some restrictions associated with the stand-alone IBM MQ Web Server. It’s available only on Linux, and only the messaging REST API, and not the administrative REST API, is supported. For more information, see the MQ Console and REST API overview in the MQ documentation. The rest of this blog post uses the stand-alone IBM MQ Web Server, and the examples take these restrictions into account.

Building a highly available mqweb server

We’ll now look at how you might go about setting up two mqweb servers to provide high availability for the MQ Console and REST API. The goal is to end up with a configuration like the one described below.

Connections to the MQ Console or the messaging REST API target a load balancing proxy, which has been configured to forward the connections to one of two mqweb servers. The mqweb servers are identically configured, so that users can expect the same behaviour, and be able to access the same queue managers, regardless of which mqweb server the proxy chooses to forward the connection to. Of course, if you need to increase availability further, or increase the available capacity, you can easily expand this configuration by adding more instances of the mqweb server.

For simplicity, the diagram shows only one connection from the mqweb servers to a queue manager. You can of course configure each mqweb server to access several remote queue managers so that each queue manager can be accessed through either the MQ Console, or the messaging REST API, or both.

Creating the mqweb servers

The first step is to create two identical mqweb servers. You’ll need to ensure that the mqwebuser.xml configuration is the same for both servers, that the same remote queue managers are defined to both servers, and that the user registry that’s used by both servers is consistent.

The mqweb server configuration cannot currently be shared easily between multiple instances, so you’ll need to ensure that each instance of the mqweb server is created with identical configuration, and that the configuration of each server is kept consistent as it changes over time. There are several ways to do this: for example, you could issue identical commands on both systems, or clone the mqweb server data directories. If you choose to clone an mqweb server instance, the topic in the MQ documentation that discusses backing up and restoring an mqweb server is worth reading.
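As a rough sketch of the cloning approach, the following shell commands archive one server’s data directory and unpack it for a second instance. The paths here are assumptions for illustration (the demo scaffolding creates a stand-in data directory so the sketch is self-contained); substitute the actual data directory of your stand-alone mqweb server, as described in the backup and restore topic in the MQ documentation.

```shell
# Sketch: clone an mqweb server by archiving its data directory.
# NOTE: these paths are hypothetical; point SRC at your real stand-alone
# mqweb server data directory, and remove the demo scaffolding below.
SRC=/tmp/mqweb-demo/mqweb1
CLONE=/tmp/mqweb-demo/mqweb2
BACKUP=/tmp/mqweb-demo/mqweb1-backup.tar.gz

# Demo scaffolding: fake a minimal data directory so the sketch is runnable.
mkdir -p "$SRC/servers/mqweb"
echo '<server/>' > "$SRC/servers/mqweb/mqwebuser.xml"

# Stop the source server before copying, then archive its data directory.
tar -czf "$BACKUP" -C "$SRC" .

# Unpack the archive as the data directory of the second instance.
mkdir -p "$CLONE"
tar -xzf "$BACKUP" -C "$CLONE"
```

After cloning, remember to review any settings that are specific to one host, such as certificates or port numbers, on the second instance.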

If you think that it would be useful to have the ability to easily store the configuration for the mqweb server in a common location that can be shared by more than one instance, then feel free to create an Idea to that effect.

Setting up the load balancer

I’m going to use Apache HTTP server as the load balancing proxy in this example. Other HTTP proxies are of course available.

The basic configuration to set up a load balancing proxy is quite simple. I started by adding these lines to the httpd.conf file:

ProxyPass "/ibmmq" "balancer://ibmmq"
ProxyPassReverse "/ibmmq" "balancer://ibmmq"
 
<Proxy "balancer://ibmmq">
    BalancerMember "http://mqwebhost1.ibm.com/ibmmq"
    BalancerMember "http://mqwebhost2.ibm.com/ibmmq"
</Proxy>

This configuration is enough to set up a load balancing proxy that distributes any HTTP requests to URLs that start with the /ibmmq path between the two mqweb servers. You could at this point use the messaging REST API by targeting the HTTP requests to the proxy server.

For example, the following cURL command puts a message to the queue TEST.QUEUE that’s defined on QM1. The request is processed by whichever mqweb server the proxy chooses to forward it to.

curl -k "http://proxyhost.ibm.com/ibmmq/rest/v2/messaging/qmgr/QM1-remote/queue/TEST.QUEUE/message" -X POST -H "ibm-mq-rest-csrf-token: token-value" -u gwydion:password -H "Content-Type: text/plain;" --data "Hello World!"

Note that “QM1-remote” in the URL is the unique name that’s associated with the remote queue manager definition in the mqweb server configuration, which is not necessarily the same as the name of the queue manager itself. This isn’t related to the main topic of this blog post, but it’s worth keeping in mind when using the REST API to access remote queue managers!
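One practical note on the proxy configuration above: the ProxyPass and balancer directives rely on several Apache modules being loaded. The exact LoadModule lines vary by platform and packaging, but on a typical installation they look something like this (the module file paths are assumptions; check your distribution):

```apache
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so
LoadModule slotmem_shm_module modules/mod_slotmem_shm.so
```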

The problems with this approach

Unfortunately, it’s a bit early to sit back and congratulate ourselves on having successfully set up a highly available mqweb server environment. The above example is a simple REST request that is entirely self-contained. Everything that is needed to process the request is supplied with the request, including the user ID and password to authenticate with the mqweb server.

Unfortunately, the real world is often more complicated. For example, when you log in to the MQ Console, an LTPA cookie is returned to your browser, which is used to authenticate your later requests for the duration of your session. The same is true if you log in to the REST API using the /login URL.

The proxy server configuration above doesn’t work in this case as it doesn’t guarantee that subsequent requests in a single session will always be directed to the same mqweb server. What’s more, the connection doesn’t use TLS, so the user ID and password in the request are not protected on the network!

Configuring session stickiness on the proxy

The solution to the problem of ensuring that all requests for a single session are directed to the same mqweb server is to configure the proxy to use sticky sessions.

The MQ REST API doesn’t return a session cookie that’s suitable for this purpose (there are problems with using the LTPA cookie in this way), so you’ll need to configure the proxy to add a cookie of its own to be used as the session cookie.

Below is the Apache HTTP server configuration that I used to set up sticky sessions and enable TLS on the proxy. 

<VirtualHost *:443>
    SSLEngine on
    SSLProxyEngine on
    SSLCertificateFile "/etc/httpd/ssl/httpd.crt"
    SSLCertificateKeyFile "/etc/httpd/ssl/httpd.key"

    ProxyPass "/ibmmq" "balancer://ibmmq"
    ProxyPassReverse "/ibmmq" "balancer://ibmmq"
</VirtualHost>

Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED

<Proxy "balancer://ibmmq">
    BalancerMember "https://mqwebhost1.ibm.com:9443/ibmmq" route=1
    BalancerMember "https://mqwebhost2.ibm.com:9443/ibmmq" route=2
    ProxySet stickysession=ROUTEID
</Proxy>

The proxy adds a Set-Cookie header for a cookie called ROUTEID to the responses that it returns, with the value identifying which mqweb server the proxy forwarded the request to. Subsequent requests need to send this cookie back to ensure that they are routed to the same mqweb server. Your browser does this automatically when you are logged in to the MQ Console. Applications that log in to the messaging REST API need to ensure that this cookie is sent on subsequent requests, along with the LTPA cookie that is returned by the mqweb server.
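To see the sticky session in action with the messaging REST API, here’s a sketch of a token-based session using cURL’s cookie jar, which replays both the LTPA cookie and the proxy’s ROUTEID cookie on each request. The host name, credentials, and queue are placeholders carried over from the earlier example, and you should check the login URL against the REST API documentation for your MQ version:

```shell
# Log in once; the mqweb server returns an LTPA cookie and the proxy adds
# its ROUTEID cookie. Both are captured in cookies.txt.
curl -k -c cookies.txt "https://proxyhost.ibm.com/ibmmq/rest/v2/login" \
  -X POST -H "Content-Type: application/json" \
  --data '{"username":"gwydion","password":"password"}'

# Subsequent requests replay both cookies (-b cookies.txt), so the proxy
# routes them to the same mqweb server that issued the LTPA token.
curl -k -b cookies.txt -c cookies.txt \
  "https://proxyhost.ibm.com/ibmmq/rest/v2/messaging/qmgr/QM1-remote/queue/TEST.QUEUE/message" \
  -X POST -H "ibm-mq-rest-csrf-token: token-value" \
  -H "Content-Type: text/plain;" --data "Hello World!"

# Log out, invalidating the LTPA token.
curl -k -b cookies.txt "https://proxyhost.ibm.com/ibmmq/rest/v2/login" \
  -X DELETE -H "ibm-mq-rest-csrf-token: token-value"
```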

More information

That’s it! You now have a highly available MQ Console and messaging REST API that can be accessed through the proxy server that you configured. To access the MQ Console, point your browser at the full URL of the MQ Console login page, for example, https://proxyhost.ibm.com/ibmmq/console/login.html.

You can find more information about the installation options for the MQ Console and REST API, and about configuring them, in the MQ documentation.

If you’re using Apache HTTP server, you can find information about using it as a reverse proxy in the Reverse Proxy Guide and the mod_proxy reference. Bear in mind that I’m not an expert in Apache HTTP server configuration, so the configuration examples in this blog post might not be perfect!

Finally, if you have any suggestions for how the MQ Console and REST API can be enhanced in the future, feel free to let us know by creating an Idea.
