How we test MQ

By MATTHEW LEMING posted Fri June 16, 2023 07:49 AM

  

A few weeks ago, David Ware and I were lucky enough to travel to Japan and spend time visiting some of our customers around Tokyo. During our presentation to a large bank, we talked a lot about how we test MQ. The customer was very impressed, and it got me thinking that we should really be talking more about how much testing goes into a product like MQ. This blog aims to explain a little bit of what we do.


There is a lot of MQ to test! MQ runs on many different platforms, and at any given time there are multiple releases of MQ in service – for example, at the moment MQ 9.3, 9.2 and 9.1 are in standard support, and MQ 9.0 is in extended support. And those are just the LTS releases; we also have the CD releases to think about.


As well as the releases which have already shipped, in development we have the main integration stream, from which new versions of the product are released. We need to keep the integration stream as stable as possible, so we don’t write new code directly into it. Instead, we have many clones of the integration stream: one for defect fixes and one for each of the major development items we are working on. At the moment there are over ten of these clones, each of which needs to be tested regularly.


MQ also interacts with many other products, for example WebSphere Application Server, CICS, IMS, Db2 and Java, as well as various container platforms. Typically, several different versions of a given product are supported with MQ at the same time.


Clearly there are a lot of moving parts here. Given the critical position of MQ in the world’s infrastructure, we have invested heavily in automation over the years so that every single change we make, whether to a version of MQ that has already shipped or to one still in development, can easily be tested to confirm that it works and has no unexpected side effects on other parts of the product.


When we work on a new piece of function, typically two thirds of the effort goes into writing new automated tests. We don’t deliver the new function to the integration stream until all those new tests are written and passing, and until all our existing test suites pass on all relevant platforms too.

At the time of writing we have more than 30,000 tests, each of which is categorised according to the function, platforms and releases of MQ it applies to, making it easier for us to run ad hoc subsets of tests. Some of these are function verification tests which focus on the fine-grained details of a particular feature. For example, if there is no message on a queue, make sure a 2033 (MQRC_NO_MSG_AVAILABLE) is returned; or if a persistent message is sent, make sure it is still available after the queue manager is restarted.
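
To give a flavour of what one of these checks might look like, here is a minimal sketch of the empty-queue case using the IBM MQ classes for Java. The queue manager and queue names are placeholders, and our real tests run inside our own framework rather than a main method, but the idea is the same: an MQGET against an empty queue should fail with reason code 2033.

    import com.ibm.mq.MQException;
    import com.ibm.mq.MQGetMessageOptions;
    import com.ibm.mq.MQMessage;
    import com.ibm.mq.MQQueue;
    import com.ibm.mq.MQQueueManager;
    import com.ibm.mq.constants.CMQC;

    public class EmptyQueueTest {
        public static void main(String[] args) throws MQException {
            // Connect in bindings mode to a local queue manager (names are placeholders).
            MQQueueManager qmgr = new MQQueueManager("QM1");
            MQQueue queue = qmgr.accessQueue("TEST.QUEUE",
                    CMQC.MQOO_INPUT_AS_Q_DEF | CMQC.MQOO_FAIL_IF_QUIESCING);
            try {
                // The queue is empty, so this MQGET should fail with reason code 2033.
                queue.get(new MQMessage(), new MQGetMessageOptions());
                System.out.println("FAIL: expected MQRC_NO_MSG_AVAILABLE");
            } catch (MQException e) {
                if (e.reasonCode == CMQC.MQRC_NO_MSG_AVAILABLE) {
                    System.out.println("PASS: got 2033 as expected");
                } else {
                    System.out.println("FAIL: unexpected reason code " + e.reasonCode);
                }
            } finally {
                queue.close();
                qmgr.disconnect();
            }
        }
    }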


We also have an extensive range of system tests which run large MQ deployments under load in varying scenarios. For example, we have tests on z/OS which push large numbers of messages through MQ while at the same time cancelling queue managers, failing coupling facilities, and disrupting various other components. These tests include validation logic to make sure that every message is accounted for at the end of the test and that none have been lost or ended up in the wrong place.
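
The exact validation logic lives in our internal test framework, but conceptually it boils down to something like the sketch below: every message carries a sequence number, and at the end of the run a checker confirms that each number sent was received exactly once, so nothing was lost or duplicated. The class and method names here are made up for illustration.

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical end-of-run checker: verifies that every sequence number that
    // was sent arrived exactly once, so nothing was lost or delivered twice.
    public class MessageAccountingCheck {
        public static boolean allAccountedFor(long messagesSent, Iterable<Long> receivedSequenceNumbers) {
            Map<Long, Integer> counts = new HashMap<>();
            for (long seq : receivedSequenceNumbers) {
                counts.merge(seq, 1, Integer::sum);
            }
            boolean ok = true;
            for (long seq = 0; seq < messagesSent; seq++) {
                int count = counts.getOrDefault(seq, 0);
                if (count == 0) {
                    System.out.println("LOST: message " + seq + " was never received");
                    ok = false;
                } else if (count > 1) {
                    System.out.println("DUPLICATE: message " + seq + " received " + count + " times");
                    ok = false;
                }
            }
            return ok;
        }
    }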


Lastly, we also have lots of performance tests. These run on dedicated machines to ensure that we get reliable and repeatable numbers. The results are automatically checked against a baseline, and if the performance isn’t comparable then someone will investigate to make sure a performance regression hasn’t been introduced.
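
The comparison itself is conceptually simple. The sketch below is purely illustrative (the 5% tolerance and the throughput numbers are made up, not our real thresholds), but it shows the shape of the check: if a run falls too far below the recorded baseline, it gets flagged for someone to look at.

    // Illustrative baseline check: flag a run for investigation if measured
    // throughput falls more than a chosen tolerance below the recorded baseline.
    public class BaselineCheck {
        private static final double TOLERANCE = 0.05; // 5%, purely for illustration

        public static boolean withinBaseline(double baselineMsgsPerSec, double measuredMsgsPerSec) {
            return measuredMsgsPerSec >= baselineMsgsPerSec * (1.0 - TOLERANCE);
        }

        public static void main(String[] args) {
            double baseline = 100_000.0; // example numbers only
            double measured = 93_500.0;
            if (!withinBaseline(baseline, measured)) {
                System.out.println("Possible regression: " + measured + " msgs/sec vs baseline " + baseline);
            }
        }
    }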


We have bespoke tooling that makes it easy to run all, or any subset, of the tests on whatever combination of platforms and build versions we require. All of those test runs generate a lot of data, which we feed into monitoring tools to make it easier for us to spot trends.
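
You can think of the selection step as a filter over the test metadata described earlier. The record fields and values below are hypothetical, not taken from our real tooling, but they show how a platform-and-release subset might be picked out.

    import java.util.List;
    import java.util.stream.Collectors;

    // Hypothetical test metadata and filter; field names and values are made up
    // for illustration, not taken from our real tooling.
    public class TestSelection {
        record TestCase(String name, List<String> platforms, List<String> releases) {}

        static List<TestCase> select(List<TestCase> all, String platform, String release) {
            return all.stream()
                    .filter(t -> t.platforms().contains(platform) && t.releases().contains(release))
                    .collect(Collectors.toList());
        }

        public static void main(String[] args) {
            List<TestCase> all = List.of(
                    new TestCase("persistent.msg.restart", List.of("linux", "zos"), List.of("9.2", "9.3")),
                    new TestCase("shared.queue.failover", List.of("zos"), List.of("9.3")));
            // Pick everything that applies to z/OS at 9.3.
            select(all, "zos", "9.3").forEach(t -> System.out.println(t.name()));
        }
    }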

All of this leads to some impressive numbers.

The 30,000 tests we have consist of about 9 million lines of test code.

We run about 3 million tests per month on over 800 test machines.

When we have a version of code which we are about to ship, for example 9.3.0 or 9.3.3 (CD and LTS releases are treated the same), we run all our tests on all platforms, which uses over 1.5 years of machine time spread over a couple of weeks.

All this automation allows us to bring out three new releases of MQ a year, as well as many fix packs and APARs, while ensuring that MQ is as stable and reliable as our customers expect.


#IBMMQ