There’s a lot of talk at the moment blaming virtually every highly publicized IT failure on legacy systems running on mainframe computers from 30, 40 or even 50 years ago. Of course that is wrong on so many levels, but it’s an easy scapegoat for company directors to use to distract attention from what’s really going on.
I’d be happy to expand on all the false press that mainframes are getting in another article, but the current crop of problems does underline the fact that the performance and responsiveness of every computer system component can no longer be easily ignored. Right now this applies especially to mainframe processing, as application components that were once only used internally are being re-engineered and are now playing an active part in providing customer-facing services.
So, for example, people who are used to checking their bank accounts or getting online insurance quotes on their smartphone or tablet devices may be blissfully unaware that at the back end it’s those re-engineered mainframe applications that are doing the work. But this extra workload from a new unpredictable user group can certainly pile the pressure on processing systems.
In the past, mainframe technicians would focus on the performance of the underlying system when talking about and carrying out mainframe tuning exercises. Not a bad thing to do, but ultimately limited, because it ignores what is happening inside the business applications that we all interact with.
Application performance tuning, which means identifying and correcting problems in the way applications use processing power, can only be done by looking at and analyzing the application itself. Yet application internals are precisely what mainframe technicians—and in general the whole group of operational staff charged with running applications in production—typically know little about.
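To make the distinction concrete, here is a minimal, hypothetical Java sketch of the kind of problem that only application-level analysis can uncover: a program that issues one database call per record when a single set-based query would do. The table name, column names and JDBC usage are purely illustrative and not taken from any real system.

```java
import java.sql.*;

public class BalanceReport {

    // Inefficient: one prepare/execute round trip per account. System-level
    // monitoring just shows a busy database; only looking inside the
    // application reveals why.
    static double totalPerRow(Connection conn, int[] accountIds) throws SQLException {
        double total = 0;
        for (int id : accountIds) {
            try (PreparedStatement ps =
                     conn.prepareStatement("SELECT BALANCE FROM ACCOUNTS WHERE ID = ?")) {
                ps.setInt(1, id);
                try (ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) {
                        total += rs.getDouble(1);
                    }
                }
            }
        }
        return total;
    }

    // Tuned: let the database aggregate in a single call, cutting the work
    // per transaction dramatically.
    static double totalSetBased(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT SUM(BALANCE) FROM ACCOUNTS")) {
            rs.next();
            return rs.getDouble(1);
        }
    }
}
```

Neither version looks wrong from the operations console; the difference only shows up when you analyze what the application is actually asking the system to do.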
There was a time when you would only consider application-based tuning if and when poor application performance was seriously hindering the operation of your business. A bit of complaining by internal users that response times were slow would rarely trigger this, as it was too difficult to get a multi-skilled and multi-disciplined team together to look at such issues.
Then IBM introduced Variable Workload License Charges (VWLC) to the mainframe world, which was an open invitation to save money on license fees for anyone prepared to tune their applications to run more efficiently and consume fewer of those expensive processing MIPS. Even so, proactive application performance tuning as a cost containment exercise is still limited to a small number of more enlightened mainframe users.
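To see why that invitation matters, here is a back-of-envelope sketch in Java. Sub-capacity charging is typically driven by the peak rolling four-hour average consumption in MSUs, so shaving that peak through application tuning feeds straight into the monthly bill. Every figure below is invented purely for illustration; real MSU rates are negotiated per contract.

```java
public class LicenseSavings {
    public static void main(String[] args) {
        // All values are hypothetical, for illustration only.
        double peakMsus = 500.0;         // assumed peak rolling four-hour average
        double costPerMsuMonth = 100.0;  // assumed monthly software charge per MSU
        double tuningReduction = 0.10;   // assume tuning trims the peak by 10%

        double before = peakMsus * costPerMsuMonth;
        double after = peakMsus * (1.0 - tuningReduction) * costPerMsuMonth;

        System.out.printf("Monthly charge before tuning: %,.0f%n", before);
        System.out.printf("Monthly charge after tuning:  %,.0f%n", after);
        System.out.printf("Annual saving:                %,.0f%n", (before - after) * 12);
    }
}
```

Even a modest 10 percent trim on the peak compounds into a significant annual figure, which is why the economics reward tuning the applications rather than just the system.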
So, here’s yet another call to take application performance more seriously, be it on the mainframe or elsewhere. And, in the mainframe world, if you are going to take what was previously internal processing logic, re-engineer it and then use it in customer-facing systems—which is generally a good idea with lots of brownie points for recycling—then you need to make sure that performance-wise it’s going to be fit for purpose.
In this new world, your users and your biggest critics are going to be members of the public on their tablets and phones. A few internal users grumbling about response times was a manageable problem; users on the street can cause real embarrassment and damage to your company’s image.
Now is probably a good time to make application tuning a bigger priority in your organization, before your new users do it for you.
Philip Mann is Principal Consultant at mainframe performance management expert Macro 4. Philip has been working with IBM mainframes for more than 30 years, including more than 10 years with Macro 4, where application performance tuning is one of his major interests. In Phil’s opinion, successful application performance management is often hindered by the very problems discussed in this article.