Mainframe Modernization Leveraging IBM Z AIOps
What is mainframe modernization? Many vendors will tell you that it is the replacement of some or all of your mainframe infrastructure (which is old and costly to run, or so they say) with commodity servers and their software and services, which will run your applications at lower cost and, at the same time, run them faster. Those are pretty bold claims. Their customers aren’t always happy, but big words do sell well.
To be fair, though, these companies do have plenty of happy customers. That’s because there have always been folks running applications on the mainframe that really are better served by running them (or improved versions of them) on commodity servers.
But that’s not the only way to “modernize” a mainframe. What aspects of the mainframe actually require modernization? It comes down to three main attributes: cost, professional skills, and modern interfaces. That’s really it. So let’s take a closer look at each of these three “problems” your mainframe systems have supposedly saddled you with, and at how a unified IBM Z AI Operations platform may be the ticket to modernizing your IT infrastructure operations.
There is no argument that mainframe computing comes with a hefty price tag. If you’re running non-business-critical workloads on a mainframe, that’s going to be pretty painful; if that’s the case, call one of the vendors I mentioned at the start, because one of them can certainly help you. However, if your mainframe processes 75% of your revenue with high-intensity transaction processing, you’re probably using the most cost-effective computing solution on the planet for those kinds of workloads.
The cost of the mainframe includes the hardware (very costly), the software (costly), power consumption, support personnel, and more. Replacement servers are very cheap per unit, but costly when you run hundreds or thousands of them. Server software is also pretty reasonable per server, but can become outrageous at that scale, and the same goes for support personnel, power, air conditioning, and so on. If you really care about the bottom line, be prepared to look closely at this. You could start by reading a paper that details the cost advantage of the mainframe for large-scale, transaction-intense workloads, or IDC’s blog entry outlining the business value of the transformative mainframe.
As you’ll see, exploiting IBM Z AIOps technology can be a way to reduce or avoid downtime, automate operations, and manage resources more precisely, all leading to significant cost savings.
It’s no secret that the most experienced mainframers are beginning to retire, but is that expertise dying off? No. Is it impossible to find experienced mainframe people? No; it’s just hard to find early-career folks with mainframe experience. In fact, I might go as far as to say that the seemingly diminished availability of mainframe skills is really an artificial shortage.
You see, for years many IT organizations have followed Gartner’s bad advice to implement Bimodal IT: dividing IT into two groups, the “cool kids” running new technologies and the legacy group running the mainframe. That split effectively puts the mainframe, and all of the people associated with it, into a silo where little new development, little hiring, and few new purchases are “needed.” The natural result is that staff lost to attrition are not replaced; in fact, any downsizing is disproportionately targeted at the unpopular silo, which in turn causes a predictable “shortage” of mainframe expertise.
There are people with mainframe experience out there; any decent headhunter can find one for you. Finding a young, inexpensive programmer with mainframe experience is harder. But what about your own people? Are COBOL and JCL skills beyond today’s millennial computing talent? Certainly not. You could train your own people, or hire new grads who can do that work. Look at what has happened in India: mainframe support has been outsourced there for years, and now there are legions of mainframe talent there. The “shortage” can be overcome.
Yes, the green-screen interfaces are cumbersome and often hinder user workflows. Yet organizations still use them. Why? Mostly because they don’t want to invest in a new computing architecture, because experienced talent doesn’t want to change its daily working habits, or because they’re putting off the inevitable. They are wary of the risks such a paradigm shift might bring: the interruption of daily business, the possibility that the shift won’t work or will end up costing more, and any number of unknowns. That is a completely understandable response to a pending change that is fraught with risk and guaranteed to cost millions of dollars in capital expense.
Does that mean these IT organizations are stuck with green-screen interfaces? No. A migration would clearly result in new interfaces, but there’s no reason new interfaces can’t be applied to the mainframe; it’s just a matter of finding the right way to do it. And as for the earlier point about procuring fresh talent, companies must do it, because new grads most certainly are not learning green-screen skills in school.
Since the three big reasons for “mainframe modernization” can each be solved without diving into a full-on migration, what techniques are available now to modernize your mainframe systems? It turns out there are plenty. Probably the most pressing issue is the user interface, and there are many ways to tackle it. Similarly, there are solutions that can ease your reliance on maintaining legacy code. If cost is an issue, there are creative ways to put a big dent in it, and if performance is an issue, there are some very clean solutions for that too. Finally, improved IT transparency, observability, monitoring, and operational excellence can help solve many of these challenges.
There is actually a long history of solutions for modernizing mainframe green-screen interfaces. The first were screen scrapers, which still exist today; they capture the character data (or bitmap data) from a terminal screen and convert it for use by a newer interface. Some emulators relied on user macros that could drive up mainframe resource usage costs. More adventurous techniques involve actually redesigning some of the legacy code. All of these solutions carry some level of risk: rising resource costs, significant redevelopment expense, and so on.
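To make the screen-scraping idea concrete, here is a minimal sketch of what field-level scraping amounts to. The field names, positions, and screen contents below are invented for illustration; real 3270 layouts vary by application, and real scrapers work against terminal emulation APIs rather than plain strings.

```python
# Hypothetical field layout: (row, start column, end column) per field.
# A real scraper would take these positions from the application's screen map.
FIELD_MAP = {
    "account": (0, 10, 20),
    "balance": (1, 10, 22),
    "status":  (2, 10, 18),
}

def scrape_screen(screen_rows, field_map):
    """Extract named fields from a fixed-layout character screen."""
    record = {}
    for name, (row, start, end) in field_map.items():
        record[name] = screen_rows[row][start:end].strip()
    return record

# A captured screen buffer, one string per row (contents invented).
screen = [
    "ACCOUNT:  0012345678      ",
    "BALANCE:  1,204.55        ",
    "STATUS:   ACTIVE          ",
]
print(scrape_screen(screen, FIELD_MAP))
```

The fragility is visible even in this toy: any change to the screen layout silently breaks the field positions, which is one reason scraping is considered a stopgap rather than a modernization endpoint.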
There are plenty of success stories here, though. For instance, IBM’s premier monitoring products, the OMEGAMON family, which have been around for decades and are used by hundreds of mainframe customers, have been modernized from green screens to the Tivoli Enterprise Portal, which allows users to easily visualize the data they need. Read here for more.
Today the biggest demand is for mobile access to mainframe applications, and there are now solutions that leverage the legacy code base to drive new mobile interfaces for green-screen applications. The good news is that these tools use legacy applications as they are. Legacy applications contain years’ worth of intellectual property, and they run fast and reliably. Those advantages are preserved: no new mainframe-side processing takes place, and in fact the legacy code need not be modified at all. And that leads us to code modernization.
Today, there are solutions that can leverage all of the code design work done on COBOL programs over the past decades and help you move seamlessly into a future where there may be a continuing shortage of mainframe COBOL, JCL, and assembler language expertise. Some of these solutions translate code into various distributed-systems flavors of COBOL; however, they are generally limited to smaller projects, where a re-platform will not affect performance. For larger projects, costs quickly get out of control when you try to match previous levels of throughput, five-nines reliability, redundancy, and both horizontal and vertical scalability on another platform.
Better solutions let you leverage existing code as it is, without a major redesign, re-engineering, or complete migration. For larger projects, building on what is in place is the fastest and most economical way to modernize. New code can interwork with legacy code: new business rules and business logic can augment the legacy code base, written by younger, less expensive programmers using modern toolsets and programming languages. And that code can run anywhere, on your mainframe or on other platforms.
This also relates to securing talent. Most large firms have figured this out: if you win the war for developers, you have a better chance of winning in business. Therefore, providing a world-class experience for developers is of ultimate importance. One great example is the IBM Z Cloud Modernization Stack on AWS, which provides a cloud-native experience to modernize development practices, helping developers adopt a unified set of tools and a DevOps methodology. Read more about that here. As more developers gain cloud skills, this will accelerate an enterprise’s journey to a fully optimal hybrid cloud infrastructure.
While running mainframe systems cannot, in and of itself, truly be considered a cost issue (see above), there are many ways to optimize mainframe operations without changing code logic, databases, or platform hardware. One is high-performance in-memory technology, which can sharply reduce the CPU and MSU resources your mainframe applications consume, thereby reducing their impact on the monthly bill. Similarly, smart performance capping can reduce cost; some of the best solutions can do so without actually capping business-critical workloads.
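As a toy illustration of why an in-memory layer saves processing cost, the sketch below (plain Python, not any IBM product) counts how many times a simulated backend is actually hit when a thousand reads of a hot key are served from cache:

```python
from functools import lru_cache

# Counter so we can observe how often the expensive path runs.
calls = {"backend": 0}

def read_from_database(key):
    # Stand-in for an expensive I/O or CPU-heavy lookup.
    calls["backend"] += 1
    return f"record-{key}"

@lru_cache(maxsize=1024)
def cached_read(key):
    # In-memory layer: repeated reads of a hot key never hit the backend.
    return read_from_database(key)

for _ in range(1000):
    cached_read("hot-account")

print(calls["backend"])  # 1 — only the first read reached the backend
```

The same principle scales up: every repeated lookup that is answered from memory is CPU (and, on the mainframe, MSU consumption) that never appears on the bill.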
Other examples include using anomaly detection software to avoid costly mainframe outages, establishing dynamic thresholds that allow greater operational flexibility and efficiency as system parameters change, automating operations for newly provisioned resources dynamically, and more accurately forecasting workload performance and capacity as they pertain to Tailored Fit Pricing capping scenarios.
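A dynamic threshold of the kind mentioned above can be sketched in a few lines: instead of alerting on a fixed limit, the alert band is recomputed from a rolling window, so it adapts as the baseline drifts. This is a simplified illustration, not how any particular IBM Z AIOps product implements it:

```python
import statistics

def dynamic_threshold_alerts(values, window=20, sigmas=3.0):
    """Flag points outside a rolling mean +/- sigmas * stdev band.

    Unlike a fixed threshold, the band is recomputed from the most
    recent `window` samples, so it tracks a shifting baseline.
    """
    alerts = []
    for i in range(window, len(values)):
        baseline = values[i - window:i]
        mean = statistics.fmean(baseline)
        stdev = statistics.stdev(baseline)
        if stdev and abs(values[i] - mean) > sigmas * stdev:
            alerts.append(i)
    return alerts

# Steady, mildly noisy CPU utilisation with one spike at index 30.
cpu = [50 + (i % 3) for i in range(60)]
cpu[30] = 95
print(dynamic_threshold_alerts(cpu))  # [30]
```

Note that once the spike enters the rolling window it widens the band, so a single outlier does not trigger a flood of follow-on alerts; that self-adjustment is the operational flexibility the text refers to.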
One tried and true method to improve performance is a general systems upgrade: adding processor cores, memory, and other hardware to your existing machines, or even upgrading to the newest mainframe system (z16), if you haven’t already. Upgrading system software can also improve performance. These solutions will, of course, come with an increase in operations cost. However, some of the same solutions that help control costs can also make a big difference in performance without adding to your monthly bill; for example, in-memory technology can improve application performance as well as database performance (in cases where many database applications are optimized). And the new boxes come with built-in support for new software tools such as IBM z/OS Workload Interaction Correlator and IBM z/OS Workload Interaction Navigator, which can provide a deep-dive view into your entire workload stack and help diagnose the root cause of critical performance issues.
As you know, a tremendous amount of IT data is saved every hour of every day across all of your systems, both mainframe and midrange; enough data that you could realistically call it your own “IT big data.” All companies leverage this data at least for the purposes of paying the monthly licensing bills. The more serious IT organizations also use it to examine efficiency and to glean some analytical insight.
Going beyond that, however, is where you can make a quantum leap: IT business intelligence. By adding business structure and costing information to your IT data, it becomes possible to measure who in the company is using which resources, and how much that is costing. It can also help measure the immediate effects of business changes (company mergers and acquisitions, process changes, new product introductions, etc.). This is where tools like IBM Z Software Asset Management can really pay off, helping you reduce wasted software spend, avoid SLA infractions, and stay audit-ready. The power of IT’s own data can help change IT’s position from a huge cost center into a window into general business efficiency.
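The cost-attribution idea is simple to sketch. Assuming invented usage records and a hypothetical blended rate per MIPS-hour (real chargeback models are far more nuanced), attributing cost to business units is essentially a join-and-aggregate over the IT data:

```python
# Invented usage records: which department consumed how many MIPS-hours.
usage = [
    {"dept": "payments", "mips_hours": 1200},
    {"dept": "claims",   "mips_hours": 300},
    {"dept": "payments", "mips_hours": 150},
]

# Hypothetical blended cost rate; a real model would vary by workload and time.
rate_per_mips_hour = 4.50

def charge_back(records, rate):
    """Roll IT resource usage up to a per-department cost figure."""
    totals = {}
    for r in records:
        totals[r["dept"]] = totals.get(r["dept"], 0) + r["mips_hours"] * rate
    return totals

print(charge_back(usage, rate_per_mips_hour))
# payments: 1350 MIPS-hours * 4.50 = 6075.0; claims: 300 * 4.50 = 1350.0
```

Once usage can be priced per business unit like this, the same data answers "what did that acquisition cost us in IT terms?" questions, which is the business-intelligence leap the text describes.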
Overall, there are skills efficiencies to be gained with IBM Z AIOps technologies, such as automation reducing the demand for mainframe expertise. Longer term, AI technologies can capture and leverage tribal knowledge, further reducing skills pressure. In time, the need for some skills may disappear while the need for others simply changes form with advances in AI/ML. Either way, enterprises will always need to handle massive volumes of data securely and efficiently so they can return value to their stakeholders, something the mainframe is uniquely positioned to do.
Nobody will argue with the idea that today’s IT systems must be modernized to handle the new and changing demands of tomorrow. And there are as many ways to do that as there are bits of data in your cell phone’s memory card. But don’t let anyone else define what “modernization” means for you: it doesn’t mean adopting Vendor A’s specific (and possibly inflexible) software solutions, and it certainly doesn’t mean suddenly, or even gradually, dumping your existing high-value, mission-critical IT assets into the landfill. So if you’re running a mainframe, the very best system on the planet for processing business data, and it’s generating 60 or 75 percent of your revenue, find a partner (like IBM) who will actually modernize it with you, with an AIOps approach that lifts the burden from your people while making your systems smarter.
For more information, visit our IBM Z AIOps website, or consider taking this free assessment to determine what’s next on your IBM Z AIOps journey.