
Is it Wrong to Rely on Legacy Systems?

By Destination Z posted Mon December 23, 2019 03:40 PM


Much of the software at the core of the systems that handle day-to-day activity in major banks and other large companies is 20, 30 or more years old. But if there’s a glitch of any kind, the decision to update and continue using these legacy systems, rather than bringing in “shiny new applications,” is an easy target for criticism.

Reusing perfectly good legacy code to support new requirements makes sound financial and business sense. And many organizations have adapted their existing mainframe systems to handle new workloads from online and mobile users. Think of the huge volumes processed online by financial services companies who have adapted quotation engines to handle mortgage and insurance quotes over the Web and mobile devices.
 
But there’s a stigma around “old” technology, which means many people feel happier with new “modern” IT. In all other areas of business, environmental concerns are leading to increased reuse and recycling; so why not recycle parts of your older computer systems, originally designed for internal use, and reuse them as components behind your new customer interfaces?
 
There are good reasons for sticking with established systems that have stood the test of time. What’s more, do you really have the time to rewrite what you already have running? You’ve heard of brownfield building sites—new houses built on disused industrial land to avoid encroaching on protected countryside. Well, now there’s a concept of brownfield architecture—creating new computer systems by combining the best of the old with the new—thus avoiding having to build new systems from the ground up and reinventing the wheel. (It's true, look it up on Wikipedia if you don't believe me.)
 
If you do reuse legacy applications, then you need to make sure they’re going to be fit for the job, and that’s often an application-performance issue. A component that saw low usage rates when it served only internal users can suddenly face a 10-fold, 100-fold or 1,000-fold increase in volume once it is reused as part of an external, customer-facing system. This sort of thing does happen.
 
At low transaction volumes, application performance and resource utilization might not be a problem, and they are often overlooked. But sudden volume increases as a result of reengineering immediately highlight inefficiencies and put the focus squarely on application performance.
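To make that concrete, here is some purely illustrative back-of-the-envelope arithmetic, sketched in Python; the transaction volumes and the 5 ms of avoidable CPU per transaction are assumptions for illustration, not figures from any real system.

```python
# Purely illustrative: a small per-transaction inefficiency that is invisible
# at internal volumes can dominate CPU consumption once the same code sits
# behind a customer-facing interface.
cpu_ms_wasted_per_txn = 5  # assumed avoidable CPU per transaction

workloads = {
    "internal (original) workload": 10_000,          # assumed transactions/day
    "external (reengineered) workload": 10_000_000,  # assumed 1,000-fold increase
}

for label, txns_per_day in workloads.items():
    wasted_cpu_hours = txns_per_day * cpu_ms_wasted_per_txn / 1000 / 3600
    print(f"{label}: ~{wasted_cpu_hours:.2f} avoidable CPU-hours per day")

# internal (original) workload: ~0.01 avoidable CPU-hours per day
# external (reengineered) workload: ~13.89 avoidable CPU-hours per day
```

The inefficiency that nobody noticed at internal volumes suddenly matters a great deal.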
 
To maintain high service levels at high transaction volumes, it’s really important that systems-management teams ensure applications are using mainframe CPU efficiently. Application-focused performance tuning is what is needed here: it works by profiling how (and where) applications consume CPU resources as they run, so that hot spots and inefficiencies can be identified and then corrected.
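As a rough sketch of the principle (not the actual mainframe tooling, which would typically profile COBOL, PL/I or Java workloads under z/OS), here is a minimal Python example using the standard cProfile module; the lookup_rate and price_quote routines are hypothetical stand-ins for a reused legacy component.

```python
# Hypothetical illustration of hot-spot profiling: run a workload under a
# profiler, then rank the call sites that consumed the most CPU time.
import cProfile
import io
import pstats


def lookup_rate(product_code):
    # Deliberately "inefficient" routine: a linear scan that is cheap at low
    # volume but becomes a hot spot when call rates rise.
    table = [(f"PROD{i:04d}", i * 0.01) for i in range(5000)]
    for code, rate in table:
        if code == product_code:
            return rate
    return None


def price_quote(n_requests):
    # Simulate a burst of quote requests hitting the reused legacy routine.
    return sum(lookup_rate(f"PROD{i % 5000:04d}") or 0.0 for i in range(n_requests))


profiler = cProfile.Profile()
profiler.enable()
price_quote(2000)
profiler.disable()

# Report the functions that consumed the most cumulative time: these are the
# "hot spots" a tuning exercise would target first.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())
```

In practice, a profiler pointed at the reused quotation engine would show whether a routine like the linear table scan above is the hot spot worth tuning before the new workload arrives.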
 
So what exactly is “old” about legacy systems? In reality, the mainframe hardware they run on is as modern and durable as any other computer hardware today, and the application software that is the real legacy has stood the test of time, keeping the computer systems of major companies running for nearly 50 years.
 
Providing you put the necessary checks in place, it can make perfectly good sense to live off the legacy of code that you invested so much time in accumulating.
 
Philip Mann is principal consultant and mainframe performance-management expert at Macro 4. He has been working with IBM mainframes for more than 30 years, including more than 10 with Macro 4, where application performance tuning is one of his major interests.
