
6 Ways IZTA In-memory Technology Can Save You


(Originally published on Planet Mainframe by DataKinetics CEO Allan Zander.)

When people hear talk about in-memory technology, they immediately assume the discussion revolves around Big Data and analytics being run on distributed systems, and that the database part of the discussion focuses on things like SAP HANA, TIBCO, VoltDB, or even IBM BLU Acceleration. While these are all great products, and distributed computing is everywhere, they’re mostly irrelevant in a mainframe-centric discussion.

Now some mainframe folks might rightly start thinking about buffering, which is a type of in-memory technology; it certainly applies to DB2 databases, and it is an everyday concern for mainframe folks from the DBA through to the CIO. But that's not what I'm talking about. I'm talking about mainframe high-performance in-memory technology (IBM's IBM Z Table Accelerator, or IZTA) that has been, and continues to be, as revolutionary for the mainframe world as the distributed-systems products mentioned above are for the distributed world. Most of the big banks use mainframe in-memory technology, and if you're not using it now, it could help you solve some of the most serious problems you're facing today.

Here are just six ways IZTA in-memory technology can save you:

  1. Accelerate your batch applications
Accelerating batch applications is the most direct way in which IZTA in-memory technology can make a difference in the life of a CIO: it delivers a noticeable improvement in application performance and resource usage from the application's perspective, and it provides the most immediate ROI.
It works by allowing a batch app to read its most frequently accessed reference data over a very short code path. To do this, small amounts of data are copied into high-performance in-memory tables and accessed from there using a simple API. Using this technique, some applications have been made to run 100x faster. Even modern, well-designed batch applications can still be made to run 30x faster or better. This can improve your workload capacity stance. And no changes to the database are required at all.
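To make the pattern concrete, here is a minimal sketch in Python. It is not the actual IZTA API; the function names, the sample currency table, and the sqlite3 stand-in for DB2 are all assumptions for illustration. The hot reference data is copied into memory once at job start, and the batch loop then resolves every lookup from memory instead of issuing a database call per record:

```python
# Illustrative sketch only: the real IZTA API is not shown here. This mimics the
# pattern of copying hot reference data into an in-memory table once, then
# resolving lookups from memory on the short code path instead of per-record
# database calls. Names like load_reference_table() are hypothetical.

import sqlite3

def build_reference_db():
    """Stand-in for the DB2 reference table (e.g., currency conversion rates)."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE currency (code TEXT PRIMARY KEY, rate REAL)")
    db.executemany("INSERT INTO currency VALUES (?, ?)",
                   [("USD", 1.0), ("EUR", 1.08), ("CAD", 0.73)])
    return db

def load_reference_table(db):
    """Copy the small, hot reference table into an in-memory structure once."""
    return {code: rate for code, rate in db.execute("SELECT code, rate FROM currency")}

def batch_run(records, rates):
    """The batch job: each record lookup is a memory access, not a database call."""
    return [amount * rates[code] for code, amount in records]

if __name__ == "__main__":
    db = build_reference_db()
    rates = load_reference_table(db)            # one-time copy at job start
    records = [("EUR", 100.0), ("CAD", 250.0)] * 3
    print(batch_run(records, rates))            # all lookups served from memory
```

The speed-up comes entirely from the shorter code path on the lookup side; the database itself is left untouched.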
  2. Accelerate your online applications
Optimizing online transaction processing applications using high-performance in-memory technology is really no different from optimizing batch applications, from a technical perspective. You pick the data that you access most often, copy it into high-performance in-memory tables, and get the application to access that data from there using an API and a really short code path. The rest of the data accesses (the other 95 percent or so) are made directly from DASD as before.
The big difference is in identifying the data that is best suited for optimized access. For batch applications, it's the data that is accessed hundreds of thousands of times in an hour, or in a batch run. For OLTP applications, it's a little different. You may have to look for the data that is accessed a few hundred times for every online transaction over the course of an hour or a day. It can be a little harder to identify, but once done, the savings in processing time and resource usage, along with the workload capacity improvements, are no less than they are for batch applications.
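As a rough illustration of that identification step, the sketch below counts how often each table is read per transaction in an access trace and shortlists the ones hot enough to be worth placing in memory. The trace format, the threshold, and the function name are assumptions, not a product feature:

```python
# Illustrative sketch only: a hypothetical way to shortlist OLTP reference data
# for in-memory placement by counting how often each table is read per
# transaction in an access trace. Threshold and trace format are assumptions.

from collections import Counter

def hot_candidates(access_log, num_transactions, min_reads_per_txn=100):
    """access_log: iterable of (table, key) reads sampled over num_transactions."""
    counts = Counter(table for table, _key in access_log)
    return {table: reads / num_transactions
            for table, reads in counts.items()
            if reads / num_transactions >= min_reads_per_txn}

if __name__ == "__main__":
    # A toy trace: RATE_TBL is read ~200 times per transaction, CUST_TBL rarely.
    trace = [("RATE_TBL", i % 50) for i in range(2000)] + [("CUST_TBL", 1)] * 10
    print(hot_candidates(trace, num_transactions=10))   # {'RATE_TBL': 200.0}
```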
  3. Accelerate your DB2 database
While IZTA in-memory technology applied to applications does nothing directly to optimize DB2, the application optimization by itself has a beneficial effect on the database. In cases where multiple applications (both batch and online) can be optimized using application-specific in-memory technology, the cumulative effect can result in significant improvements to overall DB2 performance.
Even in cases where the benefits as applied on an application-by-application basis are minimal, the cumulative effect can be significant. The result can be a virtual increase in system throughput capacity and a significant reduction in overall DB2 resource usage or, alternatively, an opportunity for a big increase in workload adoption.
  4. Solve the worst business rules maintenance challenges
Believe it or not, there are still some organizations that run mainframe applications with business rules embedded in them. Why would they still do this? For one of two reasons: either because they are legacy applications that they are planning to replace or, more likely, because there is an urgent business need for uber-fast processing of business rules. An unfortunate side effect is that rule maintenance is extremely painful and time consuming, involving program recompiles.
In-memory technology can solve this problem by externalizing rules into high-performance IZTA in-memory tables and accessing them at program speed, using a very short code path. Business rules are easily maintained, and do not require program recompiles, or any other such complication.
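Here is a minimal sketch of the idea, using a made-up discount rule as the example; the rule schema and the plain Python list standing in for an IZTA table are assumptions. The same decision is driven first by hard-coded logic and then by rows in a table, so a rule change becomes a data update rather than a recompile:

```python
# Illustrative sketch only: business rules externalized into an in-memory table
# (a stand-in for an IZTA table) and evaluated at program speed, so changing a
# rule means updating table rows, not recompiling the program. The rule schema
# here (product, minimum amount, discount) is a made-up example.

# Hard-coded version: every change to a threshold forces a recompile/redeploy.
def discount_hardcoded(product, amount):
    if product == "LOAN" and amount >= 50_000:
        return 0.02
    if product == "CARD" and amount >= 10_000:
        return 0.01
    return 0.0

# Externalized version: the same logic driven by rows in an in-memory table.
RULES = [
    # (product, minimum amount, discount) -- maintained as data, not code
    ("LOAN", 50_000, 0.02),
    ("CARD", 10_000, 0.01),
]

def discount_from_rules(product, amount, rules=RULES):
    for rule_product, minimum, discount in rules:
        if product == rule_product and amount >= minimum:
            return discount
    return 0.0

if __name__ == "__main__":
    assert discount_hardcoded("LOAN", 60_000) == discount_from_rules("LOAN", 60_000)
    RULES.append(("CARD", 5_000, 0.005))   # a rule change: no recompile needed
    print(discount_from_rules("CARD", 7_000))
```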
  5. Change legacy packaged applications into faster in-memory applications
IT organizations sometimes run packaged applications for which there are no possibilities for editing or optimizing. Others just have legacy applications without the in-house skill sets to make changes or optimize. Still others are concerned about the risk involved in making changes to legacy products. These are all valid concerns, and justify the decision to say no to code changes.
There are solutions even for these folks: an in-memory solution with an SQL interception/redirector can change these legacy DB2 applications into better-performing in-memory DB2 applications without changing a line of code.
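The sketch below is only a conceptual illustration of the redirect idea; the real interception happens below the application, with no application changes, and the class, the regex matching, and the sqlite3 stand-in here are all assumptions. Reads against designated reference tables are answered from an in-memory copy, while every other statement passes through to the database unchanged:

```python
# Illustrative sketch only: a toy "SQL redirector" that intercepts reads against
# designated reference tables and serves them from an in-memory copy, while
# everything else passes through to the database unchanged. All names here are
# hypothetical; this is not how the actual product hooks into DB2.

import re
import sqlite3

class RedirectingConnection:
    def __init__(self, db, accelerated_tables):
        self._db = db
        # One-time copy of each accelerated table into memory.
        self._cache = {
            t: list(db.execute(f"SELECT * FROM {t}")) for t in accelerated_tables
        }

    def execute(self, sql, params=()):
        m = re.fullmatch(r"SELECT \* FROM (\w+)", sql.strip(), re.IGNORECASE)
        if m and m.group(1) in self._cache:
            return iter(self._cache[m.group(1)])    # served from memory
        return self._db.execute(sql, params)        # passed through to the DB

if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE branch (id INTEGER, name TEXT)")
    db.execute("INSERT INTO branch VALUES (1, 'Main'), (2, 'East')")
    conn = RedirectingConnection(db, accelerated_tables={"branch"})
    print(list(conn.execute("SELECT * FROM branch")))         # in-memory path
    print(list(conn.execute("SELECT count(*) FROM branch")))  # database path
```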
  6. Improve system capacity using current assets

For the most part, IZTA in-memory technology makes things run faster. It does this by allowing processing to take place using fewer system resources: less CPU, less I/O, and lower MSU consumption. And that translates directly into improved capacity for growing workloads.

What Next?

You may have realized that these are some of the biggest challenges for any organization running mainframe systems at the core of their business operations. In fact, it is likely that you are struggling with one or two of these challenges right now. My advice to you is to look into IZTA in-memory technology. In-memory technology is not only for the sexy new distributed systems. It never has been; it’s been running on mainframe systems for decades, and is still being used by virtually all of the biggest banks and insurance companies in the world.

Do some due diligence and look into it. I’m willing to bet that you can solve at least one big problem this year with IZTA in-memory technology.

See more on IBM’s IZTA: https://www.ibm.com/ca-en/products/z-table-accelerator/details