Db2 for z/OS and its ecosystem

Simplify hybrid cloud mainframe data integration through virtualization with IBM Data Virtualization Manager for z/OS

By Jonathan Sloan posted Tue February 13, 2024 08:00 AM


The verdict is in – hybrid cloud is the standard. That means organizations will need to implement their cloud applications as hybrid by design rather than hybrid by default. You will also need to think differently about data when it originates on the mainframe.

Most mainframe cloud application modernization approaches have defaulted to pushing, pulling, or replicating your mainframe data. But if you’ve been working with data long enough, you probably realize that these approaches are only part of a solution. Many mainframe applications, such as banking, financial services, or even retail applications, demand that data be accurate and up to date, and that most often means data must be accessed at its source.

For those of you who think otherwise, consider what happens when you make a copy of data and use it elsewhere. If the data has been subject to even minor changes, how do you determine which copy is correct, and how can you keep them in sync? Yes, there is a technical answer to every technical challenge, but there is always added complexity, cost, and latency. When data needs to be truly consistent, it should be accessed directly at its source.

When that data source is on the mainframe, there is an additional challenge to this story. Relational databases such as Db2 for z/OS are readily accessible via several methods, including RESTful calls and ODBC and JDBC connections; after all, SQL itself originated at IBM. However, traditional IBM Z data sources such as VSAM and IMS are not readily or efficiently accessible via SQL, and such access is a fundamental necessity for simplifying and modernizing mainframe data access for today’s developers.
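As a rough illustration of the relational side of this picture, the sketch below invokes a hypothetical Db2 for z/OS native REST service from Python. The host, port, collection, and service names are all invented for the example; only the general shape of the call (a POST to a service path with a JSON body of input parameters) follows the Db2 native REST convention.

```python
import json
from urllib import request

# Hypothetical endpoint; Db2 native REST services are exposed as
# POST /services/<collection>/<service> with a JSON body of parameters.
DB2_REST_BASE = "https://db2host.example.com:4711/services"


def build_service_request(collection: str, service: str, params: dict):
    """Build the URL and JSON body for a Db2 native REST service call."""
    url = f"{DB2_REST_BASE}/{collection}/{service}"
    body = json.dumps(params).encode("utf-8")
    return url, body


def call_service(collection: str, service: str, params: dict) -> dict:
    url, body = build_service_request(collection, service, params)
    req = request.Request(url, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:  # authentication omitted for brevity
        return json.load(resp)


# Example with a hypothetical service: look up one account at the source,
# rather than querying a replicated copy elsewhere.
# account = call_service("BANKING", "getAccount", {"ACCT_ID": "12345"})
```

The point of the sketch is that the application asks the source of record a question and gets the current answer back, instead of consulting a copy whose freshness it cannot verify.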

But accessing data at its source means opening your mainframe to the cloud, and that can be scary for some organizations. Unlike application containers, which can be spun up to manage application resiliency and capacity, data sources just can’t be duplicated to handle additional capacity requirements for the same consistency reasons mentioned earlier. To better understand the challenges associated with multiple copies of data see this article “The Transactional Fallacy: The Cloud is Not Always Better for Your Most Important Workloads” in issue 5/2022 of Enterprise Executive.

To make data accessible to the cloud for applications that may need to run 24x7, you need to consider reliability, scalability, and simplicity. That means your software must be enterprise ready. IBM delivers this capability via IBM Data Virtualization Manager for z/OS (DVM for z/OS). Recent improvements have further enhanced DVM for z/OS for use within hybrid cloud transactional applications, making it the go-to software for enabling cloud access to virtualized, traditional, non-relational mainframe data sources. To learn more about the recent announcement of IBM Data Virtualization Manager for z/OS 1.2, register for the webcast “Modernize access to your mainframe data for hybrid cloud and AI applications.”

Data virtualization masks the sometimes-complex underlying architecture of mainframe data sets while preserving read-write access. To do so, it employs an abstraction layer that shields developers from needing specific skill sets for each data source. Data is mapped from VSAM, IMS, or other mainframe data sources to a simplified table structure that most application developers can understand and access via industry-standard APIs like SQL. A metadata catalog keeps track of data location and availability. What is key for always-on hybrid cloud applications is that those data maps be shareable across environments: shared metadata objects enable horizontal scalability and provide high availability by keeping the metadata repository reachable even when a specific server is not.
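The mapping idea can be sketched in a few lines of Python. The field layout below is invented for illustration (DVM actually derives its maps from artifacts such as copybooks and database definitions), but it shows the essential move: a fixed-layout, VSAM-style record becomes a named, typed row that a developer can treat like a relational result.

```python
# Hypothetical field map, in the spirit of a virtual-table definition:
# (column name, byte offset, length, conversion)
CUSTOMER_MAP = [
    ("CUST_ID", 0, 6, int),
    ("NAME", 6, 20, str),
    ("BALANCE", 26, 9, lambda s: int(s) / 100),  # stored as cents (simplified)
]


def record_to_row(record: str) -> dict:
    """Turn one fixed-layout record into a relational-style row."""
    row = {}
    for name, offset, length, convert in CUSTOMER_MAP:
        raw = record[offset:offset + length].strip()
        row[name] = convert(raw)
    return row


# A fixed-width "VSAM" record: id, padded name, zoned balance in cents.
rec = "000042" + "Jane Smith".ljust(20) + "000012550"
row = record_to_row(rec)
# row -> {"CUST_ID": 42, "NAME": "Jane Smith", "BALANCE": 125.5}
```

In DVM itself, this translation happens inside the server, so the application simply issues `SELECT CUST_ID, NAME, BALANCE FROM CUSTOMER` and never sees the byte layout.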

Scalability is essential to support the many connections potentially coming in from a cloud environment. DVM for z/OS can scale both vertically and horizontally, allowing an organization to add capacity as necessary. Additional DVM for z/OS instances can share a metadata repository, as mentioned earlier, to support additional connection requirements; each added instance consumes limited additional memory and takes advantage of memory above the bar for memory-resident items.

Organizations are often correctly concerned about the impact of opening the mainframe for hybrid cloud access. They wonder about the potential impact on general processor utilization. It is true that the additional activity associated with online direct access will drive additional capacity utilization. But organizations often don’t consider that they are already driving up capacity via the replication and extract, transform, and load (ETL) processes used to move data from the mainframe to other platforms. And since replication and ETL processes generally move all data from source to target, an excessive amount of data is often moved rather than only the records needed for a specific inquiry.

Studies have indicated that as much as one third of mainframe capacity is consumed by extracting, replicating, copying, and moving data. In addition to the excess capacity used, copying data means increased latency, security risk, complexity, fragility (lack of resiliency) and increased bottom-line costs.

Lines of business and development teams need to weigh the benefit of real-time direct access to data against the potential additional cost. DVM for z/OS addresses some of these costs by taking advantage of IBM Z specialty processors whenever possible. Specialty processors such as zIIPs (IBM z Integrated Information Processors) help lower costs by handling eligible workloads like TCP/IP, DRDA, and DVM for z/OS processing.

Another benefit of virtualization is simplified development. Besides masking the underlying complexity of traditional, non-relational transactional data sets, data often needs to be combined from multiple sources for richer, more compelling customer applications. Instead of accessing these systems via separate queries and combining the results within an application, DVM for z/OS pushes queries to where the data originates and joins the virtual data sets in memory, reducing the amount of data moved off platform. Less data duplication can mean lower cost and lower risk (security and governance).
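A toy sketch of that pattern in Python (table names and data invented; DVM does this with SQL against virtual tables): each source filters its own rows first, simulating predicate pushdown, and only the surviving rows are joined in memory.

```python
def pushdown_join(left_rows, right_rows, key, left_pred=None, right_pred=None):
    """Filter each side at its 'source' first (simulating predicate
    pushdown), then hash-join the surviving rows in memory."""
    left = [r for r in left_rows if left_pred is None or left_pred(r)]
    index = {}
    for r in right_rows:
        if right_pred is None or right_pred(r):
            index.setdefault(r[key], []).append(r)
    return [{**l, **r} for l in left for r in index.get(l[key], [])]


# Hypothetical virtual tables: accounts from VSAM, transactions from Db2.
accounts = [
    {"ACCT_ID": 1, "BRANCH": "NYC"},
    {"ACCT_ID": 2, "BRANCH": "SFO"},
]
transactions = [
    {"ACCT_ID": 1, "AMOUNT": 250},
    {"ACCT_ID": 1, "AMOUNT": -40},
    {"ACCT_ID": 2, "AMOUNT": 99},
]

# Only NYC accounts and only debits survive the per-source filters,
# so far less data crosses the wire than a full extract would move.
result = pushdown_join(accounts, transactions, "ACCT_ID",
                       left_pred=lambda r: r["BRANCH"] == "NYC",
                       right_pred=lambda r: r["AMOUNT"] < 0)
# result -> [{"ACCT_ID": 1, "BRANCH": "NYC", "AMOUNT": -40}]
```

Contrast this with the ETL default, where both tables would be copied in full to another platform before any filtering or joining happened.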

DVM for z/OS supports more than just hybrid cloud access. It can be used for data access modernization efforts in several ways, including:

·       Hybrid cloud delivering direct, real-time access to the most current IBM Z data

·       Data access modernization, updating application data access with industry-standard APIs like SQL and REST

·       Data fabric and analytics to get real-time analytic insights from real-time data

·       Integration with IBM watsonx.data via a new PrestoDB connector to build AI workloads from IBM Z data

It’s a great time to take advantage of the cloud’s low cost of entry and ubiquitous capacity to build more compelling, differentiating customer applications. It’s also time to recognize the value of the mainframe’s reliability, availability, and security, and especially its data. Build hybrid cloud applications with these advantages in mind by minimizing, if not eliminating, data copies and by simplifying access to mainframe data sets via data virtualization.
