This is the second of two articles on virtualization. The first article focused on virtualization from the perspective of a programmer
whereas this article takes the view of the planner or system administrator. Its main focus is two software innovations: z/VM and Processor Resource/System Manager (PR/SM).
In the 1960s, before virtual storage was announced for VS1 and MVS, IBM was working on large-scale virtualization in which a physical machine like an IBM System/360 computer could run multiple instances of an OS, each serving a single user. This single-user OS, a forerunner of what later became VM’s Conversational Monitor System (CMS), could be used by individuals to write and edit programs, test applications and run application programs.
Later, this virtualization software was enhanced to host guest OSs so some VMs ran CMS whereas others ran a multi-user OS like MVS. This was a significant innovation because you could run both a production and a test version of the same OS to speed the process of implementing system maintenance. Also, you could use VM to test a new version of an OS in support of a major system upgrade.
Fast forward to 2015: What is the state of virtualization on z Systems mainframes? On a single z Systems computer you have a couple of alternatives that are not mutually exclusive. The alternatives are LPARs supported by PR/SM and VMs running under z/VM.
PR/SM, introduced by IBM in 1988, is a robust facility built into z Systems servers. With PR/SM, you can logically divide an enterprise computer into groupings of resources, such as CPU and memory, each of which can host an OS. For example, you could partition a z Systems mainframe into four LPARs and run z/OS, z/VSE, TPF and Linux, one in each LPAR. Expanding on this example, you could add a fifth LPAR and run z/VM in it.
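As a rough illustration of the partitioning idea, here is a toy model in Python. The LPAR names, operating systems and resource counts are hypothetical values chosen for the example, not actual PR/SM configuration data or interfaces:

```python
# Toy model of dividing one machine's resources into logical partitions.
# All names and numbers are illustrative, not real PR/SM values.
from dataclasses import dataclass

@dataclass
class LPAR:
    name: str
    os: str
    cpus: int
    memory_gb: int

machine_cpus, machine_memory_gb = 16, 512  # the whole physical machine

lpars = [
    LPAR("LPAR1", "z/OS", 4, 128),
    LPAR("LPAR2", "z/VSE", 2, 64),
    LPAR("LPAR3", "TPF", 2, 64),
    LPAR("LPAR4", "Linux", 4, 128),
    LPAR("LPAR5", "z/VM", 4, 128),
]

# With dedicated resources, the partitions cannot claim more than the
# machine physically has.
assert sum(p.cpus for p in lpars) <= machine_cpus
assert sum(p.memory_gb for p in lpars) <= machine_memory_gb

for p in lpars:
    print(f"{p.name}: {p.os} with {p.cpus} CPUs and {p.memory_gb} GB")
```

The assertions capture the constraint that distinguishes this dedicated-resource picture from the shared, overcommitted style of allocation that z/VM adds inside one of these partitions.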
VMs with z/VM
There are many reasons to run z/VM in a z Systems LPAR. z/VM can host far more partitions than PR/SM, hundreds to thousands of them, which makes it a stronger server consolidation engine. z/VM also offers significantly more flexibility because of its design and history, and it can even be used to emulate hardware that doesn’t physically exist on the server.
PR/SM and z/VM have different roles and uses and are typically run simultaneously on the same z Systems computer. PR/SM is a hypervisor that runs directly on the machine. Interestingly, it is based on an earlier version of VM/XA’s control program, so it has some VM built in.
z/VM has a longer history, appearing as a product in 1972, containing CMS and having the ability to host many guest OSs. It can be used to rapidly deploy VMs in real time, so it’s ideal for the kind of self-service associated with cloud environments. It also has a high degree of resource sharing with processors, memory and I/O bandwidth.
z/VM has an enormous amount of built-in support, e.g., TCP/IP for z/VM and Language Environment for z/VM (the runtime environment for z/VM application programs written in C/C++, COBOL and PL/I). z/VM also has optional support, e.g., Directory Maintenance Facility for z/VM, Performance Toolkit for VM (tools for analyzing z/VM and Linux performance data) and RACF Security Server for z/VM. This built-in and optional support is key to providing integration and management capability when multiple OS environments are at hand.
Implementation Example of Virtualization
Figure 1 is an example of an implementation that was developed to show the features of PR/SM and z/VM combined on a single z Systems computer.
Figure 1. LPARs and VMs in an Everyday Configuration
Figure 1 shows a single z Systems computer that is used to run a variety of OSs including z/OS, z/VM and Linux. At the bottom of the figure, the mainframe computer is displayed with a variety of computing processors. The Central Processor (CP) is the standard processor used by any OS and application. The IBM System z Application Assist Processor (zAAP) is used by z/OS for designated workloads including the IBM Java VM and XML System Services functions. The IBM System z Integrated Information Processor (zIIP) is used by z/OS for designated workloads, for example, various XML System Services, IP security offload and parts of the IBM DB2 Distributed Relational Database Architecture. The IFL is used by Linux and for z/VM processing in support of Linux.
There are other processor types that are not shown here like the System Assist Processor that offloads and manages I/O operations and the integrated firmware processor that is used for managing new generations of PCIe adapters.
In this implementation, LPARs 1 and 2 support the production z/OS workload. This would likely be a mix of application programs, including batch and real-time COBOL programs supported by Time Sharing Option. Online transaction processing subsystems like CICS or IMS, as well as database management systems like DB2 and access methods like VSAM, would also be running there. Commercial ISV solutions, like those found in the Global Solutions Directory, would also likely be part of the application mix.
LPARs 3 and 4 run pre-production z/OS workloads. In testing, pre-production is the last step in application verification before an application change or new application is moved to production. For this reason, pre-production environments are often configured to be as close as possible to the production environments they are designed to represent. In this case, there are two pre-production LPARs to match the two production LPARs and they include the same mix of supporting CP, zAAP and zIIP processors. Also, LPARs 3 and 4 are ideal candidates for z/OS system programmer pre-production testing.
In this implementation, LPARs 6 and 7 run Linux. The system designers have spread this Linux workload over two LPARs and have set up the system definitions to give the workloads in these LPARs a high priority in the use of system resources. Specifically, the system administrator has assigned one system processor for the exclusive use of each LPAR.
LPAR 5 is set up to run z/VM so VMs can be used for development and testing of z/OS and Linux. z/VM is ideal for development and test systems because you can bring up the VMs on demand, when you need them; until then, the only resources used are disk space for the images and associated files. Since there are often many instances of test systems with different revision levels and configurations, this flexibility and low operational cost are valuable. In the diagram, three VMs are shown, but in actual operation more or fewer VMs would probably be running at one time, depending on the problems being researched and fixed by software engineers and the development schedule of new applications handled by designers and programmers.
LPAR 8 is set up to run production Linux sharing a set of IFLs. Unlike the workloads in LPARs 6 and 7, this Linux workload shares a collection of processors among the VMs. Allowing a collection of VMs to share resources permits the overcommitment of z/VM real memory and real CPU, which is both technically possible and useful in normal operation.
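To make overcommitment concrete, here is a minimal sketch, with hypothetical guest names and memory sizes, of how an overcommitment ratio is commonly figured: the total virtual memory defined for the guests divided by the real memory of the z/VM LPAR:

```python
# Sketch of memory overcommitment in a z/VM LPAR. Guest names and sizes
# are hypothetical; only the arithmetic is the point.
vm_defined_memory_gb = {
    "LINUX01": 16,
    "LINUX02": 16,
    "LINUX03": 8,
    "LINUX04": 8,
}
real_memory_gb = 32  # real memory assigned to the z/VM LPAR

ratio = sum(vm_defined_memory_gb.values()) / real_memory_gb
print(f"overcommitment ratio: {ratio:.1f}")

# A ratio above 1.0 is normal for z/VM: guests rarely touch all of their
# defined memory at once, and paging covers the difference when they do.
```

The same ratio can be computed for virtual versus real CPUs; in both cases, modest overcommitment is what lets a shared-IFL configuration like LPAR 8 consolidate more guests than a dedicated-resource layout would allow.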
Is This Mix Practical?
This mix of OSs spread over LPARs and VMs is a classic z Systems mixed workload that minimizes power consumption and data center floor space as well as software and support costs. This way of designing systems and implementing applications takes advantage of the strengths of z Systems servers: high availability, robust security and strong performance. No other hardware and software solution provides a more effective answer to the challenge of hosting mixed workloads in a manageable and cost-effective manner.
Joseph Gulla is the IT leader of Alazar Press, a publisher of children’s literature.