Some 2,000 customers worldwide use IBM® IMS™. And not just on a small scale: banks, insurance companies, brokerage houses, hospitals, automobile manufacturers, airplane manufacturing companies, telephone companies, and government agencies use IMS for all sorts of critical tasks.
Over 95% of Fortune 1000 companies use IMS in some capacity, as do all of the top five US banks. The incredible thing about IMS is that it has been around for over half a century and is still in use today. What's more, clients' trust and confidence in IMS hasn't waned: when it comes to performance, reliability, and availability, they still count on IMS.
What gives? What is it about IMS that makes enterprises continue to use it? What makes them have that trust and confidence in IMS? To understand this, let's look at the origins of IMS.
How it all began
We all know that IMS first became available in 1968 (the project actually started in 1966) and was initially called Information Control System and Data Language/Interface (ICS/DL/I), developed for the NASA Apollo space program. It was the result of a joint project between IBM, North American Rockwell Space Division, and Caterpillar Tractor to build an automated system to keep track of the enormous number of parts and materials needed to build the Saturn V moon rocket.
Back in 1966, there weren’t any database systems (except for GE’s Integrated Data Store (IDS), which ran only on GE’s mainframe). IBM’s System/360® mainframe had just been introduced in 1965. Computers and their operating systems weren’t as sophisticated as they are today, and there weren’t any widely available best practices that programmers could follow. Memory and external storage were at a premium and not as fast as today’s devices. For example, the IBM 2314 Model A1 disk device (introduced in 1969) had a storage capacity of only 145 MB and an average access time of 60 milliseconds.
However, for the Apollo space program, speed was of the utmost importance. The engineering team needed to get at the data fast. And there was a lot of data—2 million parts! Furthermore, data could never be lost. So, the system had to be fast, scalable, secure, and reliable.
These were the initial requirements that IMS had to satisfy back in the late '60s and early '70s, and it has had to keep satisfying them over the past five decades, which is no small feat.
So, IMS was built for speed, efficiency of storage usage, data integrity, high availability, recoverability, and scalability. That is part of the IMS DNA.
Let’s look at some of the ways IMS Database Manager achieves these goals:
Separating data definition and data access from the application code
Remember, North American Rockwell had a big file (2 million parts) and many programs requiring fast access to it. How do you ensure data access is uniform? How do you build a uniform approach to data recovery? The initial designers came up with a brilliant approach: Separate data definition and data access from the application programs that process the data.
The point of separation was the Data Language/I (DL/I) language. The application code could now focus on processing the data without the complications and overhead associated with the access and recovery of data. This paradigm virtually eliminated the need for redundant copies of the data. Multiple applications could access and update a single instance of data, thus providing current data for each application. Online access to data also became easier because the application code was separated from data control.
This made IMS application code easier to maintain. As disk devices improved, data access routines could be upgraded without modifying application code. And because data access was separated from the application programs, data integrity could be maintained through lock managers; IMS has two: Program Isolation (PI) and the Internal Resource Lock Manager (IRLM).
Speed, speed, speed!
Everywhere you look you can see that the IMS Database Manager was designed with performance and availability in mind:
Predefining data definitions
The PSBGEN, DBDGEN, and ACBGEN processes let database definitions be specified in advance and put into a runtime format, so the data access routines can gain performance from ready-made control blocks.
Specialized data access methods
With data access separated from application code, IMS engineers from the beginning were focused on high-performing data access methods. Even though VSAM (Virtual Storage Access Method—introduced in the '70s) was considered the standard access method on the mainframe, IMS Engineers wanted faster access methods. Hence came OSAM (Overflow Sequential Access Method)—a high-performing access method designed specifically for IMS. In fact, most IMSers joke that VSAM stands for Very Slow Access Method.
IMS databases designed for speed and high availability
Remember that disk devices in the late '60s and early '70s were not as sophisticated as today’s devices. IMS offers nine different database types, each designed to optimize how data is stored on DASD and tuned for different data processing needs. Here are some examples:
- HSAM – designed for historical or archived data
- HISAM – designed for when you need random access at the root level and sequential access within the database record and where you have minimal delete activity
- HDAM – designed for when you need fast and random access both at the root and the dependent segment level
- HIDAM – designed for when you need both sequential and random access at the root level and random access at the dependent segment level; HIDAM is not as fast as HDAM
- HALDB – designed for managing large databases (up to 40 TB of data)
- DEDB – designed for fast data entry (via SDEP segments). DEDBs also have significant features that make them suitable for high-performance, high-availability applications.
IMS database buffer pools designed for performance
IMS has three types of buffer pools for the different types of data sets:
- VSAM shared resource pool
- OSAM buffer pool
- OSAM Sequential Buffer (SB) pool
Each of these pools can be optimized for performance. For example, you can direct given data sets to specific subpools. Also, both OSAM and VSAM subpools can be changed dynamically via the type-2 UPDATE POOL TYPE(DBAS) command.
IMS databases support both the relational and hierarchical data models
Data processing has evolved over the past few decades. COBOL is no longer the programming language of choice. Newer programming languages like Java have taken hold. SQL has become the de facto standard language for databases.
IMS has also evolved.
Today, IMS application programs can be written in Java and access to IMS databases can be done by using JDBC. This means that databases that were built in the '70s or '80s as hierarchical databases can now also be viewed in a relational data model, that is, as a set of tables related via foreign keys.
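As a hedged sketch of what this looks like in practice: the host, port, PSB name, and segment/column names below are invented for illustration, and the `jdbc:ims` URL form follows the style of the IMS Universal JDBC driver's relational view. The point is that a parent-child segment traversal becomes an ordinary SQL join.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class ImsJdbcSketch {

    // Builds a connection URL in the general form used by the IMS Universal
    // JDBC driver. The host, port, and PSB name are hypothetical placeholders.
    static String imsUrl(String host, int port, String psbName) {
        return "jdbc:ims://" + host + ":" + port + "/" + psbName;
    }

    public static void main(String[] args) {
        String url = imsUrl("imshost.example.com", 5555, "PARTSPSB");

        // In the relational view, each segment type appears as a table, and a
        // child segment carries its parent's key as a foreign key. PARTROOT
        // and STOKSTAT are illustrative segment names, not from a real DBD.
        String sql = "SELECT p.PARTNO, s.STOCKQTY "
                   + "FROM PARTROOT p JOIN STOKSTAT s "
                   + "ON p.PARTNO = s.PARTROOT_PARTNO "
                   + "WHERE p.PARTNO = ?";

        System.out.println(url);
        System.out.println(sql);

        // With a reachable IMS Connect endpoint, the query would run as:
        // try (Connection conn = DriverManager.getConnection(url);
        //      PreparedStatement ps = conn.prepareStatement(sql)) {
        //     ps.setString(1, "AN960C10");
        //     ps.executeQuery();
        // }
    }
}
```

The application code here is plain JDBC; nothing in it reveals that the underlying store is hierarchical, which is exactly the separation of data access from application code described earlier.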
Furthermore, DDL can be used to define and modify database objects. In fact, IMS has continued to be on the cutting edge of technology. And you can now also access IMS data from almost any platform via RESTful APIs.
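REST access to IMS data is typically mediated by a gateway such as z/OS Connect, so the endpoint path and base URL below are hypothetical, assumed only for illustration. A minimal Java sketch of building such a request from any platform:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class ImsRestSketch {

    // Builds a GET request for a part record. The base URL and /parts path
    // are invented placeholders; a real deployment's paths come from the API
    // definition the gateway exposes.
    static HttpRequest partRequest(String baseUrl, String partNumber) {
        return HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/parts/" + partNumber))
                .header("Accept", "application/json")
                .GET()
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req =
                partRequest("https://zosconnect.example.com/ims", "AN960C10");
        System.out.println(req.uri());

        // Sending it would be one more line with the JDK HTTP client:
        // HttpClient.newHttpClient()
        //           .send(req, HttpResponse.BodyHandlers.ofString());
    }
}
```

Because the caller only needs HTTP and JSON, the consuming application can run on any platform, which is the point of the RESTful route into IMS.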
IMS database staying strong into the future
IMS Database Manager is a powerful database system. With its unparalleled performance, efficient use of storage, high availability, reliability, and scalability, it continues to undergird the local hospital, the entire financial sector, and even the U.S. federal government.
It is often said that IMS helped NASA fulfill President Kennedy’s dream of sending Americans to the moon and returning them safely to Earth. Perhaps, someday IMS will help humans live on the Moon or Mars.
Don't forget to visit IMS Central to find more IMS training content, what's new, and links to documentation and support.