Flood. Deluge. Tsunami. Human history, culture and consciousness are overflowing with memories of and references to unmanageable inundations that have completely changed the context of our survival. And of all the stories of hardship and survival, one of the most pervasive and memorable is the biblical story of Noah’s Ark, which many of us have known from our earliest youth, whether or not we’ve read the original version.
Indeed, much of human history and innovation has involved efforts to shield us from and/or survive such overwhelming washouts, so it’s not too surprising that this particular narrative should surface as more memorable than many other vignettes from the same source.
This makes it a particularly relevant metaphor for the modern flood of information technologies and their impact on every aspect of life and business—and especially for the ways that they have submerged and overturned the stability, continuity and security of what we have historically taken for granted.
It’s also an appropriate metaphor given the role of the survivors of the great flood in preparing a robust environment to preserve a functioning context during and after the disaster that swept everything else away.
In other words, it’s a story of hope and functionality triumphing over despair and devastation.
And, I contend, it is very much like the story that the IBM mainframe has been living since its inception, and will continue to do so as the world around it changes and the tides of time alter the surrounding landscape.
Dimensions and Degrees
I have often asserted that the IBM mainframe is a culmination of the experience, best practices and wisdom from human history, both in terms of the original System/360 architecture that was announced on April 7, 1964, and in the ways that it was improved and enhanced based on the experiences, insights and standards that members of the mainframe culture recognized as essential.
Many of these same dimensions of functionality are analogously illustrated in the story of how Noah’s Ark was designed, populated and employed, offering us insights into why the mainframe continues to be the platform that keeps the world economy afloat and is positioned to weather current and coming challenges as everything around us changes. Let’s look at some of these.
I remember touring the Hoover Dam many years ago, and having the tour guide tell us that they’d feel safer inside the dam than outside in case of a natural disaster, because it was so overbuilt. Lessons learned in building the Hoover Dam allowed for the optimization of future such projects, eliminating features that were unnecessarily strong and consequently costly, the idea being that excessive strength is not cost-justifiable in the face of likely disasters.
Likewise, the commodity consumer electronics computing devices that emerged in the 1970s and 1980s benefited from the lessons learned on the mainframe and other early large-scale computing environments, and then optimized for the costs and strengths that consumers could afford and justify.
While functionality and security have improved on these smaller computers over the years, they’ve been founded on the sand of consumer commodification, and they are sitting ducks in the face of overwhelming challenges to their security, hardware resilience, performance, etc. Retrofitting them with a rock-solid foundation such as the mainframe’s isn’t cost-beneficial.
The mainframe, on the other hand, was founded on the solid high ground of uncompromising requirements from military, government, finance, academia, healthcare, manufacturing and other world-class businesses. While it was initially very expensive, that up-front investment has continued to pay dividends to every organization that has been willing to take further advantage of it. In fact, the cost-benefits equation invariably favors adding new workloads to the mainframe rather than moving new or established workloads to any lighter platform.
To refer back to our analogy, the ark was also built according to an exacting plan, and would certainly seem to have been overbuilt given the local pre-flood need for watercraft.
The story of the ark has it containing a male and female of every species; more in the case of edible ones. That’s called doing your backups and having fail-safes in place.
This is, of course, a concept the mainframe also embodies. Within its first decade, many innovations were built on the solid foundation of IBM’s System/360 and its successors, including backup principles and practices that meant production could be restored and resumed as soon as possible after a disaster.
This is one of the object lessons we can take from the ark having a backup mating pair of every species. Another is having redundant systems in a functioning environment, so if one fails, you can have a secondary one to move over to. Kind of like having extra sets of edible animals.
Contrast this with distributed computing, where every individual computer may have a slightly different hardware and software configuration, and any set of backup computers is nearly certain to have yet another heterogeneous set of configurations. Rare is the distributed environment that regularly and successfully tests a complete restore from backup—which the average mainframe shop does at least once a year.
I like telling my students in the mainframe security courses I teach that, “Security is a practical illusion.” What I mean by that is, as long as everyone agrees that something is secure, it works, but as soon as someone finds a way around the current security—and someone always does—it becomes necessary to move to the next level.
Related to this, however, is how well-founded the perceptions of those responsible for security are. If your average user is part of the security chain beyond their userid, password and the small amount of corporate application access and data needed to do their job, they are the weakest link—particularly if that link is allowed to click on rogue links in their email and let the consequent malware invade the corporate network.
The ark was so secure that its door was closed by a divine hand. That’s pretty nice insurance, and more than we can hope for from human-originated technologies. However, thousands of years of history have certainly shown us the principles of responsible behavior that such object lessons illustrated.
All that wisdom was brought to bear, not only in creating System/360, but also in enhancing the security systems on its successors over the past half-century-plus. Reflection, forethought, testing and scrupulous response to user community feedback have made mainframe security a key part of IBM’s mainframe integrity.
Contrast this to the afterthought attitude that characterizes so much distributed application security. It’s not just the technology; it’s the entire culture that looks at security as an unnecessary distraction and expense, often until it’s too late. Kind of like scurrying to build a watertight boat when the flood waters are already rushing in.
Ready for the Future
At the end of the story of the ark is all of human history. Built robustly and responsibly to convey its occupants through a most terrible disaster, it became the platform from which all future stories emerge.
As we look with trepidation at hacking, viruses, ransomware and the looming limits of Moore’s Law in the face of ballooning software complexity in the distributed world, it’s like a breath of mountain-fresh air to reflect on the relative simplicity and proven solidity of the mainframe, which, when properly configured, staffed and managed, is capable of staying above these abysmal undercurrents.
The good news continues, as the best and most relevant innovations in the distributed world often have their best manifestation on the mainframe. Far from being out of date, the mainframe is a true bridge to a sunny future where quality computing continues to enable our humanity, business, government, etc., rather than threaten our livelihoods with poorly founded commodity devices that are optimized to increasingly obsolete contexts.
Personally, I don’t Noah better way to compute.
Reg Harbeck has been working in IT and mainframes for more than 25 years, and is very involved in the mainframe culture and ecosystem, particularly with the SHARE Board and zNextGen and Security projects. He may be reached at Reg@Harbeck.ca.