Life's full of challenges. Besides the personal—keeping a household budget, eating healthy and not being killed by psychos on the road while driving to work—more than a few arise on the job. Whether inherent to IT careers, self-inflicted or surprise "gifts" from others, they're sometimes nasty landmines disrupting the day's tranquility and productivity. So they're best addressed squarely; ignoring them doesn't make them go away and they'll usually only be worse tomorrow.
Challenges are an area where we're all in this together; people don't always understand challenges faced by others in different—but still mainframe-related—jobs. So when collaborating with colleagues from networking, database, security/crypto, analytics, etc., don't see them as obstacles to getting your work done. Rather, learn enough about their areas and teach enough about yours that you make decisions and solve problems together.
Today's mainframers face challenges different from those faced earlier in their careers and tomorrow's will be different still. And younger workers don't yet have the same in-depth system inner-workings knowledge as veterans. So many areas—technology, the industry, staffing, networking, database—continually change that relying on past wisdom and decisions is a bad idea.
Mainframes: Much Better Than "Still Not Dead"
A chronic challenge is the often-asked question, "Isn't the mainframe dead (yet)?" Of course, a computing platform celebrating its 50th anniversary this month with the usual technology advances and improved efficiencies shouldn't have to prove that it's alive, but it's career-enhancing to periodically demonstrate compellingly why System z remains a powerhouse.
The first step is vanquishing fear, uncertainty and doubt repeated out of innocent or willful ignorance or for competitive advantage. Sometimes it's simply based on what people are familiar with vs. what they don't know. But information technology must run as a business, with decisions based on IBM-suggested fit-for-purpose analysis. The concepts and theory cited are system-agnostic, so they can fairly rebut critics calling System z an overpriced, obsolete COBOL processor that should be abandoned. Because it's objective, this analysis supports mainframe use when it's appropriate, but can also suggest other platforms. Recognize that credibility is lost when one's attitude is, "Mainframe is the answer. What's the question?"
One way to buttress mainframe use is knowing the wealth of System z-related hardware, tools, applications, services and information available, and proposing them to meet changing requirements. That involves reading IBM and other vendor announcements, following trade publications in print and online, and participating in relevant communities (discussion mailing lists; LinkedIn groups; and user groups such as WAVV, SHARE, MVMUA, Hillgang and NaSPA). Don't let your mainframe be undermined by a lesser alternative, only to realize too late that a viable System z solution existed. And track IBM acquisitions as they continually add to potential mainframe capabilities.
As central and strategic as System z architecture is, it's no longer sufficient—or self-sufficient—for many organizations' computing. Mixed workloads, with zEnterprise BladeCenter Extension or distributed systems collaborating with traditional mainframe operating systems, are common. This can cause culture shock in old-school mainframers, but the challenge is an opportunity to broaden personal skills and demonstrate big iron effectiveness as a focal point of enterprise computing. Partner—don't compete—with the other guys. You'll both win. Share proof-of-concept projects and seek to use what they use, for shared experience and mutual understanding. Invite them to explore the mainframe; offer sandbox servers.
Common Problems Are Often Easily Solvable
Comparative cost is often used to disparage the mainframe. But realize that the days of list prices are gone: deal making rules. And substantial hardware/software/services bundle discounts for supporting new workloads are available via IBM's Solution Editions. Remember, too, the exquisitely fine-grained configurations available via software sub-capacity pricing and hardware Capacity Upgrade on Demand, On/Off Capacity on Demand and Capacity BackUp.
Critically, when budgeting and comparing costs, ensure fair expense allocation across platforms. Because mainframe data centers have often been in place longer than other resources, they're often fully burdened with shared costs such as power, networking and business continuity. Stories are even told of data center audits turning up mainframe cost centers being charged expenses for unrelated items such as cafeterias and airplanes. Carefully scrutinize total cost of ownership for all technologies used. Vendor accounting and billing can be a maddening minefield, with cryptic product codes and byzantine specials and package deals. But gold can be mined from these bills; experts exist to audit bills for a cut of the savings they find in mistakes. Better still is doing that in-house, forcing vendors to clarify and reconcile statements.
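To make the allocation point concrete, here's a minimal sketch of spreading shared data-center costs across platforms in proportion to a usage driver, rather than burdening the mainframe cost center with all of them. The platform names, cost figures and percentages are hypothetical, not anyone's actual chargeback model:

```python
# Illustrative only: allocate shared data-center costs (power, networking,
# business continuity) across platforms by a usage driver, instead of
# charging 100% of them to the mainframe cost center.
# All names and figures below are hypothetical.

shared_costs = {"power": 120_000, "networking": 45_000, "continuity": 80_000}

# Driver: each platform's share of (say) energy draw; shares must sum to 1.0.
usage_share = {"mainframe": 0.35, "distributed": 0.50, "storage": 0.15}

def allocate(shared: dict, shares: dict) -> dict:
    """Spread the total of all shared costs across platforms by usage share."""
    total = sum(shared.values())
    return {platform: round(total * share, 2)
            for platform, share in shares.items()}

allocation = allocate(shared_costs, usage_share)
print(allocation)  # mainframe carries 35% of shared costs, not 100%
```

Whatever driver is chosen (floor space, energy, transactions), the point is that it be measurable and defensible, so cross-platform cost comparisons start from the same baseline.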
An industry cliché says that it's tough finding mainframe staff and that mainframes will be abandoned when the last grizzled veteran moves on. But IBM's long-standing Academic Initiative is a successful three-way partnership among universities, industry employers and IBM, educating and motivating new generations of quality candidates to learn mainframes and embrace them for careers. And SHARE's zNextGen is "a user-driven community of over 900 members, representing 24 countries for new and emerging System z professionals, with resources to help expedite professional development skills." It's gratifying hearing success stories from people new to mainframes becoming just as enthusiastic, committed and optimistic as those of us who've spent our careers here. Many young people are second- or third-generation mainframers, matching the multiple mainframe hardware generations we've seen.
Beware communication gaps, misunderstandings and conflicts between generations and technology advocates. Older people may think progress stopped with the last system they learned; younger people sometimes feel the world began with the first system they used. With those attitudes, neither has the complete picture. The same applies across platforms; there's no communication or cooperation if people have mental blocks against learning adjacent technologies.
A critical success factor is consistently meeting user requirements. But that can't be done unless requirements are truly understood—even when users themselves don't understand. The challenge here is imagining oneself in the user's position, with the advantage of the mainframe mindset. And rather than guessing, assuming or extrapolating from user or data center past decisions, it's worth surveying/interviewing users, testing and validating responses, and iterating to stay ahead of and avoid user problems and dissatisfaction.
Scalability is one of the mainframe's most powerful advantages, offering seamless compatibility from entry-level machine to massive MIPS-monster. But effectively exploiting scalability requires consistent capacity planning and performance tuning to avoid configuring too little, too much or too late. Use resources such as Cheryl Watson's Tuning Letter and SHARE's Enterprise-wide Capacity and Performance Project to avoid and solve problems. Follow structured steps when upgrading or installing applications to verify that they work under anticipated—and unexpected—workloads. Especially when using workstation development tools, don't assume that single-user functional testing assures production capabilities. Monitor applications for unusual behavior and model upcoming application or configuration changes with application teams.
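As a back-of-the-envelope illustration of "too little, too much or too late" (this is not an IBM capacity-planning tool, and the MIPS figures and growth rate are invented), one might estimate how many months of compound workload growth remain before a configured capacity ceiling is reached:

```python
import math

# A minimal sketch of a capacity-runway check: given current utilization
# and a steady monthly growth rate, estimate how many whole months remain
# before usage exceeds the machine's ceiling. Figures are hypothetical.

def months_until_ceiling(current_mips: float, ceiling_mips: float,
                         monthly_growth: float) -> int:
    """Months before compound growth pushes usage past the ceiling."""
    if current_mips >= ceiling_mips:
        return 0  # already at or over capacity
    # Solve current * (1 + g)^n >= ceiling for the smallest integer n:
    # n >= log(ceiling / current) / log(1 + g)
    return math.ceil(math.log(ceiling_mips / current_mips)
                     / math.log(1 + monthly_growth))

# e.g. running at 1,400 MIPS on a 2,000 MIPS machine, growing 3% a month:
print(months_until_ceiling(1400, 2000, 0.03))  # → 13
```

Real capacity planning accounts for peaks, seasonality and mixed workloads, but even a crude projection like this flags "too late" long before a month-end crunch does.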
Security is a front-burner issue. Now that the world beats a path to mainframe ports, it's essential to constantly monitor for—and act promptly on—anomalies. Despite System z's legendary impregnability, risk increases from insider threats and mischief entering through less-secure technologies. In addition, regulatory and industry compliance minimizes risks and ensures best practices are followed.
Testing is a discipline, not an afterthought. IBM has occasionally provided system and application test suites, but those are a beginning, not the finish line. Customized installation-specific testing for critical functions is essential for assuring consistent function and quality. Validate end-to-end operation, because users don't care that the central complex is working when their transactions fail. Similarly, exercise inter-platform connectivity for analytics, encryption and any distributed functions. Don't act on the old joke that users are there to test system programmers' system programs.
End-to-end application ownership eliminates handoffs at critical junctures and allows tailored deployment based on actual resource needs, rather than a small/medium/large approach.
Business continuity planning (the new name for traditional disaster recovery) is trickier in the modern hybrid/mixed computing world. Restoring mainframe operation won't mean much without network access and other associated platforms. That requires identifying critical applications and their dependencies, keeping documentation and procedures updated, and conducting full-scale (not paper-based or tabletop) drills to keep current and find/fix loose ends. Don't let system documentation get dusty; conduct reviews and drills so that needing an IPL or power cycle isn't a chaotic crisis.
Maintenance and debugging tasks get no glory, but done badly they undermine mainframe strengths. Take seriously System Modification Program/Extended (SMP/E), Interactive Problem Control System dump reading and APAR/PTF esoterica. Cross-train staffers so people can debug and maintain multiple system components. Gather diagnostic data on initial failures, rather than by recreating problems, which can be difficult or disruptive in production environments. Share the mainframe approach to problems, and educate against rebooting/recycling servers or subsystems to hide problems.
Where there's mainframe maintenance and debugging, can legacy code and code quality issues be far away? Just as for maintenance, testing and much else involved in running a data center, documentation and meaningful handover conversations are essential disciplines. Technologies exist for analyzing/documenting/converting/modernizing software written by long-gone staffers who left no written traces. Most importantly, don't add to the supply of mystery software; commit that no project is done without formal and approved documentation, peer-reviewed for completeness and clarity.
Flexibility Helps With Success
The challenges list is by definition never complete. Today's evolving jack-of-all-trades job requirements sometimes hinder in-depth knowledge—but meeting them leads to a can-do reputation and a brighter career. Through five decades, successful mainframers have expected and tolerated the unexpected, been flexible with solutions and problem solving, and learned constantly. Most critically—because in this always-evolving space we'll never know it all—a key skill is having a killer personal network of experts and quickly finding reliable information.
Gabe Goldberg has developed, worked with and written about technology for decades. Email him at firstname.lastname@example.org.