
Funny disaster recovery job posting

By Tony Pearson posted Fri February 16, 2007 11:22 PM


Originally posted by: TonyPearson


I didn't really have a theme this week; I am still recovering from jet lag after my travels through Japan, Australia, and China.

Gary Diskman has an amusing blog entry about a funny disaster recovery job posting. It is not clear whether he is being completely tongue-in-cheek or just a bit cynical. However, it rings true that you get what you measure, and some managers look for easy metrics, even if there are unintended consequences.

Western medicine works this way. Rather than paying your doctor to keep you healthy, you pay per visit, for refills on prescriptions, check-ups on medical conditions, surgeries, and so on. While Eastern medicine is focused on keeping people healthy, Western medicine profits more from resolving "situations".

I have seen similar situations with the "health" of the data center. In one case, the admins were measured on how quickly they could bring their web servers back up after a crash. They had this process down to a science, because they were measured on how quickly they resolved the situation. I suggested switching from Windows to Linux, a much more reliable operating system for web serving, and showed examples of web servers running Linux that had been up for 1000 days or more. Management changed the metric to "average up-time in days" and, magically, the re-boots all but disappeared, thanks to Linux, but also thanks in part to shifting the incentive structure. Perhaps some of those earlier situations were "artificially created"?

Back in the 1980s, I was working on a small software project of about 5000 lines of code. In those days, testers were measured by the number of "successful" testcases that ran without incident. Testcases that uncovered an error were labeled as "failures" to be re-run after the developers fixed the code. When I declared my code ready for test, the test team ran 110 testcases, all successful, and they were all rewarded for meeting their schedule. I, on the other hand, did not accept these results. I met with them and told them I would give them $100 each if they could find a bug in my code within the next week. Nobody writes 5000 lines of code without some error along the way, not even me. (As one author put it, more people have left Earth's gravity to orbit the planet than have written perfect code that did not require subsequent review or testing. It's so true. Good software is difficult to write.)

The test team accepted the challenge and found 6 problems, more than I expected, but at least I felt more confident of the code quality after fixing them. As I suspected, the unintended consequence of counting "successful" testcases was that testers would write the simplest, most basic, least-likely-to-challenge-boundaries testcases to ensure they met their numbers. My experiment was costly to me, but more importantly it was a wake-up call for the test management, who realized they needed to re-evaluate their test procedures, metrics and terminology. This was a long time ago, and I am glad to see that the overall practice of "software engineering" has matured much over the past 20 years.
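To make that distinction concrete, here is a minimal sketch in Python. The parse_record_length routine and its limits are made up for illustration, not code from the original 5000-line project; it just contrasts a testcase written mainly to "run successfully" with testcases written to challenge boundaries:

def parse_record_length(field: str) -> int:
    """Hypothetical routine: parse a record-length field expected to be 1..32760."""
    value = int(field.strip())
    if not 1 <= value <= 32760:
        raise ValueError(f"record length out of range: {value}")
    return value

# A "count the successes" testcase: almost guaranteed to pass, proves very little.
assert parse_record_length("80") == 80

# Boundary-challenging testcases: the kind a "successful runs" metric discourages.
assert parse_record_length("1") == 1            # lower bound
assert parse_record_length("32760") == 32760    # upper bound
for bad in ("0", "32761", "", "eighty"):
    try:
        parse_record_length(bad)
    except ValueError:
        continue                                # rejecting bad input is the test doing its job
    raise AssertionError(f"expected ValueError for {bad!r}")

The bad-input and boundary cases are the ones most likely to expose a defect, yet they are also the ones a tester has the least incentive to write when only "successful" runs are counted.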

So, my advice is to choose metrics that have the intended consequences you want, while avoiding any negative unintended consequences that might undermine your eventual success. People will quickly figure out how to maximize their results, and if you can align their goals with company goals, then everybody benefits.

Well, I'll be blogging from Mexico next week (yes, it is a business trip!). Enjoy the weekend.


