Fellow IBM author Gregory Pfister worked in IBM Austin as a Senior Technical Staff Member focused on parallel processing, though I never met him in person. He points out that workloads fall into regions he calls parallel hell, parallel nirvana, and parallel purgatory. Careful examination of machine designs and benchmark definitions shows that the "industry standard benchmarks" fall largely in parallel nirvana and parallel purgatory. Large UNIX machines tend to be designed for these benchmarks and so are particularly well suited to parallel purgatory. Clusters of distributed systems do very well in parallel nirvana. The mainframe resides in parallel hell, as do its primary workloads. The current confusion is over where virtualization takes workloads, since there are no good benchmarks for it.
[Guerrilla Capacity Planning] by Neil J. Gunther
"In these days of shortened fiscal horizons and contracted time-to-market schedules, traditional approaches to capacity planning are often seen by management as tending to inflate their production schedules. Rather than giving up in the face of this kind of relentless pressure to get things done faster, Guerrilla Capacity Planning facilitates rapid forecasting of capacity requirements based on the opportunistic use of whatever performance data and tools are available in such a way that management insight is expanded but their schedules are not."
Neil Gunther points out that vendor claims of near-linear scaling are not to be trusted, and shows a method to "derate" scaling claims. His suggested scaling values for database servers are closer to IBM's LSPR-like scaling model than to TPC-C or SPEC scaling. I had mentioned in my post that "While a 1-way z10 EC can handle 920 MIPS, the 64-way can only handle 30,657 MIPS," but still people felt I was using "linear scaling". Linear scaling would mean that if a 1GHz single-core AMD Opteron can do four (4) MIPS, and a one-way z10 EC can do 920 MIPS, then one might assume that a 1GHz dual-core AMD could do eight (8) MIPS, and the largest 64-way z10 EC could theoretically do 64 x 920 = 58,880 MIPS. The reality is closer to 6.866 and 30,657 MIPS, respectively.
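Gunther formalizes this derating with his Universal Scalability Law (USL). As a rough illustration of how far a 64-way falls short of linear, here is a minimal Python sketch; the sigma and kappa parameters are my own hand-fitted assumptions (not Gunther's or IBM's published values), chosen only so the 64-way result lands near the quoted 30,657 MIPS.

# Sketch of Gunther's Universal Scalability Law (USL):
#   C(p) = p / (1 + sigma*(p - 1) + kappa*p*(p - 1))
# where sigma models contention and kappa models coherency delay.
# The parameter values below are illustrative assumptions, hand-fitted
# so the 64-way result lands near the quoted 30,657 MIPS.

def usl_capacity(p, sigma, kappa):
    """Relative capacity of a p-way system under the USL."""
    return p / (1 + sigma * (p - 1) + kappa * p * (p - 1))

ONE_WAY_MIPS = 920            # quoted 1-way z10 EC figure
SIGMA, KAPPA = 0.012, 4.1e-5  # illustrative, not published values

for p in (1, 2, 16, 32, 64):
    mips = ONE_WAY_MIPS * usl_capacity(p, SIGMA, KAPPA)
    print(f"{p:2d}-way: linear {p * ONE_WAY_MIPS:6d} MIPS, "
          f"derated {mips:8.0f} MIPS")

Run as-is, the sketch shows the 64-way delivering roughly 33 times the 1-way capacity, about 30,600 MIPS, well short of the linear 64x (58,880 MIPS). The point is not these particular parameter values, but that contention and coherency costs grow with the number of processors, so capacity claims should be derated rather than multiplied out linearly.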
This was never an IBM-vs-Sun debate. One could easily make the same argument that a large Sun or HP system could replace a bunch of small 2-way x86 servers from Dell. Both types of servers have their place and purpose, and IBM sells both to meet the different needs of our clients. The savings are in total cost of ownership: reduced power and cooling costs, floorspace, software licenses, administration costs, and outages.
I hope we covered enough information so that Jeff can go back to talking about Sun products, and I can go back to talking about IBM storage products.
technorati tags: IBM, z10, EC, Jon Toigo, DrunkenData, Jeff Savit, Sun, Gregory Pfister, Austin, Neil Gunther, MIPS, LSPR, SPEC, TPC-C, mainframe, scalability, UNIX, Linux, z/OS, HP, Dell