Art: The next time I get a chance to do some serious benchmarking, I'll have to test that out. In my previous rounds of testing, I found that with x86 hyperthreading enabled, I could configure the system as if it had about a third more cores than it actually did, which is right in line with your 25-40% figure. But it didn't occur to me to disable hyperthreading and "overload" the CPU VPs on the system. In my test setup, with hyperthreading enabled, I found I could treat our 16-core machine as though it had 21 cores, allocating 19-20 CPU VPs without "crowding out" the OS. But in my hyperthreading-disabled tests, I never configured more than 15 CPU VPs. Something I'll definitely play with next time.
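For anyone following along, here's a minimal sketch of what that kind of "overloaded" layout looks like in the onconfig file. The num=20 value is just my hypothetical for that 16-core, HT-enabled box (19-20 CPU VPs, per the numbers above); tune it against your own testing:

    MULTIPROCESSOR 1            # machine has multiple processors
    SINGLE_CPU_VP  0            # allow more than one CPU VP
    VPCLASS cpu,num=20,noage    # 20 CPU VPs on a 16-core box with HT on

With HT disabled, you'd drop num= back to at most the physical core count (15 or fewer here, leaving headroom for the OS), unless you're deliberately overloading as described in this thread.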
This raises another interesting hypothetical, though: If you're configuring a new system from the ground up for Informix, licensing concerns aside, would you rather have fewer, faster cores or more cores that are somewhat slower? (A real-world example: 20 cores at 2.4 GHz or 16 cores at 3.1 GHz?)
Until recently I had always found HT off with overloading to be faster than HT on with overloading, at least for dedicated DB servers. But throw VMs into the mix and I've not found a solid set of rules; the VM config, which I generally have no control over, seems to dictate the final performance. The last system was HT on with 4 DB VMs, and overloading was best until you drove it really hard. Beyond that point I never saw any significant delta between HT on/off or overloading on/off, but by then the disk subsystems were giving up.
Agreed, the key is to test in your world.