PowerVM

How does Shared Processor Performance Compare to Dedicated Processors?

By Pete Heyrman posted Tue June 16, 2020 02:55 PM

  
When deploying a partition, customers can choose between dedicated and shared processors for the virtualized CPUs.  The following sections give advice on making this decision, including changes in POWER9 hardware that improve the performance of shared processor partitions.

Configuration
Dedicated processor configurations are straightforward: you configure the desired number of processor cores assigned to the partition, and the hypervisor makes a 1:1 binding of each of the partition’s processors to a physical hardware processor core.  Once a partition is activated, this binding is essentially static in that a given OS logical thread will always run on the same physical hardware.  The binding can change if the HMC command optmem is run to re-optimize the physical assignment of resources.  With a dedicated processor partition, you need to size the number of cores to meet the peak demand of the partition.  For example, if during a normal workday the CPU consumption is normally around 4 cores but peaks around 8 cores, you need to configure the partition with 8 cores; otherwise there will be queuing delays in dispatching applications because there are not enough cores to handle the peak demand.

Shared processor configurations consist of several factors:
  • Desired Processing Units (also called entitled capacity or EC) defines the guaranteed minimum amount of CPU time a partition receives on physical cores.  For example, if the processing units are set to 0.5, the partition is guaranteed 30 seconds of CPU time every minute.  Note that if a partition doesn’t use all of its entitlement, the remaining CPU time is automatically made available to other shared processor partitions.
  • Desired Virtual Processors defines how many processor cores could be running at any moment in time.  For example, if virtual processors are set to 3, at any moment in time there could be 3 physical CPUs in the server running tasks on behalf of the partition.
  • Sharing Mode is either capped or uncapped.  For a capped partition, CPU time is capped at the value specified for the entitlement.  For example, a capped partition with processing units set to 0.5 could consume at most 30 seconds of CPU time every minute.  For an uncapped partition, the number of virtual processors, not the value specified for processing units, defines the upper limit of CPU consumption.  For example, if virtual processors are set to 3, the partition could consume 180 seconds of CPU time every minute (3 virtual processors each running at 100% utilization would be 3 physical cores’ worth of CPU time).  For a partition to consume more than its configured processing units, there must be unused capacity available on the server (the sketch after this list works through these limits).
  • Uncapped Weight provides information to the hypervisor on how unused capacity should be distributed across partitions.  A partition with an uncapped weight of 100 is 100 times more likely to receive some of the unused capacity than a partition with an uncapped weight of 1.
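The arithmetic behind these settings is easy to check.  Below is a minimal Python sketch (the helper names are mine, not part of any PowerVM or HMC interface) that computes the guaranteed and maximum CPU time implied by a configuration and shows how uncapped weight splits unused capacity proportionally:

```python
# Minimal sketch of the arithmetic above; the helpers are illustrative
# only and not part of any PowerVM or HMC interface.

def guaranteed_cpu_seconds(entitled_capacity: float, interval_s: int = 60) -> float:
    """Entitlement guarantees a fraction of wall-clock time on physical cores."""
    return entitled_capacity * interval_s            # 0.5 EC -> 30 s per minute

def max_cpu_seconds(virtual_procs: int, entitled_capacity: float,
                    capped: bool, interval_s: int = 60) -> float:
    """Capped partitions stop at entitlement; uncapped can grow to their VP count."""
    limit = entitled_capacity if capped else float(virtual_procs)
    return limit * interval_s                        # 3 VPs uncapped -> 180 s

def weight_shares(weights: dict, spare_cores: float) -> dict:
    """Unused capacity is distributed in proportion to uncapped weight."""
    total = sum(weights.values())
    return {lpar: spare_cores * w / total for lpar, w in weights.items()}

print(guaranteed_cpu_seconds(0.5))                    # 30.0
print(max_cpu_seconds(3, 0.5, capped=True))           # 30.0
print(max_cpu_seconds(3, 0.5, capped=False))          # 180.0
print(weight_shares({"prod": 100, "test": 1}, 2.02))  # prod gets 100x test's share
```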
With a shared processor partition, you need to size the processing units (EC) to meet the normal workload demand of the partition and set the virtual processors to meet the peak demand of the partition.  For example, if during a normal workday the CPU consumption is normally around 4 cores, but it peaks around 8 cores, you need to configure the partition with 4.0 processing units and 8 virtual processors to meet the normal and peak demands.
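To make both sizing rules concrete (peak sizing for dedicated partitions, normal-plus-peak sizing for shared partitions), here is a small sketch; the helper is hypothetical, not an IBM tool, and assumes you have sampled the partition’s core demand over a business day:

```python
import math
import statistics

# Hypothetical helper, not an IBM tool: derive processor settings from
# sampled core demand (physical cores consumed over a business day).
def size_partition(demand_samples: list) -> dict:
    normal = statistics.median(demand_samples)   # typical ("normal") demand
    peak = max(demand_samples)
    return {
        "dedicated_cores": math.ceil(peak),   # dedicated: must cover the peak
        "shared_ec": round(normal, 2),        # shared: EC covers normal demand
        "shared_vps": math.ceil(peak),        # shared: VPs cover the peak
    }

# Demand hovers around 4 cores but spikes to 8 during the day:
print(size_partition([3.8, 4.1, 4.0, 8.0, 4.2]))
# {'dedicated_cores': 8, 'shared_ec': 4.1, 'shared_vps': 8}
```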

Since dedicated partitions should have core allocations based on peak demand while shared partitions should allocate entitled capacity (EC) based on normal demand, there are generally savings in physical hardware and software licenses when using shared processor partitions.  Because shared processors share CPU capacity, you will likely be able to deploy more partitions on an individual server, which also reduces the size or number of servers required to run your business.

Performance Considerations including POWER9 changes
When you configure a dedicated processor partition, those processor cores are dedicated to that specific partition and normally are not available to any other partition.  If you configure a dedicated processor partition with 3 cores to handle the peak capacity of the partition but the average CPU utilization is only 33%, on average 2 physical cores are licensed but unused.

Assuming the server has a mix of dedicated and shared partitions, there is a Processor Sharing option that can be set to Allow sharing when partition is active that will donate idle cycles from the dedicated processor partition as unused capacity to be used by shared uncapped processor partitions.  This option will be referred to as dedicated donate in future sections.

Note that shared processor partitions automatically share any unused entitled capacity with other shared processor partitions.

Physical Device Interrupt Effects
When an interrupt is received (e.g., an I/O operation completes for a device) for a dedicated processor partition, the hardware has a map of specific cores assigned to specific dedicated processor partitions.  Because of this map, the hardware is able to direct interrupts to one of the threads bound to the partition, resulting in very little latency in handling interrupts.

For a dedicated donate partition, if the dedicated processor partition is running at the moment the interrupt is generated, the interrupt handling is the same as it would be for a dedicated processor partition (i.e. very little latency).  If a shared processor partition is running on the donated core at the time the interrupt is generated, the interrupt is sent to the hypervisor, which preempts the shared processor partition (saving the current partition state information) and then dispatches the dedicated processor partition on the core.  In this situation there is extra latency in handling the interrupt.

In POWER8 and earlier generations, the hardware doesn’t have a map of specific cores assigned to specific shared processor partitions.  The shared processor map is just a list of cores where any shared partitions could be running.  When an interrupt is generated from a hardware device, the hardware directs the interrupt to one of the cores in the list of shared cores.  If the device that generated the interrupt is owned by the active partition running on the core that received the interrupt, there is very little latency in handling the request (i.e. the interrupt times would be equivalent to a dedicated processor partition).  If the interrupt was generated by a device not owned by the active partition, the hypervisor gets involved and re-directs the interrupt to the proper partition.  This will add latency in the handling of interrupts because a virtual interrupt must be sent to the partition.  In the situation where the partition is not actively running on any core, additional latency will occur as the hypervisor needs to dispatch the partition to start running on a core.

In POWER9, there is hardware and firmware support that maintains a complete map of all the cores being used by all dedicated and shared processor partitions.  Because of this additional support on POWER9, the hardware is always able to direct the device interrupts to the core where an active partition is running.  Net, in the situation where a partition is actively running on a core, the interrupt latency should be equivalent between shared and dedicated.  If a shared processor partition is not active on any core, the hypervisor will still need to dispatch the shared processor partition to process the interrupt.
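The routing decisions described above can be condensed into a toy model; this is purely illustrative and not a representation of how the hardware or firmware is actually implemented:

```python
from typing import Optional

# Toy model of the interrupt-routing behavior described above; purely
# illustrative, not how the actual hardware/firmware works.
def interrupt_latency(generation: str, owner: str,
                      owner_active_core: Optional[str],
                      receiving_core_partition: str) -> str:
    if owner_active_core is None:
        # Owning partition not dispatched anywhere: it must be dispatched first.
        return "extra latency: hypervisor dispatches the owning partition"
    if generation == "POWER9":
        # POWER9 tracks shared-partition cores too, so the interrupt is
        # steered directly to a core where the owner is running.
        return "low latency: routed directly to the owning partition"
    # POWER8 and earlier: the interrupt lands on *some* shared-pool core.
    if receiving_core_partition == owner:
        return "low latency: receiving core already runs the owner"
    return "extra latency: hypervisor forwards a virtual interrupt"

print(interrupt_latency("POWER9", "lparA", "core7", receiving_core_partition="lparB"))
print(interrupt_latency("POWER8", "lparA", "core7", receiving_core_partition="lparB"))
```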

Virtual Device Interrupt Effects
Many customers have created partitions that have only virtualized I/O devices (i.e. there is no physical I/O assigned to the partition).  A partition that has only virtualized I/O can be migrated between servers (Live Partition Mobility), can be remote restarted on another server in the event of a failure, and so on.  Virtual interrupts are sent by the PowerVM hypervisor, and since the hypervisor is always aware of where virtual processors are running at any moment in time, it is able to route the interrupt directly to the correct core.  The only difference between dedicated and shared processor partitions is that with shared, the partition may not be actively running on a core, so the hypervisor would need to dispatch the partition to process the interrupt.

Cache effects
A dedicated processor partition maintains a 1:1 relationship between logical processors and physical cores.  Because of this, the only data in a processor cache is data associated with the dedicated processor partition.  Net, the cache is effectively dedicated to a given partition.

For a dedicated donate partition, shared partitions can run on the dedicated processor’s cores, so the dedicated partition and the shared partitions are effectively sharing the physical processor caches.  The longer a shared processor partition is running on the dedicated donate core, the greater the chance that the data associated with the dedicated partition has been displaced in the cache by the shared processor partition.

Shared processor partitions are very much like dedicated donate in that multiple partitions can effectively be sharing the same processor caches.

Affinity effect
As previously mentioned, dedicated processor partitions maintain fixed relationships to physical cores.  Every time the hypervisor needs to run a logical processor, there is one and only one thread where that logical processor can be dispatched.  Because no other work could be running on that thread, there is little delay in dispatching a logical processor.

A dedicated donate partition has the same 1:1 relationship but shared partitions can be running on the cores normally available to the dedicated donate partition.  If a shared partition is active when a dedicated donate logical processor wants to run, the shared partition must be preempted and after its state has been saved, the dedicated donate partition can be dispatched.

Shared processor partitions have relationships to physical cores, but a shared processor partition can run on any core that has excess capacity.  Each logical processor has a preferred core where the hypervisor would like to dispatch the logical processor.  If the core is available (no other partitions using the core), the hypervisor can dispatch the logical processor to the preferred core.  If the core is busy, the hypervisor will attempt to dispatch the logical processor to another core on the same physical chip.  If all the cores on the chip are busy, the hypervisor will dispatch the logical processor on a core on the same drawer/DCM.  If all else fails, the hypervisor will dispatch the partition to any available core in the server.
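In pseudo-code form, the fallback order reads roughly like this (an illustrative sketch, not actual hypervisor logic; is_free is a hypothetical availability check):

```python
# Illustrative sketch of the dispatch fallback order described above;
# not actual hypervisor code. `is_free` is a hypothetical availability check.
def pick_core(preferred_core, same_chip_cores, same_drawer_cores, all_cores, is_free):
    if is_free(preferred_core):
        return preferred_core                  # best case: preserves cache affinity
    for core in same_chip_cores:               # next: another core on the same chip
        if is_free(core):
            return core
    for core in same_drawer_cores:             # then: a core on the same drawer/DCM
        if is_free(core):
            return core
    for core in all_cores:                     # last resort: any core in the server
        if is_free(core):
            return core
    return None                                # everything busy: the dispatch waits

busy = {"c0", "c1"}
print(pick_core("c0", ["c1", "c2"], ["c3"], ["c4"], lambda c: c not in busy))  # c2
```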

To reduce the collisions that occur when cores/chips/drawers are busy, the hypervisor uses the desired processing units (entitled capacity) to spread out the dispatching.  For example, assume a Power server with 8 cores per processor chip and two processor chips in the system.  Also assume you have 4 partitions (represented by different colors in the figure below), each with an average core demand during business hours of 3.0 processing units.  If these are configured with an entitlement of 3.0, the hypervisor would place the partitions as follows:

 
Figure: PowerVM Shared Optimal Assignment
From the layout, each of the colored partitions is placed entirely within a processor chip.  Also note that because the entitlement was 3.0, the hypervisor could only place two partitions per chip.  This left some free cores available to absorb spikes in the demand for processing resources.  Having some cores free within the chip improves affinity because spikes in demand can be handled with local resources.  For example, if the red partition was using 3 virtual processors, in line with its configured entitlement of 3.0, the green partition could dispatch 5 virtual processors without having to dispatch a virtual processor on another processor chip.  Because of this behavior, some customers oversize the entitlement for critical partitions to ensure there is additional capacity to handle spikes in demand and to ensure good affinity.

If the partition entitlement is undersized instead of properly sized, for example 2.0 processing units even though 3.0 is the normal demand, the placement of these same four partitions could be as follows:
Figure: PowerVM Shared Sub-Optimal Placement
Because the capacity was under-configured, the hypervisor may have packed all four partitions into a single processor chip.  In this situation, if there really is demand for 3 virtual processors from each of the red, green, blue and purple partitions, the chip will be oversubscribed (i.e. there are 12 virtual processors to run on 8 processor cores simultaneously).  Since the chip is oversubscribed, some virtual processors are dispatched on other chips, reducing the affinity to the caches/memory and resulting in a loss of performance.  Not only are the four original partitions affected, but other partitions on other chips can be affected as well.  These off-chip dispatches could force other chips to be busier than expected, which could affect partitions with correctly sized entitlement.  This example illustrates why it is important to correctly size the entitlement of shared processor partitions to obtain ideal performance.
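Both placements can be replayed with a small first-fit sketch (my own simplification, not the hypervisor’s actual placement algorithm):

```python
# Simplified first-fit placement by entitlement; a toy model of the
# behavior described above, not the hypervisor's real algorithm.
def place(entitlements: dict, cores_per_chip: float = 8.0, chips: int = 2) -> dict:
    free = [cores_per_chip] * chips
    layout = {}
    for lpar, ec in entitlements.items():
        chip = next(i for i, f in enumerate(free) if f >= ec)  # first chip with room
        free[chip] -= ec
        layout[lpar] = chip
    return layout

# Properly sized (EC = 3.0): only two partitions fit per 8-core chip.
print(place({"red": 3.0, "green": 3.0, "blue": 3.0, "purple": 3.0}))
# {'red': 0, 'green': 0, 'blue': 1, 'purple': 1}

# Undersized (EC = 2.0): all four pack onto one chip, so a real demand of
# 3 virtual processors each means 12 VPs contending for 8 cores.
print(place({"red": 2.0, "green": 2.0, "blue": 2.0, "purple": 2.0}))
# {'red': 0, 'green': 0, 'blue': 0, 'purple': 0}
```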

Summary
This overview of processor configuration provides insight into deciding between dedicated and shared processors when deploying a partition.  Based on call home data received by IBM, over 90% of the partitions deployed on Power servers are running as shared processor partitions.  Many customers have chosen shared processors because the cost to deploy applications (hardware and software licensing costs) can be significantly less than with dedicated processor partitions.  Note that some applications are only supported when using shared processors and likewise some applications are only supported when using dedicated processors, so there may be additional considerations beyond what has been covered in this blog.

Contacting the PowerVM Team
Have questions for the PowerVM team or want to learn more?  Follow our discussion group on LinkedIn IBM PowerVM or IBM Community Discussions.

#PowerVM
#powervmblog