Hi Stephanie,
Thanks!
I am a little bit confused, though; our reporting etc. has always been based on these stats, and now they are totally different from the old ones.
Example:
We are running almost all our LPARs in uncapped mode in a shared CPU pool.
One example below: an 8-core CPU pool with one LPAR (Oracle) that consumes the whole pool. Checking from the AIX side with lparstat:
%user %sys %wait %idle physc %entc lbusy app vcsw phint %nsp %utcyc Time
----- ----- ------ ------ ----- ----- ------ --- ----- ----- ----- ------ --------
63.5 8.2 0.9 27.5 7.85 261.7 42.2 0.11 43097 10 117 14.56 12:35:31
64.2 8.5 1.0 26.2 7.82 260.7 43.8 0.14 44410 20 117 14.56 12:35:36
66.8 7.7 0.9 24.7 7.86 261.9 46.3 0.11 42464 12 117 14.56 12:35:41
64.8 9.1 1.0 25.1 7.85 261.7 44.3 0.11 41968 8 117 14.56 12:35:46
61.5 8.4 1.0 29.1 7.77 258.8 39.0 0.19 34001 12 117 14.56 12:35:51
62.5 9.6 1.1 26.8 7.83 260.9 42.7 0.13 36826 23 117 14.56 12:35:56
62.0 8.8 0.6 28.7 7.77 258.9 39.6 0.19 30103 25 117 14.56 12:36:01
>> The app (Available Physical Processors) column shows that there are NO free CPU resources in the pool.
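A quick back-of-the-envelope check of those numbers (a hypothetical sketch, assuming an 8-core pool; the values are copied from the lparstat rows above):

# Sanity check of the lparstat sample above.
# Assumption: 8-core shared pool; "app" = Available Physical Processors.
POOL_SIZE = 8.0

# (physc, app) pairs taken from the lparstat rows above
samples = [(7.85, 0.11), (7.82, 0.14), (7.86, 0.11),
           (7.85, 0.11), (7.77, 0.19), (7.83, 0.13), (7.77, 0.19)]

for physc, app in samples:
    # Physical processors consumed by the LPAR vs. what the pool has left
    print(f"LPAR consumes {physc:.2f} cores, pool has {app:.2f} free "
          f"({POOL_SIZE - app:.2f} of {POOL_SIZE:.0f} cores busy)")

Every sample shows roughly 7.8-7.9 of 8 cores busy, i.e. essentially nothing left in the pool.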
But when checking the Performance graph (see Screenshot.png in the attachment), you get the impression that there are 2-3 cores available in the CPU pool (for example, for adding new LPARs). To my understanding that is a totally faulty impression, since all resources are reserved/allocated for that first LPAR.
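In fact, if I apply the idle-time adjustment you described to the sample above, the numbers line up with the graph exactly (a rough sketch of the arithmetic only, assuming the new graph value is simply physc scaled by the partition's non-idle fraction; I do not know the HMC's actual formula):

# Rough reconstruction of the idle-time adjustment described in the fix.
# Assumption: new GUI value ~= physc * (1 - %idle/100); this is not the
# HMC's actual implementation, just the arithmetic it seems to imply.
physc = 7.85   # physical processors consumed (first lparstat row above)
idle = 27.5    # %idle reported inside the partition

old_graph = physc                        # pre-fix: raw consumption
new_graph = physc * (1 - idle / 100.0)   # post-fix: idle time subtracted

print(f"old graph: {old_graph:.2f} cores")                    # ~7.85
print(f"new graph: {new_graph:.2f} cores")                    # ~5.69
print(f"apparent pool headroom: {8 - new_graph:.2f} cores")   # ~2.31

So the graph's 2-3 "free" cores are really the Oracle LPAR's own idle cycles, not capacity that could be given to a new LPAR.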
So we will be having a VERY difficult time explaining to customers why there actually IS no available capacity in the CPU pool, even though the graph shows plenty free. And that is not to mention invoicing, which is allocation based...
Or am I totally misunderstanding something here?
Is there any parameter etc. that would produce the Performance/Utilisation data graphs the same way as at the HMC v92 / v10r1m1010 level??
------------------------------
Tommi Sihvo, Lead Service Architect
Tietoevry, Compute Services
email tommi.sihvo@tietoevry.com mobile +358 (0)40 5180 Finland
------------------------------
Original Message:
Sent: Fri February 03, 2023 01:37 PM
From: STEPHANIE JENSEN
Subject: HMC v10R1 bug in Performance collection?
Hi Tommi,
This fix went out in HMC V9R2M953 and V10R1M1020:
"Fixed shared Processor partitions utilization shown in the GUI to now reflect utilization data adjusted for OS idle time similar to dedicated processor partitions."
Prior to that fix, shared processor partition utilization on the HMC GUI was shown without subtracting partition idle time. After that fix, HMC GUI graphs subtract partition reported idle time for the data shown in the graph (which is what was always done for dedicated processor partitions). That results in a lower amount but better reflects the partitions' workload capacity.
This may explain what you are seeing.
------------------------------
STEPHANIE JENSEN
Original Message:
Sent: Wed February 01, 2023 06:52 AM
From: Tommi Sihvo
Subject: HMC v10R1 bug in Performance collection?
Hi,
Has anyone else noticed anything weird with Performance collection on the HMC at the v10R1 level?
Example:
We have one 8-core LPAR utilising all cores...
On HMC v92 >> the Performance data graphs looked OK.
Then we did an HMC upgrade to v10R1... and now the HMC shows that only about 5-6 of 8 cores are utilised.
Of course, it would be super if the real CPU load were decreased by an HMC update :D :D ...but no...
When checking from the LPAR side... the LPAR STILL uses those 8 cores... even though the HMC reports that only about 5-6 are in use...
Is anyone seeing similar stuff, and/or any idea what to check??
Could the interval somehow have been changed??
Br,
tommi
------------------------------
Tommi Sihvo, Lead Service Architect
Tietoevry, Compute Services
email tommi.sihvo@tietoevry.com mobile +358 (0)40 5180 Finland
------------------------------