With the introduction of the E1080, the first system in a new generation of servers based on the Power10 processor, IBM has also released a new version of the HMC that can manage Power10 systems. HMC V10 brings many new features and enhancements that enable a seamless systems management experience and address many Requests for Enhancement (RFEs).
To start with, HMC V10 is:
- Required to manage Power10 systems
- Supported in 7063-CR1, 7063-CR2 and Virtual HMC appliance models
- Not supported on 7042 machine types
HMC V10 will manage Power10, POWER9 and POWER8 systems, but will not manage POWER7 systems.
HMC V10 adds support for the following features and enhancements:
- Support for Power10 processor-based high-end systems and IO adapters
- Support for only 128 MB and 256 MB memory region sizes on Power10 servers
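The memory region size constraint means every partition memory value must be a whole multiple of 128 MB or 256 MB. A minimal sketch (the 128/256 MB figures come from the list above; the function name and structure are illustrative, not HMC code):

```python
# Sketch: validate a partition memory request against the memory region
# size. Per the release notes, Power10 servers accept only 128 MB or
# 256 MB region sizes; names here are illustrative.

VALID_P10_REGION_SIZES_MB = (128, 256)

def validate_memory_request(requested_mb: int, region_size_mb: int) -> bool:
    """Return True if the request is a whole multiple of the region size."""
    if region_size_mb not in VALID_P10_REGION_SIZES_MB:
        raise ValueError(
            f"Power10 supports only {VALID_P10_REGION_SIZES_MB} MB region sizes"
        )
    return requested_mb % region_size_mb == 0

print(validate_memory_request(4096, 256))  # 4 GB fits a 256 MB region size
```

A request of 4224 MB, for example, would fail the check with a 256 MB region size (4224 / 256 is not a whole number) but pass with 128 MB.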
VIOS Management Enhancements:
As a continuation of the VIOS management enhancements delivered in HMC V9 R2 M950, the following features have been added in HMC V10 R1 M1010:
- Prepare VIOS for Maintenance (RFE 112804)
- Validation of the redundancy of the storage/network provided by the VIOS to client partitions
- Switch path of redundant storage/network to initiate failover
- Roll back to the original configuration if the prepare operation fails
- Audit various validation and prepare steps performed
- Report any failure seen during prepare
- Option to back up/restore the Shared Storage Pool configuration in the HMC
- Scheduled operations to automate the VIOS Config backups in HMC (RFE 149518)
- HMC CLI options to back up, manage, and restore the VIOS and SSP configurations (RFE 149518). The mkviosbk, chviosbk, cpviosbk, lsviosbk, rstviosbk, and rmviosbk commands have been added to support VIOS backup and restore.
- Options to import/export the backup files to external storage
- Option to fail over all virtual NICs from one VIOS to another (RFE 132175)
- Ability to specify labels for FC ports; requires VIOS 3.1.3 or later (RFE 132094)
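The new backup commands lend themselves to simple automation from a script. The sketch below builds mkviosbk command lines for a set of VIOS partitions; the command name comes from the list above, but the option flags (-m, -p, --name) are illustrative assumptions, not documented syntax:

```python
# Sketch: generate mkviosbk command lines for scheduled VIOS config
# backups. The mkviosbk command name is from the release notes; the
# flags -m, -p, and --name are assumed here for illustration only.
from datetime import date

def backup_commands(managed_system: str, vios_list: list) -> list:
    """Build one date-stamped backup command per VIOS partition."""
    stamp = date.today().strftime("%Y%m%d")
    return [
        f"mkviosbk -m {managed_system} -p {vios} --name {vios}_{stamp}"
        for vios in vios_list
    ]

for cmd in backup_commands("E1080-9080-HEX-SN123", ["vios1", "vios2"]):
    print(cmd)
```

In practice you would run such commands on the HMC itself or over SSH, or simply use the new scheduled-operations support (RFE 149518) instead of an external script.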
Live Partition Mobility Enhancements:
Automatically choose fastest network for LPM memory transfer:
Requires HMC V10 R1 M1010 and VIOS 3.1.3 or later.
The HMC automatically chooses the MSP IP addresses that provide the best performance for transferring partition memory data during LPM.
The HMC considers the following when automatically choosing MSPs:
- Network adapter Speed
- Current data transfers in progress using the MSP
- Current data transfers in progress using the network adapter
- Difference in network speed between the interfaces on the source and destination MSPs
If the user specifies the MSP IPs, the user's selection is honored, as is done today.
If the VIOS is not at 3.1.3 or later, or one of the HMCs is not at V10 R1 M1010 or later, the algorithm used prior to HMC V10 R1 M1010 applies.
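The criteria above can be pictured as a scoring heuristic. The sketch below is not the HMC's actual algorithm (which is internal); it is a hypothetical illustration in which each source/destination MSP interface pair is scored by usable bandwidth, discounted by transfers already in flight and by the speed mismatch between the two ends:

```python
# Hypothetical sketch of MSP pair selection for LPM memory transfer.
# Mirrors only the listed criteria: adapter speed, transfers in
# progress on the MSP and on the adapter, and the speed difference
# between source and destination interfaces. Not HMC code.
from dataclasses import dataclass

@dataclass
class MspInterface:
    ip: str
    speed_gbps: float        # network adapter speed
    msp_transfers: int       # data transfers in progress using the MSP
    adapter_transfers: int   # data transfers in progress using the adapter

def pair_score(src: MspInterface, dst: MspInterface) -> float:
    # Usable bandwidth is bounded by the slower end and shared with
    # transfers already in flight on either MSP or adapter.
    usable = min(src.speed_gbps, dst.speed_gbps)
    load = (1 + src.msp_transfers + src.adapter_transfers
              + dst.msp_transfers + dst.adapter_transfers)
    mismatch_penalty = abs(src.speed_gbps - dst.speed_gbps)
    return usable / load - 0.1 * mismatch_penalty

def choose_pair(sources, destinations):
    """Return the (source, destination) interface pair with the best score."""
    return max(((s, d) for s in sources for d in destinations),
               key=lambda pair: pair_score(*pair))
```

For example, with an idle 100 Gb interface and an idle 10 Gb interface on the source side, and a 100 Gb interface on the destination, the heuristic picks the matched 100 Gb pair.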
Allow LPM/Remote Restart when a virtual optical device is assigned to the partition (supported with VIOS 3.1.3 or later)
HMC Management and Security Enhancements:
- Ability for the user to specify the HMC location as user-defined information (chhmc/lshmc commands) (RFE 136913)
- Ability to configure e-mail notifications to be sent only when a scheduled operation fails (RFE 138617)
- Ability to enable data replication for groups
- Ability to configure the suspension time (after consecutive login failures) as part of the console default settings
- The default number of login retries is 3 (maximum of 50) and the default suspension time is 5 minutes (maximum of 1440 minutes, that is, 24 hours)
- Ability to enforce inactivity expiration as part of password policy
- Increase in maximum days for password expiry from 180 to 365 (RFE 113032)
- OS Secure boot support for HMC hardware appliance 7063-CR2
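The retry and suspension defaults above are easy to model. This is a hedged sketch (the class and method names are illustrative; only the numbers 3/50 and 5/1440 come from the text) of a tracker that suspends a user after too many consecutive login failures:

```python
# Sketch of the console login-failure policy described above:
# default 3 retries (max 50) and a 5-minute suspension
# (max 1440 minutes = 24 hours). Names are illustrative, not HMC code.
from datetime import datetime, timedelta

MAX_RETRIES_LIMIT = 50
MAX_SUSPEND_MINUTES = 1440  # 24 hours

class LoginPolicy:
    def __init__(self, retries: int = 3, suspend_minutes: int = 5):
        if not 1 <= retries <= MAX_RETRIES_LIMIT:
            raise ValueError("retries must be between 1 and 50")
        if not 1 <= suspend_minutes <= MAX_SUSPEND_MINUTES:
            raise ValueError("suspension must be between 1 and 1440 minutes")
        self.retries = retries
        self.suspend_minutes = suspend_minutes
        self.failures = 0
        self.suspended_until = None

    def record_failure(self, now: datetime) -> None:
        """Count a failed login; suspend once the retry limit is reached."""
        self.failures += 1
        if self.failures >= self.retries:
            self.suspended_until = now + timedelta(minutes=self.suspend_minutes)

    def is_suspended(self, now: datetime) -> bool:
        return self.suspended_until is not None and now < self.suspended_until
```

With the defaults, a third consecutive failure suspends the account for 5 minutes, after which logins are accepted again.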
HMC User Experience Enhancements:
- Usability and performance improvements
- Enhancements to help content global search (context-based search)
- Quick view of Serviceable events in dashboard view
- Additional progress information for UI wizards
- UX improvements for left navigation icon and labels
- New Code Update Wizard with usability improvements
AIX Update Access Key support:
- One CoD key per system, like Firmware Update Access Key
- Helps the system administrator track and ensure AIX SWMA is in place
- A check of the AIX image date against the UAK expiration date is done when:
- AIX updates are applied (AIX check)
- During partition boot, including remote restart and LKU (AIX check)
- During Live Partition Mobility (HMC check)
- Warning only (no enforcement)
- Applicable on Power10 servers
You can get more information on this enhancement at AIX Update Access Keys.
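The check itself is a straightforward date comparison, and since the text says it is warning-only, a sketch would return a warning rather than block the operation. Function and parameter names here are illustrative assumptions, not the actual AIX or HMC interfaces:

```python
# Sketch of the warning-only AIX Update Access Key check: compare the
# AIX image date against the UAK expiration date. The real checks run
# inside AIX (update/boot) and the HMC (LPM); names are illustrative.
from datetime import date
from typing import Optional

def uak_warning(aix_image_date: date, uak_expiration: date) -> Optional[str]:
    """Return a warning string if the image post-dates the UAK, else None.

    Warning only: the update, boot, or migration is never blocked.
    """
    if aix_image_date > uak_expiration:
        return (f"AIX image dated {aix_image_date} is newer than the "
                f"Update Access Key expiration {uak_expiration}")
    return None
```

An image built after the UAK expires produces a warning that SWMA may have lapsed; an image built before it passes silently.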
- Replaced objectionable terminology across the HMC, including the GUI, REST API, command line, and messages. Changes were made in the following functions:
- Power Enterprise Pools: master is replaced by controller
- Data Replication: master is replaced by primary, and slave is replaced by secondary
- PowerVM NovaLink Co-management: master is replaced by controller
CMC & Power Private Cloud with Shared Utility Capacity (a.k.a. Enterprise Pools 2.0) Updates:
- Support for Power10 High-end systems. E1080 systems can be added to the same Enterprise pool 2.0 that has E980 systems.
- Support for tracking (but not metering) Red Hat Enterprise Linux CoreOS
- Ability to retrieve inventory and resource consumption information via APIs for systems and partitions in Enterprise Pools 2.0. See CMC APIs for more information.
- Metering for Red Hat Enterprise Linux
- All systems that are expected to share Base RHEL subscription entitlement resources within a pool must have the same Red Hat subscription product and add-on features.
- To enable your Red Hat subscriptions as base and metered capacity resources within a Power Enterprise Pool 2.0 environment, IBM or Business Partner sales teams must order 5799-RP2 "Red Hat Enablement for Power Private Cloud/Pools 2.0" and include i-listed PRPQ P91342.
- Refresh capacity now runs automatically every day.
- Budgeting enhancement: setting the budget to zero now still allows resource sharing between systems, instead of throttling always being enabled when the budget is zero.
NovaLink 2.0.2 adds support for Power10 systems and for Power10 compatibility mode. NovaLink 2.0.2 or later is required to manage Power10 systems.
Contacting the PowerVM Team
Have questions for the PowerVM team or want to learn more? Follow our discussion group on LinkedIn IBM PowerVM or IBM Community Discussions