PowerVM


Power10 LMB Sizes

By Pete Heyrman posted Wed March 06, 2024 05:53 PM

Benefits of Larger Logical Memory Block (LMB) Sizes

Firmware level FW1050 introduces additional Logical Memory Block (LMB) sizes.  This article provides guidance on selecting the LMB size that best fits different environments, along with some performance results.

Configuration of Logical Memory Block Size 

On Power systems, IBM allows customers to customize the granularity of memory that can be assigned to partitions; this value is the Logical Memory Block (LMB) size.  The LMB size can be configured from the FSP, eBMC or HMC.  The user selects from a list of supported sizes to choose the LMB size that best fits their needs.  The smaller the LMB size, the more flexibility there is in assigning memory to partitions.  The drawback of small LMB sizes is that the PowerVM hypervisor and the operating systems (AIX, IBM i, Linux and VIOS) must manage more memory ranges.
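
If you manage the server from the HMC command line, you can check the configured LMB size (which the HMC reports as the memory region size) before planning a change.  The following is a minimal sketch, assuming SSH key authentication to the HMC as hscroot and using the lshwres memory query; the HMC host and managed system names are placeholders.

```python
import subprocess

# Hypothetical HMC host and managed-system names -- replace with your own.
HMC_HOST = "hscroot@hmc01.example.com"
MANAGED_SYSTEM = "Power10-Server-1"

def get_lmb_size_mb(hmc: str, system: str) -> int:
    """Return the configured LMB (memory region) size in MB, as reported by the HMC."""
    # lshwres with -F mem_region_size prints only the memory region size in MB.
    cmd = ["ssh", hmc,
           f"lshwres -r mem -m {system} --level sys -F mem_region_size"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return int(out.stdout.strip())

if __name__ == "__main__":
    lmb_mb = get_lmb_size_mb(HMC_HOST, MANAGED_SYSTEM)
    print(f"{MANAGED_SYSTEM}: LMB size is {lmb_mb} MB")
```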

Power10 LMB Sizes

When Power10 was introduced, a review was completed of the LMB sizes in use on POWER7, POWER8 and POWER9 servers.  In that analysis, 97% of all servers were configured with an LMB size of 256MB.  The next most popular size was 128MB.  Based on this data, Power10 initially supported only LMB sizes of 128MB and 256MB.  Starting with FW1050, Power10 adds support for 1024MB, 2048MB and 4096MB LMB sizes.  The additional LMB sizes provide performance benefits for systems that host large memory partitions.

LMB Requirements Based on Partition Size

The following table shows the number of LMBs required based on memory assigned to a partition.

| Partition Size | LMBs required (128MB) | LMBs required (256MB) | LMBs required (1024MB) | LMBs required (2048MB) | LMBs required (4096MB) |
|---|---|---|---|---|---|
| 100 GB | 800 | 400 | 100 | 50 | 25 |
| 1 TB | 8,192 | 4,096 | 1,024 | 512 | 256 |
| 32 TB | 262,144 | 131,072 | 32,768 | 16,384 | 8,192 |
| 64 TB | 524,288 | 262,144 | 65,536 | 32,768 | 16,384 |

As shown in the table, the larger LMB sizes significantly reduce the number of LMBs that the PowerVM hypervisor and the operating systems need to manage.
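
The arithmetic behind the table is straightforward: the number of LMBs is the partition memory divided by the LMB size.  The short sketch below simply reproduces the counts shown above.

```python
# Reproduce the LMB counts from the table above: LMBs = partition memory / LMB size.
PARTITION_SIZES_GB = {"100 GB": 100, "1 TB": 1024, "32 TB": 32 * 1024, "64 TB": 64 * 1024}
LMB_SIZES_MB = [128, 256, 1024, 2048, 4096]

for label, size_gb in PARTITION_SIZES_GB.items():
    size_mb = size_gb * 1024
    counts = {f"{lmb}MB": size_mb // lmb for lmb in LMB_SIZES_MB}
    print(label, counts)
```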

Partition Boot

When booting a partition, the number of LMBs assigned to the partition affects the boot time.  The following table provides some test results comparing boot times with 256MB and 4096MB LMB sizes.

| OS | Boot time, 256MB LMB (seconds) | Boot time, 4096MB LMB (seconds) |
|---|---|---|
| RHEL 9.3, 60 TB, 100 processors | 222 | 76 |
| AIX 7.3, 8 TB, 100 processors | 340 | 296 |
| IBM i 7.4, 16 TB, 48 processors | 862 | 894 |

Memory Dynamic LPAR (DLPAR)

As with booting a partition, the number of LMBs affects the time needed to add memory to or remove memory from a partition.  The following table provides some test results for adding or removing 1 TB of memory with different LMB sizes.

| OS | DLPAR Remove, 256MB LMB (seconds) | DLPAR Remove, 4096MB LMB (seconds) | DLPAR Add, 256MB LMB (seconds) | DLPAR Add, 4096MB LMB (seconds) |
|---|---|---|---|---|
| RHEL 9.3 | 4473 | 58 | 194 | 16 |
| AIX 7.3 | 2541 | 202 | 108 | 26 |
| IBM i 7.4 | 1872 | 1224 | 33 | 33 |

Additional Information about LMB Sizes

If you use Live Partition Mobility (LPM), the LMB size on the source system must match the LMB size on the target system.  This applies to both active and inactive partition mobility.
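
Because mismatched LMB sizes will prevent a migration, a quick pre-check can save a failed LPM validation.  The following is a minimal sketch, assuming SSH access to an HMC that manages both systems and using the same lshwres query shown earlier; the HMC host and system names are placeholders.

```python
import subprocess

def lmb_size_mb(hmc: str, system: str) -> int:
    # Memory region size (the LMB size) in MB, as reported by the HMC CLI.
    cmd = ["ssh", hmc, f"lshwres -r mem -m {system} --level sys -F mem_region_size"]
    return int(subprocess.run(cmd, capture_output=True, text=True, check=True).stdout.strip())

# Hypothetical names -- replace with your HMC and managed systems.
src = lmb_size_mb("hscroot@hmc01.example.com", "Source-System")
tgt = lmb_size_mb("hscroot@hmc01.example.com", "Target-System")
if src != tgt:
    raise SystemExit(f"LPM will fail validation: source LMB {src} MB != target LMB {tgt} MB")
print(f"LMB sizes match ({src} MB); LPM validation can proceed.")
```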

Since changing the LMB size requires a server reboot, you may want to perform the change as part of a future planned outage.  The HMC and PowerVM will automatically adjust the number of LMBs assigned to the partitions after the size is changed.  There can be minor changes in actual sizes due to rounding.
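
To see why rounding can occur, consider a partition whose current memory is not an exact multiple of the new, larger LMB size.  The sketch below illustrates the adjustment; rounding down is an assumption for illustration only, the point being that the assigned memory must end up as a whole number of LMBs.

```python
def adjust_to_lmb(current_mb: int, new_lmb_mb: int) -> int:
    """Round a partition's memory to a whole number of the new, larger LMBs.

    Rounding down is an assumption for illustration; the requirement is only
    that the result be a whole number of LMBs."""
    return (current_mb // new_lmb_mb) * new_lmb_mb

# Example: 202 x 256MB LMBs = 51,712 MB, which is not a multiple of 4096 MB.
current = 202 * 256                    # 51,712 MB
print(adjust_to_lmb(current, 4096))    # 49,152 MB (12 LMBs) -- a minor size change
```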

All of the performance data shown in this article was collected on servers without any workloads running.  Running workloads would increase the time needed to complete DLPAR remove operations, because operating system and/or application data must be moved between LMBs before the OS can free an entire LMB for removal.  With IBM i there can be situations where the data cannot be moved (for example, it is required to remain at a fixed memory address for some period of time), which can lead to memory DLPAR requests that time out from the HMC.

Summary

The selection of LMB size is a trade-off between granularity and performance.  If granularity isn’t a significant concern, a larger LMB size will generally yield better performance for partition boot and memory DLPAR operations.

Contacting the PowerVM Team

Have questions for the PowerVM team or want to learn more?  Follow our discussion group on LinkedIn (IBM PowerVM) or join the conversation in the IBM Community Discussions.
