In the previous installment, we laid the groundwork for understanding KVM guests running in a PowerVM LPAR. We explored the initial setup, the underlying architecture, and how these guests function within the PowerVM environment.
In this part, we take a step further into the intricacies of KVM virtualization. This document focuses on the configuration options available for optimizing CPU and memory for KVM guests in PowerVM. We will also take a closer look at KVM's memory management mechanisms, providing insights into how to enhance performance for your workloads.
As we continue to uncover the capabilities of this technology, expect discussions on advanced networking and storage configurations, as well as various performance tuning strategies. Let’s dive deeper into maximizing the potential of KVM guests in your PowerVM environment!
Target Audience
This document is aimed at system administrators and developers running KVM guests in a PowerVM LPAR who want to explore various configuration options for CPU and memory resources available to the KVM guests in an LPAR. We strongly believe that understanding this will empower users to make better decisions when performing capacity planning or provisioning workloads running on KVM guests.
Abstract
Fig 1. Resource allocation to KVM/non-KVM enabled LPARs on PowerVM
The ability to run KVM guests in an LPAR is a new feature in PowerVM firmware (FW1060.10) release [1]. This feature brings support for the industry-standard Linux KVM virtualization stack to IBM Power and integrates seamlessly within an existing Linux virtualization ecosystem.
The runtime architecture of these KVM guests differs from other virtualization mechanisms available for IBM Power. As illustrated in Figure 1, the allocation of resources to a KVM guest (L2) is performed from the pool of resources assigned to the LPAR (L1) by the PowerVM Hypervisor (L0). This includes CPU threads, memory, and I/O resources available to the LPAR, which can be assigned to KVM guests as either dedicated or shared resources. This document provides detailed information on how various resources are assigned to KVM Guests and how they can be optimized to maximize the performance of the workloads running within them.
Allocate CPUs to KVM guests
Fig 2: KVM in a PowerVM LPAR CPU Allocation
The KVM guests (L2) running in an LPAR use the shared computational resources available to that LPAR (L1). In turn, the LPAR’s resources are assigned to it by the PowerVM Hypervisor (L0), which creates a slice of the underlying system resources and allocates it to the LPAR. This is illustrated in Figure 2, where two slices of the underlying CPU core threads are assigned to LPAR1 and LPAR2.
Currently, KVM-enabled LPARs support only dedicated processors, not shared processors; therefore, these CPUs can be mapped one-to-one to the underlying physical processor threads. Once assigned to the LPAR, these CPUs can then be allocated to the KVM guests provisioned within it.
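To check, from inside the L1 LPAR, which dedicated cores and threads PowerVM has assigned to it, the standard PowerPC utilities can be used. The following is a minimal sketch; the exact fields in the output depend on the LPAR profile:
# Show the processor mode and virtual CPUs assigned to the LPAR by PowerVM
$ lparstat -i | grep -iE 'type|online virtual cpus'
# Show the cores present in the LPAR and the current SMT mode
$ ppc64_cpu --cores-present
$ ppc64_cpu --smt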
From the L1 perspective, the virtualized CPUs backed by the physical CPUs allocated to the LPAR are termed vCPUs. As described in part 1 [3] of this blog series, L1 schedules the L2 vCPUs to run via the H_GUEST_RUN_VCPU hcall to L0. The L0 then switches out of the L1 context, restores the L2 vCPU context from the shared guest state buffers [3], and begins running the L2.
The L2 vCPU context is primarily saved and restored by L0 (PowerVM), and the vCPU processing occurs on an L1 CPU thread. This gives L1 the flexibility to either share CPU threads across multiple KVM guests’ vCPUs (shared mode) or dedicate CPU threads to specific L2 vCPUs (dedicated or pinned mode). This is illustrated in Figure 2, where guest 1 and guest 2 each run on dedicated LPAR 1 CPUs, while guest 3 and guest 4 running in LPAR 2 share the same three CPUs from the LPAR. These configurations affect the performance of the L2 vCPUs, as discussed further in this blog.
Configure KVM guest CPUs using Libvirt API
The Libvirt API [4] provides a convenient and declarative way to describe the L2 vCPU allocation policy and topology using the <vcpu>, <cputune>, and <cpu> elements of the guest domain-XML, as shown in the following example.
Example 1. Domain-XML for CPU allocation and tuning
<domain>
...
<vcpu placement='static' current="1"> 2 </vcpu>
<cputune>
<vcpupin vcpu="0" cpuset="0-4,^2"/>
<vcpupin vcpu="1" cpuset="2"/>
</cputune>
<cpu>
<topology sockets='2' cores='1' threads='1'/>
<numa>
<cell id='0' cpus='0,1' memory='2' unit='GiB'/>
</numa>
</cpu>
...
</domain>
The <vcpu> tag in the guest domain-XML specifies the number of CPUs. The tags and attributes represent the following:
- current: The number of guest CPUs at boot time. In the previous domain-XML example, the guest is assigned 1 vCPU at boot.
- <vcpu>number</vcpu>: The maximum number of CPUs that can be attached to or detached from the VM. In the previous domain-XML example, the guest can have a maximum of 2 vCPUs.
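Once the CPU elements are set, the updated definition can be applied with the usual libvirt workflow. This is a minimal sketch, assuming the domain XML is saved in a hypothetical file named guest.xml; the new settings take effect on the next guest boot:
# Register (or update) the guest definition from the XML file
$ virsh define guest.xml
# Start the guest with the new CPU configuration
$ virsh start <domain-name>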
Verify allocated vCPUs
Use the virsh vcpucount command to determine the number of vCPUs allocated to a KVM guest domain. The following example shows the command output for a domain configured with the domain-XML presented earlier.
Example 2. Verify allocated vCPUs
$ virsh vcpucount <domain-name>
maximum config 2
maximum live 2
current config 1
current live 1
The details provided by the command are as follows:
- maximum config: Maximum value possible according to the configuration.
- maximum live: Maximum value possible for the live domain.
- current config: Current value of CPUs according to the configuration.
- current live: Current value of CPUs in the live domain.
Hotplug or unplug vCPUs
Use the virsh setvcpus command to set the number of active vCPUs for a running KVM guest. Based on the policy described in the domain-XML, the Libvirt API handles hotplugging or hot unplugging the required number of L2 vCPUs and sets the active vCPUs in L2. If a hotplug or hot-unplug operation conflicts with the policies defined in the domain-XML, the operation is aborted and an error message is displayed.
The following example illustrates how the virsh setvcpus command hotplugs one additional vCPU (total vCPUs = 2) to the guest defined in example 1.
Example 3. Hotplug or unplug vCPUs
# Guest lscpu output before hotplug
$ lscpu
Architecture: ppc64le
Byte Order: Little Endian
CPU(s): 1
On-line CPU(s) list: 0
Model name: POWER10 (architected), altivec supported
Model: 2.0 (pvr 0080 0200)
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 1
....
# Hotplug one additional vCPU from host
$ virsh setvcpus <domain> 2
# Guest lscpu output after hotplug
$ lscpu
Architecture: ppc64le
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Model name: POWER10 (architected), altivec supported
Model: 2.0 (pvr 0080 0200)
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 2
....
# Get the vCPU count from host
$ virsh vcpucount <domain>
maximum config 2
maximum live 2
current config 2
current live 2
The virsh setvcpus command accepts additional command-line flags to specify how or when the changes to the guest domain should be made:
- virsh setvcpus <domainname> 2 --config: Updates the number of boot-time vCPUs to 2. A reboot is required for this change to take effect.
- virsh setvcpus <domainname> 8 --maximum --config: Updates the maximum vCPUs in the domain-XML configuration to eight. This change requires the guest to be shut down and restarted to take effect. The --maximum option requires the --config option.
The full syntax and acceptable arguments to the virsh setvcpus command are as follows:
$ virsh setvcpus <guest_name> <vcpus_to_be_plugged> [--maximum] [[--config] [--live] | [--current]] [--guest] [--hotpluggable]
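For instance, to make a vCPU change take effect immediately and also persist across reboots, the --live and --config flags can be combined. A hedged example for the guest from example 1:
# Bring the running guest to 2 vCPUs and record the change in the persistent configuration
$ virsh setvcpus <domain-name> 2 --live --config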
CPU pinning
By default, an allocated vCPU can run on any schedulable L1 CPU thread. However, L2 vCPUs can be pinned to specific L1 CPU threads so that the H_GUEST_RUN_VCPU hcall runs each L2 vCPU on its designated L1 CPU thread. This mechanism improves cache locality and the response time of the L2 vCPUs, because the L1 CPU scheduler’s decisions about where to run each L2 vCPU are constrained to the chosen physical threads.
To pin the L2 vCPUs, use the <vcpupin> element of the L2 guest domain-XML. Example 1 shows two vCPUs tuned via the <cputune> element, which pins vCPU 1 to host CPU 2 and vCPU 0 to the host CPU set {0,1,3,4}.
To determine the vCPU pinning information of a KVM guest domain, use the following command template. The example shows the output for a KVM guest configured as described in example 1.
Example 4. Verify vCPU pinning of a KVM guest
$ virsh vcpupin <domain-name>
VCPU CPU Affinity
----------------------
0 0-1,3-5
1 1
Another command, virsh vcpuinfo, provides details on the CPU affinity of a guest domain. It generates a text map of all the L1 CPUs that an L2 vCPU is pinned to, which is useful for parsing in scripts or other automation:
Example 5. CPU affinity details of a guest domain
$ virsh vcpuinfo <domain-name>
VCPU: 0
CPU: 5
State: running
CPU time: 251.6s
CPU Affinity: yy-yyy----
VCPU: 1
CPU: 1
State: running
CPU time: 230.4s
CPU Affinity: -y--------
To pin vCPUs while the domain is running, use the following command:
$ virsh vcpupin <guest_name> <vcpu_no_to_pin> <cpu_no_to_pin_to>
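For example, to re-pin vCPU 0 of the running guest to L1 CPU 3 and confirm the change (the CPU numbers here are illustrative):
# Pin vCPU 0 to L1 CPU 3 on the live domain
$ virsh vcpupin <domain-name> 0 3 --live
# Display the resulting affinity for vCPU 0
$ virsh vcpupin <domain-name> 0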
L2 vCPU SMT Topology
Power10 processor cores support up to 8 threads per core, enabling simultaneous multithreading (SMT). Processor threads allow more instructions to be processed in parallel on a single processor core, by using multiple execution units. Many workloads benefit from the throughput improvements of SMT when the same code and data are run on sibling threads of the same CPU core.
KVM guests running in an LPAR can be configured with SMT=1, SMT=2, SMT=4, or SMT=8. Setting SMT=8 for guests is recommended because it aligns the KVM guest topology with that of the LPAR. Guest instances can still choose any SMT mode, similar to an LPAR (with ppc64_cpu --smt=<x>), and the scheduler can balance the load across cores.
To configure a KVM guest, the number of vCPUs must be set to the product of the number of cores and threads per core assigned to the guest, with the number of threads per core explicitly defined. For example, to configure a guest with 2 cores and SMT=4 using the Libvirt API, use the following settings:
<vcpu>8</vcpu>
<cpu>
<topology sockets='1' cores='2' threads='4'/>
</cpu>
With this configuration, the guest OS can enable up to SMT=4 for its cores. This is illustrated below for a guest that uses the above domain-xml fragment:
$ lscpu
Architecture: ppc64le
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Model name: POWER10 (architected), altivec supported
Model: 2.0 (pvr 0080 0200)
Thread(s) per core: 4
Core(s) per socket: 2
Socket(s): 1
Virtualization features:
Hypervisor vendor: KVM
Virtualization type: para
…
# check the SMT mode
$ ppc64_cpu --smt
SMT=4
# switch to SMT=2
$ ppc64_cpu --smt=2
# check the cpu topology
$ lscpu
Architecture: ppc64le
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0,1,4,5
Off-line CPU(s) list: 2,3,6,7
Model name: POWER10 (architected), altivec supported
Model: 2.0 (pvr 0080 0200)
Thread(s) per core: 2
Core(s) per socket: 2
Socket(s): 1
Virtualization features:
Hypervisor vendor: KVM
Virtualization type: para
Allocate memory to KVM guests
Fig 4: Memory virtualization for KVM guests in an LPAR
Like CPUs, memory for L2 KVM guests is virtualized by the L1 KVM and backed by memory assigned by L0 PowerVM. PowerVM assigns a dedicated segment of the available hardware memory to the L1 LPAR, which is then pooled for L2 allocations.
L2 guests running in an LPAR do not have pre-allocated physical RAM. Instead, each L2 guest functions as a Linux process, with the L1 LPAR’s Linux kernel allocating memory as needed. Additionally, the L1 LPAR’s memory manager can move the guest’s memory between its own memory and swap space. From the L1 KVM perspective, an L2 KVM guest is essentially a userspace process with some special privileges running within its context, so standard virtual memory mechanisms [8] are used to allocate memory to the KVM guest. These include various memory devices or regions that become part of the L2 KVM guest’s physical address space, as illustrated in Figure 4.
- RAM pages: Memory pages from L1 LPAR.
- HugePages: HugePages from the L1 LPAR memory can help reduce translation lookaside buffer (TLB) misses and the related page-table overhead (see the sketch after this list).
- File backed: These are used to create memory regions backed by memory mapped files on L1.
- Memory-mapped I/O (MMIO): Regions for emulated or mapped hardware.
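For the HugePages case above, the pages must be reserved on the L1 LPAR before they can back L2 guest memory. This is a minimal sketch, assuming the default 2 MiB huge page size and a workload-dependent page count (run as root on L1):
# Reserve 1024 huge pages of 2 MiB each (2 GiB total) on the L1 LPAR
$ echo 1024 > /proc/sys/vm/nr_hugepages
# Verify the reservation
$ grep HugePages_Total /proc/meminfo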
The virtual memory model for KVM guests in an LPAR uses the Radix-tree translation model [10] to convert process virtual addresses (effective addresses) to physical addresses using two sets of Radix-tree data structures:
- Process-scoped page table: Managed by the LPAR or L2, translating guest virtual addresses to guest real addresses.
- Partition-scoped page table: Managed by the hypervisor, translating guest real addresses to host real addresses.
The Radix MMU traverses both the process-scoped page table and the partition-scoped page table to resolve a guest virtual address → host real address translation when it is not found in the TLB.
Challenges and solutions of L2 KVM guest memory management:
An L2 KVM guest’s memory resides within L1 LPAR memory, so its partition-scoped page table is located there as well. This poses challenges because the Radix MMU is unaware of the page table’s location, and even if it were informed, pages might be paged out by the L1 memory manager. KVM in a PowerVM LPAR addresses these challenges through:
Shadow page table
KVM in an LPAR maintains two copies of the L2 partition-scoped page table: one managed by the L1 KVM code and another, called the shadow page table, managed by L0 PowerVM, as illustrated in Figure 4. The L0 is informed of the address of the page table managed by L1 KVM and can traverse it to perform guest virtual address → host real address translations. The L0 keeps its shadow page table synchronized with the L2 partition-scoped page table maintained by L1.
L2 guest page fault traps
If L0 PowerVM cannot handle an L2 guest page fault (Hypervisor Data Storage Interrupt, or HDSI), it sends a trap back to L1, indicating that an L2 guest HDSI has occurred. The L1 KVM code handles this trap by swapping in the required L2 guest page and updating its partition-scoped page table. The L2 vCPU is then restarted, prompting L0 PowerVM to re-traverse the L1 page table and copy the page table entry into its L2 shadow page table, allowing the L2 vCPU to resume the faulting instruction.
This overview highlights L2-guest page fault handling. For more detailed information, refer to [7], which covers PPC64 guest page fault management comprehensively.
Configure KVM guest memory using Libvirt API
Libvirt API users can configure memory for KVM guests through various guest domain XML elements, as documented in [8]. The following example illustrates domain XML configuration for a KVM guest:
Example 6. Guest domain XML configuration
<domain>
...
<maxMemory slots='16' unit='KiB'>1524288</maxMemory>
<memory unit='KiB'>524288</memory>
<currentMemory unit='KiB'>524288</currentMemory>
..
<memoryBacking>
<hugepages>
<page size="2" unit="M" nodeset="4"/>
</hugepages>
</memoryBacking>
<memtune>
<hard_limit unit='G'>1</hard_limit>
<soft_limit unit='M'>128</soft_limit>
<swap_hard_limit unit='G'>2</swap_hard_limit>
<min_guarantee unit='bytes'>67108864</min_guarantee>
</memtune>
<numatune>
<memory mode="strict" nodeset="1-4,^3"/>
<memnode cellid="0" mode="strict" nodeset="1"/>
<memnode cellid="2" mode="preferred" nodeset="2"/>
</numatune>
<devices>
<memory model='dimm' access='private' discard='yes'>
<target>
<size unit='KiB'>524287</size>
<node>0</node>
</target>
</memory>
…
</devices>
</domain>
The total memory allocated to the L2 guest is specified by the following XML elements:
- maxMemory: Specifies the maximum memory that can be hot plugged into the guest.
- memory: Specifies the amount of memory the guest has at boot.
- currentMemory: Specifies the current memory in use by the guest. If omitted, it is equal to the value in memory.
In example 6, the guest is configured to boot with 512 MiB of memory, with a maximum possible memory of approximately 1.45 GiB (1524288 KiB).
Retrieve guest memory allocation information
Use the virsh dominfo command to retrieve the memory allocation information of an L2 guest domain. The following example shows the output of this command for the domain depicted in example 6:
Example 7. Retrieve memory allocation information
$ virsh dominfo <domain-name>
Id: 6
Name: <domain-name>
UUID: c0f80196-91b7-4507-a788-23111d9eb15b
OS Type: hvm
State: running
CPU(s): 2
CPU time: 11.6s
Max memory: 524288 KiB
Used memory: 524288 KiB
Persistent: yes
Autostart: disable
Managed save: no
Security model: none
Security DOI: 0
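Example 6 also applies memory resource limits through the <memtune> element. These limits can be inspected, and adjusted on a running guest, with the virsh memtune command. A minimal sketch with an illustrative value (sizes default to KiB):
# Show the hard, soft, and swap hard limits currently applied to the guest
$ virsh memtune <domain-name>
# Raise the soft limit to 256 MiB (262144 KiB) on the running guest
$ virsh memtune <domain-name> --soft-limit 262144 --live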
Memory hotplug and unplug in KVM
Memory hotplug and unplug in KVM allow adding or removing memory from a VM without stopping it. This operation is particularly useful for dynamically adjusting resources based on workload demands. Using the Libvirt API, you can manage this for KVM VMs conveniently.
Here’s a detailed explanation and process to perform memory hotplug in KVM systems using Libvirt API.
Memory Devices
Memory devices serve various roles, from volatile memory for active processes to non-volatile memory for data retention. In KVM with Libvirt API, there are two models supporting both volatile and non-volatile memory:
Dual inline memory module model:
In KVM, the dual inline memory module (DIMM) model provides a flexible and efficient way to manage virtual memory, allowing memory to be added to or removed from a VM dynamically without restarting it, thus optimizing resource use and boosting performance.
The DIMM model supports hot plugging: you can add or remove memory while the VM is running. This is particularly useful for adjusting the memory allocation based on the workload without causing downtime.
The following XML configuration illustrates how to define a memory device using the DIMM model for a KVM guest.
- Create an XML file named memory_device.xml with the following content to define a DIMM memory device:
$ cat memory_device.xml
<memory model='dimm'>
<target>
<size unit='KiB'>1000000</size>
<node>0</node>
</target>
<address type='dimm' slot='1'/>
</memory>
- Run the following command to display the current memory statistics of the guest before the memory hotplug:
$ virsh dommemstat <guest_name>
actual 12541952
swap_in 0
swap_out 0
major_fault 443
minor_fault 109769
unused 20753024
available 22385344
usable 20720192
last_update 1730270006
disk_caches 96832
hugetlb_pgalloc 0
hugetlb_pgfail 0
rss 3897152
$ free -h
total used free shared buff/cache available
Mem: 11Gi 1.7Gi 9.3Gi 12Mi 521Mi 9.6Gi
Swap: 8.0Gi 0B 8.0Gi
- Attach the 1 GB memory device to the KVM guest using the following command:
$ virsh attach-device <guest_name> memory_device.xml
- Verify the updated memory of the KVM guest using the following command:
$ virsh dommemstat <guest_name>
actual 13590528
swap_in 0
swap_out 0
major_fault 443
minor_fault 109769
unused 20753024
available 22385344
usable 20720192
last_update 1730270006
disk_caches 96832
hugetlb_pgalloc 0
hugetlb_pgfail 0
rss 3897152
$ free -h
total used free shared buff/cache available
Mem: 12Gi 1.6Gi 10Gi 12Mi 521Mi 10Gi
Swap: 8.0Gi 0B 8.0Gi
- Create the following XML snippet in a file named memory_device_detach.xml to detach the 1 GB memory device:
$ cat memory_device_detach.xml
<memory model='dimm'>
<target>
<size unit='KiB'>1000000</size>
<node>0</node>
</target>
<address type='dimm' slot='1'/>
</memory>
$ virsh detach-device <guest_name> memory_device_detach.xml
- Verify the memory statistics of the KVM guest and host using the following commands:
$ virsh dommemstat <guest_name>
actual 12541952
swap_in 0
swap_out 0
major_fault 443
minor_fault 109769
unused 20753024
available 22385344
usable 20720192
last_update 1730270006
disk_caches 96832
hugetlb_pgalloc 0
hugetlb_pgfail 0
rss 3897152
$ free -h
total used free shared buff/cache available
Mem: 11Gi 1.1Gi 9.9Gi 12Mi 518Mi 10Gi
Swap: 8.0Gi 0B 8.0Gi
Additional options for memory attach or detach:
- --live: Applies when the VM is running, allowing memory to be added to the current memory of the VM.
- --config: Affects the next boot of the VM, enabling changes that do not impact the current running state until the VM is shut down and restarted.
- --persistent: Applies to the running VM and saves the changes to the VM configuration, ensuring persistence for the next boot. The --live and --config options can be used together to achieve the same outcome as --persistent.
- --current: Affects the VM's configuration regardless of its running state. If the VM is running, this option behaves like --live, and if the VM is not running, it acts like --config.
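For instance, to hotplug the DIMM device defined earlier and keep it across reboots, --live and --config can be combined (equivalent to --persistent):
# Attach the DIMM device to the running guest and persist it in the configuration
$ virsh attach-device <guest_name> memory_device.xml --live --config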
Virtio-mem model:
The virtio-mem model in KVM is a paravirtualized memory device designed to provide dynamic memory management for virtual machines. It allows guest memory to be resized at run time. In this model, memory is managed in blocks and assigned at a substantially smaller granularity per NUMA node.
The following example illustrates the virtio-mem model:
$ cat mem_device.xml
<memory model='virtio-mem'>
<target>
<size unit='GiB'>48</size>
<node>0</node>
<block unit='MiB'>2</block>
<requested unit='KiB'>16777216</requested>
<current unit='KiB'>16777216</current>
</target>
<alias name='ua-virtiomem0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</memory>
Key attributes:
- Size: Defines the maximum size of the virtio-mem device. It must be a multiple of the block size.
- Node: Specifies the assigned vNUMA node for the virtio-mem device.
- Block: Indicates the block size of the virtio-mem device, determining the hot(un)plug granularity on the hypervisor side. This should be at least the page size of the memory backing (1 GiB) or the transparent huge page size (2 MiB).
- Requested: Shows the amount of memory the guest is expected to consume via this specific virtio-mem device.
- Current: Indicates how much memory the virtio-mem device currently provides to the guest (i.e., "plugged memory").
- Alias: Allows the user to specify an alias for the virtio-mem device, which can be used to change the requested size.
- Address: Since virtio-mem devices are PCI devices, the address is typically auto-generated by libvirt but can also be specified manually, including the type and slot number in the address tags.
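In recent libvirt releases, a virtio-mem device described this way can be added to a running guest with the same attach-device workflow used for DIMM devices. A minimal sketch, assuming the XML above is saved as mem_device.xml:
# Attach the virtio-mem device to the running guest and persist it in the configuration
$ virsh attach-device <guest_name> mem_device.xml --live --config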
Adjust virtio-mem device size
The virsh update-memory-device command allows adjusting the size of a virtio-mem device. This can be done while the virtual machine is running or for the next boot. Updating the memory size of a running VM acts as a hotplug or unplug request.
$ virsh update-memory-device <guest_name> --requested-size <memory_size>
There are various options to specify the new size and to select which virtio-mem device to resize:
- --requested-size: Specifies the new requested size; use it alone for a domain with a single virtio-mem device.
- --node: Use this option if the domain has a single virtio-mem device per NUMA node.
- --alias: Use this option if the domain has multiple virtio-mem devices per NUMA node.
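For example, assuming the virtio-mem device defined earlier (alias ua-virtiomem0), the plugged memory could be reduced from 16 GiB to 8 GiB on the running guest. The size is a scaled integer and defaults to KiB if no unit is given:
# Request 8 GiB of plugged memory from the device with alias ua-virtiomem0
$ virsh update-memory-device <guest_name> --alias ua-virtiomem0 --requested-size 8GiB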
Troubleshooting memory hotplug configuration issues
- To perform memory hotplug, ensure that the maxMemory element is specified in the guest's domain XML. This parameter defines the maximum memory that can be hot plugged into the guest (a verification sketch follows this list).
$ virsh attach-device <guest_name> memory-device.xml
error: Failed to attach device from memory-device.xml
error: unsupported configuration: cannot use/hotplug a memory device when domain 'maxMemory' is not defined
- Ensure that the node on which the memory is to be hot plugged or unplugged is specified in the XML file.
$ virsh attach-device <guest_name> memory-device.xml
error: Failed to attach device from memory-device.xml
error: unsupported configuration: target NUMA node needs to be specified for memory device
- If the attached device is currently in use by the guest, it cannot be detached until the load from the device is reduced. However, this should not apply to all the DIMMs.
$ virsh detach-device <guest_name> memory-device.xml
error: Failed to detach device from memory-device.xml
error: device not found: model 'dimm' memory device not present in the domain configuration
- The memory being hot plugged cannot exceed the maximum memory (maxMemory) defined in the guest's XML configuration.
$ virsh attach-device <guest_name> memory-device.xml
error: Failed to attach device from memory-device.xml
error: unsupported configuration: Attaching memory device with size '10485760' would exceed domain's maxMemory config size '41943040'
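For the first error above, one way to confirm whether maxMemory is defined, and how much headroom it leaves, is to inspect the live domain XML:
# Check the maxMemory and currentMemory values of the guest
$ virsh dumpxml <guest_name> | grep -iE 'maxmemory|currentmemory'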
Summary
The previous entry in this blog series [3] covered the fundamentals of setting up a KVM Guest running in an LPAR and explored the underlying mechanics. This post expands on that foundation by detailing various options for configuring CPU and memory for KVM Guests in PowerVM. Additionally, we examined the intricacies of KVM Guest memory management. There is much more to this exciting technology, and future posts will delve into configuring networking, storage, and various performance tuning options for KVM Guests to optimize workload performance. Stay tuned…
Acknowledgements
Thanks to Meghana Prakash, who manages the LTC - KVM team and has been the primary sponsor and impetus behind this blog series. Special thanks to the co-authors of this article, including but not limited to:
- Amit Machhiwal
- Gautam Meghani
- Anushree Mathur
- Tasmiya Nalatwad
Thanks to the following for spending their time reviewing this series and coming up with great insights and review comments.
- Vaidyanathan Srinivasan
- Madhavan Srinivasan
References and footnotes
[1] https://www.ibm.com/docs/en/announcements/extends-hardware-capabilities-ddr5-memory-other-
[2] KVM in PowerVM LPAR IBM Knowledge Center Documentation: https://ibmdocs-test.dcs.ibm.com/docs/en/sflos?topic=linuxonibm_test/liabx/kvm_in_powervm_lpar.htm
[3] https://community.ibm.com/community/user/power/blogs/vaibhav-jain/2024/10/18/kvm-in-a-powervm-lpar-a-power-user-guide-part-i
[4] https://libvirt.org/kbase/memorydevices.html
[5] https://virtio-mem.gitlab.io/user-guide/user-guide-libvirt.html
[6] https://libvirt.org/manpages/virsh.html#update-memory-device
[7] Taking it to the Nest Level - Nested KVM on the POWER9 Processor - https://lca2019.linux.org.au/schedule/presentation/145/
[8] Virtual memory, paging, mmap anonymous / file-backed memory, etc.
[10] Power ISA, Book III, Section 6.7.11: https://files.openpower.foundation/s/9izgC5Rogi5Ywmm
[11] https://libvirt.org/formatdomain.html#memory-allocation