feat(qemu): include the virtio_mem kernel module #29

Merged
1 commit merged into redhat-plumbers:main from bz2228422 on Aug 3, 2023

Conversation

@lnykryn commented on Aug 2, 2023

This adds support for virtio-mem devices, which provide a dynamic
amount of memory in a VM. Right now, the driver gets loaded and any
memory gets added to the system when loading the kernel module from disk.

While not strictly required to boot, we want to be able to
1) add virtio-mem provided memory to the system early while booting up
2) add virtio-mem provided memory even when booting without a disk
3) add virtio-mem devices without adding actual memory in kdump
   environments such that we can query things like:
 a) is a certain PFN currently plugged in the hypervisor and, therefore,
    should actually be read when creating a system dump. (kexec-tools
    prepares the vmcore header, like on x86-64)
 b) which ranges of a virtio-mem device are currently plugged in the
    hypervisor and, therefore, should be added to the dump. (vmcore header
    gets prepared by the crashkernel, like on s390x)
 Note that loading virtio-mem in kdump environments currently fails with
 -EBUSY -- but there are plans to install proper hooks instead to support
  especially a) in the near future.

1) and 2) are only really effective when memory hotplug is configured to
automatically online all added system RAM in the kernel (and not late,
via udev rules): e.g., via "mhp_default_state=online" on the kernel
cmdline or via CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE in the kernel.
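
For reference, whether the kernel onlines hot-plugged memory blocks itself can be inspected and changed at runtime through the generic memory-hotplug sysfs interface. A minimal sketch (the paths below are standard kernel sysfs, nothing in this PR touches them):

```sh
# "offline" means onlining of newly added blocks is left to user space
# (typically a udev rule), which runs too late for cases 1) and 2) above.
cat /sys/devices/system/memory/auto_online_blocks

# Have the kernel online hot-plugged blocks itself, as soon as they appear:
echo online > /sys/devices/system/memory/auto_online_blocks

# Boot-time equivalents, as mentioned in the commit message:
#   kernel command line:  mhp_default_state=online
#   kernel config:        CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE=y
```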

Especially 2) and 3) require the module to be present inside the initial
ramdisk. The primary use case for including it in the initial ramdisk
is 3).

Signed-off-by: David Hildenbrand <[email protected]>
(cherry picked from commit f3dcb60)

Cherry-picked from: f3dcb60
Resolves: #2228422
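
The change itself amounts to listing virtio_mem next to the other virtio drivers that the qemu dracut module pulls into the initial ramdisk. A rough sketch of what such an installkernel() hook looks like, assuming the usual 90qemu module layout; the surrounding driver list is illustrative, not the literal diff:

```sh
# modules.d/90qemu/module-setup.sh (sketch, not the exact change)
installkernel() {
    # Install the virtio drivers into the initramfs even in hostonly mode;
    # virtio_mem is the module this PR adds.
    hostonly='' instmods \
        virtio_blk virtio_scsi virtio_console virtio_rng \
        virtio_mem
}
```

Whether the module actually landed in a built image can then be checked with lsinitrd, e.g. `lsinitrd /boot/initramfs-$(uname -r).img | grep virtio_mem`.
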
@lnykryn merged commit 883ad44 into redhat-plumbers:main on Aug 3, 2023
13 checks passed
@lnykryn deleted the bz2228422 branch on August 3, 2023 10:45