
CVM Guest VSM: refactor the shared/encrypted bitmaps to be partition-wide #1370


Open
sluck-msft wants to merge 3 commits into main from gvsm/partition-wide-visibility-tracking
Conversation

sluck-msft (Contributor)

In preparation for VTL 1 memory support for CVMs, make the shared/encrypted bitmap tracking available at the partition level, rather than in the GuestMemoryMapping (which ends up being per-VTL). This also includes some refactoring to isolate the bitmap logic so that it can be reused for VTL protection bitmaps.

Tested: SNP boots with and without guest VSM.
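As a rough sketch of the shape of this change (not the actual implementation: the type names `GuestValidMemory`, `GuestMemoryMapping`, and `GuestPartitionMemoryBuilder` come from the diff below, but the fields and methods here are assumptions made for illustration), the partition allocates the bitmap once and hands shared references to each per-VTL mapping:

```rust
use std::sync::Arc;

// Hypothetical stand-ins for the real types; layouts are assumptions.
struct GuestValidMemory {
    // One bit per page; the real type tracks shared/encrypted validity.
    bitmap: Vec<u64>,
}

struct GuestMemoryMapping {
    // The per-VTL mapping now references the partition-wide bitmap
    // instead of owning its own copy.
    valid_memory: Option<Arc<GuestValidMemory>>,
}

struct GuestPartitionMemoryBuilder {
    // Allocated once per partition when bitmap tracking is needed.
    partition_valid_memory: Option<Arc<GuestValidMemory>>,
}

impl GuestPartitionMemoryBuilder {
    fn build_vtl_mapping(&self) -> GuestMemoryMapping {
        GuestMemoryMapping {
            // Every VTL's mapping shares the same partition-wide bitmap.
            valid_memory: self.partition_valid_memory.clone(),
        }
    }
}

fn main() {
    let builder = GuestPartitionMemoryBuilder {
        partition_valid_memory: Some(Arc::new(GuestValidMemory { bitmap: vec![0; 16] })),
    };
    let vtl0 = builder.build_vtl_mapping();
    let vtl1 = builder.build_vtl_mapping();
    // Both VTL mappings point at the same underlying bitmap allocation.
    assert!(Arc::ptr_eq(
        vtl0.valid_memory.as_ref().unwrap(),
        vtl1.valid_memory.as_ref().unwrap()
    ));
}
```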

@sluck-msft force-pushed the gvsm/partition-wide-visibility-tracking branch from 7649ac4 to 1ae1a22 on May 20, 2025 at 22:20
}

impl<'a> GuestPartitionMemoryBuilder<'a> {
/// valid_bitmap_state is valid when allocating a tracking bitmap is needed
Member

nit: usually we markdown-format variables and input parameters with backticks, so it would look something like `valid_bitmap_state`, since that renders nicer in rustdoc.
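Applied to the doc comment under review, the suggestion would look like:

```rust
/// `valid_bitmap_state` is valid when allocating a tracking bitmap is needed
```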

})
}

pub fn partition_valid_memory(&self) -> Option<Arc<GuestValidMemory>> {
Member

missing doc comments on these two?
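A minimal sketch of the kind of doc comment being asked for; `PartitionMemory` is a hypothetical stand-in used only to make the example compile, and the wording is an assumption rather than the actual documentation:

```rust
use std::sync::Arc;

struct GuestValidMemory; // hypothetical stand-in for the real type

struct PartitionMemory {
    // hypothetical container used only for this illustration
    partition_valid_memory: Option<Arc<GuestValidMemory>>,
}

impl PartitionMemory {
    /// Returns the partition-wide bitmap tracking which pages are currently
    /// valid, if bitmap tracking was allocated for this partition.
    pub fn partition_valid_memory(&self) -> Option<Arc<GuestValidMemory>> {
        self.partition_valid_memory.clone()
    }
}

fn main() {
    let mem = PartitionMemory { partition_valid_memory: None };
    assert!(mem.partition_valid_memory().is_none());
}
```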

}

pub fn update_valid(&self, range: MemoryRange, state: bool) {
let _lock = self.valid_bitmap_lock.lock();
Member

why is this lock here when we don't use it at all when checking validity? what's the point?

Member

the bitmap update is already done atomically with read_at/write_at - so why the lock? don't we already have TOCTOU issues if one VP reads the bitmap before an update call on another VP completes?

sluck-msft (Contributor, Author) commented on May 23, 2025

This lock existed in GuestMemoryMapping already, so this change only moves it around. I think the lock just helps guarantee consistency around the entire update operation, but John's RCU change should help with synchronizing those writes with reading the bitmaps.
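A minimal sketch of the pattern being discussed, assuming one bit per page and an internal mutex; this illustrates the locking trade-off and is not the actual `GuestValidMemory`/`GuestMemoryMapping` code. Readers stay lock-free, each bit flip is atomic on its own, and the lock only keeps two concurrent range updates from interleaving, which is why a reader can still race an in-flight update as noted above:

```rust
use std::ops::Range;
use std::sync::atomic::{AtomicU8, Ordering};
use std::sync::Mutex;

// Hypothetical sketch for illustration only.
struct ValidBitmap {
    bitmap: Vec<AtomicU8>,        // one bit per page
    valid_bitmap_lock: Mutex<()>, // serializes concurrent range updates
}

impl ValidBitmap {
    fn new(pages: usize) -> Self {
        Self {
            bitmap: (0..pages.div_ceil(8)).map(|_| AtomicU8::new(0)).collect(),
            valid_bitmap_lock: Mutex::new(()),
        }
    }

    /// Lock-free check of a single page's bit.
    fn check_valid(&self, page: usize) -> bool {
        self.bitmap[page / 8].load(Ordering::Relaxed) & (1 << (page % 8)) != 0
    }

    /// Range update: each bit flip is atomic, and the lock keeps two
    /// overlapping range updates from interleaving with each other.
    fn update_valid(&self, pages: Range<usize>, state: bool) {
        let _lock = self.valid_bitmap_lock.lock().unwrap();
        for page in pages {
            let bit = 1u8 << (page % 8);
            if state {
                self.bitmap[page / 8].fetch_or(bit, Ordering::Relaxed);
            } else {
                self.bitmap[page / 8].fetch_and(!bit, Ordering::Relaxed);
            }
        }
    }
}

fn main() {
    let bitmap = ValidBitmap::new(64);
    bitmap.update_valid(0..16, true);
    assert!(bitmap.check_valid(5));
    assert!(!bitmap.check_valid(20));
}
```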

registrar: Option<MemoryRegistrar<MshvVtlWithPolicy>>,
}

#[derive(Debug)]
struct GuestMemoryBitmap {
Member

doc comments?
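For illustration, a hypothetical doc comment for the refactored helper; the wording is an assumption, tied to the reuse goal stated in the PR description:

```rust
// Hypothetical wording, for illustration only; the real struct is in the diff above.

/// A page-granular bitmap over guest memory, factored out of
/// `GuestMemoryMapping` so that the same logic can back the shared/encrypted
/// tracking bitmaps and, later, VTL protection bitmaps.
#[derive(Debug)]
struct GuestMemoryBitmap {
    // Placeholder field for this sketch only.
    bits: Vec<u8>,
}

fn main() {
    // Just demonstrates that the sketch compiles and derives Debug.
    println!("{:?}", GuestMemoryBitmap { bits: vec![0; 8] });
}
```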

@sluck-msft added the backport_2505 label (Change should be backported to the release/2505 branch) on May 24, 2025