
Investigate use of 1GB or 2MB EPT mappings for MMIO #29

Open
tandasat opened this issue Sep 1, 2016 · 3 comments

Comments

tandasat (Owner) commented Sep 1, 2016

Description

HyperPlatform pre-allocates 4KB EPT entries and assigns them to physical addresses (PA) used for memory-mapped I/O. This design has two major disadvantages: code complexity and limited support for access to such PA ranges.
#19 identified a way to enumerate such PA ranges (NB: the idea has not been tested yet) so that HyperPlatform could allocate EPT entries for those ranges up front and get rid of the pre-allocation code. However, the PA ranges reported in the way described in #19 are too large to cover with EPT entries if only 4KB mappings are used.

This issue is to investigate using 1GB or 2MB EPT mappings to overcome that challenge and simplify the code.

Note that using 1GB or 2MB mappings for normal (RAM-backed) PA pages is not a goal, since we want to retain the ability to control PA access rights at fine (i.e., 4KB) granularity for VMI. 2MB mappings could reduce the number of EPT entries to manipulate, but they would introduce non-negligible complexity.
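For reference, this is roughly what a 2MB EPT PDE looks like per the Intel SDM layout. This is only a sketch; the field names are illustrative and are not HyperPlatform's actual EPT types.

```cpp
#include <cstdint>

// Sketch of a 2MB EPT PDE (large_page == 1), following the Intel SDM layout.
// Field names are illustrative, not HyperPlatform's EptCommonEntry.
union Ept2MbPde {
  std::uint64_t all;
  struct {
    std::uint64_t read_access : 1;     // bit 0
    std::uint64_t write_access : 1;    // bit 1
    std::uint64_t execute_access : 1;  // bit 2
    std::uint64_t memory_type : 3;     // bits 5:3 (0 = UC, 6 = WB)
    std::uint64_t ignore_pat : 1;      // bit 6
    std::uint64_t large_page : 1;      // bit 7, must be 1 for a 2MB mapping
    std::uint64_t accessed : 1;        // bit 8 (if A/D bits are enabled)
    std::uint64_t dirty : 1;           // bit 9 (if A/D bits are enabled)
    std::uint64_t reserved1 : 11;      // bits 20:10 (bits 20:12 must be 0)
    std::uint64_t pfn_2mb : 31;        // bits 51:21, 2MB-aligned PA >> 21
    std::uint64_t reserved2 : 12;      // bits 63:52 (ignored / suppress #VE)
  } fields;
};
static_assert(sizeof(Ept2MbPde) == sizeof(std::uint64_t), "unexpected size");
```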

tandasat (Owner, Author) commented Sep 5, 2016

One idea is to map the entire 512GB (or whatever Windows supports) PA space, except for the ranges backed by RAM, using 1GB and 2MB mappings where possible. Unfortunately, this still requires a fairly large number of EPT entries. For example, filling the non-RAM-backed memory ranges on my laptop with 8GB of RAM needs the following numbers of entries:

07:23:37.223    DBG #2      4    7656   System          Physical Memory Range: 0000000000015000 - 0000000000058000 .. 0x15 4KB-entries (for 0x0000 - 0x14000)
07:23:37.224    DBG #2      4    7656   System          Physical Memory Range: 0000000000059000 - 000000000008f000 .. 1 4KB-entry (for 0x58000 - 0x59000)
07:23:37.226    DBG #2      4    7656   System          Physical Memory Range: 0000000000090000 - 000000000009f000 .. 1 4KB-entry ...
07:23:37.227    DBG #3      4    7656   System          Physical Memory Range: 0000000000100000 - 0000000000102000 .. 1 4KB-entry
07:23:37.229    DBG #2      4    7656   System          Physical Memory Range: 0000000000103000 - 000000008cd14000 .. 1 4KB-entry
07:23:37.230    DBG #1      4    7656   System          Physical Memory Range: 000000008cd53000 - 000000008cd64000 .. 0x1f 4KB-entries and 1 2MB-entry
07:23:37.232    DBG #0      4    7656   System          Physical Memory Range: 000000008cd8f000 - 000000008cf5e000 .. 0xb 4KB-entries and 1 2MB-entry
07:23:37.233    DBG #0      4    7656   System          Physical Memory Range: 000000008cff0000 - 000000008d000000 .. 0x12 4KB-entries and 4 2MB-entries
07:23:37.235    DBG #0      4    7656   System          Physical Memory Range: 0000000100000000 - 000000026f600000 .. 0x98 2MB-entries and 3 1GB-entries

Plus the entries needed to fill from 0x000000026f600000 up to wherever we want to cover.

This is clearly much larger than the current number of pre-allocated entries (50), and not exactly small in a general sense either.
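The counts above follow from a simple greedy split over each non-RAM-backed gap: take a naturally aligned 1GB entry when one fits, otherwise a 2MB entry, otherwise fall back to 4KB. A minimal sketch of that counting logic, assuming half-open [start, end) ranges (not HyperPlatform code; the names are illustrative):

```cpp
#include <cstdint>

namespace {
constexpr std::uint64_t k4Kb = 0x1000;
constexpr std::uint64_t k2Mb = 0x200000;
constexpr std::uint64_t k1Gb = 0x40000000;
}  // namespace

// Number of EPT entries of each size needed to cover a PA range when each
// entry must be naturally aligned (1GB entries on 1GB boundaries, etc).
struct EptEntryCount {
  std::uint64_t entries_4kb = 0;
  std::uint64_t entries_2mb = 0;
  std::uint64_t entries_1gb = 0;
};

EptEntryCount CountEptEntries(std::uint64_t start, std::uint64_t end) {
  EptEntryCount count{};
  for (std::uint64_t pa = start; pa < end;) {
    if (pa % k1Gb == 0 && pa + k1Gb <= end) {
      ++count.entries_1gb;
      pa += k1Gb;
    } else if (pa % k2Mb == 0 && pa + k2Mb <= end) {
      ++count.entries_2mb;
      pa += k2Mb;
    } else {
      ++count.entries_4kb;
      pa += k4Kb;
    }
  }
  return count;
}
```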

@ionescu007 let me know your thoughts.

ionescu007 commented

246 entries, if my count is right. That's more than the pre-allocated entries, but not that much more... and it avoids all the logic needed to handle pre-allocated entries, and it doesn't kill the system when the limit of 50 (or whatever hard-coded value is set) is reached.

I don't know -- I think the approach is cleaner this way. Another option would be for the OS driver and the hypervisor to communicate with each other and pass physical memory from the OS to the hypervisor -- perhaps a large chunk initially (a hypervisor heap), and then a mechanism for the hypervisor to request more as needed (based on, say, a timer event)... this is how I believe Hyper-V does it. But that's potentially a lot of code :)

One thing I realized... right now, are you setting MMIO regions to "WriteBack"? Shouldn't such regions be Uncacheable instead? Not sure about this.
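For context on where that setting lives: the memory type of a leaf EPT entry is encoded in bits 5:3 (0 = UC, 6 = WB per the Intel SDM), and a real implementation would presumably consult the MTRRs rather than hard-code either value. A hypothetical sketch only; ChooseEptMemoryType is not an existing HyperPlatform function:

```cpp
#include <cstdint>

// EPT memory type values (bits 5:3 of a leaf EPT entry), per the Intel SDM.
enum class EptMemoryType : std::uint64_t {
  kUncacheable = 0,  // UC - what MMIO regions normally want
  kWriteBack = 6,    // WB - appropriate for RAM-backed pages
};

// Hypothetical helper: pick an EPT memory type for a physical page.
// A production implementation would derive this from the MTRRs (and PAT)
// instead of relying solely on whether the page is RAM-backed.
inline EptMemoryType ChooseEptMemoryType(bool is_ram_backed) {
  return is_ram_backed ? EptMemoryType::kWriteBack
                       : EptMemoryType::kUncacheable;
}
```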

tandasat (Owner, Author) commented Sep 6, 2016

I thought 250+ entries were a lot, but on second thought, it may not be as bad as I initially felt, especially because the code architecture can be simpler and an allocation failure just leads to a load failure.

On "WriteBack", good question. I am not sure neither but will check it at #31. Thanks for your keen eyes!
