
Commit 0f64c34

1 parent e63e1a6 commit 0f64c34

2 files changed: 45 additions, 0 deletions


kernel.spec.in

Lines changed: 2 additions & 0 deletions
@@ -157,6 +157,8 @@ Patch61: xen-events-Add-wakeup-support-to-xen-pirq.patch
 Patch62: xen-pm-use-suspend.patch
 Patch63: xen-pciback-pm-suspend.patch
 
+Patch99: test.patch
+
 %description
 Qubes Dom0 kernel.
 
test.patch

Lines changed: 43 additions & 0 deletions
@@ -0,0 +1,43 @@
On Fri, Jan 03, 2025 at 02:00:44PM +0100, Borislav Petkov wrote:
> Adding the author in Fixes to Cc

Thanks, Boris!

> On Fri, Jan 03, 2025 at 07:56:31AM +0100, Juergen Gross wrote:
> > The recently introduced ROX cache for modules is assuming large page
> > support in 64-bit mode without testing the related feature bit. This
> > results in breakage when running as a Xen PV guest, as in this mode
> > large pages are not supported.

The ROX cache does not assume support for large pages, it just had a bug
when dealing with base pages and the patch below should fix it.

Restricting ROX cache only for configurations that support large pages
makes sense on it's own because there's no real benefit from the cache on
such systems, but it does not fix the issue but rather covers it up.

diff --git a/mm/execmem.c b/mm/execmem.c
index be6b234c032e..0090a6f422aa 100644
--- a/mm/execmem.c
+++ b/mm/execmem.c
@@ -266,6 +266,7 @@ static int execmem_cache_populate(struct execmem_range *range, size_t size)
 {
 	unsigned long vm_flags = VM_ALLOW_HUGE_VMAP;
 	unsigned long start, end;
+	unsigned int page_shift;
 	struct vm_struct *vm;
 	size_t alloc_size;
 	int err = -ENOMEM;
@@ -296,8 +297,9 @@ static int execmem_cache_populate(struct execmem_range *range, size_t size)
 	if (err)
 		goto err_free_mem;
 
+	page_shift = get_vm_area_page_order(vm) + PAGE_SHIFT;
 	err = vmap_pages_range_noflush(start, end, range->pgprot, vm->pages,
-				       PMD_SHIFT);
+				       page_shift);
 	if (err)
 		goto err_free_mem;

--
2.45.2

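To see why the one-line change in the patch is enough: before the fix, execmem_cache_populate() always passed PMD_SHIFT to vmap_pages_range_noflush(), i.e. it claimed the backing pages were 2 MiB huge pages even when the allocation had fallen back to 4 KiB base pages, as it does in a Xen PV guest where large pages are unavailable. Deriving page_shift from the vm area's actual page order keeps PMD granularity when huge pages were really used and drops to base-page granularity otherwise. The snippet below is only a standalone sketch of that arithmetic with hard-coded x86-64 constants; vm_area_page_order() here is an illustrative stand-in, not the kernel helper used in the patch.

/* sketch.c - standalone illustration of the page_shift arithmetic, not kernel code */
#include <stdio.h>

#define PAGE_SHIFT 12u   /* 4 KiB base pages on x86-64 */
#define PMD_SHIFT  21u   /* 2 MiB huge pages on x86-64 */

/* Illustrative stand-in for the backing page order of a vm area. */
static unsigned int vm_area_page_order(int backed_by_huge_pages)
{
        /* order 9: 2^9 base pages = 2 MiB; order 0: plain 4 KiB pages */
        return backed_by_huge_pages ? PMD_SHIFT - PAGE_SHIFT : 0;
}

int main(void)
{
        for (int huge = 0; huge <= 1; huge++) {
                unsigned int page_shift = vm_area_page_order(huge) + PAGE_SHIFT;

                /*
                 * The pre-fix code effectively hard-coded page_shift to
                 * PMD_SHIFT (21), which is only correct in the huge-page case.
                 */
                printf("order %u -> map at 2^%u bytes per page (%s)\n",
                       vm_area_page_order(huge), page_shift,
                       page_shift == PMD_SHIFT ? "2 MiB" : "4 KiB");
        }
        return 0;
}

Compiled and run, the sketch prints 2^12 for the base-page case and 2^21 for the huge-page case, which is exactly the distinction the hard-coded PMD_SHIFT was erasing.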