This repository has been archived by the owner on Jul 16, 2024. It is now read-only.

Livepatch backporting from Openeuler-kernel #141

Open
wants to merge 52 commits into base: linux-5.10.y

Conversation

bobolmw
Collaborator

@bobolmw bobolmw commented Jan 24, 2022

No description provided.

Cheng Jian and others added 30 commits January 16, 2022 11:06
euler inclusion
category: feature
bugzilla: 51921
CVE: NA

----------------------------------------

support for livepatch without ftrace mode

new config for WO_FTRACE
	CONFIG_LIVEPATCH_WO_FTRACE=y
	CONFIG_LIVEPATCH_STACK=y

Implement livepatch without ftrace by direct jump: under stop_machine we
overwrite the first few instructions of the old function (usually one, but
four for long jumps on ARM64) with jump instructions, so execution jumps to
the first address of the new function once the livepatch is enabled.
KERNEL/MODULE
call/bl A---------------old_A------------
                        | jump new_A----+--------|
                        |               |        |
                        |               |        |
                        -----------------        |
                                                 |
                                                 |
                                                 |
livepatch_module-------------                    |
|                           |                    |
|new_A <--------------------+--------------------|
|                           |
|                           |
|---------------------------|
| .plt                      |
| ......PLTS for livepatch  |
-----------------------------

Things we need to consider for each architecture:

1. the jump instruction encoding
2. the partial relocations the new function requires for livepatch
3. long jumps may be required if the jump target exceeds the branch
   offset range, both for livepatch relocation and for livepatch enable
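
As an illustration only (none of this is verbatim from the series), the arm64 enable path boils down to something like the sketch below; klp_patch_text_sketch and the args struct are placeholder names, while aarch64_insn_gen_branch_imm() and aarch64_insn_patch_text_nosync() are existing arm64 helpers:

	/*
	 * Illustrative sketch: patch a direct branch at the start of the old
	 * function while every CPU is parked in stop_machine(). Only the
	 * short-range case is shown; a long jump needs four instructions.
	 */
	#include <linux/stop_machine.h>
	#include <asm/insn.h>

	struct klp_patch_args_sketch {		/* hypothetical */
		void *old_func;
		void *new_func;
	};

	static int klp_patch_text_sketch(void *data)
	{
		struct klp_patch_args_sketch *args = data;
		u32 insn;

		insn = aarch64_insn_gen_branch_imm((unsigned long)args->old_func,
						   (unsigned long)args->new_func,
						   AARCH64_INSN_BRANCH_NOLINK);
		return aarch64_insn_patch_text_nosync(args->old_func, insn);
	}

	/* enable path: stop_machine(klp_patch_text_sketch, &args, cpu_online_mask); */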

Signed-off-by: Cheng Jian <[email protected]>
Reviewed-by: Li Bin <[email protected]>
Signed-off-by: Yang Yingliang <[email protected]>

Signed-off-by: Wang ShaoBo <[email protected]>
Signed-off-by: Dong Kai <[email protected]>

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
euler inclusion
category: feature
bugzilla: 51921
CVE: N/A

----------------------------------------

livepatch wo_ftrace and kprobes conflict, because a kprobe may modify
instructions anywhere in a function.

So it is dangerous to patch/unpatch a function while kprobes are
registered on it. Restrict this situation.

We should hold kprobe_mutex in klp_check_patch_kprobed, but it is static
and cannot be exported, so run klp_check_patch_probe under stop_machine
to avoid kprobes being registered while patching.

We do nothing about (un)registering kprobes on an (old) function that has
already been patched, because some engineers need this. It will not lead
to hangs, but it is not recommended.
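
A minimal sketch of the kind of check described above, assuming the patched range is [old_func, old_func + old_size); the function name is illustrative, get_kprobe() is the existing kprobes lookup helper:

	/*
	 * Illustrative sketch: refuse to (un)patch a function while a kprobe
	 * is registered anywhere inside it; run it under stop_machine() so
	 * new kprobes cannot race with the scan.
	 */
	#include <linux/errno.h>
	#include <linux/kprobes.h>
	#include <linux/printk.h>
	#include <linux/types.h>

	static int klp_check_patch_kprobed_sketch(void *old_func,
						  unsigned long old_size)
	{
		unsigned long addr;

		for (addr = (unsigned long)old_func;
		     addr < (unsigned long)old_func + old_size;
		     addr += sizeof(u32)) {	/* instruction step is arch specific */
			if (get_kprobe((void *)addr)) {
				pr_err("kprobe registered in patched range at %pS\n",
				       (void *)addr);
				return -EBUSY;
			}
		}
		return 0;
	}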

Signed-off-by: Cheng Jian <[email protected]>
Reviewed-by: Li Bin <[email protected]>
Signed-off-by: Yang Yingliang <[email protected]>

Signed-off-by: Wang ShaoBo <[email protected]>

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
euler inclusion
category: feature
bugzilla: 51921
CVE: N/A

----------------------------------------

In the previous version we forced the association between
livepatch wo_ftrace and stop_machine. This is unwise and
obviously confusing.

commit d83a7cb ("livepatch: change to a per-task
consistency model") introduce a PER-TASK consistency model.
It's a hybrid of kGraft and kpatch: it uses kGraft's per-task
consistency and syscall barrier switching combined with
kpatch's stack trace switching. There are also a number of
fallback options which make it quite flexible.

So we split livepatch consistency without ftrace into two models:
[1] PER-TASK consistency model.
per-task consistency and syscall barrier switching combined with
kpatch's stack trace switching.

[2] STOP-MACHINE consistency model.
stop-machine consistency and kpatch's stack trace switching.

Signed-off-by: Cheng Jian <[email protected]>
Reviewed-by: Li Bin <[email protected]>
Signed-off-by: Yang Yingliang <[email protected]>

Signed-off-by: Wang ShaoBo <[email protected]>
Signed-off-by: Dong Kai <[email protected]>

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
euler inclusion
category: feature
bugzilla: 51921
CVE: N/A

----------------------------------------

The kpatch-build front-end tool supported load and unload hooks in older
versions and has since switched to pre/post callbacks after commit
93862e3 ("livepatch: add (un)patch callbacks").

However, for livepatch based on stop_machine consistency, these callbacks
would be called within stop_machine context. This is dangerous because we
cannot know what the user will do in the callbacks; it may crash the
system if any function that might sleep internally is used.

Here we use the old load/unload hooks to allow user-defined hooks.
Although this is not as good as pre/post callbacks, it meets user needs
to some extent. Of course, this requires cooperation from the
kpatch-build tool.
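
The shape of such hooks in a patch module might look like the sketch below; the struct klp_hook type and the load_hooks/unload_hooks fields are assumptions about this series, not upstream livepatch API:

	/* Illustrative only: hypothetical load/unload hook wiring in a patch
	 * module. The struct klp_hook type and the load_hooks/unload_hooks
	 * fields are assumptions about this series, not upstream API. */
	#include <linux/livepatch.h>
	#include <linux/printk.h>

	static void demo_load_hook(void)
	{
		pr_info("livepatch demo: load hook\n");
	}

	static void demo_unload_hook(void)
	{
		pr_info("livepatch demo: unload hook\n");
	}

	static struct klp_hook demo_load_hooks[] = {
		{ .hook = demo_load_hook },
		{ }
	};

	static struct klp_hook demo_unload_hooks[] = {
		{ .hook = demo_unload_hook },
		{ }
	};

	static struct klp_object objs[] = {
		{
			/* .name = NULL means vmlinux; .funcs defined as usual */
			.load_hooks   = demo_load_hooks,
			.unload_hooks = demo_unload_hooks,
		}, { }
	};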

Signed-off-by: Cheng Jian <[email protected]>
Signed-off-by: Yang Yingliang <[email protected]>
Signed-off-by: Wang ShaoBo <[email protected]>

Signed-off-by: Dong Kai <[email protected]>

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
hulk inclusion
category: feature
bugzilla: 51921
CVE: NA

-----------------------------------------------

kpatch-build processes the __jump_table special section, and only the
jump_label entries used by the changed functions are included in the
__jump_table section, so livepatch should process the tracepoints again
after the dynamic relocation.

NOTE: adding new tracepoints definition is not supported.

Signed-off-by: Cheng Jian <[email protected]>
Reviewed-by: Xie XiuQi <[email protected]>
Signed-off-by: Yang Yingliang <[email protected]>

Signed-off-by: Wang ShaoBo <[email protected]>
Signed-off-by: Dong Kai <[email protected]>

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
hulk inclusion
category: feature
bugzilla: 51921
CVE: N/A

----------------------------------------

support livepatch without ftrace for ARM64

supported now:
        livepatch relocation when init_patch after load_module;
        instruction patched when enable;
        activeness function check;
        enforcing the patch stacking principle;
        long jump (both livepatch relocation and insn patched)
        module plts request by livepatch-relocation

Signed-off-by: Cheng Jian <[email protected]>
Reviewed-by: Li Bin <[email protected]>
Signed-off-by: Yang Yingliang <[email protected]>

Signed-off-by: Wang ShaoBo <[email protected]>
Signed-off-by: Dong Kai <[email protected]>

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
euler inclusion
category: feature
bugzilla: 51921
CVE: N/A

----------------------------------------

We need to modify the first 4 instructions of a livepatched function to
complete the long jump if the offset is out of short range. So it is
important that the function has at least 4 instructions, and we check
this when the livepatch module is insmod'ed.

In fact, this corner case is highly unlikely to occur on arm64, but it is
still an effective and meaningful check to avoid a crash.
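
A sketch of the insmod-time check being described, assuming the limit is LJMP_INSN_SIZE (4) instructions and that sizes are compared in bytes; the helper name is illustrative:

	/* Illustrative sketch of the arm64 insmod-time size check. */
	#include <linux/errno.h>
	#include <linux/printk.h>
	#include <linux/types.h>

	#define LJMP_INSN_SIZE	4	/* instructions needed for a long jump */

	static int klp_check_func_size_sketch(unsigned long old_size)
	{
		if (old_size < LJMP_INSN_SIZE * sizeof(u32)) {
			pr_err("func size %lu less than limit %zu\n",
			       old_size, LJMP_INSN_SIZE * sizeof(u32));
			return -EPERM;
		}
		return 0;
	}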

Signed-off-by: Cheng Jian <[email protected]>
Reviewed-by: Li Bin <[email protected]>
Signed-off-by: Yang Yingliang <[email protected]>

Signed-off-by: Wang ShaoBo <[email protected]>
Signed-off-by: Dong Kai <[email protected]>

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
hulk inclusion
category: bugfix
bugzilla: 51921
CVE: NA

-------------------------------------------------

We use stack checking to ensure the consistency of livepatch. A task
blocked in __switch_to when switched out has its FP and PC stored by
thread_saved_fp/pc, which is useful when tracing blocked threads. For a
running task, __builtin_frame_address can be used, but it is difficult to
backtrace running tasks on other CPUs.

Fortunately, all CPUs stay in this function, so the backtrace of each
CPU's current task is very similar; just backtrace current on this CPU
and skip the current tasks of the other CPUs.

Signed-off-by: Cheng Jian <[email protected]>
Reviewed-by: Li Bin <[email protected]>
Signed-off-by: Yang Yingliang <[email protected]>

Signed-off-by: Wang ShaoBo <[email protected]>
Signed-off-by: Dong Kai <[email protected]>

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
hulk inclusion
category: bugfix
bugzilla: 51921
CVE: NA

---------------------------

When doing consistency stack checking, if we try to patch a function
which has already been patched, we should check the new function (not
the original one) that is currently active; it is always the first entry
in the list func_node->func_stack.

Example :
	module : origin			livepatch v1		livepatch v2
	func   : old func A -[enable]=> new func A' -[enable]=> new func A''
	check  :		A			A'

When we try to patch function A to new function A'' with livepatch v2,
but function A has already been patched to A' by livepatch v1, it is A'
from livepatch v1 that is active on the stack instead of the original
function A. Even if the long jump method is used, we jump to the new
function A' with a call that does not save LR, so the original function A
will not appear on the stack. We must therefore check the active
function A' during consistency stack checking.
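
In code terms, the check described above amounts to looking at the first entry of func_node->func_stack rather than at the original function; the sketch below is illustrative and assumes the func_node/func_stack layout used by this series:

	/* Illustrative: pick the address range that is actually active now. */
	#include <linux/list.h>
	#include <linux/livepatch.h>

	static void klp_pick_check_target_sketch(struct klp_func_node *func_node,
						 struct klp_func *func,
						 unsigned long *func_addr,
						 unsigned long *func_size)
	{
		struct klp_func *prev;

		if (list_empty(&func_node->func_stack)) {
			/* never patched: check the original function */
			*func_addr = (unsigned long)func->old_func;
			*func_size = func->old_size;
		} else {
			/* already patched: check the newest active function A' */
			prev = list_first_entry(&func_node->func_stack,
						struct klp_func, stack_node);
			*func_addr = (unsigned long)prev->new_func;
			*func_size = prev->new_size;
		}
	}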

Reviewed-By: Xie XiuQi <[email protected]>
Signed-off-by: Cheng Jian <[email protected]>
Signed-off-by: Yang Yingliang <[email protected]>

Signed-off-by: Wang ShaoBo <[email protected]>

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
euler inclusion
category: feature
Bugzilla: 51921
CVE: N/A

----------------------------------------

Currently arm64 does not support DYNAMIC_FTRACE_WITH_REGS or
RELIABLE_STACKTRACE; the first is necessary to implement livepatch with
ftrace and the second to implement per-task consistency.

So arm64 only supports LIVEPATCH_WO_FTRACE with STOP_MACHINE_CONSISTENCY,
while other architectures can work under LIVEPATCH_FTRACE with
PER_TASK_CONSISTENCY. Add the Kconfig dependencies to avoid incorrect
configurations.

Signed-off-by: Cheng Jian <[email protected]>
Reviewed-by: Li Bin <[email protected]>
Signed-off-by: Yang Yingliang <[email protected]>

Signed-off-by: Dong Kai <[email protected]>

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
hulk inclusion
category: feature
bugzilla: 51921
CVE: NA

---------------------------

We are planning to add livepatch without ftrace support for arm in the
next commit. However, after commit 425595a ("livepatch: reuse module
loader code to write relocations") was merged, the klp relocations are
done by the apply_relocate function.

The mod->arch.{core,init}.plt pointers were problematic for livepatch
because they pointed within temporary section headers (provided by the
module loader via info->sechdrs) that would be freed after module load.

Here we apply the same modification as commit c8ebf64 ("arm64/module:
use plt section indices for relocations") to solve this.

Signed-off-by: Dong Kai <[email protected]>

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
hulk inclusion
category: feature
bugzilla: 51923
CVE: NA

---------------------------

In the older version of the livepatch implementation without ftrace on
arm, klp_relocs was used and special relocation was done for klp symbols.
The kpatch-build front-end tool used the kpatch flow to generate
klp_relocs.

After commit 7c8e2bd ("livepatch: Apply vmlinux-specific KLP
relocations early") and commit 425595a ("livepatch: reuse module
loader code to write relocations"), the mainline klp relocation flow
always uses ".klp.rela." sections and kpatch-build uses the klp flow to
generate the klp module.

The default klp_apply_section_relocs is only for 64-bit modules with rela
support. Because CONFIG_MODULES_USE_ELF_REL is set on arm, we modify the
klp relocation to support 32-bit modules using rel. The kpatch-build
front-end tool should also be adapted to support this.

Signed-off-by: Dong Kai <[email protected]>

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
euler inclusion
category: feature
bugzilla: 51923
CVE: N/A

----------------------------------------

support livepatch without ftrace for ARM

supported now:
        livepatch relocation when init_patch after load_module;
	instruction patched when enable;
	activeness function check;
	enforcing the patch stacking principle;

not supported (will not be fixed):
	long jump (both livepatch relocation and insn patching)
	module PLTs requested by livepatch relocation

Because CONFIG_ARM_MODULE_PLTS is not set on ARM, we do not need long
jumps or livepatch PLTs.

Signed-off-by: Cheng Jian <[email protected]>
Signed-off-by: Li Bin <[email protected]>
Tested-by: Cheng Jian <[email protected]>
Tested-by: Wang Feng <[email protected]>
Tested-by: Lin DingYu <[email protected]>
Reviewed-by: Xie XiuQi <[email protected]>
Signed-off-by: zhangyi (F) <[email protected]>

Signed-off-by: Dong Kai <[email protected]>

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
euler inclusion
category: bugfix
bugzilla: 51923
CVE: NA

---------------------------------

We use stack checking to ensure the consistency of livepatch. A task
blocked in __switch_to when switched out has its FP and PC stored by
thread_saved_fp/pc, which is useful when tracing blocked threads. For a
running task, __builtin_frame_address can be used, but it is difficult to
backtrace running tasks on other CPUs.

Fortunately, all CPUs stay in this function, so the backtrace of each
CPU's current task is very similar; just backtrace current on this CPU
and skip the current tasks of the other CPUs.

Signed-off-by: Cheng Jian <[email protected]>
Reviewed-by: Xie XiuQi <[email protected]>
Signed-off-by: zhangyi (F) <[email protected]>

Signed-off-by: Dong Kai <[email protected]>

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
euler inclusion
category: feature
bugzilla: 51923
CVE: N/A

----------------------------------------

The range of a direct jump on ARM is ±32 MB; jumps beyond this range
require a long-jump mechanism.

First -- long jump for relocations

If the jump target of a relocation exceeds this range, it needs to be
implemented with a long jump. But there is no function whose first
LJMP_INSN_SIZE instructions we can modify, as livepatch enable does, so
we use module PLTs to store the information; we therefore need enough
PLT entries to hold the symbols.

The .klp.rela.objname.secname sections store all symbols that livepatch
needs to relocate. Since commit 425595a ("livepatch: reuse module
loader code to write relocations") was merged, load_module can create
enough PLT entries for livepatch via module_frob_arch_sections.

However, the module loader only uses rel sections; this will be fixed in
the next commits and needs the kpatch-build front-end tool to be adapted.

Second -- long jump for calling the new function

We modify several instructions at the beginning of the old function into
jump instructions, thus completing the jump from the old function to the
new function. Unlike the relocation case, there are no PLT sections to
use here, so we use an LDR instruction to complete the long jump:

[PC+0]: ldr PC [PC+8]
[PC+4]: nop
[PC+8]: new_addr_to_jump
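
Illustratively, the enable path could emit that three-word sequence as in the sketch below; 0xe59ff000 encodes ldr pc, [pc, #0], and the helper name is hypothetical:

	/*
	 * Illustrative sketch of the ARM long-jump sequence above.
	 * 0xe59ff000 is "ldr pc, [pc, #0]"; since PC reads as the current
	 * instruction address + 8, it loads the literal in the third slot.
	 */
	#include <linux/string.h>
	#include <linux/types.h>

	static void klp_write_long_jump_sketch(u32 *old_insn, unsigned long new_addr)
	{
		u32 insns[3];

		insns[0] = 0xe59ff000;		/* ldr pc, [pc, #0]  */
		insns[1] = 0xe1a00000;		/* nop (mov r0, r0)  */
		insns[2] = (u32)new_addr;	/* literal: new_func */

		/* the real code must patch text safely and flush caches */
		memcpy(old_insn, insns, sizeof(insns));
	}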

Signed-off-by: Cheng Jian <[email protected]>
Reviewed-by: Bin Li <[email protected]>
Signed-off-by: zhangyi (F) <[email protected]>

Signed-off-by: Dong Kai <[email protected]>

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
hulk inclusion
category: bugfix
bugzilla: 51923
CVE: NA

---------------------------

When doing consistency stack checking, if we try to patch a function
which has already been patched, we should check the new function (not
the original one) that is currently active; it is always the first entry
in the list func_node->func_stack.

Example :
	module : origin			livepatch v1		livepatch v2
	func   : old func A -[enable]=> new func A' -[enable]=> new func A''
	check  :                A			A'

When we try to patch function A to new function A'' with livepatch v2,
but function A has already been patched to A' by livepatch v1, it is A'
from livepatch v1 that is active on the stack instead of the original
function A. Even if the long jump method is used, we jump to the new
function A' with a call that does not save LR, so the original function A
will not appear on the stack. We must therefore check the active
function A' during consistency stack checking.

Signed-off-by: Cheng Jian <[email protected]>
Reviewed-By: Xie XiuQi <[email protected]>
Signed-off-by: yangerkun <[email protected]>

Signed-off-by: Dong Kai <[email protected]>

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
hulk inclusion
category: feature
bugzilla: 51921
CVE: NA

---------------------------

After commit d556e1b ("livepatch: Remove module_disable_ro() usage"),
commit 0d9fbf7 ("module: Remove module_disable_ro()") and
commit e6eff43 ("module: Make module_enable_ro() static again") were
merged, module_disable_ro() was removed and module_enable_ro() was made
static.

This is fine for the x86/ppc platforms because livepatch module
relocation there is done by the text-poke function, which internally
modifies the text address through a remap to a high virtual address that
has write permission.

However, on the arm/arm64 platforms apply_relocate[_add] still modifies
the text code directly, so we should change the module text permission
before relocation. Otherwise it leads to the following problem:

  Unable to handle kernel write to read-only memory at virtual address ffff800008a95288
  Mem abort info:
  ESR = 0x9600004f
  EC = 0x25: DABT (current EL), IL = 32 bits
  SET = 0, FnV = 0
  EA = 0, S1PTW = 0
  Data abort info:
  ISV = 0, ISS = 0x0000004f
  CM = 0, WnR = 1
  swapper pgtable: 4k pages, 48-bit VAs, pgdp=000000004133c000
  [ffff800008a95288] pgd=00000000bdfff003, p4d=00000000bdfff003, pud=00000000bdffe003,
		     pmd=0000000080ce7003, pte=0040000080d5d783
  Internal error: Oops: 9600004f [#1] PREEMPT SMP
  Modules linked in: livepatch_testmod_drv(OK+) testmod_drv(O)
  CPU: 0 PID: 139 Comm: insmod Tainted: G           O  K   5.10.0-01131-gf6b4602e09b2-dirty #35
  Hardware name: linux,dummy-virt (DT)
  pstate: 80000005 (Nzcv daif -PAN -UAO -TCO BTYPE=--)
  pc : reloc_insn_imm+0x54/0x78
  lr : reloc_insn_imm+0x50/0x78
  sp : ffff800011cf3910
  ...
  Call trace:
   reloc_insn_imm+0x54/0x78
   apply_relocate_add+0x464/0x680
   klp_apply_section_relocs+0x11c/0x148
   klp_enable_patch+0x338/0x998
   patch_init+0x338/0x1000 [livepatch_testmod_drv]
   do_one_initcall+0x60/0x1d8
   do_init_module+0x58/0x1e0
   load_module+0x1fb4/0x2688
   __do_sys_finit_module+0xc0/0x128
   __arm64_sys_finit_module+0x20/0x30
   do_el0_svc+0x84/0x1b0
   el0_svc+0x14/0x20
   el0_sync_handler+0x90/0xc8
   el0_sync+0x158/0x180
   Code: 2a0503e0 9ad42a73 97d6a499 91000673 (b90002a0)
   ---[ end trace 67dd2ef1203ed335 ]---

Though the permission change is not necessary on the x86/ppc platforms,
considering that the jump_label_register API may also modify the text
code, we put the permission handling here instead of in the
arch-specific relocation.

Besides, the jump_label_module_nb callback invoked by jump_label_register
may also need to modify the module code: it sorts and swaps the jump
entries if necessary. So just disable ro before the jump_label handling
and restore it afterwards.
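
One way to express the idea (not necessarily how the series implements it) is a helper that flips the module core text writable around the relocation and jump_label work; set_memory_rw()/set_memory_ro() stand in here for the restored ro handling:

	/*
	 * Illustrative only: make the module core text writable around the
	 * relocation / jump_label work, then restore it afterwards.
	 */
	#include <linux/mm.h>
	#include <linux/module.h>
	#include <linux/set_memory.h>

	static void klp_module_text_rw_sketch(struct module *mod, bool writable)
	{
		unsigned long base = (unsigned long)mod->core_layout.base;
		int pages = PAGE_ALIGN(mod->core_layout.text_size) >> PAGE_SHIFT;

		if (writable)
			set_memory_rw(base, pages);
		else
			set_memory_ro(base, pages);
	}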

Signed-off-by: Dong Kai <[email protected]>

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
hulk inclusion
category: feature
bugzilla: 51919
CVE: NA

----------------------------------------

support livepatch without ftrace for x86_64

supported now:
        livepatch relocation when init_patch after load_module;
        instruction patched when enable;
	activeness function check;
	enforcing the patch stacking principle;

x86_64 uses variable-length instructions, so no extra implementation is
needed for long jumps.
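
As a sketch only (the series' actual helper names may differ), the x86_64 enable step is essentially writing a 5-byte jmp rel32 at the old function entry; text_poke() is the existing x86 text-patching helper:

	/* Illustrative sketch: 5-byte relative jmp from old_func to new_func. */
	#include <linux/string.h>
	#include <linux/types.h>
	#include <asm/text-patching.h>

	#define JMP_E9_SIZE_SKETCH 5

	static void klp_patch_jmp_sketch(void *old_func, void *new_func)
	{
		unsigned char insn[JMP_E9_SIZE_SKETCH];
		s32 rel = (long)new_func - ((long)old_func + JMP_E9_SIZE_SKETCH);

		insn[0] = 0xe9;			/* jmp rel32 */
		memcpy(&insn[1], &rel, sizeof(rel));
		text_poke(old_func, insn, JMP_E9_SIZE_SKETCH);
	}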

Reviewed-by: Xie XiuQi <[email protected]>
Signed-off-by: Cheng Jian <[email protected]>
Signed-off-by: Li Bin <[email protected]>
Tested-by: Yang ZuoTing <[email protected]>
Tested-by: Cheng Jian <[email protected]>
Signed-off-by: zhangyi (F) <[email protected]>
Signed-off-by: Yang Yingliang <[email protected]>

Signed-off-by: Wang ShaoBo <[email protected]>

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
hulk inclusion
category: bugfix
bugzilla: 51919
CVE: NA

---------------------------

When doing consistency stack checking, if we try to patch a function
which has already been patched, we should check the new function (not
the original one) that is currently active; it is always the first entry
in the list func_node->func_stack.

Example :
	module : origin			livepatch v1		livepatch v2
	func   : old func A -[enable]=> new func A' -[enable]=> new func A''
	check  :		A			A'

When we try to patch function A to new function A'' with livepatch v2,
but function A has already been patched to A' by livepatch v1, it is A'
from livepatch v1 that is active on the stack instead of the original
function A. Even if the long jump method is used, we jump to the new
function A' with a call that does not save LR, so the original function A
will not appear on the stack. We must therefore check the active
function A' during consistency stack checking.

Signed-off-by: Cheng Jian <[email protected]>
Reviewed-By: Xie XiuQi <[email protected]>
Signed-off-by: Yang Yingliang <[email protected]>

Signed-off-by: Dong Kai <[email protected]>

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
euler inclusion
category: feature
bugzilla: 51924
CVE: NA

----------------------------------------

support livepatch without ftrace for powerpc

supported now:
	livepatch relocation when init_patch after load_module;
	instruction patched when enable;
	activeness function check;
	enforcing the patch stacking principle;

not supported yet (will be fixed in the future):
	long jump (both livepatch relocation and insn patched)
	module plts request by livepatch-relocation

Signed-off-by: Cheng Jian <[email protected]>
Signed-off-by: Li Bin <[email protected]>
Tested-by: Wang Feng <[email protected]>
Reviewed-by: Xie XiuQi <[email protected]>
Signed-off-by: zhangyi (F) <[email protected]>

Signed-off-by: Dong Kai <[email protected]>

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Kuohai Xu <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
euler inclusion
category: feature
bugzilla: 51924
CVE: NA

----------------------------------------

The range of a direct jump on PPC is ±32 MB; jumps beyond this range
require a long-jump mechanism.

Therefore, long jumps for the instructions patched when enabling a
livepatch module are supported here.

Signed-off-by: Cheng Jian <[email protected]>
Signed-off-by: Li Bin <[email protected]>
Tested-by: Wang Feng <[email protected]>
Reviewed-by: Xie XiuQi <[email protected]>
Signed-off-by: zhangyi (F) <[email protected]>

Signed-off-by: Dong Kai <[email protected]>

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Kuohai Xu <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
euler inclusion
category: feature
bugzilla: 51924
CVE: N/A

----------------------------------------

We need to modify the first 4 instructions of a livepatched function to
complete the long jump if the offset is out of short range. So it is
important that the function has at least 4 instructions, and we check
this when the livepatch module is insmod'ed.

testcase : testEL_HOTPATCH_ADDFUNTOMULTIFILE_FUN-001
before this patch:
        insmod ./klp_patch.ko
        echo 1 > /sys/kernel/livepatch/klp_patch/enable
        echo 3 > /proc/sys/vm/drop_caches
        kernel crash, the call trace is like
        Call Trace:
                Unable to handle kernel paging request for instruction fetch
                Faulting instruction address: 0x00000000
                invalidate_mapping_pages+0x0cc/0x180
                drop_pagecache_sb+0x84/0x94
                iterate_supers+0xf8/0xfc
                drop_caches_sysctl_handler+0x88/0x108
                proc_sys_call_handler+0xbc/0xfc
                __vfs_write+0x3c/0x154
                vfs_write+0xa0/0x114
                Sys_write+0x4c/0xc4
                ret_from_syscall+0x0/0x38

after this patch:
        insmod ./klp_patch.ko
                insmod: can't insert './klp_patch.ko': Operation not permitted

        dmesg -c
                livepatch: func drop_slab size(2) less than limit(4)

Signed-off-by: Cheng Jian <[email protected]>
Signed-off-by: Li Bin <[email protected]>
Tested-by: Cheng Jian <[email protected]>
Tested-by: Wang Feng <[email protected]>
Tested-by: Lin DingYu <[email protected]>
Tested-by: Yang ZuoTing <[email protected]>
Reviewed-by: Xie XiuQi <[email protected]>
Signed-off-by: zhangyi (F) <[email protected]>

Signed-off-by: Dong Kai <[email protected]>
Signed-off-by: Wang ShaoBo <[email protected]>

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Kuohai Xu <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
euler inclusion
category: bugfix
bugzilla: 51924
CVE: NA

---------------------------------

We use stack checking to ensure the consistency of livepatch. A task
blocked in __switch_to when switched out has its FP and PC stored by
thread_saved_fp/pc, which is useful when tracing blocked threads. For a
running task, current_stack_pointer can be used, but it is difficult to
backtrace running tasks on other CPUs.

Fortunately, all CPUs stay in this function, so the backtrace of each
CPU's current task is very similar; just backtrace current on this CPU
and skip the current tasks of the other CPUs.

Signed-off-by: Cheng Jian <[email protected]>
Reviewed-by: Xie XiuQi <[email protected]>
Signed-off-by: zhangyi (F) <[email protected]>

Signed-off-by: Dong Kai <[email protected]>

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Kuohai Xu <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
hulk inclusion
category: bugfix
bugzilla: 51924
CVE: NA

---------------------------

When doing consistency stack checking, if we try to patch a function
which has already been patched, we should check the new function (not
the original one) that is currently active; it is always the first entry
in the list func_node->func_stack.

Example :
        module : origin                 livepatch v1            livepatch v2
        func   : old func A -[enable]=> new func A' -[enable]=> new func A''
        check  :                A                       A'

When we try to patch function A to new function A'' with livepatch v2,
but function A has already been patched to A' by livepatch v1, it is A'
from livepatch v1 that is active on the stack instead of the original
function A. Even if the long jump method is used, we jump to the new
function A' with a call that does not save LR, so the original function A
will not appear on the stack. We must therefore check the active
function A' during consistency stack checking.

Signed-off-by: Cheng Jian <[email protected]>
Reviewed-By: Xie XiuQi <[email protected]>
Signed-off-by: yangerkun <[email protected]>

Signed-off-by: Dong Kai <[email protected]>
Signed-off-by: Wang ShaoBo <[email protected]>

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Kuohai Xu <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
rtos inclusion
category: bugfix
bugzilla: 51924
CVE: NA

----------------------------------------

According to the function _switch in entry_32/64.S, for a non-current,
not-in-interrupt task, the LR is saved in the LR position of the 2nd
frame. The content of the LR position in the 1st frame is not filled, so
it is left over from previous stack frames and may be an address inside a
kernel function, resulting in failure to apply a kernel patch even when
the target function is not actually on the stack.

Therefore, we should ignore the first frame to get a more reliable
backtrace.

Signed-off-by: Lexi Shao <[email protected]>
Reviewed-by: Cheng Jian <[email protected]>
Signed-off-by: yangerkun <[email protected]>

Signed-off-by: Dong Kai <[email protected]>

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Kuohai Xu <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
hulk inclusion
category: feature
bugzilla: 51924
CVE: NA

---------------------------

Initial livepatch support for ppc64be: the call from the old function to
the new function uses stub space. This is actually problematic because we
cannot effectively restore R2; this problem will be fixed later.

Signed-off-by: Cheng Jian <[email protected]>
Reviewed-By: Xie XiuQi <[email protected]>
Signed-off-by: yangerkun <[email protected]>

Signed-off-by: Dong Kai <[email protected]>

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Kuohai Xu <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
hulk inclusion
category: feature
bugzilla: 51924
CVE: NA

---------------------------

On the ppc64 ELF ABI v1 a function pointer points to the function
descriptor, which we use in the sample demo.

        $cat /proc/kallsyms | grep  livepatch_cmdline_proc_show
        80000000000d4830 d livepatch_cmdline_proc_show  [livepatch_sample]      -=> func descr
        80000000000d40c0 t .livepatch_cmdline_proc_show [livepatch_sample]      -=> func addr

However, the livepatch module made by kpatch just passes the address of
the function to the kernel (saved in func->new_func), so the kernel needs
to obtain the TOC address and build the function descriptor to implement
the long jump.
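
Illustratively, building the call target could look like the sketch below; func_descr_t (entry/toc/env) is the ppc64 ELF ABI v1 descriptor type, and where the TOC value comes from here is only an assumption for the sketch:

	/*
	 * Illustrative sketch: on ppc64 ELF ABI v1 an indirect call goes
	 * through a function descriptor, not a raw text address.
	 */
	#include <asm/types.h>

	static func_descr_t klp_make_desc_sketch(unsigned long new_func,
						 unsigned long module_toc)
	{
		func_descr_t desc = {
			.entry = new_func,	/* the ".name" text address from kpatch */
			.toc   = module_toc,	/* R2 value the new function expects
						 * (assumed to come from the patch module) */
		};

		return desc;
	}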

Signed-off-by: Cheng Jian <[email protected]>
Reviewed-By: Xie XiuQi <[email protected]>
Signed-off-by: yangerkun <[email protected]>

Signed-off-by: Dong Kai <[email protected]>

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Kuohai Xu <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
hulk inclusion
category: bugfix
bugzilla: 51924
CVE: NA

---------------------------

When doing consistency stack checking, if we try to patch a function
which has already been patched, we should check the new function (not
the original one) that is currently active; it is always the first entry
in the list func_node->func_stack.

Example :
        module : origin                 livepatch v1            livepatch v2
        func   : old func A -[enable]=> new func A' -[enable]=> new func A''
        check  :                A                       A'

When we try to patch function A to new function A'' with livepatch v2,
but function A has already been patched to A' by livepatch v1, it is A'
from livepatch v1 that is active on the stack instead of the original
function A. Even if the long jump method is used, we jump to the new
function A' with a call that does not save LR, so the original function A
will not appear on the stack. We must therefore check the active
function A' during consistency stack checking.

Signed-off-by: Cheng Jian <[email protected]>
Reviewed-By: Xie XiuQi <[email protected]>
Signed-off-by: yangerkun <[email protected]>

Signed-off-by: Dong Kai <[email protected]>

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Kuohai Xu <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
hulk inclusion
category: feature
bugzilla: 51924
CVE: NA

---------------------------

We call from the old function to the new function, and when we return
from the new function we need to restore R2. The previous module
relocation scheme restored R2 by adding an extra nop slot after the call
(bxxx) instruction, but that extra space cannot be used here, because we
do not return after calling the new function; so we need a trampoline.

We call the new function from the trampoline and then restore R2 when it
returns. Note that using the old function itself as the trampoline would
also be a solution, but the old function often does not have enough space
to hold the trampoline instruction fragments.

The trampoline could be implemented as a single global one, but we need a
trampoline for each function and must improve its stack check accordingly.

Our call chain to the new function looks like this:

CALLER
        old_func        |       old_func
                        |       -=> trampoline
                        |               -=> new_func

So we cannot simply check new_func alone; old_func and the trampoline may
both appear on the stack as well.

Signed-off-by: Cheng Jian <[email protected]>
Reviewed-By: Xie XiuQi <[email protected]>
Signed-off-by: yangerkun <[email protected]>

Signed-off-by: Dong Kai <[email protected]>

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Kuohai Xu <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
hulk inclusion
category: feature
bugzilla: 51924
CVE: NA

---------------------------

In the previous commit we implemented a per-func_node livepatch
trampoline. For ELF ABI v1 the trampoline area was also heap-allocated
and has no execute permission, so we use module_alloc to make the
trampoline executable.
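
A sketch of the allocation change, assuming a per-func_node trampoline buffer as described; the size constant and helper names are illustrative:

	/* Illustrative: allocate the trampoline from executable module space. */
	#include <linux/moduleloader.h>
	#include <linux/types.h>

	#define TRAMPOLINE_SIZE_SKETCH	(8 * sizeof(u32))	/* assumed size */

	static void *klp_alloc_trampoline_sketch(void)
	{
		/* module_alloc() memory is text-capable, unlike kmalloc() */
		return module_alloc(TRAMPOLINE_SIZE_SKETCH);
	}

	static void klp_free_trampoline_sketch(void *tramp)
	{
		module_memfree(tramp);
	}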

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
Cheng Jian and others added 22 commits January 16, 2022 11:13
hulk inclusion
category: bugfix
bugzilla: 34578/46793/51924
CVE: NA

-------------------------------------------------------------------------

When we make a livepatch, since we did not save the stack frame according
to the calling convention, we cannot see the caller function on the stack
after the patch is activated.

At this point, if we also patch the caller function, then because it is
not seen on the stack, the patch can be enabled normally without being
caught by the stack check.

This is very dangerous. If other processes are running or sleeping in the
context of the first patched callee function, then after the caller
function is patched we have changed its first few instructions into a
bstub that jumps directly away, so the context of those processes is
destroyed: when they return to the caller, wrong instructions are
executed.

The following problems can occur in our testcase:
	Unrecoverable FP Unavailable Exception 800 at 80000000000c80d8
	Oops: Unrecoverable FP Unavailable Exception, sig: 6 [#1]
	PREEMPT SMP NR_CPUS=4 QEMU e500
	Modules linked in: level2_delay_patch(O) delay_patch(O) delay(O)
	CPU: 1 PID: 328 Comm: cat Tainted: G           O  K 4.4.222 #334
	task: c0000000f02da100 task.stack: c00000007a52c000
	NIP: 80000000000c80d8 LR: 80000000000c80d8 CTR: c0000000003cef10
	REGS: c00000007a52eea0 TRAP: 0800   Tainted: G O  K  (4.4.222)
	MSR: 0000000080009000 <EE,ME>  CR: 28022882  XER: 00000000
	NIP [80000000000c80d8] .foo_show+0x18/0x48 [delay]
	LR [80000000000c80d8] .foo_show+0x18/0x48 [delay]
	Call Trace:
	[c00000007a52f120] [c00000007e015af8] 0xc00000007e015af8(unreliable)
	[c00000007a52f1a0] [c00000000032d11c].kobj_attr_show+0x2c/0x50
	[c00000007a52f210] [c000000000230b74].sysfs_kf_seq_show+0xf4/0x1d0
	[c00000007a52f2b0] [c00000000022ea2c].kernfs_seq_show+0x3c/0x50
	[c00000007a52f320] [c0000000001c1f88].seq_read+0x118/0x5c0
	[c00000007a52f420] [c00000000022fa04].kernfs_fop_read+0x194/0x240
	[c00000007a52f4c0] [c00000000018e27c].do_loop_readv_writev+0xac/0x100
	[c00000007a52f560] [c00000000018f284].do_readv_writev+0x2a4/0x2f0
	[c00000007a52f6d0] [c0000000001cf3cc].default_file_splice_read+0x22c/0x490
	[c00000007a52fa60] [c0000000001cd704].do_splice_to+0x94/0xe0
	[c00000007a52fb00] [c0000000001cd814].splice_direct_to_actor+0xc4/0x320
	[c00000007a52fbd0] [c0000000001cdb14].do_splice_direct+0xa4/0x120
	[c00000007a52fc90] [c00000000018f9fc].do_sendfile+0x27c/0x440
	[c00000007a52fd80] [c0000000001910f4].compat_SyS_sendfile64+0xe4/0x140
	[c00000007a52fe30] [c00000000000058c]system_call+0x40/0xc8
	Instruction dump:
	ebe1fff8 7c0803a6 4e800020 60000000 60000000 60000000 3d62ffff 396b7bf0
	e98b0018 7d8903a6 4e800420 73747563 <c0000000> f030a948 7fe3fb78 38a00001
	---[ end trace 07a14bdffccc341f ]---

We solve this problem by faking the stack frame, so that the caller
function appears on the stack and can be detected by the stack check
when the patch is enabled.

After this patch, when enabling the second livepatch, we find the caller
on the stack:

	livepatch_64: func .foo_show is in use!
	livepatch_64: PID: 328 Comm: cat
	Call Trace:
	[c00000007a596bd0] [c00000007a596cd0] 0xc00000007a596cd0(unreliable)
	[c00000007a596da0] [c000000000008b20].__switch_to+0x70/0xa0
	[c00000007a596e20] [c000000000557a5c].__schedule+0x2fc/0x830
	[c00000007a596ed0] [c0000000005581b8] .schedule+0x38/0xc0
	[c00000007a596f40] [c00000000055c7e8].schedule_timeout+0x148/0x210
	[c00000007a597030] [80000000000ff054].new_stack_func+0x54/0x90 [delay_patch]
	[c00000007a5970b0] [c0000000f025d67c] 0xc0000000f025d67c
	[c00000007a597120] [80000000000c80d8] .foo_show+0x18/0x48 [delay]
	[c00000007a5971a0] [c00000000032d11c].kobj_attr_show+0x2c/0x50
	[c00000007a597210] [c000000000230b74].sysfs_kf_seq_show+0xf4/0x1d0
	[c00000007a5972b0] [c00000000022ea2c].kernfs_seq_show+0x3c/0x50
	[c00000007a597320] [c0000000001c1f88].seq_read+0x118/0x5c0
	[c00000007a597420] [c00000000022fa04].kernfs_fop_read+0x194/0x240
	[c00000007a5974c0] [c00000000018e27c].do_loop_readv_writev+0xac/0x100
	[c00000007a597560] [c00000000018f284].do_readv_writev+0x2a4/0x2f0
	[c00000007a5976d0] [c0000000001cf3cc].default_file_splice_read+0x22c/0x490
	[c00000007a597a60] [c0000000001cd704].do_splice_to+0x94/0xe0
	[c00000007a597b00] [c0000000001cd814].splice_direct_to_actor+0xc4/0x320
	[c00000007a597bd0] [c0000000001cdb14].do_splice_direct+0xa4/0x120
	[c00000007a597c90] [c00000000018f9fc].do_sendfile+0x27c/0x440
	[c00000007a597d80] [c0000000001910f4].compat_SyS_sendfile64+0xe4/0x140
	[c00000007a597e30] [c00000000000058c]system_call+0x40/0xc8

Signed-off-by: Cheng Jian <[email protected]>
Reviewed-by: Xie XiuQi <[email protected]>
Signed-off-by: yangerkun <[email protected]>

Signed-off-by: Dong Kai <[email protected]>

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
rtos inclusion
category: bugfix
bugzilla: 42399/46793/51924
CVE: NA

----------------------------------------

According to the function _switch in entry_32/64.S, for a non-current,
not-in-interrupt task, the LR is saved in the LR position of the 2nd
frame. The content of the LR position in the 1st frame is not filled, so
it is left over from previous stack frames and may be an address inside a
kernel function, resulting in failure to apply a kernel patch even when
the target function is not actually on the stack.

Therefore, we should ignore the first frame to get a more reliable
backtrace.

Signed-off-by: Lexi Shao <[email protected]>
Reviewed-by: Cheng Jian <[email protected]>
Signed-off-by: yangerkun <[email protected]>

Signed-off-by: Dong Kai <[email protected]>

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
hulk inclusion
category: feature
bugzilla: 51924
CVE: NA

---------------------------

The previous sample use case did not consider the APC and function
descriptors of PPC64

Signed-off-by: Cheng Jian <[email protected]>
Reviewed-By: Xie XiuQi <[email protected]>
Signed-off-by: yangerkun <[email protected]>

Signed-off-by: Dong Kai <[email protected]>

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
hulk inclusion
category: feature
bugzilla: 51924
CVE: NA

---------------------------

We have completed support for the ppc64be livepatch, and we can now
enable it.

Signed-off-by: Cheng Jian <[email protected]>
Reviewed-By: Xie XiuQi <[email protected]>
Signed-off-by: yangerkun <[email protected]>

Signed-off-by: Dong Kai <[email protected]>

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
euler inclusion
category: feature
bugzilla: 51921
CVE: N/A

----------------------------------------

Some functions in the kernel are always on the stack of some thread in
the system. Attempts to patch these functions will currently always fail
the activeness safety check.

However, through human inspection it can be determined that, for a
particular function, consistency is maintained even if the old and new
versions of the function run concurrently.

Commit 2e93c5e1e3dc ("support forced patching") in kpatch-build
introduces a KPATCH_FORCE_UNSAFE() macro to mark patched functions that
should be exempted from the activeness safety check. Now the kernel
implements this feature.

Signed-off-by: Cheng Jian <[email protected]>
Reviewed-by: Li Bin <[email protected]>
Signed-off-by: Yang Yingliang <[email protected]>
Signed-off-by: Wang ShaoBo <[email protected]>

Signed-off-by: Dong Kai <[email protected]>

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
hulk inclusion
category: feature
bugzilla: 51924
CVE: NA

---------------------------

When a livepatch is insmod'ed, stop_machine stops the other cores, which
interrupts services; the shorter the stop_machine duration, the better.
Moving memory allocation and release out of stop_machine shortens the
stop time.

In particular, module_alloc and module_memfree are vmalloc-like and may
sleep when called, so they must not be used in stop_machine context.
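
A sketch of the resulting enable flow, using the arch_klp_mem_prepare/arch_klp_mem_recycle names that appear later in this series; their signatures and the stop_machine callback name are assumed:

	/* Illustrative enable flow: allocate and free outside stop_machine(). */
	#include <linux/livepatch.h>
	#include <linux/stop_machine.h>

	static int klp_enable_patch_sketch(struct klp_patch *patch)
	{
		int ret;

		/* arch_klp_mem_prepare/recycle and klp_try_enable_patch are the
		 * series' (assumed) helpers; signatures here are illustrative. */
		ret = arch_klp_mem_prepare(patch);	/* may sleep: module_alloc() etc. */
		if (ret)
			return ret;

		ret = stop_machine(klp_try_enable_patch, patch, cpu_online_mask);
		if (ret)
			arch_klp_mem_recycle(patch);	/* may sleep: module_memfree() */

		return ret;
	}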

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
hulk inclusion
category: bugfix
bugzilla: 110621
CVE: NA

--------------------------------

Livepatch can run user-supplied hook functions when inserting and
disabling a patch. The hooks called during insertion are named load
hooks, and the hooks called during disabling are named unload hooks.

During testing it was found that the unload hooks were not executed. The
reason is that __klp_free_objects() is called before
klp_free_patch_finish() runs; it deletes obj from the patch list, so
klp_for_each_object in klp_free_patch_finish() cannot find obj and, as a
result, klp_unload_hook() is not executed.

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Kuohai Xu <[email protected]>
Signed-off-by: Chen Jun <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
hulk inclusion
category: feature
bugzilla: 119440 https://gitee.com/openeuler/kernel/issues/I4DDEL

--------------------------------

The force field has three states. KLP_NORMAL_FORCE indicates that the
hot patch is installed according to the default rule. KLP_ENFORCMENT
indicates that the hot patch for the function must be installed.
KLP_STACK_OPTIMIZE is reserved for the stack optimization policy.
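
Expressed as code, the three states might be declared roughly as below (illustrative; the enum name is a placeholder, the enumerators follow the commit text):

	/* Illustrative declaration of the per-function force states. */
	enum klp_force_state_sketch {
		KLP_NORMAL_FORCE,	/* default rule decides how to check */
		KLP_ENFORCMENT,		/* this function must be patched     */
		KLP_STACK_OPTIMIZE,	/* allow optimized stack checking    */
	};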

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Kuohai Xu <[email protected]>
Signed-off-by: Chen Jun <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
hulk inclusion
category: feature
bugzilla: 119440 https://gitee.com/openeuler/kernel/issues/I4DDEL

--------------------------------

When CONFIG_LIVEPATCH_STOP_MACHINE_CONSISTENCY is enabled, the system
checks under stop_machine whether the function to patch is on any stack.
If it is, the livepatch cannot be applied and a busy error is returned.

Hot functions are very likely to be on some stack under the stop_machine
condition. As a result, the livepatch success rate is low when the patch
includes a hot function.

For the replaced function, only the first several instructions are
rewritten; the rest of the instructions are the same as the original
ones. Therefore, if the force flag is KLP_STACK_OPTIMIZE, we only need to
check whether the replaced instructions are on the stack.

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Kuohai Xu <[email protected]>
Signed-off-by: Chen Jun <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
hulk inclusion
category: feature
bugzilla: 119440 https://gitee.com/openeuler/kernel/issues/I4DDEL

--------------------------------

Based on the commit "livepatch: checks only of the replaced instruction
is on the stack", livepatch only needs to check the replaced instructions
during the stack check.

If the instructions to be replaced do not contain a jump instruction,
those instructions can only appear at the top of a stack. Thus, after
confirming that the instructions to be replaced contain no jump
instruction, only the top of the stack, instead of the entire stack, may
be checked.

Each function in a livepatch has a force tag. When its value is
KLP_STACK_OPTIMIZE, checking only the top of the stack is enabled to
speed up the check.

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Chen Jun <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
hulk inclusion
category: feature
bugzilla: 119440 https://gitee.com/openeuler/kernel/issues/I4DDEL

--------------------------------

Enable stack optimize on arm.

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Kuohai Xu <[email protected]>
Signed-off-by: Chen Jun <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
hulk inclusion
category: feature
bugzilla: 119440 https://gitee.com/openeuler/kernel/issues/I4DDEL

--------------------------------

Enable stack optimize on ppc32.

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Chen Jun <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
hulk inclusion
category: feature
bugzilla: 119440 https://gitee.com/openeuler/kernel/issues/I4DDEL

--------------------------------

Enable stack optimize on ppc64.

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Kuohai Xu <[email protected]>
Signed-off-by: Chen Jun <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
hulk inclusion
category: feature
bugzilla: 119440 https://gitee.com/openeuler/kernel/issues/I4DDEL

--------------------------------

Enable stack optimize on x86.

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Kuohai Xu <[email protected]>
Signed-off-by: Chen Jun <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
hulk inclusion
category: bugfix
bugzilla: 176976 https://gitee.com/openeuler/kernel/issues/I4DDEL

--------------------------------

Before enabling a livepatch, we allocate a piece of memory for func_node
to store function information and release it after disabling the
livepatch.

However, in some special cases, for example when the livepatched code is
still running, disabling fails. In these cases the allocated memory must
not be released; otherwise the livepatch can no longer be disabled.

So we move arch_klp_mem_recycle after the return value check to solve
this problem.

Fixes: ec7ce700674f ("livepatch: put memory alloc and free out stop machine")
Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Chen Jun <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
hulk inclusion
category: bugfix
bugzilla: 181325 https://gitee.com/openeuler/kernel/issues/I4DDEL

--------------------------------

An error is reported during version building: error: ISO C90 forbids mixed
declarations and code.

Fix it by moving the variable definition forward.

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Chen Jun <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
hulk inclusion
category: feature
bugzilla: 182230 https://gitee.com/openeuler/kernel/issues/I4EFWX

--------------------------------

To be consistent with interfaces and behavior of livepatch in version 4.4:
1. Export klp_{register,unregister}_patch symbols.
2. "enabled" interface of sysfs provides enable_patch and disable_patch functions.

Signed-off-by: Yang Jihong <[email protected]>
Reviewed-by: Kuohai Xu <[email protected]>

Signed-off-by: Chen Jun <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
hulk inclusion
category: feature
bugzilla: 182230 https://gitee.com/openeuler/kernel/issues/I4EFWX

--------------------------------

1. Adapt interfaces in module_init & module_exit
2. Add load_hook & unload_hook test

Reviewed-by: Kuohai Xu <[email protected]>
Signed-off-by: Yang Jihong <[email protected]>
Reviewed-by: Kuohai Xu <[email protected]>

Signed-off-by: Chen Jun <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
hulk inclusion
category: feature
bugzilla: 119440 https://gitee.com/openeuler/kernel/issues/I4DDEL

--------------------------------

The previous code does not consider the scenario where preemption is
enabled. In a kernel with CONFIG_PREEMPT enabled, even if the function
entry does not contain a jump instruction, the function may be
interrupted by an interrupt; if preemption is triggered when the
interrupt returns, the function ends up in the middle of the thread's
call stack.

The stack depth optimization therefore needs to be adjusted as follows
(see the sketch after this list):
	1. For functions with a jump instruction at the entry,
	check the entire thread stack in any case.
	2. For a function whose entry has no jump instruction:
		a. If CONFIG_PREEMPT is disabled, the function
		entry code cannot be on any thread stack,
		so no check is required.
		b. If CONFIG_PREEMPT is enabled, check the entire
		thread stack.
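
A sketch of the adjusted decision logic (illustrative; the helper name and the boolean parameter are placeholders):

	/* Illustrative decision logic for the adjusted optimization. */
	#include <linux/kernel.h>
	#include <linux/types.h>

	static bool klp_need_full_stack_check_sketch(bool entry_has_jump_insn)
	{
		if (entry_has_jump_insn)
			return true;	/* rule 1: always check the whole stack */

		if (IS_ENABLED(CONFIG_PREEMPT))
			return true;	/* rule 2b: preemption can park a task anywhere */

		return false;		/* rule 2a: entry code cannot be on any stack */
	}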

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Kuohai Xu <[email protected]>

Signed-off-by: Chen Jun <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4IH1G
CVE: NA

---------------------------

Livepatch allows users to supply hook functions that perform customized
operations when the .ko is insmod'ed. During testing it was found that on
the ARM64 architecture, if a hook function accesses global variables
defined by another ko, a crash occurs.

Since relocation is performed during livepatch insertion, the
instructions in the icache may be stale. If stale instructions are
fetched from the icache, incorrect addresses may be used, causing a
crash. Therefore, flush the icache before calling the hook functions.
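
Illustratively (the exact range flushed by the series may differ), the fix amounts to something like:

	/* Illustrative: make relocated instructions visible before hooks run. */
	#include <linux/module.h>
	#include <asm/cacheflush.h>

	static void klp_flush_patch_text_sketch(struct module *mod)
	{
		unsigned long base = (unsigned long)mod->core_layout.base;

		flush_icache_range(base, base + mod->core_layout.size);
		/* ...now it is safe to call the user-supplied load hooks */
	}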

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Jian Cheng <[email protected]>
Signed-off-by: Yang Yingliang <[email protected]>

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Yang Jihong <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
hulk inclusion
category: bugfix
bugzilla: 185757 https://gitee.com/openeuler/kernel/issues/I4DDEL

---------------------------

During testing it was found that a running function to be patched is not
detected when enabling the livepatch, which can cause unknown problems.

The cause is that the return value of klp_check_jump_func() is incorrect.
To solve the problem, invert the return value.

Signed-off-by: Ye Weihua <[email protected]>
Reviewed-by: Kuohai Xu <[email protected]>

Signed-off-by: Chen Jun <[email protected]>
Signed-off-by: Zheng Zengkai <[email protected]>
@adamliyi adamliyi mentioned this pull request Feb 11, 2022