
wasmtime: Check stack limits only on exit from wasm #9093

Draft: wants to merge 15 commits into main

Conversation

jameysharp
Contributor

Currently, we do an explicit check for stack overflow on entry to every WebAssembly function. But that costs some time, and is a significant performance hit for very short functions.

This commit instead switches Wasmtime to relying on guard pages at the end of the stack to catch stack overflow, so the MMU does this check for "free". This means we may allow deeper recursion in guest code than we did before.

To make this work, we need Wasmtime's signal handlers to recognize when a guest memory fault is in a stack guard page and report the appropriate stack-overflow trap code.

Note that we can't turn host-code signals into guest traps, so the signal handlers have to verify that the signal occurred in guest code.

When the guest calls host code (explicitly due to calling an imported host function, or implicitly due to a libcall inserted by Wasmtime or Cranelift), we also need to ensure that there is enough stack space available for the host code to not hit the guard pages. We do that by checking the stack limit that the embedder provided in the trampolines where we exit wasm.
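The guard-page approach above can be illustrated with a small sketch. This is not Wasmtime's actual signal-handler code; `GUARD_SIZE`, `is_stack_overflow`, and the distance heuristic are illustrative assumptions showing how a fault address near the stack pointer gets classified as stack overflow rather than an ordinary memory fault.

```rust
/// One 4 KiB guard page below the stack, assumed for this sketch.
const GUARD_SIZE: usize = 4096;

/// A fault is treated as stack overflow if the faulting address lies
/// within the guard region just below the stack pointer at the time of
/// the fault. A single frame larger than the distance checked here could
/// skip past the guard entirely, which is why large frames also need
/// explicit stack probes.
fn is_stack_overflow(fault_addr: usize, stack_pointer: usize) -> bool {
    fault_addr < stack_pointer && stack_pointer - fault_addr <= GUARD_SIZE
}

fn main() {
    let sp = 0x7000_1000;
    // Fault just below SP: looks like the guard page was touched.
    assert!(is_stack_overflow(sp - 64, sp));
    // Fault far from SP: an ordinary memory fault, not stack overflow.
    assert!(!is_stack_overflow(0x1000, sp));
    println!("ok");
}
```

A real handler would additionally verify that the faulting PC lies in guest code, per the note above, before reporting a trap.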


I've been laid off from Fastly so I won't be pursuing this further, but I feel good about the work I put into it so far and want to leave it here in case anyone else wants to pick it up.

@github-actions github-actions bot added the wasmtime:api Related to the API of the `wasmtime` crate itself label Aug 8, 2024
@sunfishcode sunfishcode force-pushed the wasmtime-probe-stack branch 2 times, most recently from babdab0 to cf2a975 Compare December 13, 2024 20:40
@sunfishcode
Member

I've now rebased this and ported it to main. There are some test failures at the moment:

thread 'stack_overflow::host_always_has_some_stack' has overflowed its stack
fatal runtime error: stack overflow

which I'm looking into.

@github-actions github-actions bot added the pulley Issues related to the Pulley interpreter label Dec 13, 2024

@sunfishcode
Member

The stack overflow in the host appears to happen because the stack limit checks are in the trampolines, but host libcalls don't use trampolines. The test case disables sse4_1 and executes f32.ceil, which causes Cranelift to emit a host libcall, so the guest calls into the host without a trampoline, and therefore without checking the stack limit.
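The check being skipped here can be sketched as follows. This is a hypothetical simplification, not Wasmtime's actual types: `VmContext` and `check_stack_on_exit` stand in for the embedder-provided stack limit and the comparison an exit trampoline performs before handing control to host code.

```rust
/// Stand-in for the VM context that carries the embedder's stack limit.
struct VmContext {
    /// Lowest stack address the host is allowed to run with; below this,
    /// host code risks hitting the guard pages.
    stack_limit: usize,
}

/// Performed in the exit trampoline before calling host code. Returning
/// Err stands in for raising a stack-overflow trap: the host call is
/// refused rather than risking an unrecoverable native fault.
fn check_stack_on_exit(vmctx: &VmContext, current_sp: usize) -> Result<(), &'static str> {
    if current_sp <= vmctx.stack_limit {
        Err("stack overflow trap")
    } else {
        Ok(())
    }
}

fn main() {
    let vmctx = VmContext { stack_limit: 0x1_0000 };
    // Plenty of stack left: the host call proceeds.
    assert!(check_stack_on_exit(&vmctx, 0x8_0000).is_ok());
    // Already past the limit: trap instead of calling the host.
    assert!(check_stack_on_exit(&vmctx, 0x8000).is_err());
    println!("ok");
}
```

A libcall emitted directly by Cranelift bypasses the trampoline, so nothing runs this comparison, which is the failure mode described above.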

@cfallin
Member

cfallin commented Dec 18, 2024

I remember talking about the libcall problem with @jameysharp at the time, but I have only vague recollections of the answers we came up with. IIRC we did decide that recognizing the overflow by a fault address in the guard page was more reliable than trying to deduce the fault reason from the PC, because we don't know which instruction might be the first to touch the guard page when the frame size is under one page. I see this PR does that, with logic checking whether the fault address is within a certain distance of SP. I also remember us realizing that this might (should?) work for libcalls too, because natively compiled code is also compatible with a guard-page scheme. So I'm curious: what's the reason this condition isn't triggering in this case?

(Apologies if I've lost too much context -- it's been a long time and I skimmed this PR again but there are many subtleties!)

@fitzgen
Member

fitzgen commented Dec 18, 2024

Also, it seems like libcalls should probably do stack checks as well, since I would be hesitant to try to recover from runtime code hitting the guard page, so it might make sense to make the libcalls use trampolines like imports do.

@alexcrichton
Member

If it helps, I've got an old wasmtime branch which removed the stack limit checks from functions by generating trampolines for all libcalls. I never pushed it over the finish line in terms of polish and testing, though.

@jameysharp
Contributor Author

I've lost the context I had on this.

I believe Wasmtime libcalls work correctly in this PR, because they use Wasmtime's trampolines, which this PR makes do stack limit checks. It's only the Cranelift libcalls that still don't work.

There was some reason why we can only safely use the guard page if we see that the faulting instruction is in Cranelift-compiled code. I think it's because our only recourse in the signal handler is to fully unwind the stack, and we can't trigger Rust panic unwinding, let alone any libc cleanup, so there's no telling what invariants are broken if we just unwind the stack in native code. The specific way we use Cranelift-compiled code, though, makes it safe to drop everything at any time.

I remember at one point intending to add support to Cranelift for inserting a stack limit check immediately before any libcall, or inserting it in the prologue like today but only if some libcalls appear somewhere in the function. I don't remember if that was the last plan I had in mind or if something had superseded it.

I definitely carefully considered Alex's old branch. I only found out about it after landing on the idea of checking the stack pointer in the signal handler, and it was reassuring to see we'd both landed in the same place there.

I was not comfortable with the specific mechanism in that branch for getting stack limit checks into Cranelift libcalls.

  • The idea there was that Cranelift makes us supply the function for it to call, so let's supply a function that does a stack limit check.
  • But we need the vmctx to find the stack limit, and Cranelift doesn't pass that to libcalls. (I discussed changing that with Chris, IIRC; I think we decided not to.)
  • So Alex needed to look up the vmctx again, from thread-locals. Nobody wants to do that from Cranelift, so we need to call into Rust to do the thread-local lookup.
  • But we need to do a stack limit check before calling into Rust!
  • We can assume some upper bound on how much stack we'll need for the thread-local lookup. So Alex did something I think involving synthesizing a Cranelift function that would touch the stack some distance down; to hit the guard page and trigger the signal handler, maybe? And it was recursive for reasons I didn't understand, doing this four times or something.
  • If that function returns then it's safe to call the Rust function to get the vmctx, then do the stack limit check by hand, then call the real libcall. These steps can be represented in CLIF, so the signal handler protects this function too.

End result was calling two extra Cranelift functions plus a Rust function before the real libcall could run, and that felt like a lot of trouble to me.
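The thread-local step in that chain can be sketched as follows. This is a hypothetical illustration, not Wasmtime's actual state tracking: `CURRENT_VMCTX`, `set_current_vmctx`, and `current_vmctx` are made-up names standing in for the per-thread state set on entry to wasm, and for the Rust call a Cranelift libcall shim would need (CLIF itself has no access to the thread-local, which is exactly why the extra stack check before the Rust call was needed).

```rust
use std::cell::Cell;

thread_local! {
    // Set on entry to wasm, cleared on exit; a stand-in for the
    // per-thread activation state the runtime tracks. Stored as a
    // plain usize here to keep the sketch self-contained.
    static CURRENT_VMCTX: Cell<usize> = Cell::new(0);
}

/// Called when control enters wasm on this thread.
fn set_current_vmctx(ptr: usize) {
    CURRENT_VMCTX.with(|c| c.set(ptr));
}

/// The lookup the libcall shim must call into Rust for, since the
/// libcall itself is not handed a vmctx argument.
fn current_vmctx() -> usize {
    CURRENT_VMCTX.with(|c| c.get())
}

fn main() {
    set_current_vmctx(0xdead_beef);
    assert_eq!(current_vmctx(), 0xdead_beef);
    println!("ok");
}
```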

I hope this helps!

jameysharp and others added 5 commits December 19, 2024 17:35
Nothing uses this yet but I want to get collection of this register
right on our many supported platforms before introducing the changes
that need it.
And reinstate the old stack limit checks for when signal handling is
disabled.
@sunfishcode
Member

@jameysharp Thanks, that's useful context!

@alexcrichton Thanks for that branch. It's worth considering that approach; however, if possible, I'd like to try an approach where we just pass vmctx to the libcalls and use trampolines, so that they work the same as other Wasmtime builtins, rather than having the libcall code do special things.

I've now added patches to experiment with making the calls to ceil/floor/etc. work like normal Wasmtime builtins, meaning they get a vmctx and a trampoline in the normal way. This involves handling them in the Wasm-to-Cranelift translator, which is a little annoying, but has the nice side effect of eliminating a whole separate relocation mechanism from Wasmtime, so I think it's an overall simplification. See the changes in crates/cranelift/src/translate/code_translator.rs for the place where this hooks in.
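The dispatch decision described above can be sketched like this. The names (`IsaFlags`, `Emit`, the `"f32_ceil"` builtin) are illustrative assumptions, not the actual types in crates/cranelift/src/translate/code_translator.rs; the point is only the shape of the choice: lower the operation inline when the target supports it, otherwise route it through a vmctx-taking builtin like any other Wasmtime builtin.

```rust
/// Stand-in for the target ISA flags the translator consults.
struct IsaFlags {
    has_sse41: bool,
}

#[derive(Debug, PartialEq)]
enum Emit {
    /// Lower directly to a native rounding instruction (e.g. roundss).
    InlineCeil,
    /// Call a Wasmtime builtin that receives vmctx and goes through the
    /// normal trampoline, getting the stack-limit check for free.
    BuiltinCall(&'static str),
}

/// Hypothetical translation hook for the wasm f32.ceil operator.
fn translate_f32_ceil(isa: &IsaFlags) -> Emit {
    if isa.has_sse41 {
        Emit::InlineCeil
    } else {
        Emit::BuiltinCall("f32_ceil")
    }
}

fn main() {
    assert_eq!(translate_f32_ceil(&IsaFlags { has_sse41: true }), Emit::InlineCeil);
    assert_eq!(
        translate_f32_ceil(&IsaFlags { has_sse41: false }),
        Emit::BuiltinCall("f32_ceil")
    );
    println!("ok");
}
```

Handling this at translation time is what lets the separate relocation mechanism for libcalls be deleted: the builtin path already knows how to resolve its callee.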

There's a test failure in host_segfault.rs. The test attempts to trigger a crash by deliberately misconfiguring the host, and with this patch we interpret the crash as a Wasm trap, which causes the test to fail. I need to investigate more.

There are lots of Pulley failures still. I haven't yet looked at how all this interacts with Pulley.

And the patch is still a bit rough. I need to tidy up the loose ends around x86_pshufb and __m128i, remove some bits of unnecessary relocation code left behind, and probably other things too.

@github-actions github-actions bot added cranelift Issues related to the Cranelift code generator cranelift:area:aarch64 Issues related to AArch64 backend. cranelift:area:x64 Issues related to x64 codegen labels Dec 20, 2024
@alexcrichton
Member

This involves handling them in the Wasm-to-Cranelift translator, which is a little annoying, but has the nice side effect of eliminating a whole separate relocation mechanism from Wasmtime, so I think it's an overall simplification

Oh I really like that strategy, much better than the hack I had 👍

There are lots of Pulley failures still. I haven't yet looked at how all this interacts with Pulley.

Feel free to ignore any pulley failures where possible, but if that's causing issues I can help take a deeper look too!

@sunfishcode
Member

The patch is now tidied up more. I still need to fix the host_segfault.rs test, and there's a todo to figure out the stack pointer on s390x, but I think it's otherwise in pretty good shape.

@alexcrichton The CI shows a lot of pulley failures; any help you could provide with those would be appreciated!
