Wasmtime fuel consumption #637

Open
dev-null-undefined opened this issue Dec 11, 2024 · 9 comments

@dev-null-undefined

dev-null-undefined commented Dec 11, 2024

Maybe I missed something, but is there any way to tell how much fuel was used by a Wasmtime instance?
Or even to set the maximum amount of fuel it can use?

@thibaultcha
Member

Hello,

No, there is no mechanism to interact with fuel consumption at the moment, and we have no short- or medium-term plans for one either. Documentation on fuel consumption seems sparse on the Wasmtime side and I am unsure whether it is even exposed in the C API.

@dev-null-undefined
Author

dev-null-undefined commented Dec 11, 2024

Should I move this into a discussion?
It is exposed. I am currently modifying a different Wasmtime/nginx integration, and I did manage to use the Wasmtime C API to get these metrics from the instance; it is working like a charm.

Would you be willing to accept a merge request adding this to this project as well?
If so, what would the interface look like?
(I am just starting to look at the code base of this project.)
Would you be willing to add an argument to ngx_wrt_t->call, named something like fuel?

@thibaultcha
Member

thibaultcha commented Dec 11, 2024

Do you have a branch somewhere where we could see your modifications to the other nginx integration? That would give me a better idea of what to expect. As for the specific design and PR: maybe, but we have a very high bar for contributions, so your PR would most likely be reworked before merging. We also have our hands full with other projects, so it may take time for us to look at it. Thanks.

@thibaultcha
Member

thibaultcha commented Dec 11, 2024

Would you be willing to add an argument to ngx_wrt_t->call, named something like fuel?

Part of the issue (or should I say "requirements") is that our abstraction of the Wasm runtime (ngx_wrt_t) is meant to support several underlying runtimes (Wasmtime/Wasmer/V8 at the moment), so the exposed interface needs to be compatible with the fuel-consumption interface of each runtime. Do you have a link to the Wasmtime docs you used for the fuel-consumption APIs as well?
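To make that constraint concrete, here is a purely hypothetical sketch of what a runtime-agnostic hook could look like; none of these names are the module's actual types, and a runtime without fuel metering (e.g. V8) would simply report the feature as unsupported:

```c
/* Purely illustrative, not the module's actual API: one possible shape for a
 * runtime-agnostic fuel hook hanging off the ngx_wrt_t abstraction. */
#include <ngx_config.h>
#include <ngx_core.h>

typedef struct ngx_wrt_fuel_instance_s  ngx_wrt_fuel_instance_t;  /* hypothetical per-instance handle */

/* set the instance's fuel budget; NGX_DECLINED if the runtime has no fuel support */
typedef ngx_int_t (*ngx_wrt_set_fuel_pt)(ngx_wrt_fuel_instance_t *inst, uint64_t fuel);

/* read back the remaining fuel; NGX_DECLINED if the runtime has no fuel support */
typedef ngx_int_t (*ngx_wrt_get_fuel_pt)(ngx_wrt_fuel_instance_t *inst, uint64_t *fuel);

typedef struct {
    ngx_wrt_set_fuel_pt  set_fuel;
    ngx_wrt_get_fuel_pt  get_fuel;
} ngx_wrt_fuel_vtable_t;
```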

@dev-null-undefined
Author

This is the documentation that I used: https://docs.wasmtime.dev/c-api/store_8h.html#ae9782930bfa96900c5171db1c86035a8
But since the C API is pretty much a wrapper around the Rust functions, I also used the Rust API documentation:
https://docs.wasmtime.dev/api/wasmtime/struct.Store.html#method.get_fuel
https://docs.wasmtime.dev/api/wasmtime/struct.Store.html#method.set_fuel
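For reference, a minimal sketch of how those C API calls fit together (assuming a Wasmtime release that ships the newer get_fuel/set_fuel functions rather than the older add_fuel/fuel_consumed ones; error handling trimmed for brevity):

```c
#include <stdio.h>
#include <wasm.h>
#include <wasmtime.h>

int main(void) {
    wasm_config_t *config = wasm_config_new();
    wasmtime_config_consume_fuel_set(config, true);    /* fuel metering must be enabled on the engine */

    wasm_engine_t      *engine  = wasm_engine_new_with_config(config);
    wasmtime_store_t   *store   = wasmtime_store_new(engine, NULL, NULL);
    wasmtime_context_t *context = wasmtime_store_context(store);

    /* give the store a fuel budget before running any code */
    wasmtime_error_t *err = wasmtime_context_set_fuel(context, 1000000);
    if (err != NULL) { wasmtime_error_delete(err); return 1; }

    /* ... instantiate a module and call into it here ... */

    /* read back the remaining fuel; consumed = budget - remaining */
    uint64_t remaining = 0;
    err = wasmtime_context_get_fuel(context, &remaining);
    if (err == NULL) {
        printf("fuel remaining: %llu\n", (unsigned long long) remaining);
    } else {
        wasmtime_error_delete(err);
    }

    wasmtime_store_delete(store);
    wasm_engine_delete(engine);
    return 0;
}
```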

@dev-null-undefined
Author

Do you have a branch somewhere where we could see your modifications to the other nginx integration?

The project is not public, but I will try next week to see if there is a way to give you at least the git diff of the commit in which we added the fuel limits.

@thibaultcha
Member

Thank you. I also have other concerns with the fuel concept in the context of a reverse proxy like ours, namely: what behavior to expect when the instance runs out of fuel. In our case with Kong Gateway, I do not picture a use-case that would call for it at the moment, since any request being processed needs to run through our "plugins" (or "filters" for proxy-wasm extensions). A request that needs processing but does not receive any due to lack of fuel will most likely become an invalid request that the upstream (behind Kong Gateway) cannot process at all. Other cases, such as security or consumption use-cases (e.g. a rate-limiting filter), also need to be executed 100% of the time. In other words, when an instance runs out of fuel and traps, does this behavior have its place in a reverse proxy?

So far we have not heard of a use-case from users or customers, so we have not invested time in the fuel interface. Do you have a specific use-case in mind?

@dev-null-undefined
Author

Would you mind me asking why you are using Wasm in the first place? For us, it is used to run "unchecked" code without having to worry about it crashing or infinite-looping, so something like a fuel limit is a must-have: without it, the code could easily get itself stuck and stop the worker from responding to anything else.

For me it makes sense to limit each Wasmtime phase call to a certain amount of fuel, and it can also be useful for debugging performance issues, IMHO.

If the plugin uses up all of the fuel given to it, I would suggest returning an internal server error, just as if a memory allocation had failed or any other nginx-related issue had occurred.
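Something along these lines, purely as a sketch: assuming the Wasmtime C API's trap-code enum exposes an out-of-fuel code (trap.h), the trap returned by the call could be mapped to a 500 like any other internal failure. The function name here is illustrative only:

```c
#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_http.h>
#include <wasm.h>
#include <wasmtime.h>

static ngx_int_t
example_map_trap(wasm_trap_t *trap)
{
    wasmtime_trap_code_t  code;

    if (trap == NULL) {
        return NGX_OK;
    }

    if (wasmtime_trap_code(trap, &code)
        && code == WASMTIME_TRAP_CODE_OUT_OF_FUEL)
    {
        /* the filter exhausted its fuel budget: treat it like an
         * allocation failure or any other internal nginx error */
        wasm_trap_delete(trap);
        return NGX_HTTP_INTERNAL_SERVER_ERROR;
    }

    wasm_trap_delete(trap);
    return NGX_ERROR;   /* some other trap */
}
```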

@thibaultcha
Member

thibaultcha commented Dec 11, 2024

Fair; we use Wasm as a way to extend Kong Gateway (https://github.com/Kong/kong) with non-Lua code (i.e. proxy-wasm-sdk filters). In our case, the code that we run inside of Kong Gateway must imperatively run 100% of the time: there is no instance in which a user extends the Gateway to add a proxy feature and expects this feature not to be executed while proxying a request. Every added feature (aka Lua plugin/Wasm filter) must be executed for a request to be successfully proxied.
There could someday be a need for "optional filters" which add some functionality (e.g. observability, perhaps) but can be skipped when fuel is low, but most likely our users want all filters to be executed all of the time, even observability ones.

That said, in our SaaS offering, when we someday allow users to upload their own filters, I could see there being a need for fuel: as you said, the code is unchecked and we do not want to clog a SaaS node because of a user's erroneous code, so in that context it is a good feature to have. For now we do not offer Wasm filters in our SaaS offering, but someday perhaps.

This will probably boil down to how large the feature is to implement, which I am curious to gauge based on your upcoming branch. I think there is room for it if the size and design are reasonable and non-intrusive on the code side.

On the user side, there needs to be a way to inject fuel into instances, which probably means directives and configuration options. This part can sometimes be as big as the PR itself, but without it the feature is unusable and even untestable. So there should probably be a directive for enabling fuel, one for setting per-request fuel, maybe per-connection, etc. This would need to be thought out before implementing the feature.
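To make that configuration surface concrete, a purely hypothetical example of what such directives might look like; none of the fuel directives below exist in the module today, and the surrounding block structure only follows the module's existing configuration style as I understand it:

```nginx
# hypothetical sketch only; these fuel directives do not exist in the module

wasm {
    module  my_filter  /path/to/filter.wasm;

    wasmtime_fuel  on;                   # hypothetical: enable fuel metering on the engine
}

http {
    server {
        location / {
            proxy_wasm               my_filter;
            proxy_wasm_request_fuel  1000000;   # hypothetical: per-request fuel budget
        }
    }
}
```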
