diff --git a/docs/develop/deploy/cri-runtime/containerd.md b/docs/develop/deploy/cri-runtime/containerd.md
index 002cb7985..5019e2d47 100644
--- a/docs/develop/deploy/cri-runtime/containerd.md
+++ b/docs/develop/deploy/cri-runtime/containerd.md
@@ -2,7 +2,7 @@
 sidebar_position: 1
 ---
 
-# 8.6.1 Deploy with containerd's runwasi
+# Deploy with containerd's runwasi
 
 :::info
diff --git a/docs/embed/c++/intro.md b/docs/embed/c++/intro.md
index 8c78bf19e..3904b6a7a 100644
--- a/docs/embed/c++/intro.md
+++ b/docs/embed/c++/intro.md
@@ -4,7 +4,95 @@
 sidebar_position: 1
 ---
 
 # WasmEdge C++ SDK Introduction
-
-:::info
-Work in Progress
-:::
+The WasmEdge C++ SDK is a collection of headers and libraries that allow you to build and deploy WebAssembly (Wasm) modules that run on the WasmEdge runtime. It includes a CMake project and a set of command-line tools that you can use to build and deploy your Wasm modules.
+
+## Quick Start Guide
+
+To get started with WasmEdge, follow these steps:
+
+Install the WasmEdge C/C++ SDK: Download the C++ SDK from the WasmEdge [website](https://wasmedge.org/docs/embed/quick-start/install) and follow the instructions to install it on your development machine.
+
+```cpp
+#include <wasmedge/wasmedge.h>
+#include <iostream>
+
+int main(int argc, char** argv) {
+  /* Create the configure context and add the WASI support. */
+  /* This step is not necessary unless you need WASI support. */
+  WasmEdge_ConfigureContext* conf_cxt = WasmEdge_ConfigureCreate();
+  WasmEdge_ConfigureAddHostRegistration(conf_cxt, WasmEdge_HostRegistration_Wasi);
+  /* The configure and store context to the VM creation can be NULL. */
+  WasmEdge_VMContext* vm_cxt = WasmEdge_VMCreate(conf_cxt, nullptr);
+
+  /* The parameters and returns arrays. */
+  WasmEdge_Value params[1] = { WasmEdge_ValueGenI32(40) };
+  WasmEdge_Value returns[1];
+  /* Function name. */
+  WasmEdge_String func_name = WasmEdge_StringCreateByCString("fib");
+  /* Run the WASM function from file. */
+  WasmEdge_Result res = WasmEdge_VMRunWasmFromFile(vm_cxt, argv[1], func_name, params, 1, returns, 1);
+
+  if (WasmEdge_ResultOK(res)) {
+    std::cout << "Get result: " << WasmEdge_ValueGetI32(returns[0]) << std::endl;
+  } else {
+    std::cout << "Error message: " << WasmEdge_ResultGetMessage(res) << std::endl;
+  }
+
+  /* Resources deallocations. */
+  WasmEdge_VMDelete(vm_cxt);
+  WasmEdge_ConfigureDelete(conf_cxt);
+  WasmEdge_StringDelete(func_name);
+  return 0;
+}
+```
+
+You can use the `-I` flag to specify the include directories and the `-L` and `-l` flags to specify the library directories and library names, respectively. Because the example uses the C++ standard library, compile it with `g++` (or link `libstdc++` explicitly). Then you can compile the code and run it (the 40th Fibonacci number is 102334155):
+
+```bash
+g++ example.cpp -I/path/to/wasmedge/include -L/path/to/wasmedge/lib -lwasmedge -o example
+```
+
+To run the `example` executable that was created in the previous step, pass it the path of a Wasm module that exports a `fib` function:
+
+```bash
+./example fib.wasm
+```
+
+## Quick Start Guide with the AOT Compiler
+
+The WasmEdge C API also exposes the AOT (ahead-of-time) compiler. The example below first compiles the Wasm file passed as `argv[1]` into a native artifact, and then runs the exported `fib` function from the compiled file at near-native speed.
+
+```cpp
+#include <wasmedge/wasmedge.h>
+#include <cstdio>
+
+int main(int argc, const char* argv[]) {
+  // Create the configure context and add the WASI support.
+  // This step is not necessary unless you need WASI support.
+  WasmEdge_ConfigureContext* conf_cxt = WasmEdge_ConfigureCreate();
+  WasmEdge_ConfigureAddHostRegistration(conf_cxt, WasmEdge_HostRegistration_Wasi);
+
+  // Compile the WASM file into a native shared library ahead of time.
+  // Use a platform-appropriate extension for the output file.
+  WasmEdge_CompilerContext* compiler_cxt = WasmEdge_CompilerCreate(conf_cxt);
+  WasmEdge_Result res = WasmEdge_CompilerCompile(compiler_cxt, argv[1], "fib_aot.so");
+  if (!WasmEdge_ResultOK(res)) {
+    printf("Compilation failed: %s\n", WasmEdge_ResultGetMessage(res));
+    return 1;
+  }
+
+  // Create the VM context.
+  WasmEdge_VMContext* vm_cxt = WasmEdge_VMCreate(conf_cxt, NULL);
+
+  // The parameters and returns arrays.
+  WasmEdge_Value params[1] = { WasmEdge_ValueGenI32(32) };
+  WasmEdge_Value returns[1];
+  // Function name.
+  WasmEdge_String func_name = WasmEdge_StringCreateByCString("fib");
+  // Run the WASM function from the AOT-compiled file.
+  res = WasmEdge_VMRunWasmFromFile(vm_cxt, "fib_aot.so", func_name, params, 1, returns, 1);
+
+  if (WasmEdge_ResultOK(res)) {
+    printf("Get result: %d\n", WasmEdge_ValueGetI32(returns[0]));
+  } else {
+    printf("Error message: %s\n", WasmEdge_ResultGetMessage(res));
+  }
+
+  // Resources deallocations.
+  WasmEdge_VMDelete(vm_cxt);
+  WasmEdge_CompilerDelete(compiler_cxt);
+  WasmEdge_ConfigureDelete(conf_cxt);
+  WasmEdge_StringDelete(func_name);
+  return 0;
+}
+```
+
+In this example, the WasmEdge_CompilerCompile function compiles the Wasm file into a native shared library ahead of time, and the WasmEdge_VMRunWasmFromFile call then executes the compiled artifact in AOT mode. You can also pre-compile modules once with the `wasmedge compile` CLI tool instead of invoking the compiler API at runtime.
diff --git a/docs/start/usage/_category_.json b/docs/start/usage/_category_.json
new file mode 100644
index 000000000..6fd885429
--- /dev/null
+++ b/docs/start/usage/_category_.json
@@ -0,0 +1,8 @@
+{
+  "label": "WasmEdge Use-cases",
+  "position": 5,
+  "link": {
+    "type": "generated-index",
+    "description": "In this chapter, we will discuss use-cases of WasmEdge"
+  }
+}
diff --git a/docs/start/usage/serverless/_category_.json b/docs/start/usage/serverless/_category_.json
new file mode 100644
index 000000000..075ab1a18
--- /dev/null
+++ b/docs/start/usage/serverless/_category_.json
@@ -0,0 +1,8 @@
+{
+  "label": "Serverless Platforms",
+  "position": 9,
+  "link": {
+    "type": "generated-index",
+    "description": "Run WebAssembly as an alternative lightweight runtime side-by-side with Docker and microVMs in cloud native infrastructure"
+  }
+}
diff --git a/docs/start/usage/serverless/aws.md b/docs/start/usage/serverless/aws.md
new file mode 100644
index 000000000..c23c56105
--- /dev/null
+++ b/docs/start/usage/serverless/aws.md
@@ -0,0 +1,272 @@
+---
+sidebar_position: 1
+---
+
+# WebAssembly Serverless Functions in AWS Lambda
+
+In this article, we will show you two serverless functions in Rust and WasmEdge deployed on AWS Lambda. One is an image-processing function; the other is a TensorFlow inference function.
+
+> For insights into why we run WasmEdge on AWS Lambda, please refer to the article [WebAssembly Serverless Functions in AWS Lambda](https://www.secondstate.io/articles/webassembly-serverless-functions-in-aws-lambda/).
+
+## Prerequisites
+
+Since our demo WebAssembly functions are written in Rust, you will need a [Rust compiler](https://www.rust-lang.org/tools/install). Make sure that you install the `wasm32-wasi` compiler target as follows, in order to generate WebAssembly bytecode.
+
+```bash
+rustup target add wasm32-wasi
+```
+
+The demo application front end is written in [Next.js](https://nextjs.org/), and deployed on AWS Lambda. We will assume that you already have the basic knowledge of how to work with Next.js and Lambda.
+
+## Example 1: Image processing
+
+Our first demo application allows users to upload an image and then invoke a serverless function to turn it into black and white. A [live demo](https://second-state.github.io/aws-lambda-wasm-runtime/) deployed through GitHub Pages is available.
+
+Fork the [demo application’s GitHub repo](https://github.com/second-state/aws-lambda-wasm-runtime) to get started. To deploy the application on AWS Lambda, follow the guide in the repository [README](https://github.com/second-state/aws-lambda-wasm-runtime/blob/tensorflow/README.md).
+
+### Create the function
+
+This repo is a standard Next.js application.
+The backend serverless function is in the `api/functions/image-grayscale` folder. The `src/main.rs` file contains the Rust program’s source code. The Rust program reads image data from `STDIN`, and then outputs the black-and-white image to `STDOUT`.
+
+```rust
+use hex;
+use std::io::{self, Read, Write};
+use image::{ImageOutputFormat, ImageFormat};
+
+fn main() {
+    let mut buf = Vec::new();
+    io::stdin().read_to_end(&mut buf).unwrap();
+
+    let image_format_detected: ImageFormat = image::guess_format(&buf).unwrap();
+    let img = image::load_from_memory(&buf).unwrap();
+    let filtered = img.grayscale();
+    let mut buf = vec![];
+    match image_format_detected {
+        ImageFormat::Gif => {
+            filtered.write_to(&mut buf, ImageOutputFormat::Gif).unwrap();
+        },
+        _ => {
+            filtered.write_to(&mut buf, ImageOutputFormat::Png).unwrap();
+        },
+    };
+    io::stdout().write_all(&buf).unwrap();
+    io::stdout().flush().unwrap();
+}
+```
+
+You can use Rust’s `cargo` tool to build the Rust program into WebAssembly bytecode or native code.
+
+```bash
+cd api/functions/image-grayscale/
+cargo build --release --target wasm32-wasi
+```
+
+Copy the build artifacts to the `api` folder.
+
+```bash
+cp target/wasm32-wasi/release/grayscale.wasm ../../
+```
+
+> When we build the Docker image, `api/pre.sh` is executed. `pre.sh` installs the WasmEdge runtime, and then compiles each WebAssembly bytecode program into a native `so` library for faster execution.
+
+### Create the service script to load the function
+
+The [`api/hello.js`](https://github.com/second-state/aws-lambda-wasm-runtime/blob/main/api/hello.js) script loads the WasmEdge runtime, starts the compiled WebAssembly program in WasmEdge, and passes the uploaded image data via `STDIN`. Notice that [`api/hello.js`](https://github.com/second-state/aws-lambda-wasm-runtime/blob/main/api/hello.js) runs the compiled `grayscale.so` file generated by [`api/pre.sh`](https://github.com/second-state/aws-lambda-wasm-runtime/blob/main/api/pre.sh) for better performance.
+
+```javascript
+const { spawn } = require('child_process');
+const path = require('path');
+
+function _runWasm(reqBody) {
+  return new Promise((resolve) => {
+    const wasmedge = spawn(path.join(__dirname, 'wasmedge'), [
+      path.join(__dirname, 'grayscale.so'),
+    ]);
+
+    let d = [];
+    wasmedge.stdout.on('data', (data) => {
+      d.push(data);
+    });
+
+    wasmedge.on('close', (code) => {
+      let buf = Buffer.concat(d);
+      resolve(buf);
+    });
+
+    wasmedge.stdin.write(reqBody);
+    wasmedge.stdin.end('');
+  });
+}
+```
+
+The `exports.handler` part of `hello.js` exports an async handler function, which handles the event every time the serverless function is called. In this example, we simply process the image by calling the function above and return the result, but more complicated event-handling behavior may be defined based on your needs. We also need to return some `Access-Control-Allow` headers to avoid [Cross-Origin Resource Sharing (CORS)](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) errors when calling the serverless function from a browser. You can read more about CORS errors [here](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS/Errors) if you encounter them when replicating our example.
+
+```javascript
+exports.handler = async function (event, context) {
+  var typedArray = new Uint8Array(
+    event.body.match(/[\da-f]{2}/gi).map(function (h) {
+      return parseInt(h, 16);
+    }),
+  );
+  let buf = await _runWasm(typedArray);
+  return {
+    statusCode: 200,
+    headers: {
+      'Access-Control-Allow-Headers':
+        'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token',
+      'Access-Control-Allow-Origin': '*',
+      'Access-Control-Allow-Methods':
+        'DELETE, GET, HEAD, OPTIONS, PATCH, POST, PUT',
+    },
+    body: buf.toString('hex'),
+  };
+};
+```
+
+### Build the Docker image for Lambda deployment
+
+Now we have the WebAssembly bytecode function and the script to load it and connect it to the web request. In order to deploy them as a function service on AWS Lambda, you still need to package the whole thing into a Docker image.
+
+We are not going to cover in detail how to build the Docker image and deploy it on AWS Lambda, as there are detailed steps in the [Deploy section of the repository README](https://github.com/second-state/aws-lambda-wasm-runtime/blob/tensorflow/README.md#deploy). However, we will highlight some lines in the [`Dockerfile`](https://github.com/second-state/aws-lambda-wasm-runtime/blob/tensorflow/api/Dockerfile) to help you avoid some pitfalls.
+
+```dockerfile
+FROM public.ecr.aws/lambda/nodejs:14
+
+# Change directory to /var/task
+WORKDIR /var/task
+
+RUN yum update -y && yum install -y curl tar gzip
+
+# Bundle and pre-compile the wasm files
+COPY *.wasm ./
+COPY pre.sh ./
+RUN chmod +x pre.sh
+RUN ./pre.sh
+
+# Bundle the JS files
+COPY *.js ./
+
+CMD [ "hello.handler" ]
+```
+
+First, we are building the image from [AWS Lambda's Node.js base image](https://hub.docker.com/r/amazon/aws-lambda-nodejs). The advantage of using AWS Lambda's base image is that it includes the [Lambda Runtime Interface Client (RIC)](https://github.com/aws/aws-lambda-nodejs-runtime-interface-client), which our Docker image needs to include because it is required by AWS Lambda. Amazon Linux uses `yum` as the package manager.
+
+> These base images contain the Amazon Linux Base operating system, the runtime for a given language, dependencies and the Lambda Runtime Interface Client (RIC), which implements the Lambda [Runtime API](https://docs.aws.amazon.com/lambda/latest/dg/runtimes-api.html). The Lambda Runtime Interface Client allows your runtime to receive requests from and send requests to the Lambda service.
+
+Second, we need to put our function and all its dependencies in the `/var/task` directory. Files in other folders will not be executed by AWS Lambda.
+
+Third, we need to define the default command when we start our container. `CMD [ "hello.handler" ]` means that we will call the `handler` function in `hello.js` whenever our serverless function is called. Recall that we have defined and exported the handler function in the previous steps through `exports.handler = ...` in `hello.js`.
+
+### Optional: test the Docker image locally
+
+Docker images built from AWS Lambda's base images can be tested locally following [this guide](https://docs.aws.amazon.com/lambda/latest/dg/images-test.html). Local testing requires the [AWS Lambda Runtime Interface Emulator (RIE)](https://github.com/aws/aws-lambda-runtime-interface-emulator), which is already installed in all of AWS Lambda's base images.
+To test your image, first start the Docker container by running:
+
+```bash
+docker run -p 9000:8080 myfunction:latest
+```
+
+This command sets up a function endpoint on your local machine at `http://localhost:9000/2015-03-31/functions/function/invocations`.
+
+Then, from a separate terminal window, run:
+
+```bash
+curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
+```
+
+And you should get your expected output in the terminal.
+
+If you don't want to use a base image from AWS Lambda, you can also use your own base image and install RIC and/or RIE while building your Docker image. Just follow the **Create an image from an alternative base image** section from [this guide](https://docs.aws.amazon.com/lambda/latest/dg/images-create.html).
+
+That's it! After building your Docker image, you can deploy it to AWS Lambda following the steps outlined in the repository [README](https://github.com/second-state/aws-lambda-wasm-runtime/blob/tensorflow/README.md#deploy). Now your serverless function is ready to rock!
+
+## Example 2: AI inference
+
+The [second demo](https://github.com/second-state/aws-lambda-wasm-runtime/tree/tensorflow) application allows users to upload an image and then invoke a serverless function to classify the main subject in the image.
+
+It is in [the same GitHub repo](https://github.com/second-state/aws-lambda-wasm-runtime/tree/tensorflow) as the previous example but in the `tensorflow` branch. The backend serverless function for image classification is in the `api/functions/image-classification` folder in the `tensorflow` branch. The `src/main.rs` file contains the Rust program’s source code. The Rust program reads image data from `STDIN`, and then writes the text output to `STDOUT`. It utilizes the WasmEdge Tensorflow API to run the AI inference.
+
+```rust
+use std::io::{self, Read};
+
+pub fn main() {
+    // Step 1: Load the TFLite model
+    let model_data: &[u8] = include_bytes!("models/mobilenet_v1_1.0_224/mobilenet_v1_1.0_224_quant.tflite");
+    let labels = include_str!("models/mobilenet_v1_1.0_224/labels_mobilenet_quant_v1_224.txt");
+
+    // Step 2: Read image from STDIN
+    let mut buf = Vec::new();
+    io::stdin().read_to_end(&mut buf).unwrap();
+
+    // Step 3: Resize the input image for the tensorflow model
+    let flat_img = wasmedge_tensorflow_interface::load_jpg_image_to_rgb8(&buf, 224, 224);
+
+    // Step 4: AI inference
+    let mut session = wasmedge_tensorflow_interface::Session::new(&model_data, wasmedge_tensorflow_interface::ModelType::TensorFlowLite);
+    session.add_input("input", &flat_img, &[1, 224, 224, 3])
+           .run();
+    let res_vec: Vec<u8> = session.get_output("MobilenetV1/Predictions/Reshape_1");
+
+    // Step 5: Find the food label that corresponds to the highest probability in res_vec
+    // (the elided code computes max_index, max_value, and the confidence wording)
+    // ... ...
+    let mut label_lines = labels.lines();
+    for _i in 0..max_index {
+        label_lines.next();
+    }
+
+    // Step 6: Generate the output text
+    let class_name = label_lines.next().unwrap().to_string();
+    if max_value > 50 {
+        println!("It {} a {} in the picture", confidence.to_string(), class_name);
+    } else {
+        println!("It does not appear to be any food item in the picture.");
+    }
+}
+```
+
+You can use the `cargo` tool to build the Rust program into WebAssembly bytecode or native code.
+
+```bash
+cd api/functions/image-classification/
+cargo build --release --target wasm32-wasi
+```
+
+Copy the build artifacts to the `api` folder.
+ +```bash +cp target/wasm32-wasi/release/classify.wasm ../../ +``` + +Again, the `api/pre.sh` script installs WasmEdge runtime and its Tensorflow dependencies in this application. It also compiles the `classify.wasm` bytecode program to the `classify.so` native shared library at the time of deployment. + +The [`api/hello.js`](https://github.com/second-state/aws-lambda-wasm-runtime/blob/tensorflow/api/hello.js) script loads the WasmEdge runtime, starts the compiled WebAssembly program in WasmEdge, and passes the uploaded image data via `STDIN`. Notice [`api/hello.js`](https://github.com/second-state/aws-lambda-wasm-runtime/blob/tensorflow/api/hello.js) runs the compiled `classify.so` file generated by [`api/pre.sh`](https://github.com/second-state/aws-lambda-wasm-runtime/blob/tensorflow/api/pre.sh) for better performance. The handler function is similar to our previous example, and is omitted here. + +```javascript +const { spawn } = require('child_process'); +const path = require('path'); + +function _runWasm(reqBody) { + return new Promise(resolve => { + const wasmedge = spawn( + path.join(__dirname, 'wasmedge-tensorflow-lite'), + [path.join(__dirname, 'classify.so')], + {env: {'LD_LIBRARY_PATH': __dirname}} + ); + + let d = []; + wasmedge.stdout.on('data', (data) => { + d.push(data); + }); + + wasmedge.on('close', (code) => { + resolve(d.join('')); + }); + + wasmedge.stdin.write(reqBody); + wasmedge.stdin.end(''); + }); +} + +exports.handler = ... // _runWasm(reqBody) is called in the handler +``` + +You can build your Docker image and deploy the function in the same way as outlined in the previous example. Now you have created a web app for subject classification! + +Next, it's your turn to use the [aws-lambda-wasm-runtime repo](https://github.com/second-state/aws-lambda-wasm-runtime/tree/main) as a template to develop Rust serverless function on AWS Lambda. Looking forward to your great work. diff --git a/docs/start/usage/serverless/netlify.md b/docs/start/usage/serverless/netlify.md new file mode 100644 index 000000000..0f4b82db2 --- /dev/null +++ b/docs/start/usage/serverless/netlify.md @@ -0,0 +1,189 @@ +--- +sidebar_position: 2 +--- + +# WebAssembly Serverless Functions in Netlify + +In this article we will show you two serverless functions in Rust and WasmEdge deployed on Netlify. One is the image processing function, the other one is the TensorFlow inference function. + +> For more insights on why WasmEdge on Netlify, please refer to the article [WebAssembly Serverless Functions in Netlify](https://www.secondstate.io/articles/netlify-wasmedge-webassembly-rust-serverless/). + +## Prerequisite + +Since our demo WebAssembly functions are written in Rust, you will need a [Rust compiler](https://www.rust-lang.org/tools/install). Make sure that you install the `wasm32-wasi` compiler target as follows, in order to generate WebAssembly bytecode. + +```bash +rustup target add wasm32-wasi +``` + +The demo application front end is written in [Next.js](https://nextjs.org/), and deployed on Netlify. We will assume that you already have the basic knowledge of how to work with Next.js and Netlify. + +## Example 1: Image processing + +Our first demo application allows users to upload an image and then invoke a serverless function to turn it into black and white. A [live demo](https://60fe22f9ff623f0007656040--reverent-hodgkin-dc1f51.netlify.app/) deployed on Netlify is available. + +Fork the [demo application’s GitHub repo](https://github.com/second-state/netlify-wasm-runtime) to get started. 
To deploy the application on Netlify, just [add your GitHub repo to Netlify](https://www.netlify.com/blog/2016/09/29/a-step-by-step-guide-deploying-on-netlify/).
+
+This repo is a standard Next.js application for the Netlify platform. The backend serverless function is in the [`api/functions/image-grayscale`](https://github.com/second-state/netlify-wasm-runtime/tree/main/api/functions/image-grayscale) folder. The [`src/main.rs`](https://github.com/second-state/netlify-wasm-runtime/blob/main/api/functions/image-grayscale/src/main.rs) file contains the Rust program’s source code. The Rust program reads image data from `STDIN`, and then outputs the black-and-white image to `STDOUT`.
+
+```rust
+use hex;
+use std::io::{self, Read, Write};
+use image::{ImageOutputFormat, ImageFormat};
+
+fn main() {
+    let mut buf = Vec::new();
+    io::stdin().read_to_end(&mut buf).unwrap();
+
+    let image_format_detected: ImageFormat = image::guess_format(&buf).unwrap();
+    let img = image::load_from_memory(&buf).unwrap();
+    let filtered = img.grayscale();
+    let mut buf = vec![];
+    match image_format_detected {
+        ImageFormat::Gif => {
+            filtered.write_to(&mut buf, ImageOutputFormat::Gif).unwrap();
+        },
+        _ => {
+            filtered.write_to(&mut buf, ImageOutputFormat::Png).unwrap();
+        },
+    };
+    io::stdout().write_all(&buf).unwrap();
+    io::stdout().flush().unwrap();
+}
+```
+
+You can use Rust’s `cargo` tool to build the Rust program into WebAssembly bytecode or native code.
+
+```bash
+cd api/functions/image-grayscale/
+cargo build --release --target wasm32-wasi
+```
+
+Copy the build artifacts to the `api` folder.
+
+```bash
+cp target/wasm32-wasi/release/grayscale.wasm ../../
+```
+
+> The Netlify function runs [`api/pre.sh`](https://github.com/second-state/netlify-wasm-runtime/blob/main/api/pre.sh) upon setting up the serverless environment. It installs the WasmEdge runtime, and then compiles each WebAssembly bytecode program into a native `so` library for faster execution.
+
+The [`api/hello.js`](https://github.com/second-state/netlify-wasm-runtime/blob/main/api/hello.js) script loads the WasmEdge runtime, starts the compiled WebAssembly program in WasmEdge, and passes the uploaded image data via `STDIN`. Notice that [`api/hello.js`](https://github.com/second-state/netlify-wasm-runtime/blob/main/api/hello.js) runs the compiled `grayscale.so` file generated by [`api/pre.sh`](https://github.com/second-state/netlify-wasm-runtime/blob/main/api/pre.sh) for better performance.
+
+```javascript
+const fs = require('fs');
+const { spawn } = require('child_process');
+const path = require('path');
+
+module.exports = (req, res) => {
+  const wasmedge = spawn(path.join(__dirname, 'wasmedge'), [
+    path.join(__dirname, 'grayscale.so'),
+  ]);
+
+  let d = [];
+  wasmedge.stdout.on('data', (data) => {
+    d.push(data);
+  });
+
+  wasmedge.on('close', (code) => {
+    let buf = Buffer.concat(d);
+
+    res.setHeader('Content-Type', req.headers['image-type']);
+    res.send(buf);
+  });
+
+  wasmedge.stdin.write(req.body);
+  wasmedge.stdin.end('');
+};
+```
+
+That's it. [Deploy the repo to Netlify](https://www.netlify.com/blog/2016/09/29/a-step-by-step-guide-deploying-on-netlify/) and you now have a Netlify Jamstack app with a high-performance Rust and WebAssembly based serverless backend.
+
+## Example 2: AI inference
+
+The [second demo](https://60ff7e2d10fe590008db70a9--reverent-hodgkin-dc1f51.netlify.app/) application allows users to upload an image and then invoke a serverless function to classify the main subject in the image.
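+
+Under the hood, the front end simply POSTs the raw image bytes to the function endpoint and renders the text that comes back. Once your own fork is deployed, you can exercise the function from the command line as well. The snippet below is a minimal sketch: the site name is a placeholder, and it assumes the function is exposed under the name `hello` at Netlify's default functions path.
+
+```bash
+# Send a local image to the deployed classification function
+curl -X POST --data-binary '@cat.jpg' \
+  https://your-site-name.netlify.app/.netlify/functions/hello
+```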
+
+It is in [the same GitHub repo](https://github.com/second-state/netlify-wasm-runtime/tree/tensorflow) as the previous example but in the `tensorflow` branch. The backend serverless function for image classification is in the [`api/functions/image-classification`](https://github.com/second-state/netlify-wasm-runtime/tree/tensorflow/api/functions/image-classification) folder in the `tensorflow` branch. The [`src/main.rs`](https://github.com/second-state/netlify-wasm-runtime/blob/tensorflow/api/functions/image-classification/src/main.rs) file contains the Rust program’s source code. The Rust program reads image data from `STDIN`, and then writes the text output to `STDOUT`. It utilizes the WasmEdge Tensorflow API to run the AI inference.
+
+```rust
+use std::io::{self, Read};
+
+pub fn main() {
+    // Step 1: Load the TFLite model
+    let model_data: &[u8] = include_bytes!("models/mobilenet_v1_1.0_224/mobilenet_v1_1.0_224_quant.tflite");
+    let labels = include_str!("models/mobilenet_v1_1.0_224/labels_mobilenet_quant_v1_224.txt");
+
+    // Step 2: Read image from STDIN
+    let mut buf = Vec::new();
+    io::stdin().read_to_end(&mut buf).unwrap();
+
+    // Step 3: Resize the input image for the tensorflow model
+    let flat_img = wasmedge_tensorflow_interface::load_jpg_image_to_rgb8(&buf, 224, 224);
+
+    // Step 4: AI inference
+    let mut session = wasmedge_tensorflow_interface::Session::new(&model_data, wasmedge_tensorflow_interface::ModelType::TensorFlowLite);
+    session.add_input("input", &flat_img, &[1, 224, 224, 3])
+           .run();
+    let res_vec: Vec<u8> = session.get_output("MobilenetV1/Predictions/Reshape_1");
+
+    // Step 5: Find the food label that corresponds to the highest probability in res_vec
+    // (the elided code computes max_index, max_value, and the confidence wording)
+    // ... ...
+    let mut label_lines = labels.lines();
+    for _i in 0..max_index {
+        label_lines.next();
+    }
+
+    // Step 6: Generate the output text
+    let class_name = label_lines.next().unwrap().to_string();
+    if max_value > 50 {
+        println!("It {} a {} in the picture", confidence.to_string(), class_name);
+    } else {
+        println!("It does not appear to be any food item in the picture.");
+    }
+}
+```
+
+You can use the `cargo` tool to build the Rust program into WebAssembly bytecode or native code.
+
+```bash
+cd api/functions/image-classification/
+cargo build --release --target wasm32-wasi
+```
+
+Copy the build artifacts to the `api` folder.
+
+```bash
+cp target/wasm32-wasi/release/classify.wasm ../../
+```
+
+Again, the [`api/pre.sh`](https://github.com/second-state/netlify-wasm-runtime/blob/tensorflow/api/pre.sh) script installs the WasmEdge runtime and its Tensorflow dependencies in this application. It also compiles the `classify.wasm` bytecode program to the `classify.so` native shared library at the time of deployment.
+
+The [`api/hello.js`](https://github.com/second-state/netlify-wasm-runtime/blob/tensorflow/api/hello.js) script loads the WasmEdge runtime, starts the compiled WebAssembly program in WasmEdge, and passes the uploaded image data via `STDIN`. Notice that [`api/hello.js`](https://github.com/second-state/netlify-wasm-runtime/blob/tensorflow/api/hello.js) runs the compiled `classify.so` file generated by [`api/pre.sh`](https://github.com/second-state/netlify-wasm-runtime/blob/tensorflow/api/pre.sh) for better performance.
+
+```javascript
+const fs = require('fs');
+const { spawn } = require('child_process');
+const path = require('path');
+
+module.exports = (req, res) => {
+  const wasmedge = spawn(
+    path.join(__dirname, 'wasmedge-tensorflow-lite'),
+    [path.join(__dirname, 'classify.so')],
+    { env: { LD_LIBRARY_PATH: __dirname } },
+  );
+
+  let d = [];
+  wasmedge.stdout.on('data', (data) => {
+    d.push(data);
+  });
+
+  wasmedge.on('close', (code) => {
+    res.setHeader('Content-Type', `text/plain`);
+    res.send(d.join(''));
+  });
+
+  wasmedge.stdin.write(req.body);
+  wasmedge.stdin.end('');
+};
+```
+
+You can now [deploy your forked repo to Netlify](https://www.netlify.com/blog/2016/09/29/a-step-by-step-guide-deploying-on-netlify/) and have a web app for subject classification.
+
+Next, it's your turn to develop Rust serverless functions in Netlify using the [netlify-wasm-runtime repo](https://github.com/second-state/netlify-wasm-runtime) as a template. Looking forward to your great work.
diff --git a/docs/start/usage/serverless/tencent.md b/docs/start/usage/serverless/tencent.md
new file mode 100644
index 000000000..9937f7149
--- /dev/null
+++ b/docs/start/usage/serverless/tencent.md
@@ -0,0 +1,11 @@
+---
+sidebar_position: 4
+---
+
+# WebAssembly serverless functions on Tencent Cloud
+
+As the main users of Tencent Cloud are from China, the tutorial is [written in Chinese](https://my.oschina.net/u/4532842/blog/5172639).
+
+We also provide a code template for deploying serverless WebAssembly functions on Tencent Cloud; please check out [the tencent-scf-wasm-runtime repo](https://github.com/second-state/tencent-scf-wasm-runtime).
+
+Fork the repo and start writing your own Rust functions.
diff --git a/docs/start/usage/serverless/vercel.md b/docs/start/usage/serverless/vercel.md
new file mode 100644
index 000000000..3ef87bd5c
--- /dev/null
+++ b/docs/start/usage/serverless/vercel.md
@@ -0,0 +1,191 @@
+---
+sidebar_position: 5
+---
+
+# Rust and WebAssembly Serverless functions in Vercel
+
+In this article, we will show you two serverless functions in Rust and WasmEdge deployed on Vercel. One is an image-processing function; the other is a TensorFlow inference function.
+
+> For more insights on why WasmEdge on Vercel, please refer to the article [Rust and WebAssembly Serverless Functions in Vercel](https://www.secondstate.io/articles/vercel-wasmedge-webassembly-rust/).
+
+## Prerequisite
+
+Since our demo WebAssembly functions are written in Rust, you will need a [Rust compiler](https://www.rust-lang.org/tools/install). Make sure that you install the `wasm32-wasi` compiler target as follows, in order to generate WebAssembly bytecode.
+
+```bash
+rustup target add wasm32-wasi
+```
+
+The demo application front end is written in [Next.js](https://nextjs.org/), and deployed on Vercel. We will assume that you already have the basic knowledge of how to work with Vercel.
+
+## Example 1: Image processing
+
+Our first demo application allows users to upload an image and then invoke a serverless function to turn it into black and white. A [live demo](https://vercel-wasm-runtime.vercel.app/) deployed on Vercel is available.
+
+Fork the [demo application’s GitHub repo](https://github.com/second-state/vercel-wasm-runtime) to get started. To deploy the application on Vercel, just [import the GitHub repo](https://vercel.com/docs/git#deploying-a-git-repository) from the [Vercel for Github](https://vercel.com/docs/git/vercel-for-github) web page.
+
+This repo is a standard Next.js application for the Vercel platform.
The backend serverless function is in the [`api/functions/image-grayscale`](https://github.com/second-state/vercel-wasm-runtime/tree/main/api/functions/image-grayscale) folder. The [`src/main.rs`](https://github.com/second-state/vercel-wasm-runtime/blob/main/api/functions/image-grayscale/src/main.rs) file contains the Rust program’s source code. The Rust program reads image data from `STDIN`, and then outputs the black-and-white image to `STDOUT`.
+
+```rust
+use hex;
+use std::io::{self, Read, Write};
+use image::{ImageOutputFormat, ImageFormat};
+
+fn main() {
+    let mut buf = Vec::new();
+    io::stdin().read_to_end(&mut buf).unwrap();
+
+    let image_format_detected: ImageFormat = image::guess_format(&buf).unwrap();
+    let img = image::load_from_memory(&buf).unwrap();
+    let filtered = img.grayscale();
+    let mut buf = vec![];
+    match image_format_detected {
+        ImageFormat::Gif => {
+            filtered.write_to(&mut buf, ImageOutputFormat::Gif).unwrap();
+        },
+        _ => {
+            filtered.write_to(&mut buf, ImageOutputFormat::Png).unwrap();
+        },
+    };
+    io::stdout().write_all(&buf).unwrap();
+    io::stdout().flush().unwrap();
+}
+```
+
+You can use Rust’s `cargo` tool to build the Rust program into WebAssembly bytecode or native code.
+
+```bash
+cd api/functions/image-grayscale/
+cargo build --release --target wasm32-wasi
+```
+
+Copy the build artifacts to the `api` folder.
+
+```bash
+cp target/wasm32-wasi/release/grayscale.wasm ../../
+```
+
+> Vercel runs [`api/pre.sh`](https://github.com/second-state/vercel-wasm-runtime/blob/main/api/pre.sh) upon setting up the serverless environment. It installs the WasmEdge runtime, and then compiles each WebAssembly bytecode program into a native `so` library for faster execution.
+
+The [`api/hello.js`](https://github.com/second-state/vercel-wasm-runtime/blob/main/api/hello.js) file conforms to the Vercel serverless specification. It loads the WasmEdge runtime, starts the compiled WebAssembly program in WasmEdge, and passes the uploaded image data via `STDIN`. Notice that [`api/hello.js`](https://github.com/second-state/vercel-wasm-runtime/blob/main/api/hello.js) runs the compiled `grayscale.so` file generated by [`api/pre.sh`](https://github.com/second-state/vercel-wasm-runtime/blob/main/api/pre.sh) for better performance.
+
+```javascript
+const fs = require('fs');
+const { spawn } = require('child_process');
+const path = require('path');
+
+module.exports = (req, res) => {
+  const wasmedge = spawn(path.join(__dirname, 'wasmedge'), [
+    path.join(__dirname, 'grayscale.so'),
+  ]);
+
+  let d = [];
+  wasmedge.stdout.on('data', (data) => {
+    d.push(data);
+  });
+
+  wasmedge.on('close', (code) => {
+    let buf = Buffer.concat(d);
+
+    res.setHeader('Content-Type', req.headers['image-type']);
+    res.send(buf);
+  });
+
+  wasmedge.stdin.write(req.body);
+  wasmedge.stdin.end('');
+};
+```
+
+That's it. [Deploy the repo to Vercel](https://vercel.com/docs/git#deploying-a-git-repository) and you now have a Vercel Jamstack app with a high-performance Rust and WebAssembly based serverless backend.
+
+## Example 2: AI inference
+
+The [second demo](https://vercel-wasm-runtime.vercel.app/) application allows users to upload an image and then invoke a serverless function to classify the main subject in the image.
+
+It is in [the same GitHub repo](https://github.com/second-state/vercel-wasm-runtime) as the previous example but in the `tensorflow` branch.
Note: when you [import this GitHub repo](https://vercel.com/docs/git#deploying-a-git-repository) on the Vercel website, it will create a [preview URL](https://vercel.com/docs/platform/deployments#preview) for each branch. The `tensorflow` branch would have its own deployment URL.
+
+The backend serverless function for image classification is in the [`api/functions/image-classification`](https://github.com/second-state/vercel-wasm-runtime/tree/tensorflow/api/functions/image-classification) folder in the `tensorflow` branch. The [`src/main.rs`](https://github.com/second-state/vercel-wasm-runtime/blob/tensorflow/api/functions/image-classification/src/main.rs) file contains the Rust program’s source code. The Rust program reads image data from `STDIN`, and then writes the text output to `STDOUT`. It utilizes the WasmEdge Tensorflow API to run the AI inference.
+
+```rust
+use std::io::{self, Read};
+
+pub fn main() {
+    // Step 1: Load the TFLite model
+    let model_data: &[u8] = include_bytes!("models/mobilenet_v1_1.0_224/mobilenet_v1_1.0_224_quant.tflite");
+    let labels = include_str!("models/mobilenet_v1_1.0_224/labels_mobilenet_quant_v1_224.txt");
+
+    // Step 2: Read image from STDIN
+    let mut buf = Vec::new();
+    io::stdin().read_to_end(&mut buf).unwrap();
+
+    // Step 3: Resize the input image for the tensorflow model
+    let flat_img = wasmedge_tensorflow_interface::load_jpg_image_to_rgb8(&buf, 224, 224);
+
+    // Step 4: AI inference
+    let mut session = wasmedge_tensorflow_interface::Session::new(&model_data, wasmedge_tensorflow_interface::ModelType::TensorFlowLite);
+    session.add_input("input", &flat_img, &[1, 224, 224, 3])
+           .run();
+    let res_vec: Vec<u8> = session.get_output("MobilenetV1/Predictions/Reshape_1");
+
+    // Step 5: Find the food label that corresponds to the highest probability in res_vec
+    // (the elided code computes max_index, max_value, and the confidence wording)
+    // ... ...
+    let mut label_lines = labels.lines();
+    for _i in 0..max_index {
+        label_lines.next();
+    }
+
+    // Step 6: Generate the output text
+    let class_name = label_lines.next().unwrap().to_string();
+    if max_value > 50 {
+        println!("It {} a {} in the picture", confidence.to_string(), class_name);
+    } else {
+        println!("It does not appear to be any food item in the picture.");
+    }
+}
+```
+
+You can use the `cargo` tool to build the Rust program into WebAssembly bytecode or native code.
+
+```bash
+cd api/functions/image-classification/
+cargo build --release --target wasm32-wasi
+```
+
+Copy the build artifacts to the `api` folder.
+
+```bash
+cp target/wasm32-wasi/release/classify.wasm ../../
+```
+
+Again, the [`api/pre.sh`](https://github.com/second-state/vercel-wasm-runtime/blob/tensorflow/api/pre.sh) script installs the WasmEdge runtime and its Tensorflow dependencies in this application. It also compiles the `classify.wasm` bytecode program to the `classify.so` native shared library at the time of deployment.
+
+The [`api/hello.js`](https://github.com/second-state/vercel-wasm-runtime/blob/tensorflow/api/hello.js) file conforms to the Vercel serverless specification. It loads the WasmEdge runtime, starts the compiled WebAssembly program in WasmEdge, and passes the uploaded image data via `STDIN`. Notice that [`api/hello.js`](https://github.com/second-state/vercel-wasm-runtime/blob/tensorflow/api/hello.js) runs the compiled `classify.so` file generated by [`api/pre.sh`](https://github.com/second-state/vercel-wasm-runtime/blob/tensorflow/api/pre.sh) for better performance.
+ +```javascript +const fs = require('fs'); +const { spawn } = require('child_process'); +const path = require('path'); + +module.exports = (req, res) => { + const wasmedge = spawn( + path.join(__dirname, 'wasmedge-tensorflow-lite'), + [path.join(__dirname, 'classify.so')], + { env: { LD_LIBRARY_PATH: __dirname } }, + ); + + let d = []; + wasmedge.stdout.on('data', (data) => { + d.push(data); + }); + + wasmedge.on('close', (code) => { + res.setHeader('Content-Type', `text/plain`); + res.send(d.join('')); + }); + + wasmedge.stdin.write(req.body); + wasmedge.stdin.end(''); +}; +``` + +You can now [deploy your forked repo to Vercel](https://vercel.com/docs/git#deploying-a-git-repository) and have a web app for subject classification. + +Next, it's your turn to use [the vercel-wasm-runtime repo](https://github.com/second-state/vercel-wasm-runtime) as a template to develop your own Rust serverless functions in Vercel. Looking forward to your great work. diff --git a/docs/start/usage/use-cases.md b/docs/start/usage/use-cases.md new file mode 100644 index 000000000..9218603d9 --- /dev/null +++ b/docs/start/usage/use-cases.md @@ -0,0 +1,23 @@ +--- +sidebar_position: 1 +--- + +# Use Cases + +Featuring AOT compiler optimization, WasmEdge is one of the fastest WebAssembly runtimes on the market today. Therefore WasmEdge is widely used in edge computing, automotive, Jamstack, serverless, SaaS, service mesh, and even blockchain applications. + +- Modern web apps feature rich UIs that are rendered in the browser and/or on the edge cloud. WasmEdge works with popular web UI frameworks, such as React, Vue, Yew, and Percy, to support isomorphic [server-side rendering (SSR)](../../embed/use-case/ssr-modern-ui.md) functions on edge servers. It could also support server-side rendering of Unity3D animations and AI-generated interactive videos for web applications on the edge cloud. + +- WasmEdge provides a lightweight, secure and high-performance runtime for microservices. It is fully compatible with application service frameworks such as Dapr, and service orchestrators like Kubernetes. WasmEdge microservices can run on edge servers, and have access to distributed cache, to support both stateless and stateful business logic functions for modern web apps. Also related: Serverless function-as-a-service in public clouds. + +- [Serverless SaaS (Software-as-a-Service)](/category/serverless-platforms) functions enables users to extend and customize their SaaS experience without operating their own API callback servers. The serverless functions can be embedded into the SaaS or reside on edge servers next to the SaaS servers. Developers simply upload functions to respond to SaaS events or to connect SaaS APIs. + +- [Smart device apps](./wasm-smart-devices.md) could embed WasmEdge as a middleware runtime to render interactive content on the UI, connect to native device drivers, and access specialized hardware features (i.e, the GPU for AI inference). The benefits of the WasmEdge runtime over native-compiled machine code include security, safety, portability, manageability, and developer productivity. WasmEdge runs on Android, OpenHarmony, and seL4 RTOS devices. + +- WasmEdge could support high performance DSLs (Domain Specific Languages) or act as a cloud-native JavaScript runtime by embedding a JS execution engine or interpreter. 
+
+- Developers can leverage container tools such as [Kubernetes](../../develop/deploy/kubernetes/kubernetes-containerd-crun.md), Docker, and CRI-O to deploy, manage, and run lightweight WebAssembly applications.
+
+- WasmEdge applications can be plugged into existing application frameworks or platforms.
+
+If you have any great ideas for WasmEdge, don't hesitate to open a GitHub issue so we can discuss them together.
\ No newline at end of file
diff --git a/docs/embed/use-case/wasm-smart-devices.md b/docs/start/usage/wasm-smart-devices.md
similarity index 98%
rename from docs/embed/use-case/wasm-smart-devices.md
rename to docs/start/usage/wasm-smart-devices.md
index a69dbfeb1..17cd9ad77 100644
--- a/docs/embed/use-case/wasm-smart-devices.md
+++ b/docs/start/usage/wasm-smart-devices.md
@@ -1,5 +1,5 @@
 ---
-sidebar_position: 3
+sidebar_position: 4
 ---
 
 # WasmEdge On Smart Devices
diff --git a/docs/start/wasmedge/comparison.md b/docs/start/wasmedge/comparison.md
new file mode 100644
index 000000000..20fc78cb4
--- /dev/null
+++ b/docs/start/wasmedge/comparison.md
@@ -0,0 +1,29 @@
+---
+sidebar_position: 5
+---
+
+# Comparison
+
+## What's the relationship between WebAssembly and Docker?
+
+Check out our infographic [WebAssembly vs. Docker](https://wasmedge.org/wasm_docker/). WebAssembly runs side by side with Docker in cloud native and edge native applications.
+
+## What's the difference between Native Client (NaCl), application runtimes, and WebAssembly?
+
+We created a handy table for the comparison.
+
+| | NaCl | Application runtimes (e.g., Node & Python) | Docker-like container | WebAssembly |
+| --- | --- | --- | --- | --- |
+| Performance | Great | Poor | OK | Great |
+| Resource footprint | Great | Poor | Poor | Great |
+| Isolation | Poor | OK | OK | Great |
+| Safety | Poor | OK | OK | Great |
+| Portability | Poor | Great | OK | Great |
+| Security | Poor | OK | OK | Great |
+| Language and framework choice | N/A | N/A | Great | OK |
+| Ease of use | OK | Great | Great | OK |
+| Manageability | Poor | Poor | Great | Great |
+
+## What's the difference between WebAssembly and eBPF?
+
+`eBPF` is the bytecode format for a Linux kernel space VM that is suitable for network- or security-related tasks. WebAssembly is the bytecode format for a user space VM that is suited for business applications. [See details here](https://medium.com/codex/ebpf-and-webassembly-whose-vm-reigns-supreme-c2861ce08f89).
diff --git a/docusaurus.config.js b/docusaurus.config.js
index dfd57902c..8bf570c1d 100644
--- a/docusaurus.config.js
+++ b/docusaurus.config.js
@@ -79,12 +79,24 @@
     },
   ],
 ],
-
+
   themeConfig:
     /** @type {import('@docusaurus/preset-classic').ThemeConfig} */
     ({
-      metadata: [{ name: 'keywords', content: 'wasmedge, wasm, web assembly, rust, cncf, edge devices, cloud, serverless' }, { name: 'twitter:card', content: 'summary' }],
+      metadata: [
+        { name: 'keywords', content: 'wasmedge, wasm, web assembly, rust, cncf, edge devices, cloud, serverless' },
+        { name: 'description', content: 'WasmEdge is a lightweight, high-performance, and extensible WebAssembly runtime for cloud native, edge, and decentralized applications. It powers serverless apps, embedded functions, microservices, smart contracts, and IoT devices.' },
+        { name: 'og:title', content: 'WasmEdge' },
+        { name: 'og:description', content: 'WasmEdge is a lightweight, high-performance, and extensible WebAssembly runtime for cloud native, edge, and decentralized applications. It powers serverless apps, embedded functions, microservices, smart contracts, and IoT devices.' },
+        { name: 'og:url', content: 'https://wasmedge.org/' },
+        { name: 'og:type', content: 'website' },
+        { name: 'twitter:card', content: 'summary' },
+        { name: 'twitter:image', content: 'https://wasmedge.org/img/wasm_logo.png' }, // image URL assumed from the site logo configured below
+        { name: 'twitter:url', content: 'https://wasmedge.org/' },
+        { name: 'twitter:site', content: '@realwasmedge' },
+        { name: 'twitter:title', content: 'WasmEdge' }
+      ],
       image: "./static/img/wasm_logo.png",
       announcementBar: {
         id: "start",
@@ -128,6 +140,7 @@
           href: 'https://github.com/WasmEdge/WasmEdge',
           className: "header-github-link",
           position: 'right',
+          'aria-label': 'WasmEdge GitHub repository'
         },
       ],
     },
@@ -222,4 +235,4 @@
   }
 };
 
-module.exports = extendedConfig;
+module.exports = extendedConfig;
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/develop/deploy/cri-runtime/containerd.md b/i18n/zh/docusaurus-plugin-content-docs/current/develop/deploy/cri-runtime/containerd.md
index 002cb7985..5019e2d47 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/develop/deploy/cri-runtime/containerd.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/develop/deploy/cri-runtime/containerd.md
@@ -2,7 +2,7 @@
 sidebar_position: 1
 ---
 
-# 8.6.1 Deploy with containerd's runwasi
+# Deploy with containerd's runwasi
 
 :::info
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/embed/c++/intro.md b/i18n/zh/docusaurus-plugin-content-docs/current/embed/c++/intro.md
index 8c78bf19e..3904b6a7a 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/embed/c++/intro.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/embed/c++/intro.md
@@ -4,7 +4,95 @@
 sidebar_position: 1
 ---
 
 # WasmEdge C++ SDK Introduction
-
-:::info
-Work in Progress
-:::
+The WasmEdge C++ SDK is a collection of headers and libraries that allow you to build and deploy WebAssembly (Wasm) modules that run on the WasmEdge runtime. It includes a CMake project and a set of command-line tools that you can use to build and deploy your Wasm modules.
+
+## Quick Start Guide
+
+To get started with WasmEdge, follow these steps:
+
+Install the WasmEdge C/C++ SDK: Download the C++ SDK from the WasmEdge [website](https://wasmedge.org/docs/embed/quick-start/install) and follow the instructions to install it on your development machine.
+
+```cpp
+#include <wasmedge/wasmedge.h>
+#include <iostream>
+
+int main(int argc, char** argv) {
+  /* Create the configure context and add the WASI support. */
+  /* This step is not necessary unless you need WASI support. */
+  WasmEdge_ConfigureContext* conf_cxt = WasmEdge_ConfigureCreate();
+  WasmEdge_ConfigureAddHostRegistration(conf_cxt, WasmEdge_HostRegistration_Wasi);
+  /* The configure and store context to the VM creation can be NULL. */
+  WasmEdge_VMContext* vm_cxt = WasmEdge_VMCreate(conf_cxt, nullptr);
+
+  /* The parameters and returns arrays. */
+  WasmEdge_Value params[1] = { WasmEdge_ValueGenI32(40) };
+  WasmEdge_Value returns[1];
+  /* Function name. */
+  WasmEdge_String func_name = WasmEdge_StringCreateByCString("fib");
+  /* Run the WASM function from file. */
+  WasmEdge_Result res = WasmEdge_VMRunWasmFromFile(vm_cxt, argv[1], func_name, params, 1, returns, 1);
+
+  if (WasmEdge_ResultOK(res)) {
+    std::cout << "Get result: " << WasmEdge_ValueGetI32(returns[0]) << std::endl;
+  } else {
+    std::cout << "Error message: " << WasmEdge_ResultGetMessage(res) << std::endl;
+  }
+
+  /* Resources deallocations. */
+  WasmEdge_VMDelete(vm_cxt);
+  WasmEdge_ConfigureDelete(conf_cxt);
+  WasmEdge_StringDelete(func_name);
+  return 0;
+}
+```
+
+You can use the `-I` flag to specify the include directories and the `-L` and `-l` flags to specify the library directories and library names, respectively. Because the example uses the C++ standard library, compile it with `g++` (or link `libstdc++` explicitly). Then you can compile the code and run it (the 40th Fibonacci number is 102334155):
+
+```bash
+g++ example.cpp -I/path/to/wasmedge/include -L/path/to/wasmedge/lib -lwasmedge -o example
+```
+
+To run the `example` executable that was created in the previous step, pass it the path of a Wasm module that exports a `fib` function:
+
+```bash
+./example fib.wasm
+```
+
+## Quick Start Guide with the AOT Compiler
+
+The WasmEdge C API also exposes the AOT (ahead-of-time) compiler. The example below first compiles the Wasm file passed as `argv[1]` into a native artifact, and then runs the exported `fib` function from the compiled file at near-native speed.
+
+```cpp
+#include <wasmedge/wasmedge.h>
+#include <cstdio>
+
+int main(int argc, const char* argv[]) {
+  // Create the configure context and add the WASI support.
+  // This step is not necessary unless you need WASI support.
+  WasmEdge_ConfigureContext* conf_cxt = WasmEdge_ConfigureCreate();
+  WasmEdge_ConfigureAddHostRegistration(conf_cxt, WasmEdge_HostRegistration_Wasi);
+
+  // Compile the WASM file into a native shared library ahead of time.
+  // Use a platform-appropriate extension for the output file.
+  WasmEdge_CompilerContext* compiler_cxt = WasmEdge_CompilerCreate(conf_cxt);
+  WasmEdge_Result res = WasmEdge_CompilerCompile(compiler_cxt, argv[1], "fib_aot.so");
+  if (!WasmEdge_ResultOK(res)) {
+    printf("Compilation failed: %s\n", WasmEdge_ResultGetMessage(res));
+    return 1;
+  }
+
+  // Create the VM context.
+  WasmEdge_VMContext* vm_cxt = WasmEdge_VMCreate(conf_cxt, NULL);
+
+  // The parameters and returns arrays.
+  WasmEdge_Value params[1] = { WasmEdge_ValueGenI32(32) };
+  WasmEdge_Value returns[1];
+  // Function name.
+  WasmEdge_String func_name = WasmEdge_StringCreateByCString("fib");
+  // Run the WASM function from the AOT-compiled file.
+  res = WasmEdge_VMRunWasmFromFile(vm_cxt, "fib_aot.so", func_name, params, 1, returns, 1);
+
+  if (WasmEdge_ResultOK(res)) {
+    printf("Get result: %d\n", WasmEdge_ValueGetI32(returns[0]));
+  } else {
+    printf("Error message: %s\n", WasmEdge_ResultGetMessage(res));
+  }
+
+  // Resources deallocations.
+  WasmEdge_VMDelete(vm_cxt);
+  WasmEdge_CompilerDelete(compiler_cxt);
+  WasmEdge_ConfigureDelete(conf_cxt);
+  WasmEdge_StringDelete(func_name);
+  return 0;
+}
+```
+
+In this example, the WasmEdge_CompilerCompile function compiles the Wasm file into a native shared library ahead of time, and the WasmEdge_VMRunWasmFromFile call then executes the compiled artifact in AOT mode.
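+
+As an alternative to calling the compiler API at startup, you can pre-compile a module once with the WasmEdge CLI and run the resulting artifact directly. The snippet below is a minimal sketch: `fib.wasm` is a placeholder for your own module, and on older WasmEdge releases the compiler is invoked as `wasmedgec` rather than `wasmedge compile`.
+
+```bash
+# Compile the Wasm module ahead of time into a native artifact
+wasmedge compile fib.wasm fib_aot.so
+# Run the exported function from the pre-compiled artifact
+wasmedge --reactor fib_aot.so fib 32
+```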
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/start/usage/_category_.json b/i18n/zh/docusaurus-plugin-content-docs/current/start/usage/_category_.json new file mode 100644 index 000000000..6fd885429 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/current/start/usage/_category_.json @@ -0,0 +1,8 @@ +{ + "label": "WasmEdge Use-cases", + "position": 5, + "link": { + "type": "generated-index", + "description": "In this chapter, we will discuss use-cases of WasmEdge" + } +} diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/start/usage/serverless/_category_.json b/i18n/zh/docusaurus-plugin-content-docs/current/start/usage/serverless/_category_.json new file mode 100644 index 000000000..075ab1a18 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/current/start/usage/serverless/_category_.json @@ -0,0 +1,8 @@ +{ + "label": "Serverless Platforms", + "position": 9, + "link": { + "type": "generated-index", + "description": "Run WebAssembly as an alternative lightweight runtime side-by-side with Docker and microVMs in cloud native infrastructure" + } +} diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/start/usage/serverless/aws.md b/i18n/zh/docusaurus-plugin-content-docs/current/start/usage/serverless/aws.md new file mode 100644 index 000000000..c23c56105 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/current/start/usage/serverless/aws.md @@ -0,0 +1,272 @@ +--- +sidebar_position: 1 +--- + +# WebAssembly Serverless Functions in AWS Lambda + +In this article, we will show you two serverless functions in Rust and WasmEdge deployed on AWS Lambda. One is the image processing function, the other one is the TensorFlow inference function. + +> For the insight on why WasmEdge on AWS Lambda, please refer to the article [WebAssembly Serverless Functions in AWS Lambda](https://www.secondstate.io/articles/webassembly-serverless-functions-in-aws-lambda/) + +## Prerequisites + +Since our demo WebAssembly functions are written in Rust, you will need a [Rust compiler](https://www.rust-lang.org/tools/install). Make sure that you install the `wasm32-wasi` compiler target as follows, in order to generate WebAssembly bytecode. + +```bash +rustup target add wasm32-wasi +``` + +The demo application front end is written in [Next.js](https://nextjs.org/), and deployed on AWS Lambda. We will assume that you already have the basic knowledge of how to work with Next.js and Lambda. + +## Example 1: Image processing + +Our first demo application allows users to upload an image and then invoke a serverless function to turn it into black and white. A [live demo](https://second-state.github.io/aws-lambda-wasm-runtime/) deployed through GitHub Pages is available. + +Fork the [demo application’s GitHub repo](https://github.com/second-state/aws-lambda-wasm-runtime) to get started. To deploy the application on AWS Lambda, follow the guide in the repository [README](https://github.com/second-state/aws-lambda-wasm-runtime/blob/tensorflow/README.md). + +### Create the function + +This repo is a standard Next.js application. The backend serverless function is in the `api/functions/image_grayscale` folder. The `src/main.rs` file contains the Rust program’s source code. The Rust program reads image data from the `STDIN`, and then outputs the black-white image to the `STDOUT`. 
+
+```rust
+use hex;
+use std::io::{self, Read, Write};
+use image::{ImageOutputFormat, ImageFormat};
+
+fn main() {
+    let mut buf = Vec::new();
+    io::stdin().read_to_end(&mut buf).unwrap();
+
+    let image_format_detected: ImageFormat = image::guess_format(&buf).unwrap();
+    let img = image::load_from_memory(&buf).unwrap();
+    let filtered = img.grayscale();
+    let mut buf = vec![];
+    match image_format_detected {
+        ImageFormat::Gif => {
+            filtered.write_to(&mut buf, ImageOutputFormat::Gif).unwrap();
+        },
+        _ => {
+            filtered.write_to(&mut buf, ImageOutputFormat::Png).unwrap();
+        },
+    };
+    io::stdout().write_all(&buf).unwrap();
+    io::stdout().flush().unwrap();
+}
+```
+
+You can use Rust’s `cargo` tool to build the Rust program into WebAssembly bytecode or native code.
+
+```bash
+cd api/functions/image-grayscale/
+cargo build --release --target wasm32-wasi
+```
+
+Copy the build artifacts to the `api` folder.
+
+```bash
+cp target/wasm32-wasi/release/grayscale.wasm ../../
+```
+
+> When we build the Docker image, `api/pre.sh` is executed. `pre.sh` installs the WasmEdge runtime, and then compiles each WebAssembly bytecode program into a native `so` library for faster execution.
+
+### Create the service script to load the function
+
+The [`api/hello.js`](https://github.com/second-state/aws-lambda-wasm-runtime/blob/main/api/hello.js) script loads the WasmEdge runtime, starts the compiled WebAssembly program in WasmEdge, and passes the uploaded image data via `STDIN`. Notice that [`api/hello.js`](https://github.com/second-state/aws-lambda-wasm-runtime/blob/main/api/hello.js) runs the compiled `grayscale.so` file generated by [`api/pre.sh`](https://github.com/second-state/aws-lambda-wasm-runtime/blob/main/api/pre.sh) for better performance.
+
+```javascript
+const { spawn } = require('child_process');
+const path = require('path');
+
+function _runWasm(reqBody) {
+  return new Promise((resolve) => {
+    const wasmedge = spawn(path.join(__dirname, 'wasmedge'), [
+      path.join(__dirname, 'grayscale.so'),
+    ]);
+
+    let d = [];
+    wasmedge.stdout.on('data', (data) => {
+      d.push(data);
+    });
+
+    wasmedge.on('close', (code) => {
+      let buf = Buffer.concat(d);
+      resolve(buf);
+    });
+
+    wasmedge.stdin.write(reqBody);
+    wasmedge.stdin.end('');
+  });
+}
+```
+
+The `exports.handler` part of `hello.js` exports an async handler function, which handles the event every time the serverless function is called. In this example, we simply process the image by calling the function above and return the result, but more complicated event-handling behavior may be defined based on your needs. We also need to return some `Access-Control-Allow` headers to avoid [Cross-Origin Resource Sharing (CORS)](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) errors when calling the serverless function from a browser. You can read more about CORS errors [here](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS/Errors) if you encounter them when replicating our example.
+ +```javascript +exports.handler = async function (event, context) { + var typedArray = new Uint8Array( + event.body.match(/[\da-f]{2}/gi).map(function (h) { + return parseInt(h, 16); + }), + ); + let buf = await _runWasm(typedArray); + return { + statusCode: 200, + headers: { + 'Access-Control-Allow-Headers': + 'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token', + 'Access-Control-Allow-Origin': '*', + 'Access-Control-Allow-Methods': + 'DELETE, GET, HEAD, OPTIONS, PATCH, POST, PUT', + }, + body: buf.toString('hex'), + }; +}; +``` + +### Build the Docker image for Lambda deployment + +Now we have the WebAssembly bytecode function and the script to load and connect to the web request. In order to deploy them as a function service on AWS Lambda, you still need to package the whole thing into a Docker image. + +We are not going to cover in detail about how to build the Docker image and deploy on AWS Lambda, as there are detailed steps in the [Deploy section of the repository README](https://github.com/second-state/aws-lambda-wasm-runtime/blob/tensorflow/README.md#deploy). However, we will highlight some lines in the [`Dockerfile`](https://github.com/second-state/aws-lambda-wasm-runtime/blob/tensorflow/api/Dockerfile) for you to avoid some pitfalls. + +```dockerfile +FROM public.ecr.aws/lambda/nodejs:14 + +# Change directory to /var/task +WORKDIR /var/task + +RUN yum update -y && yum install -y curl tar gzip + +# Bundle and pre-compile the wasm files +COPY *.wasm ./ +COPY pre.sh ./ +RUN chmod +x pre.sh +RUN ./pre.sh + +# Bundle the JS files +COPY *.js ./ + +CMD [ "hello.handler" ] +``` + +First, we are building the image from [AWS Lambda's Node.js base image](https://hub.docker.com/r/amazon/aws-lambda-nodejs). The advantage of using AWS Lambda's base image is that it includes the [Lambda Runtime Interface Client (RIC)](https://github.com/aws/aws-lambda-nodejs-runtime-interface-client), which we need to implement in our Docker image as it is required by AWS Lambda. The Amazon Linux uses `yum` as the package manager. + +> These base images contain the Amazon Linux Base operating system, the runtime for a given language, dependencies and the Lambda Runtime Interface Client (RIC), which implements the Lambda [Runtime API](https://docs.aws.amazon.com/lambda/latest/dg/runtimes-api.html). The Lambda Runtime Interface Client allows your runtime to receive requests from and send requests to the Lambda service. + +Second, we need to put our function and all its dependencies in the `/var/task` directory. Files in other folders will not be executed by AWS Lambda. + +Third, we need to define the default command when we start our container. `CMD [ "hello.handler" ]` means that we will call the `handler` function in `hello.js` whenever our serverless function is called. Recall that we have defined and exported the handler function in the previous steps through `exports.handler = ...` in `hello.js`. + +### Optional: test the Docker image locally + +Docker images built from AWS Lambda's base images can be tested locally following [this guide](https://docs.aws.amazon.com/lambda/latest/dg/images-test.html). Local testing requires [AWS Lambda Runtime Interface Emulator (RIE)](https://github.com/aws/aws-lambda-runtime-interface-emulator), which is already installed in all of AWS Lambda's base images. 
To test your image, first, start the Docker container by running:

```bash
docker run -p 9000:8080 myfunction:latest
```

This command sets up a function endpoint on your local machine at `http://localhost:9000/2015-03-31/functions/function/invocations`.

Then, from a separate terminal window, run:

```bash
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
```

And you should see the expected output in the terminal.

If you don't want to use a base image from AWS Lambda, you can also use your own base image and install the RIC and/or RIE while building your Docker image. Just follow the **Create an image from an alternative base image** section of [this guide](https://docs.aws.amazon.com/lambda/latest/dg/images-create.html).

That's it! After building your Docker image, you can deploy it to AWS Lambda following the steps outlined in the repository [README](https://github.com/second-state/aws-lambda-wasm-runtime/blob/tensorflow/README.md#deploy). Now your serverless function is ready to rock!

## Example 2: AI inference

The [second demo](https://github.com/second-state/aws-lambda-wasm-runtime/tree/tensorflow) application allows users to upload an image and then invoke a serverless function to classify the main subject in the image.

It is in [the same GitHub repo](https://github.com/second-state/aws-lambda-wasm-runtime/tree/tensorflow) as the previous example, but in the `tensorflow` branch. The backend serverless function for image classification is in the `api/functions/image-classification` folder of the `tensorflow` branch. The `src/main.rs` file contains the Rust program’s source code. The Rust program reads image data from `STDIN` and writes its text output to `STDOUT`. It utilizes the WasmEdge TensorFlow API to run the AI inference.

```rust
use std::io::{self, Read};

pub fn main() {
    // Step 1: Load the TFLite model and its text labels
    let model_data: &[u8] = include_bytes!("models/mobilenet_v1_1.0_224/mobilenet_v1_1.0_224_quant.tflite");
    let labels = include_str!("models/mobilenet_v1_1.0_224/labels_mobilenet_quant_v1_224.txt");

    // Step 2: Read image from STDIN
    let mut buf = Vec::new();
    io::stdin().read_to_end(&mut buf).unwrap();

    // Step 3: Resize the input image for the TensorFlow model
    let flat_img = wasmedge_tensorflow_interface::load_jpg_image_to_rgb8(&buf, 224, 224);

    // Step 4: AI inference
    let mut session = wasmedge_tensorflow_interface::Session::new(&model_data, wasmedge_tensorflow_interface::ModelType::TensorFlowLite);
    session.add_input("input", &flat_img, &[1, 224, 224, 3])
           .run();
    let res_vec: Vec<u8> = session.get_output("MobilenetV1/Predictions/Reshape_1");

    // Step 5: Find the food label that corresponds to the highest probability in res_vec
    // (the code that computes max_index, max_value, and confidence is elided here)
    // ... ...
    let mut label_lines = labels.lines();
    for _i in 0..max_index {
        label_lines.next();
    }

    // Step 6: Generate the output text
    let class_name = label_lines.next().unwrap().to_string();
    if max_value > 50 {
        println!("It {} a {} in the picture", confidence, class_name);
    } else {
        println!("It does not appear to be any food item in the picture.");
    }
}
```

You can use the `cargo` tool to build the Rust program into WebAssembly bytecode or native code.

```bash
cd api/functions/image-classification/
cargo build --release --target wasm32-wasi
```

Copy the build artifacts to the `api` folder.

```bash
cp target/wasm32-wasi/release/classify.wasm ../../
```

Again, the `api/pre.sh` script installs the WasmEdge runtime and its TensorFlow dependencies in this application. It also compiles the `classify.wasm` bytecode program to the `classify.so` native shared library at the time of deployment.

The [`api/hello.js`](https://github.com/second-state/aws-lambda-wasm-runtime/blob/tensorflow/api/hello.js) script loads the WasmEdge runtime, starts the compiled WebAssembly program in WasmEdge, and passes the uploaded image data via `STDIN`. Notice that [`api/hello.js`](https://github.com/second-state/aws-lambda-wasm-runtime/blob/tensorflow/api/hello.js) runs the compiled `classify.so` file generated by [`api/pre.sh`](https://github.com/second-state/aws-lambda-wasm-runtime/blob/tensorflow/api/pre.sh) for better performance. The handler function is similar to the one in our previous example and is omitted here.

```javascript
const { spawn } = require('child_process');
const path = require('path');

function _runWasm(reqBody) {
  return new Promise(resolve => {
    // Spawn the WasmEdge TensorFlow runtime with the AOT-compiled function.
    const wasmedge = spawn(
      path.join(__dirname, 'wasmedge-tensorflow-lite'),
      [path.join(__dirname, 'classify.so')],
      {env: {'LD_LIBRARY_PATH': __dirname}}
    );

    let d = [];
    wasmedge.stdout.on('data', (data) => {
      d.push(data);
    });

    wasmedge.on('close', (code) => {
      // The model output is plain text, so the chunks can be joined as a string.
      resolve(d.join(''));
    });

    wasmedge.stdin.write(reqBody);
    wasmedge.stdin.end('');
  });
}

exports.handler = ... // _runWasm(reqBody) is called in the handler
```

You can build your Docker image and deploy the function in the same way as outlined in the previous example. Now you have created a web app for subject classification!

Next, it's your turn to use the [aws-lambda-wasm-runtime repo](https://github.com/second-state/aws-lambda-wasm-runtime/tree/main) as a template to develop your own Rust serverless functions on AWS Lambda. Looking forward to your great work.

diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/start/usage/serverless/netlify.md b/i18n/zh/docusaurus-plugin-content-docs/current/start/usage/serverless/netlify.md
new file mode 100644
index 000000000..0f4b82db2
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/start/usage/serverless/netlify.md
@@ -0,0 +1,189 @@
---
sidebar_position: 2
---

# WebAssembly Serverless Functions in Netlify

In this article, we will show you two serverless functions in Rust and WasmEdge deployed on Netlify. One is an image-processing function; the other is a TensorFlow inference function.

> For more insights on why WasmEdge on Netlify, please refer to the article [WebAssembly Serverless Functions in Netlify](https://www.secondstate.io/articles/netlify-wasmedge-webassembly-rust-serverless/).

## Prerequisites

Since our demo WebAssembly functions are written in Rust, you will need a [Rust compiler](https://www.rust-lang.org/tools/install). Make sure that you install the `wasm32-wasi` compiler target as follows, in order to generate WebAssembly bytecode.

```bash
rustup target add wasm32-wasi
```

The demo application front end is written in [Next.js](https://nextjs.org/) and deployed on Netlify. We assume that you already have basic knowledge of how to work with Next.js and Netlify.

## Example 1: Image processing

Our first demo application allows users to upload an image and then invoke a serverless function to turn it into black and white.
A [live demo](https://60fe22f9ff623f0007656040--reverent-hodgkin-dc1f51.netlify.app/) deployed on Netlify is available.

Fork the [demo application’s GitHub repo](https://github.com/second-state/netlify-wasm-runtime) to get started. To deploy the application on Netlify, just [add your GitHub repo to Netlify](https://www.netlify.com/blog/2016/09/29/a-step-by-step-guide-deploying-on-netlify/).

This repo is a standard Next.js application for the Netlify platform. The backend serverless function is in the [`api/functions/image_grayscale`](https://github.com/second-state/netlify-wasm-runtime/tree/main/api/functions/image-grayscale) folder. The [`src/main.rs`](https://github.com/second-state/netlify-wasm-runtime/blob/main/api/functions/image-grayscale/src/main.rs) file contains the Rust program’s source code. The Rust program reads image data from `STDIN` and then writes the black-and-white image to `STDOUT`.

```rust
use std::io::{self, Read, Write};
use image::{ImageOutputFormat, ImageFormat};

fn main() {
    // Read the raw image bytes from STDIN.
    let mut buf = Vec::new();
    io::stdin().read_to_end(&mut buf).unwrap();

    // Detect the input format and convert the image to grayscale.
    let image_format_detected: ImageFormat = image::guess_format(&buf).unwrap();
    let img = image::load_from_memory(&buf).unwrap();
    let filtered = img.grayscale();

    // Write the result to STDOUT, preserving GIFs and falling back to PNG.
    let mut buf = vec![];
    match image_format_detected {
        ImageFormat::Gif => {
            filtered.write_to(&mut buf, ImageOutputFormat::Gif).unwrap();
        },
        _ => {
            filtered.write_to(&mut buf, ImageOutputFormat::Png).unwrap();
        },
    };
    io::stdout().write_all(&buf).unwrap();
    io::stdout().flush().unwrap();
}
```

You can use Rust’s `cargo` tool to build the Rust program into WebAssembly bytecode or native code.

```bash
cd api/functions/image-grayscale/
cargo build --release --target wasm32-wasi
```

Copy the build artifacts to the `api` folder.

```bash
cp target/wasm32-wasi/release/grayscale.wasm ../../
```

> The Netlify function runs [`api/pre.sh`](https://github.com/second-state/netlify-wasm-runtime/blob/main/api/pre.sh) upon setting up the serverless environment. It installs the WasmEdge runtime, and then compiles each WebAssembly bytecode program into a native `so` library for faster execution.

The [`api/hello.js`](https://github.com/second-state/netlify-wasm-runtime/blob/main/api/hello.js) script loads the WasmEdge runtime, starts the compiled WebAssembly program in WasmEdge, and passes the uploaded image data via `STDIN`. Notice that [`api/hello.js`](https://github.com/second-state/netlify-wasm-runtime/blob/main/api/hello.js) runs the compiled `grayscale.so` file generated by [`api/pre.sh`](https://github.com/second-state/netlify-wasm-runtime/blob/main/api/pre.sh) for better performance.

```javascript
const { spawn } = require('child_process');
const path = require('path');

module.exports = (req, res) => {
  // Spawn the WasmEdge runtime with the AOT-compiled function.
  const wasmedge = spawn(path.join(__dirname, 'wasmedge'), [
    path.join(__dirname, 'grayscale.so'),
  ]);

  let d = [];
  wasmedge.stdout.on('data', (data) => {
    d.push(data);
  });

  wasmedge.on('close', (code) => {
    let buf = Buffer.concat(d);

    // Echo the original image type back in the response.
    res.setHeader('Content-Type', req.headers['image-type']);
    res.send(buf);
  });

  // Feed the uploaded image to the function via STDIN.
  wasmedge.stdin.write(req.body);
  wasmedge.stdin.end('');
};
```

That's it. [Deploy the repo to Netlify](https://www.netlify.com/blog/2016/09/29/a-step-by-step-guide-deploying-on-netlify/) and you now have a Netlify Jamstack app with a high-performance Rust and WebAssembly based serverless backend.
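
Once deployed, you can sanity-check the function from the command line. The sketch below is a hypothetical invocation: the host name and endpoint path depend on your Netlify site and on how the repo routes its serverless functions, so adjust both to match your deployment. The `image-type` header follows the handler shown above.

```bash
# Send a PNG to the grayscale function and save the processed image.
# Replace the URL with your own Netlify site and function route.
curl -X POST "https://your-site.netlify.app/api/hello" \
  -H "image-type: image/png" \
  --data-binary @input.png \
  --output grayscale.png
```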

## Example 2: AI inference

The [second demo](https://60ff7e2d10fe590008db70a9--reverent-hodgkin-dc1f51.netlify.app/) application allows users to upload an image and then invoke a serverless function to classify the main subject in the image.

It is in [the same GitHub repo](https://github.com/second-state/netlify-wasm-runtime/tree/tensorflow) as the previous example, but in the `tensorflow` branch. The backend serverless function for image classification is in the [`api/functions/image-classification`](https://github.com/second-state/netlify-wasm-runtime/tree/tensorflow/api/functions/image-classification) folder of the `tensorflow` branch. The [`src/main.rs`](https://github.com/second-state/netlify-wasm-runtime/blob/tensorflow/api/functions/image-classification/src/main.rs) file contains the Rust program’s source code. The Rust program reads image data from `STDIN` and writes its text output to `STDOUT`. It utilizes the WasmEdge TensorFlow API to run the AI inference.

```rust
use std::io::{self, Read};

pub fn main() {
    // Step 1: Load the TFLite model and its text labels
    let model_data: &[u8] = include_bytes!("models/mobilenet_v1_1.0_224/mobilenet_v1_1.0_224_quant.tflite");
    let labels = include_str!("models/mobilenet_v1_1.0_224/labels_mobilenet_quant_v1_224.txt");

    // Step 2: Read image from STDIN
    let mut buf = Vec::new();
    io::stdin().read_to_end(&mut buf).unwrap();

    // Step 3: Resize the input image for the TensorFlow model
    let flat_img = wasmedge_tensorflow_interface::load_jpg_image_to_rgb8(&buf, 224, 224);

    // Step 4: AI inference
    let mut session = wasmedge_tensorflow_interface::Session::new(&model_data, wasmedge_tensorflow_interface::ModelType::TensorFlowLite);
    session.add_input("input", &flat_img, &[1, 224, 224, 3])
           .run();
    let res_vec: Vec<u8> = session.get_output("MobilenetV1/Predictions/Reshape_1");

    // Step 5: Find the food label that corresponds to the highest probability in res_vec
    // (the code that computes max_index, max_value, and confidence is elided here)
    // ... ...
    let mut label_lines = labels.lines();
    for _i in 0..max_index {
        label_lines.next();
    }

    // Step 6: Generate the output text
    let class_name = label_lines.next().unwrap().to_string();
    if max_value > 50 {
        println!("It {} a {} in the picture", confidence, class_name);
    } else {
        println!("It does not appear to be any food item in the picture.");
    }
}
```

You can use the `cargo` tool to build the Rust program into WebAssembly bytecode or native code.

```bash
cd api/functions/image-classification/
cargo build --release --target wasm32-wasi
```

Copy the build artifacts to the `api` folder.

```bash
cp target/wasm32-wasi/release/classify.wasm ../../
```

Again, the [`api/pre.sh`](https://github.com/second-state/netlify-wasm-runtime/blob/tensorflow/api/pre.sh) script installs the WasmEdge runtime and its TensorFlow dependencies in this application. It also compiles the `classify.wasm` bytecode program to the `classify.so` native shared library at the time of deployment.

The [`api/hello.js`](https://github.com/second-state/netlify-wasm-runtime/blob/tensorflow/api/hello.js) script loads the WasmEdge runtime, starts the compiled WebAssembly program in WasmEdge, and passes the uploaded image data via `STDIN`. Notice that [`api/hello.js`](https://github.com/second-state/netlify-wasm-runtime/blob/tensorflow/api/hello.js) runs the compiled `classify.so` file generated by [`api/pre.sh`](https://github.com/second-state/netlify-wasm-runtime/blob/tensorflow/api/pre.sh) for better performance.

```javascript
const { spawn } = require('child_process');
const path = require('path');

module.exports = (req, res) => {
  // Spawn the WasmEdge TensorFlow runtime with the AOT-compiled function.
  const wasmedge = spawn(
    path.join(__dirname, 'wasmedge-tensorflow-lite'),
    [path.join(__dirname, 'classify.so')],
    { env: { LD_LIBRARY_PATH: __dirname } },
  );

  let d = [];
  wasmedge.stdout.on('data', (data) => {
    d.push(data);
  });

  wasmedge.on('close', (code) => {
    // The classification result is plain text.
    res.setHeader('Content-Type', `text/plain`);
    res.send(d.join(''));
  });

  // Feed the uploaded image to the function via STDIN.
  wasmedge.stdin.write(req.body);
  wasmedge.stdin.end('');
};
```

You can now [deploy your forked repo to Netlify](https://www.netlify.com/blog/2016/09/29/a-step-by-step-guide-deploying-on-netlify/) and have a web app for subject classification.

Next, it's your turn to develop Rust serverless functions on Netlify, using the [netlify-wasm-runtime repo](https://github.com/second-state/netlify-wasm-runtime) as a template. Looking forward to your great work.

diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/start/usage/serverless/tencent.md b/i18n/zh/docusaurus-plugin-content-docs/current/start/usage/serverless/tencent.md
new file mode 100644
index 000000000..9937f7149
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/start/usage/serverless/tencent.md
@@ -0,0 +1,11 @@
---
sidebar_position: 4
---

# WebAssembly serverless functions on Tencent Cloud

Since the main users of Tencent Cloud are in China, the tutorial is [written in Chinese](https://my.oschina.net/u/4532842/blog/5172639).

We also provide a code template for deploying serverless WebAssembly functions on Tencent Cloud; please check out [the tencent-scf-wasm-runtime repo](https://github.com/second-state/tencent-scf-wasm-runtime).

Fork the repo and start writing your own Rust functions.

diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/start/usage/serverless/vercel.md b/i18n/zh/docusaurus-plugin-content-docs/current/start/usage/serverless/vercel.md
new file mode 100644
index 000000000..3ef87bd5c
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/start/usage/serverless/vercel.md
@@ -0,0 +1,191 @@
---
sidebar_position: 5
---

# Rust and WebAssembly Serverless functions in Vercel

In this article, we will show you two serverless functions in Rust and WasmEdge deployed on Vercel. One is an image-processing function; the other is a TensorFlow inference function.

> For more insights on why WasmEdge on Vercel, please refer to the article [Rust and WebAssembly Serverless Functions in Vercel](https://www.secondstate.io/articles/vercel-wasmedge-webassembly-rust/).

## Prerequisites

Since our demo WebAssembly functions are written in Rust, you will need a [Rust compiler](https://www.rust-lang.org/tools/install). Make sure that you install the `wasm32-wasi` compiler target as follows, in order to generate WebAssembly bytecode.

```bash
rustup target add wasm32-wasi
```

The demo application front end is written in [Next.js](https://nextjs.org/) and deployed on Vercel. We assume that you already have basic knowledge of how to work with Vercel.

## Example 1: Image processing

Our first demo application allows users to upload an image and then invoke a serverless function to turn it into black and white. A [live demo](https://vercel-wasm-runtime.vercel.app/) deployed on Vercel is available.

Fork the [demo application’s GitHub repo](https://github.com/second-state/vercel-wasm-runtime) to get started.
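
If you want to inspect or build the functions locally before deploying, a typical flow is to clone your fork first. Replace `YOUR_USERNAME` with your GitHub account; the local path is illustrative.

```bash
# Clone your fork of the demo repo and enter it.
git clone https://github.com/YOUR_USERNAME/vercel-wasm-runtime.git
cd vercel-wasm-runtime
```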
To deploy the application on Vercel, just [import the GitHub repo](https://vercel.com/docs/git#deploying-a-git-repository) from the [Vercel for GitHub](https://vercel.com/docs/git/vercel-for-github) web page.

This repo is a standard Next.js application for the Vercel platform. The backend serverless function is in the [`api/functions/image_grayscale`](https://github.com/second-state/vercel-wasm-runtime/tree/main/api/functions/image-grayscale) folder. The [`src/main.rs`](https://github.com/second-state/vercel-wasm-runtime/blob/main/api/functions/image-grayscale/src/main.rs) file contains the Rust program’s source code. The Rust program reads image data from `STDIN` and then writes the black-and-white image to `STDOUT`.

```rust
use std::io::{self, Read, Write};
use image::{ImageOutputFormat, ImageFormat};

fn main() {
    // Read the raw image bytes from STDIN.
    let mut buf = Vec::new();
    io::stdin().read_to_end(&mut buf).unwrap();

    // Detect the input format and convert the image to grayscale.
    let image_format_detected: ImageFormat = image::guess_format(&buf).unwrap();
    let img = image::load_from_memory(&buf).unwrap();
    let filtered = img.grayscale();

    // Write the result to STDOUT, preserving GIFs and falling back to PNG.
    let mut buf = vec![];
    match image_format_detected {
        ImageFormat::Gif => {
            filtered.write_to(&mut buf, ImageOutputFormat::Gif).unwrap();
        },
        _ => {
            filtered.write_to(&mut buf, ImageOutputFormat::Png).unwrap();
        },
    };
    io::stdout().write_all(&buf).unwrap();
    io::stdout().flush().unwrap();
}
```

You can use Rust’s `cargo` tool to build the Rust program into WebAssembly bytecode or native code.

```bash
cd api/functions/image-grayscale/
cargo build --release --target wasm32-wasi
```

Copy the build artifacts to the `api` folder.

```bash
cp target/wasm32-wasi/release/grayscale.wasm ../../
```

> Vercel runs [`api/pre.sh`](https://github.com/second-state/vercel-wasm-runtime/blob/main/api/pre.sh) upon setting up the serverless environment. It installs the WasmEdge runtime, and then compiles each WebAssembly bytecode program into a native `so` library for faster execution.

The [`api/hello.js`](https://github.com/second-state/vercel-wasm-runtime/blob/main/api/hello.js) file conforms to the Vercel serverless specification. It loads the WasmEdge runtime, starts the compiled WebAssembly program in WasmEdge, and passes the uploaded image data via `STDIN`. Notice that [`api/hello.js`](https://github.com/second-state/vercel-wasm-runtime/blob/main/api/hello.js) runs the compiled `grayscale.so` file generated by [`api/pre.sh`](https://github.com/second-state/vercel-wasm-runtime/blob/main/api/pre.sh) for better performance.

```javascript
const { spawn } = require('child_process');
const path = require('path');

module.exports = (req, res) => {
  // Spawn the WasmEdge runtime with the AOT-compiled function.
  const wasmedge = spawn(path.join(__dirname, 'wasmedge'), [
    path.join(__dirname, 'grayscale.so'),
  ]);

  let d = [];
  wasmedge.stdout.on('data', (data) => {
    d.push(data);
  });

  wasmedge.on('close', (code) => {
    let buf = Buffer.concat(d);

    // Echo the original image type back in the response.
    res.setHeader('Content-Type', req.headers['image-type']);
    res.send(buf);
  });

  // Feed the uploaded image to the function via STDIN.
  wasmedge.stdin.write(req.body);
  wasmedge.stdin.end('');
};
```

That's it. [Deploy the repo to Vercel](https://vercel.com/docs/git#deploying-a-git-repository) and you now have a Vercel Jamstack app with a high-performance Rust and WebAssembly based serverless backend.

## Example 2: AI inference

The [second demo](https://vercel-wasm-runtime.vercel.app/) application allows users to upload an image and then invoke a serverless function to classify the main subject in the image.

It is in [the same GitHub repo](https://github.com/second-state/vercel-wasm-runtime) as the previous example, but in the `tensorflow` branch. Note: when you [import this GitHub repo](https://vercel.com/docs/git#deploying-a-git-repository) on the Vercel website, it will create a [preview URL](https://vercel.com/docs/platform/deployments#preview) for each branch. The `tensorflow` branch will have its own deployment URL.

The backend serverless function for image classification is in the [`api/functions/image-classification`](https://github.com/second-state/vercel-wasm-runtime/tree/tensorflow/api/functions/image-classification) folder of the `tensorflow` branch. The [`src/main.rs`](https://github.com/second-state/vercel-wasm-runtime/blob/tensorflow/api/functions/image-classification/src/main.rs) file contains the Rust program’s source code. The Rust program reads image data from `STDIN` and writes its text output to `STDOUT`. It utilizes the WasmEdge TensorFlow API to run the AI inference.

```rust
use std::io::{self, Read};

pub fn main() {
    // Step 1: Load the TFLite model and its text labels
    let model_data: &[u8] = include_bytes!("models/mobilenet_v1_1.0_224/mobilenet_v1_1.0_224_quant.tflite");
    let labels = include_str!("models/mobilenet_v1_1.0_224/labels_mobilenet_quant_v1_224.txt");

    // Step 2: Read image from STDIN
    let mut buf = Vec::new();
    io::stdin().read_to_end(&mut buf).unwrap();

    // Step 3: Resize the input image for the TensorFlow model
    let flat_img = wasmedge_tensorflow_interface::load_jpg_image_to_rgb8(&buf, 224, 224);

    // Step 4: AI inference
    let mut session = wasmedge_tensorflow_interface::Session::new(&model_data, wasmedge_tensorflow_interface::ModelType::TensorFlowLite);
    session.add_input("input", &flat_img, &[1, 224, 224, 3])
           .run();
    let res_vec: Vec<u8> = session.get_output("MobilenetV1/Predictions/Reshape_1");

    // Step 5: Find the food label that corresponds to the highest probability in res_vec
    // (the code that computes max_index, max_value, and confidence is elided here)
    // ... ...
    let mut label_lines = labels.lines();
    for _i in 0..max_index {
        label_lines.next();
    }

    // Step 6: Generate the output text
    let class_name = label_lines.next().unwrap().to_string();
    if max_value > 50 {
        println!("It {} a {} in the picture", confidence, class_name);
    } else {
        println!("It does not appear to be any food item in the picture.");
    }
}
```

You can use the `cargo` tool to build the Rust program into WebAssembly bytecode or native code.

```bash
cd api/functions/image-classification/
cargo build --release --target wasm32-wasi
```

Copy the build artifacts to the `api` folder.

```bash
cp target/wasm32-wasi/release/classify.wasm ../../
```

Again, the [`api/pre.sh`](https://github.com/second-state/vercel-wasm-runtime/blob/tensorflow/api/pre.sh) script installs the WasmEdge runtime and its TensorFlow dependencies in this application. It also compiles the `classify.wasm` bytecode program to the `classify.so` native shared library at the time of deployment.

The [`api/hello.js`](https://github.com/second-state/vercel-wasm-runtime/blob/tensorflow/api/hello.js) file conforms to the Vercel serverless specification. It loads the WasmEdge runtime, starts the compiled WebAssembly program in WasmEdge, and passes the uploaded image data via `STDIN`. Notice that [`api/hello.js`](https://github.com/second-state/vercel-wasm-runtime/blob/tensorflow/api/hello.js) runs the compiled `classify.so` file generated by [`api/pre.sh`](https://github.com/second-state/vercel-wasm-runtime/blob/tensorflow/api/pre.sh) for better performance.

```javascript
const { spawn } = require('child_process');
const path = require('path');

module.exports = (req, res) => {
  // Spawn the WasmEdge TensorFlow runtime with the AOT-compiled function.
  const wasmedge = spawn(
    path.join(__dirname, 'wasmedge-tensorflow-lite'),
    [path.join(__dirname, 'classify.so')],
    { env: { LD_LIBRARY_PATH: __dirname } },
  );

  let d = [];
  wasmedge.stdout.on('data', (data) => {
    d.push(data);
  });

  wasmedge.on('close', (code) => {
    // The classification result is plain text.
    res.setHeader('Content-Type', `text/plain`);
    res.send(d.join(''));
  });

  // Feed the uploaded image to the function via STDIN.
  wasmedge.stdin.write(req.body);
  wasmedge.stdin.end('');
};
```

You can now [deploy your forked repo to Vercel](https://vercel.com/docs/git#deploying-a-git-repository) and have a web app for subject classification.

Next, it's your turn to use [the vercel-wasm-runtime repo](https://github.com/second-state/vercel-wasm-runtime) as a template to develop your own Rust serverless functions on Vercel. Looking forward to your great work.

diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/start/usage/use-cases.md b/i18n/zh/docusaurus-plugin-content-docs/current/start/usage/use-cases.md
new file mode 100644
index 000000000..9218603d9
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/start/usage/use-cases.md
@@ -0,0 +1,23 @@
---
sidebar_position: 1
---

# Use Cases

Featuring AOT compiler optimization, WasmEdge is one of the fastest WebAssembly runtimes on the market today. Therefore, WasmEdge is widely used in edge computing, automotive, Jamstack, serverless, SaaS, service mesh, and even blockchain applications.

- Modern web apps feature rich UIs that are rendered in the browser and/or on the edge cloud. WasmEdge works with popular web UI frameworks, such as React, Vue, Yew, and Percy, to support isomorphic [server-side rendering (SSR)](../../embed/use-case/ssr-modern-ui.md) functions on edge servers. It could also support server-side rendering of Unity3D animations and AI-generated interactive videos for web applications on the edge cloud.

- WasmEdge provides a lightweight, secure, and high-performance runtime for microservices. It is fully compatible with application service frameworks such as Dapr, and with service orchestrators like Kubernetes. WasmEdge microservices can run on edge servers and have access to distributed caches, supporting both stateless and stateful business logic functions for modern web apps. Also related: serverless function-as-a-service in public clouds.

- [Serverless SaaS (Software-as-a-Service)](/category/serverless-platforms) functions enable users to extend and customize their SaaS experience without operating their own API callback servers. The serverless functions can be embedded into the SaaS or reside on edge servers next to the SaaS servers. Developers simply upload functions to respond to SaaS events or to connect SaaS APIs.

- [Smart device apps](./wasm-smart-devices.md) could embed WasmEdge as a middleware runtime to render interactive content on the UI, connect to native device drivers, and access specialized hardware features (e.g., the GPU for AI inference). The benefits of the WasmEdge runtime over native-compiled machine code include security, safety, portability, manageability, and developer productivity. WasmEdge runs on Android, OpenHarmony, and seL4 RTOS devices.

- WasmEdge could support high-performance DSLs (Domain-Specific Languages) or act as a cloud-native JavaScript runtime by embedding a JS execution engine or interpreter.

- Developers can leverage container tools such as [Kubernetes](../../develop/deploy/kubernetes/kubernetes-containerd-crun.md), Docker, and CRI-O to deploy, manage, and run lightweight WebAssembly applications.

- WasmEdge applications can be plugged into existing application frameworks or platforms.

If you have any great ideas for WasmEdge, don't hesitate to open a GitHub issue so we can discuss them together.
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/embed/use-case/wasm-smart-devices.md b/i18n/zh/docusaurus-plugin-content-docs/current/start/usage/wasm-smart-devices.md
similarity index 98%
rename from i18n/zh/docusaurus-plugin-content-docs/current/embed/use-case/wasm-smart-devices.md
rename to i18n/zh/docusaurus-plugin-content-docs/current/start/usage/wasm-smart-devices.md
index a69dbfeb1..17cd9ad77 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/embed/use-case/wasm-smart-devices.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/start/usage/wasm-smart-devices.md
@@ -1,5 +1,5 @@
---
-sidebar_position: 3
+sidebar_position: 4
---

# WasmEdge On Smart Devices

diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/start/wasmedge/comparison.md b/i18n/zh/docusaurus-plugin-content-docs/current/start/wasmedge/comparison.md
new file mode 100644
index 000000000..20fc78cb4
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/start/wasmedge/comparison.md
@@ -0,0 +1,29 @@
---
sidebar_position: 5
---

# Comparison

## What's the relationship between WebAssembly and Docker?

Check out our infographic [WebAssembly vs. Docker](https://wasmedge.org/wasm_docker/). WebAssembly runs side by side with Docker in cloud-native and edge-native applications.

## What's the difference between Native Client (NaCl), application runtimes, Docker-like containers, and WebAssembly?

We created a handy table for the comparison.

| | NaCl | Application runtimes (e.g., Node & Python) | Docker-like container | WebAssembly |
| --- | --- | --- | --- | --- |
| Performance | Great | Poor | OK | Great |
| Resource footprint | Great | Poor | Poor | Great |
| Isolation | Poor | OK | OK | Great |
| Safety | Poor | OK | OK | Great |
| Portability | Poor | Great | OK | Great |
| Security | Poor | OK | OK | Great |
| Language and framework choice | N/A | N/A | Great | OK |
| Ease of use | OK | Great | Great | OK |
| Manageability | Poor | Poor | Great | Great |

## What's the difference between WebAssembly and eBPF?

`eBPF` is the bytecode format for a Linux kernel space VM that is suitable for network- or security-related tasks. WebAssembly is the bytecode format for a user space VM that is suited for business applications. [See details here](https://medium.com/codex/ebpf-and-webassembly-whose-vm-reigns-supreme-c2861ce08f89).