
add nebullvm to docs
morgoth95 committed Apr 26, 2022
1 parent 5d77df2 commit 3be83e4
Showing 1 changed file with 33 additions and 2 deletions.
35 changes: 33 additions & 2 deletions docs/user-guides/server.md
@@ -41,15 +41,22 @@ One may wonder where does this `onnx_flow.yml` come from. Must be a typo? Believe

The procedure and UI of the ONNX runtime look the same as for the PyTorch runtime.

Recently we added support for another AI-accelerator backend: nebullvm.
It can be used similarly to `onnxruntime` by running:
```bash
pip install "clip_server[nebullvm]"
python -m clip_server nebullvm-flow.yml
```

## YAML config

You may notice that there is a YAML file in our last ONNX example. All configurations are stored in this file. In fact, `python -m clip_server` does **not support** any other argument besides a YAML file. So it is the only source of truth for your configs.

And to answer your doubt, `clip_server` has three built-in YAML configs as part of the package resources: one for the PyTorch backend, one for the ONNX backend, and one for the nebullvm backend. When you do `python -m clip_server` it loads the PyTorch config; when you do `python -m clip_server onnx-flow.yml` it loads the ONNX config; and when you do `python -m clip_server nebullvm-flow.yml` it loads the nebullvm config.

Let's look at these three built-in YAML configs:

````{tab} torch-flow.yml
@@ -85,6 +92,22 @@ executors:
```
````

````{tab} nebullvm-flow.yml
```yaml
jtype: Flow
version: '1'
with:
port: 51000
executors:
- name: clip_n
uses:
jtype: CLIPEncoder
metas:
py_modules:
- executors/clip_nebullvm.py
```
````

Basically, each YAML file defines a [Jina Flow](https://docs.jina.ai/fundamentals/flow/). The complete Jina Flow YAML syntax [can be found here](https://docs.jina.ai/fundamentals/flow/flow-yaml/#configure-flow-meta-information). General parameters of the Flow and Executor can be used here as well, but below we highlight only the most important ones.
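To make the Flow structure concrete, here is a quick way to inspect such a YAML programmatically. This is a sketch using the third-party `pyyaml` package; the embedded config mirrors the `nebullvm-flow.yml` shown above:

```python
import yaml  # third-party: pip install pyyaml

# A copy of the built-in nebullvm-flow.yml config shown above.
FLOW_YML = """
jtype: Flow
version: '1'
with:
  port: 51000
executors:
  - name: clip_n
    uses:
      jtype: CLIPEncoder
      metas:
        py_modules:
          - executors/clip_nebullvm.py
"""

flow = yaml.safe_load(FLOW_YML)
print(flow["with"]["port"])          # port the Flow listens on -> 51000
print(flow["executors"][0]["name"])  # name of the single executor -> clip_n
```

Every key you see here maps directly onto the Flow and Executor parameters discussed below.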

Looking at the YAML file again, we can put it into three subsections as below:
@@ -189,6 +212,14 @@ There are also runtime-specific parameters listed below:
````

For the nebullvm backend, you only need to set `name` and `minibatch_size`:

| Parameter | Description |
|-----------|-------------|
| `name` | Model weights, default is `ViT-B/32`. Supports all OpenAI-released pretrained models. |
| `minibatch_size` | The size of a minibatch for CPU preprocessing and GPU encoding, default `64`. Reduce it if you encounter OOM on the GPU. |
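
For instance, a custom nebullvm config that switches the model weights and lowers the minibatch size might look like this. This is a sketch: only `name` and `minibatch_size` are changed from the built-in `nebullvm-flow.yml` shown above, and `ViT-B/16` stands in for whichever OpenAI pretrained model you want:

```yaml
jtype: Flow
version: '1'
with:
  port: 51000
executors:
  - name: clip_n
    uses:
      jtype: CLIPEncoder
      with:
        name: ViT-B/16        # any OpenAI-released pretrained model
        minibatch_size: 16    # lower this if you hit OOM on the GPU
      metas:
        py_modules:
          - executors/clip_nebullvm.py
```

Save it as, say, `my-nebullvm-flow.yml` and pass that path to `python -m clip_server`.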


For example, to turn on JIT and force PyTorch to run on CPU, one can set the corresponding runtime-specific parameters in the executor's `with:` section. The original example is truncated in this diff; the sketch below assumes the built-in torch config with `jit` and `device` added:

```{code-block} yaml
jtype: Flow
version: '1'
with:
  port: 51000
executors:
  - name: clip_t
    uses:
      jtype: CLIPEncoder
      with:
        jit: True
        device: cpu
      metas:
        py_modules:
          - executors/clip_torch.py
```
