README.md: 12 additions & 21 deletions
@@ -1,5 +1,5 @@
# Vector Inference: Easy inference on Slurm clusters
- This repository provides an easy-to-use solution to run inference servers on [Slurm](https://slurm.schedmd.com/overview.html)-managed computing clusters using [vLLM](https://docs.vllm.ai/en/latest/). All scripts in this repository runs natively on the Vector Institute cluster environment, and can be easily adapted to other environments.
+ This repository provides an easy-to-use solution to run inference servers on [Slurm](https://slurm.schedmd.com/overview.html)-managed computing clusters using [vLLM](https://docs.vllm.ai/en/latest/). **All scripts in this repository run natively on the Vector Institute cluster environment**. To adapt to other environments, update the config files in the `models` folder and the environment variables in the model launching scripts accordingly.
## Installation
If you are using the Vector cluster environment and don't need any customization to the inference server environment, you can skip to the next section, as we have a default container environment in place. Otherwise, you might need up to 10GB of storage to set up your own virtual environment. The following steps need to be run only once for each user.
@@ -29,7 +29,7 @@ pip install vllm-flash-attn
## Launch an inference server
We will use the Llama 3 model as an example. To launch an inference server for Llama 3 8B, run:
```bash
- bash models/llama3/launch_server.sh
+ bash src/launch_server.sh --model-family llama3
```
You should see an output like the following:
> Job Name: vLLM/Meta-Llama-3-8B
@@ -44,35 +44,26 @@ You should see an output like the following:
If you want to use your own virtual environment, you can run this instead:
```bash
- bash models/llama3/launch_server.sh -e $(poetry env info --path)
+ bash src/launch_server.sh --model-family llama3 --venv $(poetry env info --path)
```
- By default, the `launch_server.sh` script in Llama 3 folder uses the 8B variant, you can switch to other variants with the `-v` flag, and make sure to change the requested resource accordingly. More information about the flags and customizations can be found in the [`models`](models) folder. The inference server is compatible with the OpenAI `Completion` and `ChatCompletion` API. You can inspect the Slurm output files to check the inference server status.
+ By default, the `launch_server.sh` script uses the 8B variant of Llama 3, based on the config file in the `models/llama3` folder. You can switch to other variants with the `--model-variant` argument; make sure to change the requested resources accordingly. More information about the flags and customizations can be found in the [`models`](models) folder. The inference server is compatible with the OpenAI `Completion` and `ChatCompletion` APIs. You can inspect the Slurm output files to check the inference server status.
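For instance, switching to a larger variant might look like the following (a sketch only; the variant name and GPU count here are assumptions, so check the `models/llama3` README for the variants actually available and their suggested resources):

```bash
# Hypothetical example: launch a different Llama 3 variant.
# "70B" and the GPU count are assumptions; see models/llama3 for the
# variants actually available and their suggested resource allocations.
bash src/launch_server.sh --model-family llama3 --model-variant 70B --num-gpus 4
```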
Here is a more complicated example that launches a model variant across multiple nodes: say we want to launch Mixtral 8x22B.
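Based on the arguments documented in the [`models`](models) folder, the launch command would look roughly like the following (a sketch, not a command taken verbatim from the repository; the variant name and resource values mirror the output shown below):

```bash
# Sketch of a multi-node launch; variant name and resource values
# match the example output below.
bash src/launch_server.sh --model-family mixtral --model-variant 8x22B-v0.1 \
    --num-nodes 2 --num-gpus 4
```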
The default partition for Mixtral models is a40, and we need 8 a40 GPUs to load Mixtral 8x22B, so we requested 2 a40 nodes with 4 GPUs per node. You should see an output like the following:
- > Number of nodes set to: 2
- >
- > Number of GPUs set to: 4
- >
- > Model variant set to: 8x22B-v0.1
- >
- > Job Name: vLLM/Mixtral-8x22B-v0.1
- >
- > Partition: a40
- >
- > Generic Resource Scheduling: gpu:8
- >
- > Data Type: auto
- >
- > Submitted batch job 12430232
## Send inference requests
Once the inference server is ready, you can start sending in inference requests. We provide example scripts for sending inference requests in the [`examples`](examples) folder. Make sure to update the model server URL and the model weights location in the scripts. For example, you can run `python examples/inference/llm/completions.py`, and you should expect to see output like the following:
> {"id":"cmpl-bdf43763adf242588af07af88b070b62","object":"text_completion","created":2983960,"model":"/model-weights/Llama-2-7b-hf","choices":[{"index":0,"text":"\nCanada is close to the actual continent of North America. Aside from the Arctic islands","logprobs":null,"finish_reason":"length"}],"usage":{"prompt_tokens":8,"total_tokens":28,"completion_tokens":20}}
+ **NOTE**: For multimodal models, currently only `ChatCompletion` is available, and only one image can be provided for each prompt.
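As a rough illustration, a single-image `ChatCompletion` request might look like the following (a sketch only, assuming the server accepts the OpenAI vision-style message format; the model path and image URL are placeholders):

```bash
# Sketch of a single-image ChatCompletion request; the model path and image
# URL are placeholders, and the message format assumes OpenAI's vision schema.
curl http://<gpu-node>:<port>/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "/model-weights/<multimodal-model>",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {"type": "image_url", "image_url": {"url": "<image-url>"}}
            ]
        }]
    }'
```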
## SSH tunnel from your local device
If you want to run inference from your local device, you can open an SSH tunnel to your cluster environment like the following:
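A minimal sketch of such a tunnel, with placeholders for the values specific to your job and cluster account:

```bash
# Placeholders: <port> is the inference server port, <gpu-node> is the node
# running the server (see the Slurm output), and <cluster-login-node> is the
# login node you normally SSH into.
ssh -L <port>:<gpu-node>:<port> <username>@<cluster-login-node> -N
```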
models/README.md: 10 additions & 4 deletions
@@ -1,7 +1,10 @@
# Environment Variables
The following environment variables all have default values that are suitable for the Vector cluster environment. You can use flags to modify certain environment variable values.
- * **MODEL_NAME**: Name of model family.
+ * **MODEL_FAMILY**: Directory name of the model family.
+ * **SRC_DIR**: Relative path to the [`src`](../src/) folder.
+ * **CONFIG_FILE**: Config file containing default values for some environment variables, located in the **MODEL_FAMILY** directory.
+ * **MODEL_NAME**: Name of the model family according to the actual model weights.
* **MODEL_VARIANT**: Variant of the model; the available variants are listed in the respective model folders. The default variant is bolded in the corresponding README.md file.
* **MODEL_DIR**: Path to the model's directory in the vector-inference repo.
* **VLLM_BASE_URL_FILENAME**: The file that stores the inference server URL. This file is generated after launching an inference server and is located in the corresponding model folder with the name `.vllm_{model-name}-{model-variant}_url`.
@@ -13,26 +16,29 @@ The following environment variables all have default values that's suitable for
* **NUM_NODES**: Number of nodes scheduled. Defaults to the suggested resource allocation.
* **NUM_GPUS**: Number of GPUs scheduled. Defaults to the suggested resource allocation.
* **JOB_PARTITION**: Type of compute partition. Defaults to the suggested resource allocation.
- * **QOS**: Quality of Service
- * **TIME**: Max Walltime
+ * **QOS**: Quality of Service.
+ * **TIME**: Max Walltime.
The following environment variables are only for Vision Language Models
+ * **CHAT_TEMPLATE**: The relative path to the chat template if no default chat template is available.
* **IMAGE_INPUT_TYPE**: Possible choices: `pixel_values`, `image_features`. The image input type passed into vLLM; defaults to `pixel_values`.
* **IMAGE_TOKEN_ID**: Input ID for the image token. Defaults to the HF config value; the default is set according to the model.
* **IMAGE_INPUT_SHAPE**: The biggest image input shape (worst case for memory footprint) given an input type. Only used for vLLM's profile_run. Default value set according to the model.
* **IMAGE_FEATURE_SIZE**: The image feature size along the context dimension. Default value set according to the model.
# Named Arguments
NOTE: Arguments like `--num-nodes` or `--model-variant` might not be available for certain model families, because those models fit inside a single node or no variant is available in `/model-weights` yet. You can manually add these options to the launch scripts if you need them, or make a request to download weights for other variants.
+ * `--model-family`: Sets **MODEL_FAMILY**; the available options are the names of the sub-directories in this directory. **This argument MUST be set.**
+ * `--model-variant`: Overrides **MODEL_VARIANT**.
* `--partition`: Overrides **JOB_PARTITION**.
* `--num-nodes`: Overrides **NUM_NODES**.
* `--num-gpus`: Overrides **NUM_GPUS**.
* `--qos`: Overrides **QOS**.
* `--time`: Overrides **TIME**.
* `--data-type`: Overrides **VLLM_DATA_TYPE**.
* `--venv`: Overrides **VENV_BASE**.
- * `--model-variant`: Overrides **MODEL_VARIANT**
+ * `--is-vlm`: Specifies that this is a Vision Language Model; no value needed.
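For example, several overrides can be combined in one launch command (a sketch; the partition, GPU count, and walltime shown here are assumptions, so replace them with settings valid for your cluster):

```bash
# Hypothetical combination of overrides; partition, GPU count, and time
# values are assumptions, not repository defaults.
bash src/launch_server.sh --model-family llama3 --model-variant 8B \
    --partition a40 --num-gpus 1 --time 08:00:00
```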
The following flags are only available to Vision Language Models