
Commit bf0b8d4

Merge pull request #5 from VectorInstitute/refactor
Refactor model launching scripts
2 parents da86986 + cabc739

24 files changed (+159, -917 lines)

README.md

Lines changed: 12 additions & 21 deletions
@@ -1,5 +1,5 @@
# Vector Inference: Easy inference on Slurm clusters
-This repository provides an easy-to-use solution to run inference servers on [Slurm](https://slurm.schedmd.com/overview.html)-managed computing clusters using [vLLM](https://docs.vllm.ai/en/latest/). All scripts in this repository runs natively on the Vector Institute cluster environment, and can be easily adapted to other environments.
+This repository provides an easy-to-use solution to run inference servers on [Slurm](https://slurm.schedmd.com/overview.html)-managed computing clusters using [vLLM](https://docs.vllm.ai/en/latest/). **All scripts in this repository run natively on the Vector Institute cluster environment**. To adapt them to other environments, update the config files in the `models` folder and the environment variables in the model launching scripts accordingly.

## Installation
If you are using the Vector cluster environment and you don't need any customization to the inference server environment, you can skip to the next section, as a default container environment is already in place. Otherwise, you might need up to 10GB of storage to set up your own virtual environment. The following steps need to be run only once for each user.
@@ -29,7 +29,7 @@ pip install vllm-flash-attn
## Launch an inference server
We will use the Llama 3 model as an example; to launch an inference server for Llama 3 8B, run
```bash
-bash models/llama3/launch_server.sh
+bash src/launch_server.sh --model-family llama3
```
You should see an output like the following:
> Job Name: vLLM/Meta-Llama-3-8B
@@ -44,35 +44,26 @@ You should see an output like the following:

If you want to use your own virtual environment, you can run this instead:
```bash
-bash models/llama3/launch_server.sh -e $(poetry env info --path)
+bash src/launch_server.sh --model-family llama3 --venv $(poetry env info --path)
```
-By default, the `launch_server.sh` script in Llama 3 folder uses the 8B variant, you can switch to other variants with the `-v` flag, and make sure to change the requested resource accordingly. More information about the flags and customizations can be found in the [`models`](models) folder. The inference server is compatible with the OpenAI `Completion` and `ChatCompletion` API. You can inspect the Slurm output files to check the inference server status.
+By default, the `launch_server.sh` script uses the 8B variant of Llama 3, based on the config file in the `models/llama3` folder. You can switch to other variants with the `--model-variant` argument; make sure to change the requested resources accordingly. More information about the flags and customizations can be found in the [`models`](models) folder. The inference server is compatible with the OpenAI `Completion` and `ChatCompletion` APIs. You can inspect the Slurm output files to check the inference server status.

Here is a more complicated example that launches a model variant using multiple nodes; say we want to launch Mixtral 8x22B, run
```bash
-bash models/mixtral/launch_server.sh -v 8x22B-v0.1 -N 2 -n 4
+bash src/launch_server.sh --model-family mixtral --model-variant 8x22B-v0.1 --num-nodes 2 --num-gpus 4
+```
+
+And for launching a multimodal model, here is an example for launching LLaVa-NEXT Mistral 7B (the default variant):
+```bash
+bash src/launch_server.sh --model-family llava-next --is-vlm
```
-The default partition for Mixtral models is a40, and we need 8 a40 GPUs to load Mixtral 8x22B, so we requested 2 a40 nodes with 4 GPUs per node. You should see an output like the following:
-> Number of nodes set to: 2
->
-> Number of GPUs set to: 4
->
-> Model variant set to: 8x22B-v0.1
->
-> Job Name: vLLM/Mixtral-8x22B-v0.1
->
-> Partition: a40
->
-> Generic Resource Scheduling: gpu:8
->
-> Data Type: auto
->
-> Submitted batch job 12430232

## Send inference requests
Once the inference server is ready, you can start sending inference requests. We provide example scripts for sending inference requests in the [`examples`](examples) folder. Make sure to update the model server URL and the model weights location in the scripts. For example, you can run `python examples/inference/llm/completions.py`, and you should expect to see an output like the following:
> {"id":"cmpl-bdf43763adf242588af07af88b070b62","object":"text_completion","created":2983960,"model":"/model-weights/Llama-2-7b-hf","choices":[{"index":0,"text":"\nCanada is close to the actual continent of North America. Aside from the Arctic islands","logprobs":null,"finish_reason":"length"}],"usage":{"prompt_tokens":8,"total_tokens":28,"completion_tokens":20}}

+**NOTE**: For multimodal models, currently only `ChatCompletion` is available, and only one image can be provided for each prompt.
+
## SSH tunnel from your local device
If you want to run inference from your local device, you can open an SSH tunnel to your cluster environment like the following:
```bash
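
To make the "Send inference requests" step above concrete: the server speaks the standard OpenAI HTTP API, so a raw request works just as well as the provided Python scripts. The sketch below is illustrative only and not part of this commit; the URL-file path and the prompt are assumptions, and the model path is copied from the example output above.

```bash
# Illustrative sketch (not from this commit). Assumes the launch step has
# finished and wrote the server URL to the model folder, following the
# .vllm_{model-name}-{model-variant}_url naming described in models/README.md.
VLLM_BASE_URL=$(cat models/llama2/.vllm_Llama-2-7b-hf_url)

# vLLM's OpenAI-compatible server exposes /v1/completions; the model field
# matches the weights path shown in the example output above.
# (If the stored URL already ends in /v1, drop the extra /v1 below.)
curl "${VLLM_BASE_URL}/v1/completions" \
    -H "Content-Type: application/json" \
    -d '{
          "model": "/model-weights/Llama-2-7b-hf",
          "prompt": "Where is Canada?",
          "max_tokens": 20
        }'
```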

examples/quantization/quantization.py

Lines changed: 0 additions & 21 deletions
This file was deleted.

models/README.md

Lines changed: 10 additions & 4 deletions
@@ -1,7 +1,10 @@
# Environment Variables
The following environment variables all have default values that are suitable for the Vector cluster environment. You can use flags to modify certain environment variable values.

-* **MODEL_NAME**: Name of model family.
+* **MODEL_FAMILY**: Directory name of the model family.
+* **SRC_DIR**: Relative path to the [`src`](../src/) folder.
+* **CONFIG_FILE**: Config file containing default values for some environment variables, located in the **MODEL_FAMILY** directory.
+* **MODEL_NAME**: Name of the model family according to the actual model weights.
* **MODEL_VARIANT**: Variant of the model; the available variants are listed in the respective model folders. The default variant is bolded in the corresponding README.md file.
* **MODEL_DIR**: Path to the model's directory in the vector-inference repo.
* **VLLM_BASE_URL_FILENAME**: The file that stores the inference server URL. This file is generated after launching an inference server and is located in the corresponding model folder with the name `.vllm_{model-name}-{model-variant}_url`.
@@ -13,26 +16,29 @@ The following environment variables all have default values that's suitable for
* **NUM_NODES**: Number of nodes scheduled. Defaults to the suggested resource allocation.
* **NUM_GPUS**: Number of GPUs scheduled. Defaults to the suggested resource allocation.
* **JOB_PARTITION**: Type of compute partition. Defaults to the suggested resource allocation.
-* **QOS**: Quality of Service
-* **TIME**: Max Walltime
+* **QOS**: Quality of Service.
+* **TIME**: Max Walltime.

The following environment variables are only for Vision Language Models:

+* **CHAT_TEMPLATE**: The relative path to the chat template, used when no default chat template is available.
* **IMAGE_INPUT_TYPE**: Possible choices: `pixel_values`, `image_features`. The image input type passed into vLLM; defaults to `pixel_values`.
* **IMAGE_TOKEN_ID**: Input ID for the image token. Defaults to the HF config value; set according to the model.
* **IMAGE_INPUT_SHAPE**: The biggest image input shape (worst case for memory footprint) given an input type. Only used for vLLM's profile_run. Default value set according to the model.
* **IMAGE_FEATURE_SIZE**: The image feature size along the context dimension. Default value set according to the model.

# Named Arguments
NOTE: Arguments like `--num-nodes` or `--model-variant` might not be available for certain model families, either because those models fit inside a single node or because no variant is available in `/model-weights` yet. You can manually add these options to the launch scripts if you need them, or make a request to download weights for other variants.
+* `--model-family`: Sets **MODEL_FAMILY**; the available options are the names of the sub-directories in this directory. **This argument MUST be set.**
+* `--model-variant`: Overrides **MODEL_VARIANT**.
* `--partition`: Overrides **JOB_PARTITION**.
* `--num-nodes`: Overrides **NUM_NODES**.
* `--num-gpus`: Overrides **NUM_GPUS**.
* `--qos`: Overrides **QOS**.
* `--time`: Overrides **TIME**.
* `--data-type`: Overrides **VLLM_DATA_TYPE**.
* `--venv`: Overrides **VENV_BASE**.
-* `--model-variant`: Overrides **MODEL_VARIANT**
+* `--is-vlm`: Specifies that this is a Vision Language Model; no value needed.

The following flags are only available to Vision Language Models:
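
As a quick illustration of how the named arguments above compose, here is an invocation that overrides several defaults at once. It is illustrative only and not taken from this commit; the walltime value in particular is a placeholder.

```bash
# Illustrative only: each flag overrides the corresponding environment
# variable documented above; the values are placeholders, not recommendations.
bash src/launch_server.sh \
    --model-family mixtral \
    --model-variant 8x22B-v0.1 \
    --num-nodes 2 \
    --num-gpus 4 \
    --partition a40 \
    --data-type auto \
    --time 08:00:00 \
    --venv "$(poetry env info --path)"
```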

models/command-r/config.sh

Lines changed: 5 additions & 0 deletions
@@ -0,0 +1,5 @@
+export MODEL_NAME="c4ai-command-r"
+export MODEL_VARIANT="plus"
+export NUM_NODES=2
+export NUM_GPUS=4
+export VLLM_MAX_LOGPROBS=256000
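
These per-family `config.sh` files replace the deleted per-family launch scripts. The refactored `src/launch_server.sh` itself is not shown in this excerpt; a minimal sketch of how such a launcher could combine the family defaults with command-line overrides (the structure and variable names below are assumptions that mirror the documented environment variables) might look like:

```bash
#!/bin/bash
# Sketch only: an assumed pattern for consuming models/<family>/config.sh.
# The real src/launch_server.sh in this commit may differ.

while [[ $# -gt 0 ]]; do
    case "$1" in
        --model-family)  MODEL_FAMILY="$2"; shift 2 ;;
        --model-variant) ARG_MODEL_VARIANT="$2"; shift 2 ;;
        --num-nodes)     ARG_NUM_NODES="$2"; shift 2 ;;
        --num-gpus)      ARG_NUM_GPUS="$2"; shift 2 ;;
        *) shift ;;
    esac
done

# Load the family defaults (MODEL_NAME, MODEL_VARIANT, NUM_NODES, ...),
# then let any command-line arguments take precedence.
source "models/${MODEL_FAMILY}/config.sh"
MODEL_VARIANT="${ARG_MODEL_VARIANT:-$MODEL_VARIANT}"
NUM_NODES="${ARG_NUM_NODES:-$NUM_NODES}"
NUM_GPUS="${ARG_NUM_GPUS:-$NUM_GPUS}"

echo "Launching ${MODEL_NAME}-${MODEL_VARIANT} on ${NUM_NODES} node(s), ${NUM_GPUS} GPU(s) per node"
```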

models/command-r/launch_server.sh

Lines changed: 0 additions & 109 deletions
This file was deleted.

models/dbrx/config.sh

Lines changed: 5 additions & 0 deletions
@@ -0,0 +1,5 @@
+export MODEL_NAME="dbrx"
+export MODEL_VARIANT="instruct"
+export NUM_NODES=2
+export NUM_GPUS=4
+export VLLM_MAX_LOGPROBS=100352

models/dbrx/launch_server.sh

Lines changed: 0 additions & 109 deletions
This file was deleted.

models/llama2/config.sh

Lines changed: 5 additions & 0 deletions
@@ -0,0 +1,5 @@
+export MODEL_NAME="Llama-2"
+export MODEL_VARIANT="7b-hf"
+export NUM_NODES=1
+export NUM_GPUS=1
+export VLLM_MAX_LOGPROBS=32000
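
With these defaults in place, launching the Llama-2 family needs nothing beyond `--model-family`; the second command below shows an override and is purely illustrative.

```bash
# Uses the defaults above: Llama-2 7b-hf on 1 node with 1 GPU.
bash src/launch_server.sh --model-family llama2

# Any config.sh default can still be overridden on the command line
# (the value here is illustrative, not a recommendation).
bash src/launch_server.sh --model-family llama2 --num-gpus 2
```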
