Commit

Fix typo in vLLM CPU docker guide (intel-analytics#11188)
xiangyuT authored Jun 3, 2024
1 parent 15a6205 commit ff83fad
Showing 1 changed file with 1 addition and 1 deletion.
@@ -40,7 +40,7 @@ After the container is booted, you could get into the container through `docker
 docker exec -it ipex-llm-serving-cpu-container /bin/bash
 ```
 
-## Running vLLM serving with IPEX-LLM on Intel GPU in Docker
+## Running vLLM serving with IPEX-LLM on Intel CPU in Docker
 
 We have included multiple vLLM-related files in `/llm/`:
 1. `vllm_offline_inference.py`: Used for vLLM offline inference example
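For context, the first file listed in the guide, `vllm_offline_inference.py`, refers to vLLM's offline (batch) inference flow. Below is a minimal sketch of that flow using the public `vllm` Python API; the model name and sampling values are illustrative placeholders, and the actual script shipped in the container image may wrap this with IPEX-LLM-specific imports and CPU optimizations not shown here.

```python
# Minimal sketch of vLLM offline inference, assuming the `vllm` package is
# available inside the container. Model name and sampling values are
# placeholders, not the exact contents of the bundled script.
from vllm import LLM, SamplingParams

# A small batch of prompts to generate from.
prompts = [
    "Hello, my name is",
    "The capital of France is",
]

# Generation settings: nucleus sampling with a short output cap.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Load the model once; vLLM batches the prompts internally.
llm = LLM(model="meta-llama/Llama-2-7b-chat-hf")

# Run generation and print prompt/completion pairs.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(f"{output.prompt!r} -> {output.outputs[0].text!r}")
```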
