
Expected speed for llama3-70b-instruct #18

Open
ethxnp opened this issue Jun 4, 2024 · 1 comment

Comments


ethxnp commented Jun 4, 2024

Hello - I quantized Llama3-70B-Instruct with g128 (model here), and ran the benchmarking script on an L40s with the below command:

> export GLOBAL_BATCH_SIZE=4 NUM_GPU_PAGE_BLOCKS=100
> python qserve_benchmark.py --model $MODEL_PATH --benchmarking --precision w4a8kv4 --group-size 128

I get ~60 tokens/s. Is this the expected throughput? I was hoping for something closer to Llama2-70B at ~280 tokens/s.

Contributor

ys-2020 commented Jun 18, 2024

Hi @ethxnp , thank you very much for your interest in QServe!

Yes, the expected throughput should be close to 280 tokens/sec. It may be slightly lower, since Llama3 models have a much larger vocabulary size.

The reason you are seeing ~60 tokens/s is that the batch size is set to 4. To maximize throughput, you need to take full advantage of the device's capacity. For example, on an L40S, the maximum batch size for 70B models should be close to 24.
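As a rough sketch, the benchmark could be rerun with a larger batch along these lines. Note that the NUM_GPU_PAGE_BLOCKS value below is only an assumed placeholder and would need to be tuned to whatever fits in the GPU's free memory:

> # Raise the batch size toward the suggested ~24; NUM_GPU_PAGE_BLOCKS here is an
> # assumed placeholder and should be adjusted to the available GPU memory.
> export GLOBAL_BATCH_SIZE=24 NUM_GPU_PAGE_BLOCKS=400
> python qserve_benchmark.py --model $MODEL_PATH --benchmarking --precision w4a8kv4 --group-size 128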
