Hello - I quantized Llama3-70B-Instruct with g128 (model here) and ran the benchmarking script on an L40S. I get ~60 tokens/s. Is this the expected throughput? I was hoping for something closer to Llama2-70B at ~280 tokens/s.
Hi @ethxnp, thank you very much for your interest in QServe!

Yes, the expected throughput should be close to 280 tokens/s. It might be slightly lower, since Llama3 models have a much larger vocabulary size.

The reason you get ~60 tokens/s is that you have set the batch size to 4. To maximize throughput, you'll need to take full advantage of the device's memory capacity. For example, on an L40S the max batch size for 70B models should be close to 24; see the sketch below for a rough sense of where that limit comes from.
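Here is a back-of-envelope estimate of that batch-size ceiling, in case it helps. The architecture numbers are from Llama-3-70B's public config; the sequence length and runtime overhead are assumptions made purely for illustration, not QServe internals, so treat the result as order-of-magnitude only.

```python
# Rough estimate of the max concurrent batch size on one GPU.
# Architecture numbers come from the public Llama-3-70B config;
# seq_len and runtime overhead are illustrative assumptions.

GIB = 1024 ** 3

n_layers   = 80     # Llama-3-70B transformer layers
n_kv_heads = 8      # grouped-query attention KV heads
head_dim   = 128

# QServe W4A8KV4: 4-bit weights and 4-bit KV cache
weight_bytes = 70e9 * 0.5   # ~35 GB for 70B params at 4 bits each
kv_bits = 4

# Per-token KV cache: K and V tensors, every layer, every KV head
kv_bytes_per_token = 2 * n_layers * n_kv_heads * head_dim * kv_bits / 8
print(f"KV cache per token: {kv_bytes_per_token / 1024:.0f} KiB")  # ~80 KiB

gpu_mem  = 48 * GIB   # L40S memory
overhead = 6 * GIB    # assumed: activations, scales, runtime workspace
seq_len  = 4096       # assumed max context per request

free_for_kv = gpu_mem - weight_bytes - overhead
max_batch = free_for_kv // (kv_bytes_per_token * seq_len)
print(f"Rough max batch size: {max_batch:.0f}")  # ~30 with these assumptions
```

With these assumptions the estimate lands around 30, in the same ballpark as the ~24 above; the real ceiling depends on the benchmark's sequence lengths and on whatever memory QServe reserves internally.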