
Better token consumption estimation #235

Closed
RyanMarten opened this issue Dec 9, 2024 · 1 comment

@RyanMarten (Contributor)

For our client side rate limit control, we need to estimate the number of tokens that will be used by each request we send to an LLM completions API.

Right now we naively estimate that each call's output will use `max_output_tokens // 4` tokens.

Instead, we should estimate output tokens using a moving average of the responses we have received so far.

This will help us avoid leaving extra throughput on the table, as discovered when looking into #223.
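A minimal sketch of what such a moving-average estimator could look like, assuming a Python client. The class and method names here are hypothetical illustrations, not the project's actual API; the fallback reproduces the current `max_output_tokens // 4` heuristic until real samples arrive:

```python
from collections import deque


class OutputTokenEstimator:
    """Estimate output tokens per request via a moving average of
    observed completion token counts (hypothetical sketch)."""

    def __init__(self, max_output_tokens: int, window: int = 100):
        self.max_output_tokens = max_output_tokens
        # Fixed-size window: old samples fall off automatically.
        self.samples: deque[int] = deque(maxlen=window)

    def record(self, completion_tokens: int) -> None:
        # Feed in the completion token count reported by each API response
        # (e.g. the usage field returned by the completions endpoint).
        self.samples.append(completion_tokens)

    def estimate(self) -> int:
        # Before any responses arrive, fall back to the current
        # naive heuristic of max_output_tokens // 4.
        if not self.samples:
            return self.max_output_tokens // 4
        return int(sum(self.samples) / len(self.samples))


estimator = OutputTokenEstimator(max_output_tokens=4096)
print(estimator.estimate())  # 1024: the naive fallback, no samples yet
estimator.record(350)        # e.g. completion_tokens from a real response
print(estimator.estimate())  # 350: now driven by observed usage
```

A bounded window keeps the estimate responsive if output lengths drift (for example, when the prompt or task mix changes mid-run), which a cumulative average would smooth away.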

@RyanMarten (Contributor, Author)

Closed as a duplicate of #206.
