[OnlineRequestProcessor Enhancement] Better way to do output token estimation #206

Open
Tracked by #204
CharlieJCJ opened this issue Dec 4, 2024 · 0 comments

For our client side rate limit control, we need to estimate the number of tokens that will be used by each request we send to an LLM completions API.

Right now we naively estimate that the output of each call will use max_output_tokens // 4 tokens.

Instead we should implement a moving average based on the responses we have gotten so far.

This will help avoid leaving extra throughput on the table, as discovered when looking into #223.
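
A minimal sketch of what such a moving-average estimator could look like. The class name, window size, and fallback behavior below are illustrative assumptions, not existing APIs in the request processor:

```python
from collections import deque


class OutputTokenEstimator:
    """Estimates output tokens per request from recently observed responses."""

    def __init__(self, max_output_tokens: int, window_size: int = 100):
        self.max_output_tokens = max_output_tokens
        # Keep only the most recent observations so the estimate tracks drift.
        self._observed: deque[int] = deque(maxlen=window_size)

    def estimate(self) -> int:
        # Before any responses arrive, fall back to the current heuristic.
        if not self._observed:
            return self.max_output_tokens // 4
        # Otherwise use the moving average of recent completion token counts.
        return int(sum(self._observed) / len(self._observed))

    def record(self, completion_tokens: int) -> None:
        # Called after each response with the actual completion token count
        # (e.g. taken from the API's usage field).
        self._observed.append(completion_tokens)


# Example usage
estimator = OutputTokenEstimator(max_output_tokens=4096)
print(estimator.estimate())  # 1024 (max_output_tokens // 4 fallback)
estimator.record(350)
estimator.record(420)
print(estimator.estimate())  # 385
```

Keeping the existing max_output_tokens // 4 heuristic as the cold-start fallback means behavior is unchanged until real responses come in, after which the estimate converges toward observed usage.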
