To address the token limit issue with the `gpt-4o-mini` model in your LangFlow workflow, consider strategies that manage and optimize the workflow so it stays within the token limits.
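One such strategy is to count prompt tokens before each request and truncate anything over budget. A minimal sketch using `tiktoken` (assuming a version that ships the `o200k_base` encoding, which the gpt-4o family uses; the 8,000-token budget is illustrative, not from the original workflow):

```python
import tiktoken

# Assumption: gpt-4o-mini uses the o200k_base encoding (gpt-4o family).
enc = tiktoken.get_encoding("o200k_base")

def count_tokens(text: str) -> int:
    """Number of tokens the model will see for `text`."""
    return len(enc.encode(text))

def truncate_to_budget(text: str, budget: int = 8_000) -> str:
    """Keep only the first `budget` tokens of `text` (budget is illustrative)."""
    tokens = enc.encode(text)
    return text if len(tokens) <= budget else enc.decode(tokens[:budget])
```

Counting before sending also makes it possible to batch or delay requests so the per-minute total stays under the 200,000 TPM cap.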
Subject: Assistance Needed: Exceeding Token Limits in gpt-4o-mini with LangFlow Workflow
I’m currently facing an issue with the gpt-4o-mini model in a LangFlow workflow. The error message states that my workflow exceeds the 200,000 tokens per minute (TPM) limit. Below are the details of the problem and my setup:
Error Message
Workflow Overview
Input Prompt: The workflow begins with a user query, e.g.,
Tools Used:
Model Configuration: `gpt-4o-mini`
Current Issues
The workflow exceeds the 200,000 TPM limit.
Steps Taken So Far
I set `max_tokens` to a lower value but still encountered token issues.
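Worth noting here: `max_tokens` caps only the completion, so a large prompt still consumes TPM on the input side. A minimal sketch with the official `openai` Python client (the model name is from the post; the query text and the value 256 are illustrative):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize the attached report."}],
    max_tokens=256,  # limits the *output* only; prompt tokens still count toward TPM
)
print(response.choices[0].message.content)
```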
Questions
Would switching to a different model (e.g., `gpt-4-turbo`) be advisable, or should I focus on optimizing the workflow?
Any help is appreciated. Thanks!
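If optimizing the workflow (the second option above) is the route taken, one option is trimming older conversation turns until the prompt fits a per-request budget. A hedged sketch; `PROMPT_BUDGET` and the system-message-first layout are assumptions, not details from the original workflow:

```python
import tiktoken

enc = tiktoken.get_encoding("o200k_base")  # gpt-4o family encoding

# Hypothetical per-request budget leaving headroom under the 200,000 TPM cap.
PROMPT_BUDGET = 8_000

def trim_history(messages: list[dict], budget: int = PROMPT_BUDGET) -> list[dict]:
    """Drop the oldest non-system turns until the prompt fits `budget` tokens."""
    trimmed = list(messages)

    def total() -> int:
        return sum(len(enc.encode(m["content"])) for m in trimmed)

    # Assumes messages[0] is the system prompt and should be preserved.
    while total() > budget and len(trimmed) > 2:
        trimmed.pop(1)  # remove the oldest user/assistant turn
    return trimmed
```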