deep research team in autogenstudio fails with error "This model's maximum context length is 128000 tokens. However, your messages resulted in 209101 tokens" #5690
dctmfoo
Hi @dctmfoo, good observations. A good starting point is to review how the deep research team works. Specifically, the research_assistant agent in the team uses several tools, google_search and fetch_webpage, which return the content of a webpage as markdown. All of that data is added to the model context, and as many turns are taken, this is what leads to the maximum context length error you see. There are two potential fixes.
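One common mitigation is to bound the model context to the most recent messages instead of the full transcript (AutoGen provides context-buffering utilities for this; the snippet below is a framework-free sketch of the same "keep the last N messages" idea, and `truncate_context` is a hypothetical helper name, not an AutoGen API):

```python
def truncate_context(messages, max_messages=10):
    """Keep the system message (if any) plus the last max_messages turns.

    messages: list of dicts like {"role": "...", "content": "..."}.
    """
    if not messages:
        return []
    head, body = [], messages
    if messages[0]["role"] == "system":
        # Preserve the system prompt; only the conversation turns are trimmed.
        head, body = [messages[0]], messages[1:]
    return head + body[-max_messages:]


msgs = [{"role": "system", "content": "You are a researcher."}]
msgs += [{"role": "user", "content": f"turn {i}"} for i in range(50)]
trimmed = truncate_context(msgs, max_messages=10)
print(len(trimmed))  # → 11 (system message + 10 most recent turns)
```

The trade-off is that the agent loses access to older turns, so this works best when each research step is mostly self-contained.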
Would you be interested in exploring either of these and sharing your findings?
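The other lever is on the tool side: a single fetched page can be tens of thousands of tokens of markdown, so clipping each tool result before it enters the context keeps any one page from dominating the window. A minimal sketch (`clip_page` is a hypothetical helper, not part of AutoGen):

```python
def clip_page(markdown_text, max_chars=8000):
    """Clip fetched page content so one page can't blow the context budget."""
    if len(markdown_text) <= max_chars:
        return markdown_text
    # Keep the beginning of the page, which usually carries the key content,
    # and append a marker so the model knows text was dropped.
    return markdown_text[:max_chars] + "\n\n[... truncated ...]"


page = "word " * 10_000           # ~50k characters of scraped markdown
clipped = clip_page(page)
print(len(clipped))               # 8000 chars plus the truncation marker
```

A character budget is a crude proxy for tokens; a tokenizer-based limit would be more precise, but even this rough cap prevents the runaway growth described above.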
What happened?
I sent a query to the deep research team in AutoGen Studio using "Test Team". After a couple of minutes of researching, it failed with the error below:
```
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 209101 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
```
What did you expect to happen?
I expected the research to complete and print the results.
How can we reproduce it (as minimally and precisely as possible)?
In the Test Team of the Deep Research Team, enter this query and press Enter:
"are the valuations of Nifty Small Cap favorable to buy now?"
AutoGen version
latest
Which package was this bug in
AutoGen Studio
Model used
gpt-4o
Python version
No response
Operating system
No response
Any additional info you think would be helpful for fixing this bug
No response