fix: resolve OutputParserException when using Ollama instead of Gemini #33140
base: master
Conversation
- Fixed output parsing compatibility issues between Ollama and Gemini models
- Updated output parser to handle different response formats from Ollama
- Added proper error handling for malformed responses
- Ensured consistent behavior across different LLM providers (see the sketch below)

Fixes langchain-ai#33016
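As a rough illustration of that last point, the snippet below runs the same prompt/parser chain against both providers. The class names come from the `langchain_google_genai` and `langchain_ollama` integration packages, the model names are placeholders, and none of this is code from the PR itself; it only sketches the behavior the change aims to guarantee.

```python
from langchain_core.output_parsers import JsonOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_ollama import ChatOllama

prompt = ChatPromptTemplate.from_template(
    "Return a JSON object with a single key 'answer' for: {question}"
)
parser = JsonOutputParser()

# After the fix, the same chain should parse cleanly regardless of the backing provider.
for llm in (ChatGoogleGenerativeAI(model="gemini-1.5-flash"), ChatOllama(model="llama3")):
    chain = prompt | llm | parser
    print(chain.invoke({"question": "What is 6 * 7?"}))
```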
CodSpeed Instrumentation Performance Report: merging #33140 will not alter performance.
Description
This PR fixes the OutputParserException that occurs when using Ollama models instead of Gemini models.
Problem
When switching from Gemini to Ollama, users encountered an OutputParserException because the two LLM providers return responses in different formats; the output parser expected Gemini-specific formatting.
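A minimal, self-contained illustration of the mismatch (the example completions are invented; real output varies by model, prompt, and version):

```python
import json

# Illustrative raw completions; real model output varies by prompt and model version.
gemini_style = '```json\n{"answer": "42"}\n```'             # JSON wrapped in a Markdown fence
ollama_style = 'Sure! Here is the JSON:\n{"answer": "42"}'  # JSON embedded in surrounding prose

def gemini_only_parse(text: str) -> dict:
    """A parser that assumes the fenced Gemini-style format and nothing else."""
    body = text.removeprefix("```json").removesuffix("```").strip()
    return json.loads(body)

print(gemini_only_parse(gemini_style))  # {'answer': '42'}
print(gemini_only_parse(ollama_style))  # json.JSONDecodeError, which LangChain surfaces as OutputParserException
```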
Solution
Changes Made
- Updated `langchain/output_parsers/base.py` to handle the Ollama response format (a sketch of the idea follows)
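The actual diff is not reproduced here; below is only a sketch of what a provider-tolerant parsing step along these lines could look like, using `OutputParserException` from `langchain_core.exceptions` for the error-handling behavior described above. The function name and regex are assumptions for illustration, not the PR's code.

```python
import json
import re

from langchain_core.exceptions import OutputParserException

def tolerant_parse(text: str) -> dict:
    """Extract the first JSON object from a completion, whichever provider produced it."""
    # Strip Markdown code fences (common in Gemini-style output) if present.
    cleaned = re.sub(r"```(?:json)?", "", text).strip()
    # Fall back to the first {...} block when the model adds surrounding prose (common with Ollama).
    match = re.search(r"\{.*\}", cleaned, re.DOTALL)
    if match is None:
        raise OutputParserException(f"No JSON object found in model output: {text!r}")
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError as err:
        raise OutputParserException(
            f"Malformed JSON in model output: {err}", llm_output=text
        ) from err
```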
Testing
Fixes
Closes #33016
Checklist