Conversation

devin-ai-integration[bot]
Contributor

Fix langchain-google-genai integration by stripping 'models/' prefix

Summary

Fixes #3702, where the langchain-google-genai integration fails with an "LLM Provider NOT provided" error.

The issue occurs when using langchain-google-genai's ChatGoogleGenerativeAI with CrewAI. The langchain library stores the model name with a models/ prefix (e.g., models/gemini/gemini-pro), which Google's API uses internally, but LiteLLM doesn't recognize this format and raises an error.

Changes:

  • Modified create_llm() in llm_utils.py to strip the models/ prefix when extracting model names from langchain-like objects
  • Added 4 comprehensive unit tests covering various scenarios (with prefix, without prefix, case sensitivity)
  • All existing tests continue to pass

The fix is backward compatible - models without the prefix are unchanged.

Review & Testing Checklist for Human

⚠️ CRITICAL - This PR is based on assumptions and needs real-world validation:

  • Most Important: Install langchain-google-genai and test the actual failing case from issue #3702 ("langchain-google-genai integration fails with LLM Provider NOT provided on macOS"). Create an agent with ChatGoogleGenerativeAI(model="gemini/gemini-pro") and verify it works now.
  • Test with other langchain integrations (e.g., langchain-openai) to ensure no regressions
  • Verify the case-sensitivity logic is correct (should we strip uppercase Models/ too?)
  • Check if other similar prefixes need to be handled (I only implemented models/ based on the issue)
  • Run a full crew execution with langchain-google-genai end-to-end to ensure it works in practice

Test Plan Recommendation

from crewai import Agent, Task, Crew
from langchain_google_genai import ChatGoogleGenerativeAI
import os

# Set your GOOGLE_API_KEY
os.environ["GOOGLE_API_KEY"] = "your-key"

llm = ChatGoogleGenerativeAI(model="gemini/gemini-pro")

agent = Agent(
    role="Test Agent",
    goal="Test the fix",
    backstory="Testing langchain-google-genai integration",
    llm=llm
)

task = Task(
    description="Say hello",
    expected_output="A greeting",
    agent=agent
)

crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()
print(result)

Notes

  • Limitation: I wasn't able to test with the actual langchain-google-genai package due to environment constraints. The fix is based on analyzing the error message and creating mock objects that simulate the expected behavior.
  • The fix only strips lowercase models/ prefix (case-sensitive) - this seemed like the safe default but should be validated.
  • Unit tests all pass, but integration testing with the real library is essential before merging.

Link to Devin run: https://app.devin.ai/sessions/f38edeb68fa7484faaa5c6f9d5329afe
Requested by: João ([email protected])

Fixes #3702

When using langchain-google-genai with CrewAI, the model name includes
a 'models/' prefix (e.g., 'models/gemini/gemini-pro'), which Google's API
adds internally. LiteLLM does not recognize this format, causing an
'LLM Provider NOT provided' error.

This fix strips the 'models/' prefix from model names when extracting them
from langchain model objects, ensuring compatibility with LiteLLM while
maintaining backward compatibility with models that don't have this prefix.

Changes:
- Modified create_llm() in llm_utils.py to strip 'models/' prefix
- Added comprehensive tests covering various scenarios:
  - Model with 'models/' prefix in model attribute
  - Model with 'models/' prefix in model_name attribute
  - Model without prefix (no change)
  - Case-sensitive prefix handling (only lowercase 'models/' is stripped)

Co-Authored-By: João <[email protected]>
Contributor Author

🤖 Devin AI Engineer

I'll be helping with this pull request! Here's what you should know:

✅ I will automatically:

  • Address comments on this PR. Add '(aside)' to your comment to have me ignore it.
  • Look at CI failures and help fix them

Note: I can only respond to comments from users who have write access to this repository.

⚙️ Control Options:

  • Disable automatic comment and CI monitoring
