
Fix: Set output_schema correctly for LiteLlm #580


Open · whoisarpit wants to merge 1 commit into main from fix/lite_llm_response_schema
Conversation

@whoisarpit commented May 6, 2025

Fixes #217

Description

This PR fixes an issue with schema validation when using non-Google models through the LiteLlm wrapper. Previously, when using models like openai/gpt-4o with a specified output schema, the model would either not return JSON or would return JSON that didn't follow the required schema, resulting in Pydantic validation errors.

Changes

  • Forward the response_schema configured on the LlmRequest to the LiteLlm client's response_format parameter, so the underlying provider returns schema-conformant JSON (see the sketch below).
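
A minimal standalone sketch of the mechanism this change relies on, assuming a LiteLLM version that accepts a Pydantic model as response_format; this is an illustration, not the PR's actual diff:

import litellm
from pydantic import BaseModel


class User(BaseModel):
  name_value: str
  age_value: int


# LiteLLM accepts a Pydantic model (or a JSON-schema dict) as
# response_format and maps it to the provider's structured-output
# mechanism, e.g. OpenAI's json_schema response format.
response = litellm.completion(
    model="openai/gpt-4o",
    messages=[{
        "role": "user",
        "content": "Output a name as 'John' and age as 30",
    }],
    response_format=User,
)

# The message content comes back as JSON conforming to the schema.
user = User.model_validate_json(response.choices[0].message.content)
print(user)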

Testing

Tested with a sample agent that uses:

  • Model: openai/gpt-4o
  • A structured output schema defined with Pydantic

Before the fix, the agent failed with Pydantic validation errors. After the fix, the model returns JSON in the expected format and validation passes.

Related Issues

Fixes #217: structured output validation fails for non-Google models used through the LiteLlm wrapper.

Test File:

import asyncio
import logging

from google.genai import types
from pydantic import BaseModel

from google.adk.agents.llm_agent import Agent
from google.adk.models.lite_llm import LiteLlm
from google.adk.runners import InMemoryRunner

logging.basicConfig(level=logging.INFO)

app_name = "test"
user_id = "123"


# Structured output schema the agent's response must satisfy.
class User(BaseModel):
  name_value: str
  age_value: int


agent = Agent(
    name="test",
    instruction="Output a name as 'John' and age as 30",
    model=LiteLlm(model="gpt-4o-mini"),
    # With this fix, the schema below is forwarded to LiteLLM as
    # response_format.
    output_schema=User,
    # The validated result is stored in session state under this key.
    output_key="user",
)

runner = InMemoryRunner(
    agent=agent,
    app_name=app_name,
)


async def main():
  session = runner.session_service.create_session(
      app_name=app_name,
      user_id=user_id,
  )
  async for event in runner.run_async(
      session_id=session.id,
      user_id=user_id,
      new_message=types.Content(
          role="user",
          parts=[types.Part(text="Do what the system instruction says")],
      ),
  ):
    logging.info(event)

  # Re-fetch the session to read the validated structured output from state.
  session = runner.session_service.get_session(
      session_id=session.id,
      app_name=app_name,
      user_id=user_id,
  )
  logging.info("------ Output from the agent ------")
  logging.info(session.state["user"])


def run():
  asyncio.run(main())


if __name__ == "__main__":
  run()
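
With the fix applied, the final log line should show the validated structured state, e.g. {'name_value': 'John', 'age_value': 30} for the instruction above; without the fix, the same run fails with a Pydantic validation error.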

@whoisarpit force-pushed the fix/lite_llm_response_schema branch from cbc2246 to bdb2ddf on May 7, 2025 05:36