
.Net: Bug: Conflict between Response Format property & function calling #9768

Open

apg-developer opened this issue Nov 20, 2024 · 6 comments

Labels: bug (Something isn't working), .NET (Issue or Pull requests regarding .NET code)

@apg-developer
Describe the bug
An HTTP 500 server_error is thrown when a ChatCompletion agent uses function calling to retrieve data. The error only occurs when the Microsoft.SemanticKernel.Connectors.OpenAI.OpenAIPromptExecutionSettings.ResponseFormat property is set in the agent's settings; otherwise the call succeeds.

To Reproduce
Steps to reproduce the behavior:

  1. Copy the sample code provided.
  2. Use a model based on GPT-4o, version 2024-08-06.
  3. Run the console project and type the word "chicken" to retrieve the default custom data defined in the plugin.
  4. An HTTP 500 error is thrown by the agent's call to await chatCompletionService.GetChatMessageContentAsync().
  5. Note that despite the error, SK adds a correct tool entry with the data provided by the plugin to the history.

Expected behavior
The await chatCompletionService.GetChatMessageContentAsync() call should return the data provided by the plugin without errors, using the structure specified in the ResponseFormat property.
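
For illustration, a minimal sketch of how the structured response would presumably be consumed once the call succeeds; the deserialization step is an assumption added for clarity, not part of the original repro (RecipeOutput is the class defined later in this thread):

// Hypothetical consumption of the structured response (not in the original repro):
// with ResponseFormat = typeof(RecipeOutput), response.Content should contain JSON
// matching the RecipeOutput schema.
var recipe = System.Text.Json.JsonSerializer.Deserialize<RecipeOutput>(response.Content!);
Console.WriteLine(recipe?.Name);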

Screenshots

Platform

  • OS: Windows
  • IDE: Visual Studio
  • Language: C#
  • Source: [e.g. NuGet package version 0.1.0, pip package version 0.1.0, main branch of repository]

Additional context
I have tried several combinations to achieve the goal (see the sketch after this list for option 2):

  1. Using the ResponseFormat property with a type-based schema (like the example code below).
  2. Using the ResponseFormat property with a JSON schema.
  3. Returning a RecipeOutput object from the plugin (as in the provided example).
  4. Returning a string from the plugin (like the deserialization commented out in the plugin).
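
For reference, a minimal sketch of what option 2 looked like, assuming the OpenAI .NET SDK's ChatResponseFormat.CreateJsonSchemaFormat factory; the schema text and format name are illustrative, hand-written to mirror the RecipeOutput class shared later in this thread:

// Sketch of option 2 (JSON-schema-based ResponseFormat); the schema below is a
// hand-written assumption mirroring the RecipeOutput class, not generated by SK.
OpenAIPromptExecutionSettings settings = new() {
  ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions,
  ResponseFormat = OpenAI.Chat.ChatResponseFormat.CreateJsonSchemaFormat(
    jsonSchemaFormatName: "recipe_output",
    jsonSchema: BinaryData.FromString("""
      {
        "type": "object",
        "properties": {
          "Name": { "type": "string" },
          "Description": { "type": "string" },
          "Ingredients": {
            "type": "array",
            "items": {
              "type": "object",
              "properties": {
                "Name": { "type": "string" },
                "Quantity": { "type": "string" },
                "MeasureUnit": { "type": "string" }
              },
              "required": ["Name", "Quantity", "MeasureUnit"],
              "additionalProperties": false
            }
          },
          "Instructions": { "type": "string" }
        },
        "required": ["Name", "Description", "Ingredients", "Instructions"],
        "additionalProperties": false
      }
      """),
    jsonSchemaIsStrict: true)
};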
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.OpenAI;

namespace EntitiesAssistant.Recipes {
  public class RecipeCopilot {
    private const string _systemPrompt =
      "You are an expert recipe assistant. " +
      "Your key objective is to guide the user to find the ingredients & steps necessary to prepare the dishes suggested by the user. " +
      "Friendly reminder: You have a repository with various recipes prepared earlier.";

    private ChatHistory _history = new();

    private Kernel BuildKernel() {
      var builder = Kernel.CreateBuilder().AddAzureOpenAIChatCompletion(
        deploymentName: ProjectSettings.DeploymentName,
        apiKey: ProjectSettings.ApiKey,
        serviceId: ProjectSettings.ServiceId,
        endpoint: ProjectSettings.Endpoint);

      builder.Plugins.AddFromObject(new RecipePlugin(), pluginName: "recipe");
      return builder.Build();
    }

    public async Task ChatAsync() {
      Console.WriteLine("What would you like to cook today?");
      string userInput = Console.ReadLine() ?? "I am not hungry";

      var kernel = BuildKernel();
      OpenAIPromptExecutionSettings settings = new() {
        ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions,
        ResponseFormat = typeof(RecipeOutput),
      };

      _history.AddSystemMessage(_systemPrompt);
      _history.AddUserMessage(userInput);

      try {
        var chatCompletionService = kernel.GetRequiredService<IChatCompletionService>(ProjectSettings.ServiceId);
        var response = await chatCompletionService.GetChatMessageContentAsync(_history, settings, kernel);
        Console.WriteLine(response);
      } catch (Exception e) {
        // Even though the request failed, the tool result is already in the history.
        var toolResponse = _history.LastOrDefault(x => x.Role == AuthorRole.Tool)?.Content;
        if (toolResponse is not null) Console.WriteLine(toolResponse);
        Console.WriteLine($"{e.Message} ::: {e.StackTrace}");
        throw;
      }
    }
  }
}



using Microsoft.SemanticKernel;
using System.ComponentModel;

namespace EntitiesAssistant.Recipes {
  public class RecipePlugin {
    private readonly RecipeOutput _defaultRecipe = new() {
      Name = "Chicken Curry by APG",
      Ingredients = [
        new Ingredient { Name = "Money", Quantity = "5", MeasureUnit = "Dollars" },
      ],
      Instructions = "Step 1: Check your pocket. " +
        "Step 2: Go to the restaurant. " +
        "Step 3: Place your order. " +
        "Step 4: Pay & enjoy it."
    };

    [KernelFunction("FindRecipe")]
    [Description("Search for all internal recipes. This is a customized repository for those recipes prepared earlier.")]
    public async Task<RecipeOutput> FindRecipeAsync() {
      await Task.Delay(1000); // Simulates an I/O-bound lookup.
      return _defaultRecipe;
      //return System.Text.Json.JsonSerializer.Serialize(_defaultRecipe);
    }
  }
}
@apg-developer apg-developer added the bug (Something isn't working) label Nov 20, 2024
@markwallace-microsoft markwallace-microsoft added the .NET (Issue or Pull requests regarding .NET code) and triage labels Nov 20, 2024
@github-actions github-actions bot changed the title from "Bug: Conflict between Response Format property & function calling" to ".Net: Bug: Conflict between Response Format property & function calling" Nov 20, 2024
@dmytrostruk
Member

@apg-developer Could you please share what the RecipeOutput model looks like? Thanks in advance!

@apg-developer
Author

@dmytrostruk Sure, it is used by the plugin; here is the class definition for RecipeOutput. Thank you so much for your help!

public class RecipeOutput {
  public string Name { get; set; }
  public string Description { get; set; }
  public Ingredient[] Ingredients { get; set; }
  public string Instructions { get; set; }
}

public class Ingredient {
  public string Name { get; set; }
  public string Quantity { get; set; }
  public string MeasureUnit { get; set; }
}

@dmytrostruk
Member

@apg-developer Thanks a lot! Could you please try running the same example, but also specify the apiVersion parameter when you register the Azure OpenAI chat completion service, as in this example, and see if that works?

Kernel kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion(
        deploymentName: TestConfiguration.AzureOpenAI.ChatDeploymentName,
        endpoint: TestConfiguration.AzureOpenAI.Endpoint,
        credentials: new AzureCliCredential(), // from Azure.Identity
        apiVersion: "2024-08-01-preview")
    .Build();

@apg-developer
Author

@dmytrostruk The result was the same. Note the following stack trace and the provided screenshot. Thanks for your help!

General stack trace

   at Microsoft.SemanticKernel.Connectors.OpenAI.ClientCore.<RunRequestAsync>d__73`1.MoveNext()
   at Microsoft.SemanticKernel.Connectors.OpenAI.ClientCore.<GetChatMessageContentsAsync>d__16.MoveNext()
   at Microsoft.SemanticKernel.ChatCompletion.ChatCompletionServiceExtensions.<GetChatMessageContentAsync>d__2.MoveNext()
   at EntitiesAssistant.Recipes.RecipeCopilot.<ChatAsync>d__3.MoveNext() in D:\Apg\SampleWithProcessFramework\EntitiesAssistant\Recipes\RecipeCopilot.cs:line 44
   at Program.<<Main>$>d__0.MoveNext() in ..\EntitiesAssistant\Program.cs:line 8

Inner exception stack trace

   at Azure.AI.OpenAI.ClientPipelineExtensions.<ProcessMessageAsync>d__0.MoveNext()
   at System.Threading.Tasks.ValueTask`1.get_Result()
   at Azure.AI.OpenAI.Chat.AzureChatClient.<CompleteChatAsync>d__14.MoveNext()
   at OpenAI.Chat.ChatClient.<CompleteChatAsync>d__8.MoveNext()
   at Microsoft.SemanticKernel.Connectors.OpenAI.ClientCore.<RunRequestAsync>d__73`1.MoveNext()

Modified snippet code

var builder = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion(
        deploymentName: ProjectSettings.DeploymentName,
        apiKey: ProjectSettings.ApiKey,
        serviceId: ProjectSettings.ServiceId,
        endpoint: ProjectSettings.Endpoint,
        apiVersion: "2024-08-01-preview");

[screenshot attached]

@deq3

deq3 commented Nov 21, 2024

Hello, I am currently facing the same issue when returning data from a plugin function to a ChatCompletionAgent that has the ResponseFormat property set. Other ChatCompletionAgents with ResponseFormat set work fine; only those that also use a plugin that returns data fail with a 500 error after the function call, at the point where gpt-4o should shape the results into the ResponseFormat. I was surprised by this issue because I didn't change anything in my code and didn't update any dependencies, yet it stopped working.

Expected behaviour

agentKernel.ImportPluginFromObject(new SomePluginForDataRetrieval());

var agent = new ChatCompletionAgent()
{
    Name = "ToolAndResponseFormatAgent",
    Instructions = """
        A Prompt guiding how to use the plugin
    """,
    Kernel = agentKernel,
    Arguments = new KernelArguments(new OpenAIPromptExecutionSettings()
    {
        FunctionChoiceBehavior = FunctionChoiceBehavior.Auto(),
        ResponseFormat = typeof(FormattedResponse)
    })
};

// InvokeAsync yields messages as an IAsyncEnumerable, so they are consumed with await foreach.
await foreach (var response in agent.InvokeAsync(chatHistory))
{
    Console.WriteLine(response.Content);
}

The agent should not return a 500 error when using ResponseFormat together with a plugin that returns data to the agent in the form of an object.

Platform

  • OS: Windows
  • IDE: Visual Studio 2022
  • Language: C# (.NET 8)
  • Source:
    • Microsoft.SemanticKernel 1.28.0 (but also newer versions),
    • Microsoft.SemanticKernel.Agents.Core 1.28.0-alpha (also newer versions)
  • Model: gpt-4o version 2024-08-06 running on Azure OpenAI. I tried both eastus and westeurope for the deployments, but neither worked.

Hope this issue can be resolved soon. Thanks for pointing it out @apg-developer, and for the support so far @dmytrostruk.

@dmytrostruk
Member

@apg-developer @deq3 Thanks again for reporting this issue. It appears that the Structured Outputs feature in Azure OpenAI (i.e. Response Format as a JSON Schema) doesn't work with parallel function calls:
https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/structured-outputs?tabs=python-secure#function-calling-with-structured-outputs

When you enable function calling with the old ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions approach or the new FunctionChoiceBehavior = FunctionChoiceBehavior.Auto() approach on Azure OpenAI models, parallel calls are enabled by default, which is what causes this error.

To avoid the error, you need to explicitly disable parallel calls with the following syntax:

var executionSettings = new OpenAIPromptExecutionSettings
{
    ResponseFormat = typeof(FormattedResponse),
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto(options: new() { AllowParallelCalls = false })
};

When you disable parallel calls explicitly, the issue should be resolved for you. Please let me know if that works. Thank you!
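
For the agent-based setup from the earlier comment, the same option would presumably be passed through the agent's Arguments; a minimal sketch under that assumption (SomePluginForDataRetrieval and FormattedResponse are the placeholder names from that comment):

// Sketch: the AllowParallelCalls = false fix applied to the ChatCompletionAgent
// configuration shown above (placeholder names reused from that snippet).
var agent = new ChatCompletionAgent()
{
    Name = "ToolAndResponseFormatAgent",
    Kernel = agentKernel,
    Arguments = new KernelArguments(new OpenAIPromptExecutionSettings()
    {
        ResponseFormat = typeof(FormattedResponse),
        FunctionChoiceBehavior = FunctionChoiceBehavior.Auto(options: new() { AllowParallelCalls = false })
    })
};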
