
Conditional flow with tool calls #33

Open
r-leyshon opened this issue Jan 13, 2025 · 6 comments

@r-leyshon commented Jan 13, 2025

Hi there,

I'm interested in this package for a more complex chatbot that I've been developing. Ideally, I need to stream OpenAI responses with tools. Getting this to work with the Shiny chat component has been a little tricky, but then I came across a reference to chatlas in the docstring for Shiny's Chat class. Working through your docs and examples, I can get so far with it.

This approach allows for streaming responses with tool calls and Shiny UI
widgets, though the tool calls need to be self-contained: whatever the tool
returns is fed into the model for a subsequent response, rather than being
made available for additional conditional flow.

I would like to instead receive the function name and parameter values, call
get_current_temperature() myself, and pass the JSON response back to the
model. This would allow me to display a notification without relying on a
side effect.

from chatlas import ChatOpenAI, Turn
from shiny.express import ui
import dotenv
import requests

openai_key = dotenv.dotenv_values()["OPENAI_KEY"]
# Seed the conversation with a system prompt and an opening assistant turn.
messages = [Turn(role="system", contents="You are a helpful but terse assistant.")]
messages.append(Turn(role="assistant", contents="Hi! How can I help you today?"))


# This function contains a side effect (the notification) which I would
# prefer to handle outside of it.
def get_current_temperature(latitude: float, longitude: float):
    """
    Get the current weather given a latitude and longitude.

    Parameters
    ----------
    latitude
        The latitude of the location.
    longitude
        The longitude of the location.
    """
    lat_lng = f"latitude={latitude}&longitude={longitude}"
    url = f"https://api.open-meteo.com/v1/forecast?{lat_lng}&current=temperature_2m,wind_speed_10m&hourly=temperature_2m,relative_humidity_2m,wind_speed_10m"
    response = requests.get(url)
    payload = response.json()
    weather_now = payload["current"]
    ui.notification_show(f"Queried weather API... {weather_now}")
    return weather_now


chat_model = ChatOpenAI(
    api_key=openai_key,
    turns=messages,
)
chat_model.register_tool(get_current_temperature)
chat = ui.Chat(
    id="ui_chat",
    messages=["Hi! How can I help you today?"],
)
chat.ui()


@chat.on_user_submit
async def handle_user_input():
    response = chat_model.stream(chat.user_input())
    await chat.append_message_stream(response)
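
For reference, chatlas also seems to expose an async variant of stream(). Assuming stream_async() mirrors stream(), the handler could equivalently be written as the untested sketch below, which avoids blocking Shiny's event loop while tokens arrive:

@chat.on_user_submit
async def handle_user_input():
    # Untested sketch: assumes chatlas's stream_async() mirrors stream().
    response = await chat_model.stream_async(chat.user_input())
    await chat.append_message_stream(response)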

@gadenbuie (Contributor)

Hi, thanks for your question @r-leyshon. I'm having a hard time understanding the question though. Could you outline, maybe in pseudo-code, what you're wanting to do?

I think in particular I'm having a hard time seeing what you mean with "the tool call response ... being available for additional conditional flow".

@r-leyshon (Author)

@gadenbuie thanks for the quick look at that. I'll try to show what I intend to do with the streamed model response below:

response_stream = chat_model.stream(chat.user_input())
async for part in response_stream:
    if part.tool_call:
        ui.notification_show("A tool call is required")
        tool_name = ...  # collect the function name from the part
        tool_args = ...  # collect the function arguments from the part
        if tool_name == "foo":
            foo(tool_args)
    else:
        await chat.append_message_stream(response_stream)

I've struggled to inspect the streamed parts and act on them, at least while also awaiting the async appends to the message stream. My fallback is to switch off streaming, but I'd rather not do that.
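
To make that concrete, the manual handling I have in mind would look roughly like the sketch below with the raw OpenAI client instead of chatlas. This is a sketch only: client, messages, and tools are assumed to be set up elsewhere, the model name is illustrative, and tool-call deltas arrive in fragments, so the arguments have to be accumulated before parsing:

import json

from openai import OpenAI

client = OpenAI()
stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=messages,
    tools=tools,
    stream=True,
)

tool_name = ""
tool_args = ""
for chunk in stream:
    if not chunk.choices:
        continue
    delta = chunk.choices[0].delta
    if delta.tool_calls:
        # Tool-call deltas arrive fragmented: accumulate name and JSON args.
        call = delta.tool_calls[0]
        if call.function.name:
            tool_name = call.function.name
        if call.function.arguments:
            tool_args += call.function.arguments
    elif delta.content:
        ...  # forward text chunks to the chat UI as they arrive

if tool_name == "get_current_temperature":
    result = get_current_temperature(**json.loads(tool_args))
    # result is now available for notifications or other conditional flow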

@gadenbuie (Contributor)

Thanks for the clarification. Right now, I don't think it's possible to do what you're trying to do, at least not with the streaming response approach. Given that, I think it's very reasonable to use something like a notification to indicate that a tool was called. We'll definitely keep your use case in mind as we develop chatlas and the Shiny chat component.
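
In the meantime, one way to keep the weather function itself free of side effects is to register a thin wrapper as the tool and have the notification live in the wrapper. An untested sketch, assuming get_current_temperature() is stripped of its ui.notification_show() call (chatlas builds the tool definition from the registered function's signature and docstring, so the wrapper needs to carry those):

def get_current_temperature_tool(latitude: float, longitude: float):
    """
    Get the current weather given a latitude and longitude.

    Parameters
    ----------
    latitude
        The latitude of the location.
    longitude
        The longitude of the location.
    """
    # The UI side effect lives here, not in the API-calling function.
    weather_now = get_current_temperature(latitude, longitude)
    ui.notification_show(f"Queried weather API... {weather_now}")
    return weather_now

chat_model.register_tool(get_current_temperature_tool)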

@r-leyshon (Author) commented Jan 15, 2025

Gotcha, no worries. I can fall back to removing streaming. In case I wasn't clear, I need to handle the response for reasons other than showing a notification to the user. The notification would be a placeholder for the conditional logic that I'm using to pull the model response apart for other things that would be too onerous to share in the little example above.

I wanted to say that the chatlas approach seems very friendly, and in future, if I can keep each tool as a functional unit, I'll prefer to use chatlas over handling the response myself.

@gadenbuie (Contributor)

> The notification would be a placeholder for the conditional logic that I'm using to pull the model response apart for other things that would be too onerous to share in the little example above.

Could you describe the kinds of things you're wanting to do with the model response? That would help us understand the use case.

I'm going to reopen this issue so that we make sure we keep it in mind moving forward.

@gadenbuie reopened this Jan 15, 2025
@r-leyshon (Author)

Sure thing, the code I've fallen back to lives here: https://github.com/ministryofjustice/github-chat

Specifically, this bit:

...
            elif (tool_call := resp.message.tool_calls):
                function_name = tool_call[0].function.name
                arguments = tool_call[0].function.arguments
                sanitised_func_nm = sanitise_string(function_name)
                sanitised_args = [
                    sanitise_string(arg) for arg in
                    json.loads(arguments)["keywords"]
                    ]
                if sanitised_func_nm == "ExtractKeywordEntities":
                    # Pydantic will raise if keywords violate schema rules
                    extracted_terms = ExtractKeywordEntities(
                        keywords=sanitised_args
                        )
                    ui.notification_show(
                        ("Searching database for keywords:"
                        f" {', '.join(extracted_terms.keywords)}")
                        )
                    summarise_this = chroma_pipeline.execute_pipeline(
                        keywords=extracted_terms.keywords,
                        n_results=input.selected_n(),
                        distance_threshold=input.dist_thresh(),
                        sanitised_prompt=sanitised_prompt
                    )
                    if (n_removed := chroma_pipeline.total_removed) > 0:
                        ui.notification_show(
                            f"{n_removed} results were removed."
                            )
                    if len(chroma_pipeline.results) == 0:
                        ui.notification_show(
                            "No results shown, increase distance threshold"
                            )
                    stream.append(summarise_this)
                    response = openai_client.chat.completions.create(
                        **completions_params
                        )
                    meta_resp = {
                        "role": "assistant",
                        "content": response.choices[0].message.content
                        }
                    await chat.append_message(response)
                    await chat.append_message(
                        {
                            "role": "asisstant",
                            "content": chroma_pipeline.chat_ui_results
                            })
                    stream.append(meta_resp)
...
