Conversation

davezuckerman (Contributor)

No description provided.

@Copilot Copilot AI review requested due to automatic review settings October 3, 2025 16:46
@Copilot Copilot AI left a comment
Pull Request Overview

This PR adds tracing support for Langfuse prompts by modifying the prompt retrieval system to return ChatPromptTemplate objects carrying tracing metadata instead of plain strings.

Key changes:

  • Renamed get_initial_prompt() to get_langfuse_prompt(), which returns a (prompt, found_flag) tuple
  • Updated the response generation logic to handle both Langfuse prompts and fallback prompts differently
  • Added proper imports for ChatPromptTemplate and ChatPromptClient
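Based on the description above, the renamed helper's control flow might look roughly like this. This is a minimal sketch using stand-in types: the real code uses Langfuse's client and LangChain's ChatPromptTemplate, and `fetch_from_langfuse` plus the `"langfuse_prompt"` metadata key are hypothetical names chosen for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ChatPromptTemplate:
    """Stand-in for langchain_core.prompts.ChatPromptTemplate (illustrative only)."""
    template: str
    metadata: dict = field(default_factory=dict)

FALLBACK_PROMPT = "Answer using {context}. Question: {question}"

def fetch_from_langfuse():
    """Stand-in for the Langfuse client lookup; None means no prompt is configured."""
    return None

def get_langfuse_prompt() -> tuple[ChatPromptTemplate, bool]:
    """Return (prompt, found_flag): the Langfuse prompt with tracing metadata,
    or the fallback template with found_flag=False."""
    remote = fetch_from_langfuse()
    if remote is not None:
        prompt = ChatPromptTemplate(remote.text)
        prompt.metadata["langfuse_prompt"] = remote  # lets traces link back to the prompt
        return prompt, True
    return ChatPromptTemplate(FALLBACK_PROMPT), False

prompt, found = get_langfuse_prompt()
```

The boolean lets the caller distinguish a traced Langfuse prompt from the local fallback; the later review discussion converges on dropping it.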

Reviewed Changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 4 comments.

File | Description
willa/config/__init__.py | Refactored prompt retrieval to return ChatPromptTemplate with Langfuse metadata for tracing
willa/chatbot/graph_manager.py | Updated response generation to handle both Langfuse and fallback prompts with different invocation patterns

```python
# Will grab a Langfuse prompt. If not found, a fallback prompt will be used.
prompt, found = get_langfuse_prompt()
if found and hasattr(prompt, "invoke"):
    # system_messages = prompt.invoke(input={'context': docs_context,
```
Member left a comment:

nit: remove this comment

Comment on lines 135 to 140:

```python
system_message = SystemMessage(content=prompt.format(
    context=docs_context,
    question=latest_message.content
))

all_messages = summarized_conversation + [system_message]
```
Member left a comment:

i think if you apply something like this (i.e., use the same variable from above to reduce the number of unique local variables) you won't have to disable the too-many-locals linter.

Suggested change:

```diff
-system_message = SystemMessage(content=prompt.format(
-    context=docs_context,
-    question=latest_message.content
-))
-all_messages = summarized_conversation + [system_message]
+system_messages = [SystemMessage(content=prompt.format(
+    context=docs_context,
+    question=latest_message.content
+))]
+all_messages = summarized_conversation + system_messages
```

Contributor (author) left a comment:

Yes, it looks like that is right. I'll add this and remove the pylint disable for that.


```diff
-def get_initial_prompt() -> str:
+def get_langfuse_prompt() -> tuple[ChatPromptTemplate | str, bool]:
     """Get the prompt from langfuse or default from config if not specified or found"""
```
Member left a comment:

  1. is it possible to have this function only return a ChatPromptTemplate + bool instead of ChatPromptTemplate | str + bool?

  2. if we're trying to track whether it's the fallback prompt, we can probably do something like set tags: ['fallback-prompt'] on the ChatPromptTemplate and modify the check in lines 125-126 of willa.chatbot.graph_manager.
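The tag-based approach described in point 2 could be sketched like this. The `tags` field and the `'fallback-prompt'` check are hypothetical, taken from the reviewer's wording, and the class is a stand-in rather than the real LangChain type:

```python
from dataclasses import dataclass, field

@dataclass
class ChatPromptTemplate:
    """Stand-in for langchain_core.prompts.ChatPromptTemplate (illustrative only)."""
    template: str
    tags: list = field(default_factory=list)

def get_langfuse_prompt() -> ChatPromptTemplate:
    """Always return a ChatPromptTemplate; mark the fallback with a tag
    instead of a separate boolean flag."""
    remote_text = None  # stand-in: the Langfuse lookup found nothing
    if remote_text is not None:
        return ChatPromptTemplate(remote_text)
    return ChatPromptTemplate("Answer using {context}. Question: {question}",
                              tags=['fallback-prompt'])

prompt = get_langfuse_prompt()
is_fallback = 'fallback-prompt' in prompt.tags
```

The caller in graph_manager would then test for the tag instead of unpacking a tuple.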

Contributor (author) left a comment:

I just pushed a change before seeing Anna's comments. I don't think I even need a boolean if I'm always returning a ChatPromptTemplate. If it's the fallback, I just wouldn't set the prompt metadata, etc.
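The single-return-type version the author describes might look like this sketch, again with stand-in types; the `metadata` key is hypothetical, and the point is only that an empty metadata dict signals the fallback:

```python
from dataclasses import dataclass, field

@dataclass
class ChatPromptTemplate:
    """Stand-in for langchain_core.prompts.ChatPromptTemplate (illustrative only)."""
    template: str
    metadata: dict = field(default_factory=dict)

FALLBACK_PROMPT = "Answer using {context}. Question: {question}"

def get_langfuse_prompt() -> ChatPromptTemplate:
    """Always return a ChatPromptTemplate; only a real Langfuse prompt
    carries tracing metadata, so no boolean flag is needed."""
    remote = None  # stand-in: the Langfuse lookup found nothing
    if remote is not None:
        prompt = ChatPromptTemplate(remote.text)
        prompt.metadata["langfuse_prompt"] = remote  # tracing linkage
        return prompt
    return ChatPromptTemplate(FALLBACK_PROMPT)  # fallback: metadata stays empty

prompt = get_langfuse_prompt()
```

Callers can then treat every prompt uniformly and let the tracing layer pick up the metadata when it is present.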

@awilfox left a comment (Member):

There's a way to both simplify generate_response and make get_langfuse_prompt more deterministic with a single return type.

Comment on lines 124 to 138:

```python
# Will grab a Langfuse prompt. If not found, a fallback prompt will be used.
prompt, found = get_langfuse_prompt()
if found and hasattr(prompt, "invoke"):
    # system_messages = prompt.invoke(input={'context': docs_context,
    system_messages = prompt.invoke({'context': docs_context,
                                     'question': latest_message.content})
    if hasattr(system_messages, "messages"):
        all_messages = summarized_conversation + system_messages.messages
    else:
        all_messages = summarized_conversation + [system_messages]
else:
    system_message = SystemMessage(content=prompt.format(
        context=docs_context,
        question=latest_message.content
    ))
```
Member left a comment:
I believe that a good way to resolve this would be to put this logic in get_langfuse_prompt and have that method return list[SystemMessage]. Then…

Member left a comment:

Note: get_langfuse_prompt obviously wouldn't have any access to all_messages; it would be returning either system_messages.messages, [system_messages], or [system_message].

Member left a comment:
I've also just realised that the prompt function would then need to take two arguments, context and question. That way it can fill out the template as well.
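Putting the two suggestions together (take `context` and `question` as arguments, return `list[SystemMessage]`), the refactor could be sketched like this; the `SystemMessage` class here is a stand-in for the LangChain type, and the Langfuse lookup is mocked out:

```python
from dataclasses import dataclass

@dataclass
class SystemMessage:
    """Stand-in for langchain_core.messages.SystemMessage (illustrative only)."""
    content: str

FALLBACK_PROMPT = "Answer using {context}. Question: {question}"

def get_langfuse_prompt(context: str, question: str) -> list[SystemMessage]:
    """Fill the template and return ready-to-use system messages, so
    generate_response() no longer branches on Langfuse vs. fallback."""
    remote_text = None  # stand-in: the Langfuse lookup found nothing
    template = remote_text if remote_text is not None else FALLBACK_PROMPT
    return [SystemMessage(content=template.format(context=context, question=question))]

# generate_response() then collapses to a single concatenation, e.g.:
# all_messages = summarized_conversation + get_langfuse_prompt(docs_context, latest_message.content)
```

Because the list always has the same shape, the `hasattr` checks in generate_response disappear along with the extra locals.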

Contributor (author) left a comment:

Just pushed some changes based on what Maria was saying before I saw this. I think what I just checked in simplifies it quite a bit.

@awilfox left a comment (Member):

This way works too!

Remove the now-unnecessary PyLint comments and this should be good to go.

@anarchivist anarchivist self-requested a review October 7, 2025 16:32
@anarchivist left a comment (Member):

wonderful - this iteration looks great! thanks @davezuckerman.

Commits in this force-push:

- did some refactoring based on copilot suggestions
- fixed mypy error
- returning default chatbottemplate instead of string if no prompt is found in langfuse
- removed unnecessary pylint disable
@davezuckerman davezuckerman force-pushed the AP-448-prompt-tracing branch from 363a42d to 2751e6f October 7, 2025 16:41
@davezuckerman davezuckerman merged commit 052fca0 into main Oct 7, 2025
6 checks passed
@davezuckerman davezuckerman deleted the AP-448-prompt-tracing branch October 7, 2025 16:53