Adds tracing for langfuse prompts #71
Conversation
Pull Request Overview
This PR adds tracing support for Langfuse prompts by modifying the prompt retrieval system to return ChatPromptTemplate objects with metadata for tracing instead of plain strings.
Key changes:
- Modified get_initial_prompt() to get_langfuse_prompt(), which returns a tuple of (prompt, found_flag); see the sketch below
- Updated the response generation logic to handle both Langfuse prompts and fallback prompts differently
- Added proper imports for ChatPromptTemplate and ChatPromptClient
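For orientation, a minimal sketch of the shape described above, assuming the Langfuse Python SDK (Langfuse().get_prompt() and get_langchain_prompt()); the prompt name, fallback text, and error handling are placeholders rather than the project's actual code:

```python
from langchain_core.prompts import ChatPromptTemplate
from langfuse import Langfuse

# Placeholder fallback template; the real one presumably comes from the project's config.
FALLBACK_PROMPT = "Use the context to answer.\n\nContext: {context}\n\nQuestion: {question}"


def get_langfuse_prompt() -> tuple[ChatPromptTemplate | str, bool]:
    """Return (prompt, found): a traced ChatPromptTemplate when the Langfuse
    prompt can be fetched, otherwise the plain fallback string."""
    try:
        langfuse_prompt = Langfuse().get_prompt("willa-chat")  # hypothetical prompt name
        template = ChatPromptTemplate.from_template(
            langfuse_prompt.get_langchain_prompt(),
            # Attaching the prompt client as metadata is the pattern Langfuse's
            # LangChain integration documents for linking traces to prompt versions.
            metadata={"langfuse_prompt": langfuse_prompt},
        )
        return template, True
    except Exception:  # prompt missing or Langfuse unreachable
        return FALLBACK_PROMPT, False
```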
Reviewed Changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| willa/config/__init__.py | Refactored prompt retrieval to return ChatPromptTemplate with Langfuse metadata for tracing |
| willa/chatbot/graph_manager.py | Updated response generation to handle both Langfuse and fallback prompts with different invocation patterns |
willa/chatbot/graph_manager.py
Outdated
# Will grab a langfuse prompt. If not found will used a Fallback prompt
prompt, found = get_langfuse_prompt()
if found and hasattr(prompt, "invoke"):
    # system_messages = prompt.invoke(input={'context': docs_context,
nit: remove this comment
willa/chatbot/graph_manager.py
Outdated
system_message = SystemMessage(content=prompt.format(
    context=docs_context,
    question=latest_message.content
))

all_messages = summarized_conversation + [system_message]
I think if you apply something like this (i.e., use the same variable from above to reduce the number of unique local variables) you won't have to disable the too-many-locals linter.
system_message = SystemMessage(content=prompt.format(
    context=docs_context,
    question=latest_message.content
))
all_messages = summarized_conversation + [system_message]
system_messages = [SystemMessage(content=prompt.format(
    context=docs_context,
    question=latest_message.content
))]
all_messages = summarized_conversation + system_messages
Yes, it looks like that is right. I'll add this and remove the pylint disable for that.
willa/config/__init__.py
Outdated
def get_initial_prompt() -> str:
    """Get the prompt from langfuse or default from config if not specified or found"""
def get_langfuse_prompt() -> tuple[ChatPromptTemplate | str, bool]:
- is it possible to have this function only return a ChatPromptTemplate + bool instead of ChatPromptTemplate | str + bool?
- if we're trying to track whether it's the fallback prompt, we can probably do something like set tags: ['fallback-prompt'] on the ChatPromptTemplate and modify the check in lines 125-126 of willa.chatbot.graph_manager.
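As a rough illustration of that suggestion (not the code that was merged; FALLBACK_PROMPT and the helper name are made up), the fallback path could return the same type as the Langfuse path and simply carry a tag:

```python
from langchain_core.prompts import ChatPromptTemplate

FALLBACK_PROMPT = "Use the context to answer.\n\nContext: {context}\n\nQuestion: {question}"  # placeholder

# Fallback path: same return type as the Langfuse path, just tagged.
fallback = ChatPromptTemplate.from_template(FALLBACK_PROMPT)
fallback.tags = ["fallback-prompt"]  # BasePromptTemplate exposes an optional `tags` field


def is_fallback(prompt: ChatPromptTemplate) -> bool:
    """The kind of check graph_manager could use instead of hasattr(prompt, 'invoke')."""
    return "fallback-prompt" in (prompt.tags or [])
```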
I just pushed a change before seeing Anna's comments. I don't think I even need a boolean if I'm always returning a ChatPromptTemplate. If it's the fallback I just wouldn't set the prompt metadata etc.
There's a way to both simplify generate_response and make get_langfuse_prompt more deterministic with a single return type.
willa/chatbot/graph_manager.py
Outdated
# Will grab a langfuse prompt. If not found will used a Fallback prompt
prompt, found = get_langfuse_prompt()
if found and hasattr(prompt, "invoke"):
    # system_messages = prompt.invoke(input={'context': docs_context,
    system_messages = prompt.invoke({'context': docs_context,
                                     'question': latest_message.content})
    if hasattr(system_messages, "messages"):
        all_messages = summarized_conversation + system_messages.messages
    else:
        all_messages = summarized_conversation + [system_messages]
else:
    system_message = SystemMessage(content=prompt.format(
        context=docs_context,
        question=latest_message.content
    ))
I believe that a good way to resolve this would be to put this logic in get_langfuse_prompt and have that method return list[SystemMessage]. Then…
Note: get_langfuse_prompt obviously wouldn't have any access to all_messages; it would be returning either system_messages.messages, [system_messages], or [system_message].
I've also just realised that the prompt function would then need to take two arguments, context and question. That way it can fill out the template as well.
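Putting those comments together, the suggested shape would be roughly this (a sketch only; _load_prompt is a hypothetical helper standing in for the Langfuse lookup, and the code actually pushed in the PR may differ):

```python
from langchain_core.messages import SystemMessage


def get_langfuse_prompt(context: str, question: str) -> list[SystemMessage]:
    """Return the filled-in system message(s) for this turn, from either the
    Langfuse-managed prompt or the local fallback template."""
    prompt, found = _load_prompt()  # hypothetical helper wrapping the Langfuse lookup
    if found and hasattr(prompt, "invoke"):
        # ChatPromptTemplate.invoke returns a prompt value whose .messages is the rendered list.
        return prompt.invoke({"context": context, "question": question}).messages
    return [SystemMessage(content=prompt.format(context=context, question=question))]
```

generate_response would then reduce to something like all_messages = summarized_conversation + get_langfuse_prompt(docs_context, latest_message.content), which is the simplification this thread is after.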
Just pushed some changes based on what Maria was saying before I saw this. I think what I just checked in simplifies it quite a bit
This way works too!
Remove the now-unnecessary PyLint comments and this should be good to go.
wonderful - this iteration looks great! thanks @davezuckerman.
did some refactoring based on copilot suggestions; fixed mypy error; returning default ChatPromptTemplate instead of string if no prompt is found in langfuse; removed unnecessary pylint disable
363a42d to 2751e6f
No description provided.