zach prompt engineering #105
Conversation
After posing your question, I will assume the role of your guest and provide an answer. Assess the depth and relevance of my response without mirroring my words. Following that, continue the conversation by generating another question or a follow-up, keeping in mind the goal to maintain an engaging and thought-provoking dialogue that touches on a multitude of subjects.
You don't know much about your guest, so be very curious. Ask about FORD: family, occupation, recreation, dreams. If your guest isn't interested in a certain question, don't worry about it, but if they say something interesting, try to hook into their interest and ask curious questions. Focus on teasing out stories, not just getting facts or being helpful.
TIL FORD
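A minimal sketch of how the FORD-style host instructions above could seed a conversation as a system message for a chat-completion API. The prompt text and helper name here are illustrative, not the exact contents of the repo's prompt files.

```python
# Hypothetical sketch: packaging FORD-style host instructions as the system
# message that seeds a new conversation. The prompt wording is paraphrased
# from the diff above, not copied from the actual prompt file.
FORD_HOST_PROMPT = (
    "You don't know much about your guest, so be very curious. "
    "Ask about FORD: family, occupation, recreation, dreams. "
    "Hook into anything interesting and tease out stories, not just facts."
)

def build_initial_messages(prompt: str) -> list[dict]:
    """Return the message list that seeds a new conversation."""
    return [{"role": "system", "content": prompt}]

messages = build_initial_messages(FORD_HOST_PROMPT)
```

The rest of the thread below is about what happens when this seeded system message later needs to be replaced.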
else:
    messages = chat.history_json["messages"]
    if messages[0]["role"] == "system":
        messages[0] = system_prompt
    chat.history_json["messages"] = messages
Does this if statement actually change anything? It looks to me like you set the first message equal to the dict below whenever its role is already "system", but that assignment would be idempotent.
system_prompt = {
    "role": "system",
    "content": system_prompt,
}
Yeah, it's a bit confusing. On the first round, _generate_and_speak is called with a different system prompt: https://github.com/uberduck-ai/openduck/blob/main/openduck-py/openduck_py/routers/voice.py#L305
That prompt was then being used as the system prompt for all future messages, because it was stored in the DB.
Responded... Lmk if it makes sense
Ok got it, so this just overwrites the first system prompt with the new version?
yep!
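A minimal sketch of the behavior the thread settles on: the new system prompt overwrites whatever system message was persisted in the DB from the first round. Names like `history_json` follow the snippet above; the standalone function and the fallback insert are assumptions, not the repo's exact code.

```python
def apply_system_prompt(history_json: dict, system_prompt: str) -> dict:
    """Overwrite the stored system prompt so later turns use the new one.

    On the first round a different prompt was passed to _generate_and_speak
    and persisted; this replaces that stored system message.
    """
    new_system = {"role": "system", "content": system_prompt}
    messages = history_json["messages"]
    if messages and messages[0]["role"] == "system":
        messages[0] = new_system        # overwrite the old system prompt
    else:
        messages.insert(0, new_system)  # no system message yet: prepend one
    history_json["messages"] = messages
    return history_json

history = {
    "messages": [
        {"role": "system", "content": "old greeting prompt"},
        {"role": "user", "content": "hi"},
    ]
}
updated = apply_system_prompt(history, "new host prompt")
```

Unlike the snippet reviewed above, this version assigns a freshly built dict rather than re-assigning `system_prompt` to itself, so the overwrite is visible at the call site.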
User description

Description
- Simplify the log_audio_to_slack function by removing an unnecessary parameter.
- Update ResponseAgent to handle the system prompt correctly and to use a configurable chat model.

Changes walkthrough

slack.py (openduck-py/openduck_py/logging/slack.py)
- Simplify Slack logging function signature: remove the remote_path parameter from the log_audio_to_slack function.

response_agent.py (openduck-py/openduck_py/response_agent.py)
- Update logging and chat model handling: remove the remote_path argument when calling log_audio_to_slack.
- the first message.
- Change the chat_model parameter from CHAT_MODEL_GPT4 to CHAT_MODEL when calling _generate_and_speak.

voice.py (openduck-py/openduck_py/routers/voice.py)
- Improve Daily recording and Slack logging.
- 404 responses.
- recording status.

greeting.txt (openduck-py/openduck_py/prompts/intros/greeting.txt)
- Update greeting prompts topics.

podcast_host.md (openduck-py/openduck_py/prompts/most-interesting-bot/podcast_host.md)
- Refine podcast host prompt for better engagement.
- guest.
- conversation.
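The configurable chat model change mentioned in the walkthrough might look something like this sketch. The names CHAT_MODEL and CHAT_MODEL_GPT4 come from the walkthrough; the environment-variable fallback, default values, and placeholder function body are assumptions, not the repo's actual settings.

```python
import os

# Assumed defaults; the real values live in the repo's settings module.
CHAT_MODEL_GPT4 = "gpt-4"
CHAT_MODEL = os.environ.get("CHAT_MODEL", "gpt-3.5-turbo")

def generate_and_speak(messages: list[dict], chat_model: str = CHAT_MODEL) -> str:
    """Placeholder standing in for _generate_and_speak: the model is now a
    configurable parameter instead of being hard-coded to CHAT_MODEL_GPT4."""
    return f"[would call {chat_model} with {len(messages)} messages]"

result = generate_and_speak([{"role": "user", "content": "hi"}])
```

The point of the change is that callers (and deployments, via the environment) can pick the model without editing the call sites.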
💡 Usage Guide
Checking Your Pull Request
Every time you make a pull request, our system automatically looks through it. We check for security issues, mistakes in how you're setting up your infrastructure, and common code problems. We do this to make sure your changes are solid and won't cause any trouble later.
Talking to CodeAnt AI
Got a question or need a hand with something in your pull request? You can easily get in touch with CodeAnt AI right here. Just type the following in a comment on your pull request, and replace "Your question here" with whatever you want to ask:
This lets you have a chat with CodeAnt AI about your pull request, making it easier to understand and improve your code.
Check Your Repository Health
To analyze the health of your code repository, visit our dashboard at app.codeant.ai. This tool helps you identify potential issues and areas for improvement in your codebase, ensuring your repository maintains high standards of code health.