remove system prompt + typos
zack-anthropic committed Sep 18, 2023
1 parent bd42739 commit 57de851
Showing 1 changed file with 14 additions and 10 deletions.
24 changes: 14 additions & 10 deletions 04_Chatbot/00_Chatbot_Claude.ipynb
@@ -208,16 +208,21 @@
")\n",
"\n",
"# langchain prompts do not always work with all the models. This prompt is tuned for Claude\n",
"claude_prompt = PromptTemplate.from_template(\"\"\"The following is a friendly conversation between a human and an AI.\n",
"claude_prompt = PromptTemplate.from_template(\"\"\"\n",
"\n",
"Human: The following is a friendly conversation between a human and an AI.\n",
"The AI is talkative and provides lots of specific details from its context. If the AI does not know\n",
"the answer to a question, it truthfully says it does not know.\n",
"\n",
"Current conversation:\n",
"<conversation_history>\n",
"{history}\n",
"</conversation_history>\n",
"\n",
"\n",
"Human: {input}\n",
"\n",
"Here is the human's next reply:\n",
"<human_reply>\n",
"{input}\n",
"</human_reply>\n",
"\n",
"Assistant:\n",
"\"\"\")\n",
@@ -294,7 +299,7 @@
"source": [
"### Interactive session using ipywidgets\n",
"\n",
"The following utility class allows us to interact with Claude in a more natural way. We write out question in an input box, and get Claude answer. We can then continue our conversation."
"The following utility class allows us to interact with Claude in a more natural way. We write out the question in an input box, and get Claude's answer. We can then continue our conversation."
]
},
{
@@ -329,7 +334,7 @@
" else: \n",
" prompt = self.name.value\n",
" if 'q' == prompt or 'quit' == prompt or 'Q' == prompt:\n",
" print(\"Thank you , that was a nice chat !!\")\n",
" print(\"Thank you , that was a nice chat!!\")\n",
" return\n",
" elif len(prompt) > 0:\n",
" with self.out:\n",
@@ -404,7 +409,7 @@
"# store previous interactions using ConversationalBufferMemory and add custom prompts to the chat.\n",
"memory = ConversationBufferMemory()\n",
"memory.chat_memory.add_user_message(\"You will be acting as a career coach. Your goal is to give career advice to users\")\n",
"memory.chat_memory.add_ai_message(\"I am career coach and give career advice\")\n",
"memory.chat_memory.add_ai_message(\"I am a career coach and give career advice\")\n",
"cl_llm = Bedrock(model_id=\"anthropic.claude-v1\",client=boto3_bedrock)\n",
"conversation = ConversationChain(\n",
" llm=cl_llm, verbose=True, memory=memory\n",
@@ -448,7 +453,7 @@
"metadata": {},
"source": [
"## Chatbot with Context \n",
"In this use case we will ask the Chatbot to answer question from some external corpus it has likely never seen before. To do this we apply a pattern called RAG (Retrieval Augmented Generation): the idea is to index the corpus in chunks, then lookup which sections of the corpus might be relevant to provide an answer by using semantic similarity between the chunks and the question. Finally the most relevant chunks are aggregated and passed as context to the ConversationChain, similar to providing an history.\n",
"In this use case we will ask the Chatbot to answer question from some external corpus it has likely never seen before. To do this we apply a pattern called RAG (Retrieval Augmented Generation): the idea is to index the corpus in chunks, then look up which sections of the corpus might be relevant to provide an answer by using semantic similarity between the chunks and the question. Finally the most relevant chunks are aggregated and passed as context to the ConversationChain, similar to providing a history.\n",
"\n",
"We will take a csv file and use **Titan Embeddings Model** to create vectors for each line of the csv. This vector is then stored in FAISS, an open source library providing an in-memory vector datastore. When the chatbot is asked a question, we query FAISS with the question and retrieve the text which is semantically closest. This will be our answer. "
]
@@ -707,7 +712,7 @@
"\n",
"Assistant: Question:\"\"\")\n",
"\n",
"# recreate the Claude LLM with more tokens to sample - this provide longer responses but introduces some latency\n",
"# recreate the Claude LLM with more tokens to sample - this provides longer responses but introduces some latency\n",
"cl_llm = Bedrock(model_id=\"anthropic.claude-v1\", client=boto3_bedrock, model_kwargs={\"max_tokens_to_sample\": 500})\n",
"memory_chain = ConversationBufferMemory(memory_key=\"chat_history\", return_messages=True)\n",
"qa = ConversationalRetrievalChain.from_llm(\n",
@@ -726,7 +731,6 @@
"qa.combine_docs_chain.llm_chain.prompt = PromptTemplate.from_template(\"\"\"\n",
"{context}\n",
"\n",
"\n",
"Human: Use at maximum 3 sentences to answer the question inside the <q></q> XML tags. \n",
"\n",
"<q>{question}</q>\n",
