Commit
added prompt file + docs
fractalego committed Jul 10, 2024
1 parent 908bce1 commit 47b5893
Showing 11 changed files with 84 additions and 28 deletions.
12 changes: 11 additions & 1 deletion documentation/source/configuration.rst
@@ -12,8 +12,11 @@ A typical configuration file looks like this:
"waking_up_sound": true,
"deactivate_sound": true,
"rules": "rules.yaml",
"index": "indices.yaml",
"cache_filename": "knowledge_cache",
"prompt_filename": "main.prompt",
"functions": "functions.py",
"frontend_port": 8081,
"max_recursion": 2,
"llm_model": {
"model_host": "localhost",
"model_port": 8080,
@@ -37,6 +40,7 @@ A typical configuration file looks like this:
}
These settings regulate the following:

* "waking_up_word" is the name of the bot, used to wake up the system in the "run-audio" mode.
@@ -45,6 +49,12 @@ These settings regulate the following:

* "rules" is the file containing the facts and rules that guide the chatbot. The default is "rules.yaml".

* "index" is the file listing the paths of the files to index. The default is "indices.yaml".

* "cache_filename" is the file where the indexed knowledge is cached. The default is "knowledge_cache".

* "prompt_filename" is the file containing the main prompt for the chatbot. The default is "main.prompt".

* "functions" is the file containing the functions that can be used in the rules. The default is "functions.py".

* "frontend_port" is the port where the web frontend is running. The default is 8090.
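The settings above live in a plain JSON file, so they can be read with the standard library. The snippet below is an illustrative sketch only (it inlines a config fragment rather than reading `config.json`, and is not WAFL's actual loader); the key names follow the documented defaults.

```python
import json

# Hypothetical sketch: a fragment of the documented config, parsed the
# way a loader might read it. Key names follow the docs above.
config_text = """
{
  "rules": "rules.yaml",
  "index": "indices.yaml",
  "cache_filename": "knowledge_cache",
  "prompt_filename": "main.prompt",
  "functions": "functions.py",
  "frontend_port": 8090,
  "max_recursion": 2
}
"""
config = json.loads(config_text)

# Fall back to the documented defaults when a key is absent.
prompt_filename = config.get("prompt_filename", "main.prompt")
frontend_port = config.get("frontend_port", 8090)
```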
1 change: 1 addition & 0 deletions documentation/source/index.rst
@@ -16,6 +16,7 @@ Welcome to WAFL's 0.0.90 documentation!
configuration
running_WAFL
facts_and_rules
modify_the_prompt
examples
testcases
actions
23 changes: 23 additions & 0 deletions documentation/source/modify_the_prompt.rst
@@ -0,0 +1,23 @@
Modify the original prompt
==========================

The prompt is stored in the file "main.prompt" in the project's root directory.
The name of the file can be changed in the `config.json` file.
The default is:


.. code-block:: text

    A user is chatting with a bot. The chat is happening through a web interface. The user is typing the messages and the bot is replying.
    This is a summary of the bot's knowledge:
    {facts}
    The rules that *must* be followed are:
    {rules}
    Create a plausible dialogue based on the aforementioned summary and rules.
    Do not repeat yourself. Be friendly but not too servile.
    Follow the rules if present and they apply to the dialogue. Do not improvise if rules are present.
The variables `{facts}` and `{rules}` are replaced by the actual facts and rules when the prompt is generated.
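The substitution described above can be sketched with plain `str.replace`; the template and sample values below are illustrative, and the real client may build its prompt differently.

```python
# Minimal sketch of the {facts}/{rules} substitution: the placeholders
# in the prompt template are swapped for the actual text at generation time.
template = (
    "This is a summary of the bot's knowledge:\n"
    "{facts}\n"
    "The rules that *must* be followed are:\n"
    "{rules}\n"
)

facts = "The bot's name is Computer."          # hypothetical facts summary
rules = "- Greet the user by name."            # hypothetical rules text

prompt = template.replace("{facts}", facts).replace("{rules}", rules)
```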
3 changes: 2 additions & 1 deletion tests/config.json
@@ -4,7 +4,8 @@
"deactivate_sound": true,
"rules": "rules.yaml",
"index": "indices.yaml",
"index_filename": "knowledge_cache",
"cache_filename": "knowledge_cache",
"prompt_filename": "main.prompt",
"functions": "functions.py",
"max_recursion": 2,
"llm_model": {
11 changes: 11 additions & 0 deletions tests/main.prompt
@@ -0,0 +1,11 @@
A user is chatting with a bot. The chat is happening through a web interface. The user is typing the messages and the bot is replying.

This is a summary of the bot's knowledge:
{facts}

The rules that *must* be followed are:
{rules}

Create a plausible dialogue based on the aforementioned summary and rules.
Do not repeat yourself. Be friendly but not too servile.
Follow the rules if present and they apply to the dialogue. Do not improvise if rules are present.
19 changes: 14 additions & 5 deletions todo.txt
@@ -1,12 +1,21 @@
* interruptible speech
* dependabot!!!
* use poetry

PharazonE
* upload to hetzner and make it work for some retrieval tasks
* develop more rules + use-cases for voice and other


/* add control over which llm to use from the frontend
/ - add list of models in the backend

* interruptible speech
* add option to use llama.cpp from wafl_llm
* add option to have None as a model setting in wafl_llm
/* add quantization of llm to wafl_llm config
/* write docs about it on wafl

/* add option to use llama.cpp from wafl_llm
/* add option to have None as a model setting in wafl_llm

* dependabot!!!
* use poetry
/* add pdf to indexing
* add json to indexing
/* add metadata to indexing items
Expand Down
4 changes: 2 additions & 2 deletions wafl/answerer/dialogue_answerer.py
@@ -11,7 +11,7 @@
)
from wafl.answerer.base_answerer import BaseAnswerer
from wafl.answerer.rule_maker import RuleMaker
from wafl.connectors.clients.llm_chitchat_answer_client import LLMChitChatAnswerClient
from wafl.connectors.clients.llm_chat_client import LLMChatClient
from wafl.dataclasses.dataclasses import Query, Answer
from wafl.interface.conversation import Conversation
from wafl.simple_text_processing.questions import is_question
@@ -20,7 +20,7 @@
class DialogueAnswerer(BaseAnswerer):
def __init__(self, config, knowledge, interface, code_path, logger):
self._threshold_for_facts = 0.85
self._client = LLMChitChatAnswerClient(config)
self._client = LLMChatClient(config)
self._knowledge = knowledge
self._logger = logger
self._interface = interface
@@ -1,5 +1,4 @@
import os
import textwrap
from typing import List

from wafl.connectors.factories.llm_connector_factory import LLMConnectorFactory
@@ -8,10 +7,12 @@
_path = os.path.dirname(__file__)


class LLMChitChatAnswerClient:
class LLMChatClient:
def __init__(self, config):
self._connector = LLMConnectorFactory.get_connector(config)
self._config = config
with open(self._config.get_value("prompt_filename")) as f:
self.prompt = f.read()

async def get_answer(self, text: str, dialogue: str, rules_text: List[str]) -> str:
prompt = await self._get_answer_prompt(text, dialogue, "\n".join(rules_text))
@@ -26,16 +27,4 @@ async def _get_answer_prompt(
)

def _get_system_prompt(self, text, rules_text):
return f"""
A user is chatting with a bot. The chat is happening through a web interface. The user is typing the messages and the bot is replying.
This is summary of the bot's knowledge:
{text.strip()}
The rules that *must* be followed are:
{rules_text.strip()}
Create a plausible dialogue based on the aforementioned summary and rules.
Do not repeat yourself. Be friendly but not too servile.
Follow the rules if present and they apply to the dialogue. Do not improvise if rules are present.
""".strip()
return self.prompt.replace("{facts}", text.strip()).replace("{rules}", rules_text.strip()).strip()
6 changes: 3 additions & 3 deletions wafl/knowledge/indexing_implementation.py
@@ -36,14 +36,14 @@ async def load_knowledge(config, logger=None):
with open(index_filename) as file:
index_txt = file.read()

if os.path.exists(config.get_value("index_filename")):
knowledge = joblib.load(config.get_value("index_filename"))
if os.path.exists(config.get_value("cache_filename")):
knowledge = joblib.load(config.get_value("cache_filename"))
if knowledge.hash == hash(rules_txt + index_txt):
return knowledge

knowledge = SingleFileKnowledge(config, rules_txt, logger=logger)
knowledge = await _add_indices_to_knowledge(knowledge, index_txt)
joblib.dump(knowledge, config.get_value("index_filename"))
joblib.dump(knowledge, config.get_value("cache_filename"))
await knowledge.initialize_retrievers()
return knowledge

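The renamed `cache_filename` setting drives a hash-gated cache: the serialized knowledge is reused only while the hash of the source texts matches. The sketch below illustrates that pattern with `pickle` standing in for `joblib` and a stub `Knowledge` class standing in for `SingleFileKnowledge`; it is not WAFL's actual implementation.

```python
import os
import pickle

class Knowledge:
    """Stub for SingleFileKnowledge: stores a hash of its source text."""
    def __init__(self, source_text):
        self.hash = hash(source_text)

def load_or_rebuild(source_text, cache_filename="knowledge_cache"):
    # Reuse the cached object only if the source text is unchanged.
    if os.path.exists(cache_filename):
        with open(cache_filename, "rb") as f:
            cached = pickle.load(f)
        if cached.hash == hash(source_text):
            return cached
    # Cache missing or stale: rebuild and re-serialize.
    knowledge = Knowledge(source_text)
    with open(cache_filename, "wb") as f:
        pickle.dump(knowledge, f)
    return knowledge
```

The same hash guard appears in the hunk above: a stale cache falls through to a full rebuild followed by `joblib.dump`.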
3 changes: 2 additions & 1 deletion wafl/templates/config.json
@@ -4,7 +4,8 @@
"deactivate_sound": true,
"rules": "rules.yaml",
"index": "indices.yaml",
"index_filename": "knowledge_cache",
"cache_filename": "knowledge_cache",
"prompt_filename": "main.prompt",
"functions": "functions.py",
"max_recursion": 2,
"frontend_port": 8090,
11 changes: 11 additions & 0 deletions wafl/templates/main.prompt
@@ -0,0 +1,11 @@
A user is chatting with a bot. The chat is happening through a web interface. The user is typing the messages and the bot is replying.

This is a summary of the bot's knowledge:
{facts}

The rules that *must* be followed are:
{rules}

Create a plausible dialogue based on the aforementioned summary and rules.
Do not repeat yourself. Be friendly but not too servile.
Follow the rules if present and they apply to the dialogue. Do not improvise if rules are present.
