Merge pull request #95 from fractalego/development
Development
fractalego authored May 27, 2024
2 parents b623866 + 879abd7 commit 19d8ad3
Showing 44 changed files with 540 additions and 306 deletions.
16 changes: 7 additions & 9 deletions README.md
@@ -1,4 +1,4 @@
-# WAFL 0.0.80 [![Tests](https://github.com/fractalego/wafl/actions/workflows/development-tests1.yml/badge.svg)](https://github.com/fractalego/wafl/actions/workflows/development-tests1.yml)[![Docs](https://readthedocs.org/projects/wafl/badge/?version=latest)](https://wafl.readthedocs.io/en/latest/)
+# WAFL 0.0.90 [![Tests](https://github.com/fractalego/wafl/actions/workflows/development-tests1.yml/badge.svg)](https://github.com/fractalego/wafl/actions/workflows/development-tests1.yml)[![Docs](https://readthedocs.org/projects/wafl/badge/?version=latest)](https://wafl.readthedocs.io/en/latest/)

Introduction
============
@@ -56,13 +56,6 @@ wafl-llm start
```
which will use the default models and start the server on port 8080.

-#### Docker
-A docker image can be used to run it as in the following:
-
-```bash
-$ docker run -p8080:8080 --env NVIDIA_DISABLE_REQUIRE=1 --gpus all fractalego/wafl-llm:0.80
-```

The interface side has a `config.json` file that needs to be filled with the IP address of the LLM side.
The default is localhost.
Alternatively, you can run the LLM side by cloning [this repository](https://github.com/fractalego/wafl-llm).
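To make this concrete, the sketch below shows the shape of the `llm_model` entry that `config.json` is expected to contain, written as a Python dict whose keys and defaults mirror `tests/config.json` as updated further down in this commit; set `model_host` to the IP address of the LLM machine.

```python
# Sketch of the "llm_model" section of config.json; the key names and
# default values mirror tests/config.json in this commit.
llm_model_config = {
    "model_host": "localhost",  # IP address of the machine running wafl-llm
    "model_port": 8080,
    "temperature": 0.4,  # sampling temperature, added in this commit
}
```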
@@ -73,6 +66,11 @@ Running WAFL
This document contains a few examples of how to use the `wafl` CLI.
There are four modes in which to run the system:


+## $ wafl run
+Starts all the available interfaces of the chatbot at the same time.


## $ wafl run-audio

This is the main mode of operation. It will run the system in a loop, waiting for the user to speak a command.
@@ -82,7 +80,7 @@ The default name is "computer", but you can change it to whatever you want.

## $ wafl run-server

-It runs a local web server that listens for HTTP requests on port 8889.
+It runs a local web server that listens for HTTP requests on port 8090.
The server will act as a chatbot, executing commands and returning the result as defined in the rules.


3 changes: 1 addition & 2 deletions documentation/source/configuration.rst
@@ -49,8 +49,7 @@ These settings regulate the following:

* "frontend_port" is the port where the web frontend is running. The default is 8090.

* "llm_model" is the configuration to connect to the LLM model in the backend. The default is "localhost:8080".
The "temperature" parameter is used to set the temperature for the LLM model. The default is 0.4.
* "llm_model" is the configuration to connect to wafl-llm in the backend. The default url is "localhost:8080". The "temperature" parameter is used to set the temperature for the LLM model. The default is 0.4.

* "listener_model" is the configuration to connect to the listener model in the backend. The default is "localhost:8080".

2 changes: 1 addition & 1 deletion documentation/source/index.rst
@@ -3,7 +3,7 @@
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
-Welcome to WAFL's 0.0.80 documentation!
+Welcome to WAFL's 0.0.90 documentation!
=======================================

.. toctree::
12 changes: 3 additions & 9 deletions documentation/source/installation.rst
@@ -24,9 +24,10 @@ After installing the requirements, you can initialize the interface by running the following commands:
.. code-block:: bash
$ wafl init
+$ wafl run
-which creates a `config.json` file that you can edit to change the default settings.
-A standard rule file is also created as `wafl.rules`.
+The first command creates a set of template files, including a configuration in `config.json` that you can edit to change the default settings.
+The second command starts the audio interface as well as a web server on port 8090 by default.
Please see the examples in the following chapters.


@@ -42,13 +43,6 @@ In order to quickly run the LLM side, you can use the following installation commands:
which will use the default models and start the server on port 8080.
-Alternatively, a Docker image can be used to run it as in the following:
-
-.. code-block:: bash
-$ docker run -p8080:8080 --env NVIDIA_DISABLE_REQUIRE=1 --gpus all fractalego/wafl-llm:0.80
The interface side has a `config.json` file that needs to be filled with the IP address of the LLM side.
The default is localhost.

8 changes: 7 additions & 1 deletion documentation/source/running_WAFL.rst
@@ -3,6 +3,12 @@ Running WAFL
This document contains a few examples of how to use the `wafl` CLI.
There are four modes in which to run the system:

+$ wafl run
+----------
+Starts the available interfaces of the chatbot at the same time.
+This is equivalent to running run-audio and run-server in parallel (see below).


$ wafl run-audio
----------------

@@ -14,7 +20,7 @@ The default name is "computer", but you can change it to whatever you want.
$ wafl run-server
-----------------

-It runs a local web server that listens for HTTP requests on port 8889.
+It runs a local web server that listens for HTTP requests on port 8090.
The server will act as a chatbot, executing commands and returning the result as defined in the rules.


2 changes: 1 addition & 1 deletion license.txt
@@ -1,4 +1,4 @@
-Copyright (c) 2023 [email protected]
+Copyright (c) 2024 [email protected]

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

3 changes: 2 additions & 1 deletion tests/config.json
@@ -7,7 +7,8 @@
"max_recursion": 2,
"llm_model": {
"model_host": "localhost",
"model_port": 8080
"model_port": 8080,
"temperature": 0.4
},
"listener_model": {
"model_host": "localhost",
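As a sketch of how these settings are consumed, the tests in this commit read them through `Configuration.get_value` (used verbatim in `tests/test_connection.py`); the `load_local_config` constructor name below is an assumption.

```python
from wafl.config import Configuration

config = Configuration.load_local_config()  # assumed constructor name
llm_settings = config.get_value("llm_model")  # as in tests/test_connection.py
print(llm_settings["model_host"], llm_settings["model_port"])
print(llm_settings["temperature"])  # the key added by this commit, default 0.4
```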
2 changes: 1 addition & 1 deletion tests/test_closing_conversation.py
@@ -30,7 +30,7 @@ def test__thank_you_closes_conversation(self):
)
try:
asyncio.run(conversation_events.process_next())

+print(interface.get_utterances_list())
except CloseConversation:
self.assertTrue(True)
return
18 changes: 11 additions & 7 deletions tests/test_connection.py
@@ -3,9 +3,8 @@

from unittest import TestCase
from wafl.config import Configuration
-from wafl.connectors.bridges.llm_chitchat_answer_bridge import LLMChitChatAnswerBridge
+from wafl.connectors.prompt_template import PromptCreator
from wafl.connectors.remote.remote_llm_connector import RemoteLLMConnector
-from wafl.speaker.fairseq_speaker import FairSeqSpeaker

_path = os.path.dirname(__file__)

@@ -16,8 +15,10 @@ def test__connection_to_generative_model_can_generate_text(self):
connector = RemoteLLMConnector(config.get_value("llm_model"))
prediction = asyncio.run(
connector.predict(
-'Generate a full paragraph based on this chapter title "The first contact". '
-"The theme of the paragraph is space opera. "
+PromptCreator.create_from_one_instruction(
+'Generate a full paragraph based on this chapter title "The first contact".'
+"The theme of the paragraph is space opera. "
+)
)
)
assert len(prediction) > 0
@@ -34,7 +35,9 @@ def test__connection_to_generative_model_can_generate_text_within_tags(self):
<result>
""".strip()

-prediction = asyncio.run(connector.predict(prompt))
+prediction = asyncio.run(
+connector.predict(PromptCreator.create_from_one_instruction(prompt))
+)
print(prediction)
assert len(prediction) > 0

@@ -43,6 +46,7 @@ def test__connection_to_generative_model_can_generate_a_python_list(self):
connector = RemoteLLMConnector(config.get_value("llm_model"))
connector._num_prediction_tokens = 200
prompt = "Generate a Python list of 4 chapters names for a space opera book. The output needs to be a python list of strings: "
-prediction = asyncio.run(connector.predict(prompt))
-print(prediction)
+prediction = asyncio.run(
+connector.predict(PromptCreator.create_from_one_instruction(prompt))
+)
assert len(prediction) > 0
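The pattern behind these test changes, sketched below: `connector.predict` now takes a prompt object built by `PromptCreator` rather than a raw string. The imports, class names, and method names are taken verbatim from the diff; the instruction text and the `load_local_config` constructor name are assumptions.

```python
import asyncio

from wafl.config import Configuration
from wafl.connectors.prompt_template import PromptCreator
from wafl.connectors.remote.remote_llm_connector import RemoteLLMConnector

config = Configuration.load_local_config()  # assumed constructor name
connector = RemoteLLMConnector(config.get_value("llm_model"))
# Wrap the raw instruction into a prompt object before predicting.
prompt = PromptCreator.create_from_one_instruction(
    "Write one sentence about space opera."
)
prediction = asyncio.run(connector.predict(prompt))
print(prediction)
```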
62 changes: 62 additions & 0 deletions tests/test_prompts.py
@@ -0,0 +1,62 @@
import os
from unittest import TestCase

from wafl.connectors.prompt_template import PromptCreator
from wafl.interface.conversation import Conversation, Utterance

_path = os.path.dirname(__file__)


class TestPrompts(TestCase):
def test_utterance(self):
utterance = Utterance(
text="Hello", speaker="user", timestamp="2022-01-01T00:00:00"
)
self.assertEqual(
utterance.to_dict(),
{"text": "Hello", "speaker": "user", "timestamp": "2022-01-01T00:00:00"},
)

def test_conversation(self):
utterance1 = Utterance(text="Hello", speaker="user", timestamp=2)
utterance2 = Utterance(text="Hi", speaker="bot", timestamp=1)
conversation = Conversation(utterances=[utterance1, utterance2])
self.assertEqual(
conversation.to_dict(),
[
{
"text": "Hello",
"speaker": "user",
"timestamp": 2,
},
{
"text": "Hi",
"speaker": "bot",
"timestamp": 1,
},
],
)

def test_prompt(self):
utterance1 = Utterance(text="Hello", speaker="user", timestamp=2)
utterance2 = Utterance(text="Hi", speaker="bot", timestamp=1)
conversation = Conversation(utterances=[utterance1, utterance2])
prompt = PromptCreator.create(system_prompt="Hello", conversation=conversation)
self.assertEqual(
prompt.to_dict(),
{
"system_prompt": "Hello",
"conversation": [
{
"text": "Hello",
"speaker": "user",
"timestamp": 2,
},
{
"text": "Hi",
"speaker": "bot",
"timestamp": 1,
},
],
},
)
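Putting the classes this new file exercises together, a sketch of the intended flow: build a `Conversation` from `Utterance` objects, attach a system prompt with `PromptCreator.create`, and pass the result to `RemoteLLMConnector.predict` as in `tests/test_connection.py` above (the system-prompt text here is an assumption).

```python
from wafl.connectors.prompt_template import PromptCreator
from wafl.interface.conversation import Conversation, Utterance

# Timestamps are arbitrary numbers here, matching their use in the tests.
conversation = Conversation(
    utterances=[
        Utterance(text="Hello", speaker="user", timestamp=1),
        Utterance(text="Hi", speaker="bot", timestamp=2),
    ]
)
prompt = PromptCreator.create(system_prompt="You are WAFL.", conversation=conversation)
print(prompt.to_dict())  # {"system_prompt": ..., "conversation": [...]}
```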
4 changes: 2 additions & 2 deletions tests/test_rules.py
@@ -39,8 +39,8 @@ def test__rules_can_be_triggered(self):
interface=interface,
)
asyncio.run(conversation_events.process_next())
expected = "bot: the horse is tall"
self.assertEqual(expected, interface.get_utterances_list()[-1])
expected = "The horse is tall"
self.assertIn(expected, interface.get_utterances_list()[-1])

def test__rules_are_not_always_triggered(self):
interface = DummyInterface(
4 changes: 2 additions & 2 deletions tests/test_voice.py
@@ -5,7 +5,6 @@

from unittest import TestCase
from wafl.config import Configuration
-from wafl.interface.voice_interface import VoiceInterface
from wafl.events.conversation_events import ConversationEvents
from wafl.interface.dummy_interface import DummyInterface
from wafl.listener.whisper_listener import WhisperListener
@@ -31,7 +30,8 @@ def test__activation(self):
interface.activate()
asyncio.run(conversation_events.process_next(activation_word="computer"))
asyncio.run(conversation_events.process_next(activation_word="computer"))
-assert interface.get_utterances_list()[-1] == "bot: I hear you"
+print(interface.get_utterances_list())
+assert "I hear you" in interface.get_utterances_list()[-1]

def test__no_activation(self):
interface = DummyInterface(to_utter=["my name is bob"])
8 changes: 5 additions & 3 deletions todo.txt
@@ -1,3 +1,5 @@
+* substitute utterances in base_interface with the conversation class

* add config file for model names
- llm model name
- whisper model name
@@ -6,9 +8,9 @@
/* let user decide port for frontend
/* update docs about port
/* push new version
-* update pypi with wafl and wafl-llm
-* clean code for llm eval and make it public
-* update huggingface readme
+/* update pypi with wafl and wafl-llm
+/* clean code for llm eval and make it public
+/* update huggingface readme
* read overleaf paper


18 changes: 0 additions & 18 deletions wafl/answerer/answerer_implementation.py

This file was deleted.


