V0.1.3: bugs fixed #15

Open · wants to merge 61 commits into main

Commits (61)
e1dcd38
fix: code interpreter program join
ChuxiJ Oct 20, 2023
18cef46
fix: skill install raise exception
ChuxiJ Oct 20, 2023
457d7f6
fix: skill_dependencies parse
ChuxiJ Oct 20, 2023
015cfd6
Merge pull request #14 from timedomain-tech/fix_r_language_support
ChuxiJ Oct 20, 2023
97a3edd
fix: add set done when timeout
ChuxiJ Oct 20, 2023
260f8c5
fix: handle non skill return and refact llm create name
ChuxiJ Oct 21, 2023
9a46ec9
feat: add version and update cursor ●
ChuxiJ Oct 21, 2023
0ce97ed
[feat] add expertise_prompt_agent
DumoeDss Oct 21, 2023
8b73a12
[feat] update expertise_prompt_agent prompt
DumoeDss Oct 21, 2023
acfba72
Merge pull request #18 from timedomain-tech/dev/expertise_prompt_agent
DumoeDss Oct 21, 2023
4c31d0a
Create main.yml
ChuxiJ Oct 22, 2023
eeb1b6c
feat: add material theme
ChuxiJ Oct 22, 2023
7aa11c9
Merge pull request #20 from timedomain-tech/add_mkdocs_material_theme
ChuxiJ Oct 22, 2023
355ebed
Merge pull request #19 from timedomain-tech/mkdocs-auto-action
ChuxiJ Oct 22, 2023
9aefb61
feat: mkdocs add primary black and add social footer
ChuxiJ Oct 23, 2023
49c0e0f
feat: test prompt enhancer agent
ChuxiJ Oct 23, 2023
e9553e9
feat: update prompt for prompt_enhancer_agent
ChuxiJ Oct 23, 2023
232e424
fix: repl
ChuxiJ Oct 24, 2023
fa5cf72
feat: add interpreter load save message
ChuxiJ Oct 24, 2023
83a07a0
feat: add memGPT prompt
ChuxiJ Oct 24, 2023
b6ddfc5
Improve subprocess handling and thread management
ChuxiJ Oct 24, 2023
df5227f
feat: memGPT worked
ChuxiJ Oct 24, 2023
e6f4e70
feat: default config is gpt-4
ChuxiJ Oct 24, 2023
73ac422
feat: memGPT can assign task to subagent to create skills
ChuxiJ Oct 25, 2023
d8be1f6
feat: instruct memGPT that no shared memory with subagent
ChuxiJ Oct 25, 2023
0d870b0
feat: refact memGPT prompt (WIP) need more tests
ChuxiJ Oct 25, 2023
57a2a6c
fix: memory management prompt revert to original
ChuxiJ Oct 25, 2023
cd7fb29
feat: use set_llm_cache
ChuxiJ Oct 25, 2023
217b074
feat: add memGPT memory (base, core, recall)
ChuxiJ Oct 25, 2023
87a695d
feat: add embedding cache
ChuxiJ Oct 25, 2023
5c57cd2
feat: support langchain message as memory and SQLChatMessageHistory
ChuxiJ Oct 25, 2023
531d20a
refact: core, recall, archival -> short-term & long-term
ChuxiJ Oct 25, 2023
ad77db8
feat: ArchivalMemory support add but not support modify
ChuxiJ Oct 25, 2023
7c204e2
feat: baseAgent add memory and support langsmith config
ChuxiJ Oct 25, 2023
e138510
refactor: fully support langsmith runnable skills
ChuxiJ Oct 26, 2023
0c1afeb
fix: langsmith test ok
ChuxiJ Oct 26, 2023
01c6b54
feat: auto generate run url
ChuxiJ Oct 26, 2023
2def92b
feat: add langsmith status check
ChuxiJ Oct 26, 2023
07828e7
feat: auto update user config if project config update new key
ChuxiJ Oct 26, 2023
3f4efa9
feat: update langchain dependancy and add add_to_memory interface
ChuxiJ Oct 26, 2023
df18656
feat: add token count for function call and remove long_term_memory s…
ChuxiJ Oct 27, 2023
2cc73b7
feat: support memgpt and most of functionalities test ok
ChuxiJ Oct 27, 2023
48d13ee
feat: support vector db qdrant
ChuxiJ Oct 30, 2023
670e46d
feat: refact create agent from config
ChuxiJ Oct 30, 2023
0bc3c7f
feat: use config to directly create agents and support load function …
ChuxiJ Oct 31, 2023
35c4d15
fix: better function schema support for openai
ChuxiJ Oct 31, 2023
9f5f696
fix: runnable bugs
ChuxiJ Oct 31, 2023
c07923b
fix: test subagent run ok
ChuxiJ Oct 31, 2023
7af161b
fix: typo
ChuxiJ Oct 31, 2023
7670391
fix: subagent content, total_tries, set config update env
ChuxiJ Oct 31, 2023
2ab4fcd
fix: check langsmith ok
ChuxiJ Oct 31, 2023
1b044bf
fix: typo
ChuxiJ Oct 31, 2023
f0b3d5f
fix: fix python shell code startswith '!'
ChuxiJ Oct 31, 2023
acdc11a
fix: typo error
ChuxiJ Oct 31, 2023
45df039
fix: no pass
ChuxiJ Oct 31, 2023
50b222f
[feat] config agent model
DumoeDss Oct 31, 2023
951ce9d
Merge branch 'v0.1.3' of ssh://ssh.github.com:443/timedomain-tech/ope…
DumoeDss Oct 31, 2023
9fcb730
refactor: BaseStreamingHandler and dummy chunk message
ChuxiJ Nov 1, 2023
857d586
fix: streamlit subprocess not stop
ChuxiJ Nov 1, 2023
d3d3ad0
refactor: mv code from init to manager.py
ChuxiJ Nov 1, 2023
395cfe9
feat: support group chat base
ChuxiJ Nov 2, 2023
25 changes: 25 additions & 0 deletions .github/workflows/main.yml
@@ -0,0 +1,25 @@
name: ci
on:
push:
branches:
- master
- main
permissions:
contents: write
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v4
with:
python-version: 3.x
- run: echo "cache_id=$(date --utc '+%V')" >> $GITHUB_ENV
- uses: actions/cache@v3
with:
key: mkdocs-material-${{ env.cache_id }}
path: .cache
restore-keys: |
mkdocs-material-
- run: pip install mkdocs-material
- run: mkdocs gh-deploy --force
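The workflow keys its mkdocs-material cache on the UTC ISO week number, so the cache rolls over weekly while `restore-keys` still allows fallback to an older week's cache. A minimal sketch of that key computation (GNU `date`; `%V` is the ISO-8601 week, 01–53):

```shell
# Compute the weekly cache id the same way the workflow step does.
cache_id=$(date --utc '+%V')
# The resulting actions/cache key:
echo "mkdocs-material-${cache_id}"
```

Because `test`/`[` parses the value as decimal, the zero-padded week number compares numerically as expected.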
2 changes: 1 addition & 1 deletion creator/__version__.py
@@ -1 +1 @@
-__version__ = "0.1.2"
+__version__ = "0.1.3"
20 changes: 12 additions & 8 deletions creator/agents/__init__.py
@@ -1,12 +1,16 @@
-from .extractor_agent import skill_extractor_agent
-from .interpreter_agent import code_interpreter_agent
-from .tester_agent import code_tester_agent
-from .refactor_agent import code_refactor_agent
+from .extractor_agent import create_skill_extractor_agent
+from .interpreter_agent import create_code_interpreter_agent
+from .tester_agent import create_code_tester_agent
+from .refactor_agent import create_code_refactor_agent
+from .prompt_enhancer_agent import create_prompt_enhancer_agent
+from .creator_agent import create_creator_agent


 __all__ = [
-    "skill_extractor_agent",
-    "code_interpreter_agent",
-    "code_tester_agent",
-    "code_refactor_agent"
+    "create_skill_extractor_agent",
+    "create_code_interpreter_agent",
+    "create_code_tester_agent",
+    "create_code_refactor_agent",
+    "create_prompt_enhancer_agent",
+    "create_creator_agent"
 ]
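This hunk replaces module-level agent singletons with `create_*` factory functions, so each caller builds an agent from an explicit config instead of inheriting whatever config was active at import time. The general shape of that refactor, with illustrative stand-in names rather than the repo's exact classes:

```python
# Before: a singleton frozen at import time, e.g.
#   agent = Agent(llm=create_llm(config))
# After: a factory that defers construction to call time,
# so two callers can use two different configs.
class Agent:
    def __init__(self, llm):
        self.llm = llm

def create_agent(config):
    # Hypothetical stand-in for create_llm(config, ...)
    llm = config["model"]
    return Agent(llm=llm)

agent_a = create_agent({"model": "gpt-4"})
agent_b = create_agent({"model": "gpt-3.5-turbo"})
print(agent_a.llm, agent_b.llm)  # gpt-4 gpt-3.5-turbo
```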
29 changes: 21 additions & 8 deletions creator/agents/base.py
@@ -23,10 +23,18 @@ class BaseAgent(LLMChain):
system_template: str = ""
allow_user_confirm: bool = False
prompt: ChatPromptTemplate = ChatPromptTemplate.from_messages(messages=["system", ""])
agent_name: str = "BaseAgent"
share_memory: bool = False

@property
def _chain_type(self):
return "BaseAgent"
return self.agent_name

def __repr__(self) -> str:
return self.agent_name + "()"

def __hash__(self):
return hash(self.agent_name)

@property
def input_keys(self) -> List[str]:
@@ -70,14 +78,14 @@ def tool_result_to_str(self, tool_result) -> str:
return json.dumps(tool_result, ensure_ascii=False)
return str(tool_result)

def run_tool(self, function_call: Dict[str, Any]):
def run_tool(self, function_call: Dict[str, Any], run_manager: Optional[CallbackManager] = None):
function_name = function_call.get("name", "")
arguments = parse_partial_json(function_call.get("arguments", "{}"))
tool_result = None
for tool in self.tools:
if tool.name == function_name:
if self.human_confirm():
tool_result = tool.run(arguments)
tool_result = tool.run(arguments, callbacks=run_manager)
tool_result = self.tool_result_to_str(tool_result)
tool_result = FunctionMessage(name=function_name, content=tool_result)
self.update_tool_result_in_callbacks(tool_result)
@@ -96,31 +104,37 @@ def messages_hot_fix(self, langchain_messages):
def preprocess_inputs(self, inputs: Dict[str, Any]):
return inputs

def add_to_memory(self, messages):
"""Add message to long-term memory"""
pass

def run_workflow(self, inputs: Dict[str, Any], run_manager: Optional[CallbackManager] = None) -> Dict[str, Any]:
run_manager_callbacks = run_manager.get_child() if run_manager else None
inputs = self.preprocess_inputs(inputs)
messages = inputs.pop("messages")
langchain_messages = convert_openai_messages(messages)
self.llm.function_calls = self.function_schemas
llm_with_functions = self.llm.bind(functions=self.function_schemas)
current_try = 0
while current_try < self.total_tries:
self.start_callbacks()
prompt = self.construct_prompt(langchain_messages)
llm_chain = prompt | llm_with_functions | self.postprocess_mesasge
message = llm_chain.invoke(inputs)
llm_chain = (prompt | llm_with_functions | self.postprocess_mesasge).with_config({"run_name": f"Iteration {current_try+1}"})
message = llm_chain.invoke(inputs, {"callbacks": run_manager_callbacks})
langchain_messages.append(message)
function_call = message.additional_kwargs.get("function_call", None)
if function_call is None:
self.end_callbacks(message)
break

tool_result = self.run_tool(function_call)
tool_result = self.run_tool(function_call, run_manager_callbacks)
if tool_result is None:
self.end_callbacks(message)
break
langchain_messages.append(tool_result)
langchain_messages = self.messages_hot_fix(langchain_messages)
current_try += 1
self.end_callbacks(message)
self.end_callbacks(message=message)
langchain_messages = remove_tips(langchain_messages)
openai_messages = list(map(convert_message_to_dict, langchain_messages))
return openai_messages
@@ -159,4 +173,3 @@ def task_target():
result = output_queue.pop()
yield True, result
return
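The `run_tool` change threads the run manager's callbacks through to the tool, but the dispatch itself is simple: match the model's `function_call` name against the registered tools, run the match, and stringify the result. A simplified, LangChain-free sketch of that core (the `Tool` class and `parse_partial_json` here are illustrative stand-ins):

```python
import json

def parse_partial_json(s):
    # Simplified stand-in for langchain's parse_partial_json.
    try:
        return json.loads(s)
    except json.JSONDecodeError:
        return {}

class Tool:
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn
    def run(self, arguments):
        return self.fn(**arguments)

def run_tool(tools, function_call):
    # Mirrors BaseAgent.run_tool: find the tool by name, run it,
    # JSON-encode dict results, str() everything else.
    name = function_call.get("name", "")
    arguments = parse_partial_json(function_call.get("arguments", "{}"))
    for tool in tools:
        if tool.name == name:
            result = tool.run(arguments)
            if isinstance(result, dict):
                return json.dumps(result, ensure_ascii=False)
            return str(result)
    return None  # unknown tool: BaseAgent breaks the loop on None

tools = [Tool("add", lambda a, b: {"sum": a + b})]
print(run_tool(tools, {"name": "add", "arguments": '{"a": 1, "b": 2}'}))  # {"sum": 3}
```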

20 changes: 5 additions & 15 deletions creator/agents/creator_agent.py
@@ -10,7 +10,7 @@
from creator.code_interpreter.safe_python import SafePythonInterpreter
from creator.config.library import config
from creator.utils import load_system_prompt, get_user_info, remove_tips
from creator.llm.llm_creator import create_llm
from creator.llm import create_llm

from .base import BaseAgent

@@ -20,19 +20,15 @@
ALLOWED_FUNCTIONS = {"create", "save", "search", "CodeSkill"}
ALLOW_METHODS = {".show", ".show_code", ".test", ".run", ".save", "__add__", "__gt__", "__lt__", "__annotations__"}
IMPORT_CODE = (
"from creator.core import creator\n"
"from creator import create, save, search\n"
"from creator.core.skill import CodeSkill\n"
"create, save, search = creator.create, creator.save, creator.search\n\n"
)


class CreatorAgent(BaseAgent):
total_tries: int = 5
allow_user_confirm: bool = config.run_human_confirm

@property
def _chain_type(self):
return "CreatorAgent"
agent_name: str = "CreatorAgent"

def prep_inputs(self, inputs: Dict[str, Any] | Any) -> Dict[str, str]:
inputs["OPEN_CREATOR_API_DOC"] = OPEN_CREATOR_API_DOC
@@ -68,21 +64,15 @@ async def ainvoke(self, inputs: Dict[str, Any], config: RunnableConfig | None =
return {"messages": self.run(inputs)}


def create_creator_agent(llm):
def create_creator_agent(config):
template = load_system_prompt(config.creator_agent_prompt_path)

code_interpreter = SafePythonInterpreter(allowed_functions=ALLOWED_FUNCTIONS, allowed_methods=ALLOW_METHODS, redirect_output=True)
code_interpreter.setup(IMPORT_CODE)

chain = CreatorAgent(
llm=llm,
llm=create_llm(config, config.agent_model_config.CREATOR_AGENT),
system_template=template,
tools=[code_interpreter],
function_schemas=[code_interpreter.to_function_schema()],
verbose=False,
)
return chain


llm = create_llm(config)
open_creator_agent = create_creator_agent(llm=llm)
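`CreatorAgent` sandboxes its interpreter with `ALLOWED_FUNCTIONS` and `ALLOW_METHODS` allowlists. This is not `SafePythonInterpreter`'s actual implementation, but the gist of such a guard can be sketched with `ast`: reject code that calls any bare name outside the allowlist (attribute calls like `skill.show()` would need a separate check against the method allowlist):

```python
import ast

ALLOWED_FUNCTIONS = {"create", "save", "search", "CodeSkill"}

def calls_only_allowed(code):
    # Walk the AST and reject any bare-name call outside the allowlist.
    tree = ast.parse(code)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id not in ALLOWED_FUNCTIONS:
                return False
    return True

print(calls_only_allowed("create('extract audio from video')"))  # True
print(calls_only_allowed("__import__('os')"))                    # False
```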
36 changes: 16 additions & 20 deletions creator/agents/extractor_agent.py
@@ -1,60 +1,56 @@
from typing import Dict, Any

from langchain.prompts import ChatPromptTemplate
from langchain.output_parsers.json import parse_partial_json

from creator.config.library import config
from creator.utils import convert_to_values_list, get_user_info, load_system_prompt
import json

from creator.utils import convert_to_values_list, get_user_info, load_system_prompt, load_json_schema
from creator.llm import create_llm

from .base import BaseAgent


class SkillExtractorAgent(BaseAgent):
output_key: str = "extracted_skill"

@property
def _chain_type(self):
return "SkillExtractorAgent"
agent_name: str = "SkillExtractorAgent"

def construct_prompt(self, langchain_messages: Dict[str, Any]):
prompt = ChatPromptTemplate.from_messages(messages=[
*langchain_messages,
("system", self.system_template + get_user_info())
])
return prompt

def parse_output(self, messages):
function_call = messages[-1].get("function_call", None)

if function_call is not None:
extracted_skill = parse_partial_json(function_call.get("arguments", "{}"))
try:
if function_call is not None:
content = function_call.get("arguments", "{}")
else:
content = messages[-1].get("content", "{}")
extracted_skill = parse_partial_json(content)
extracted_skill["conversation_history"] = messages[:-1]
extracted_skill["skill_parameters"] = convert_to_values_list(extracted_skill["skill_parameters"]) if "skill_parameters" in extracted_skill else None
extracted_skill["skill_return"] = convert_to_values_list(extracted_skill["skill_return"]) if "skill_return" in extracted_skill else None
return {"extracted_skill": extracted_skill}
except Exception:
pass
return {"extracted_skill": None}


def create_skill_extractor_agent(llm):
def create_skill_extractor_agent(config):
template = load_system_prompt(config.extractor_agent_prompt_path)
# current file's parent as dir
with open(config.codeskill_function_schema_path, encoding="utf-8") as f:
code_skill_json_schema = json.load(f)
code_skill_json_schema = load_json_schema(config.codeskill_function_schema_path)
function_schema = {
"name": "extract_formmated_skill",
"description": "a function that extracts a skill from a conversation history",
"parameters": code_skill_json_schema
}

chain = SkillExtractorAgent(
llm=llm,
llm=create_llm(config, config.agent_model_config.EXTRACTOR_AGENT),
system_template=template,
function_schemas=[function_schema],
verbose=False
)
return chain


llm = create_llm(config)
skill_extractor_agent = create_skill_extractor_agent(llm)
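The new `parse_output` wraps parsing in a try/except and falls back from `function_call.arguments` to the plain message `content`, returning `None` on any failure instead of raising. A minimal sketch of that fallback pattern, using `json.loads` in place of `parse_partial_json`:

```python
import json

def parse_output(messages):
    # Prefer the function_call arguments; fall back to message content;
    # swallow parse errors and signal failure with None.
    function_call = messages[-1].get("function_call")
    try:
        if function_call is not None:
            content = function_call.get("arguments", "{}")
        else:
            content = messages[-1].get("content", "{}")
        extracted = json.loads(content)
        extracted["conversation_history"] = messages[:-1]
        return {"extracted_skill": extracted}
    except (json.JSONDecodeError, TypeError, AttributeError):
        return {"extracted_skill": None}

msgs = [{"role": "user", "content": "hi"},
        {"role": "assistant", "content": '{"skill_name": "demo"}'}]
print(parse_output(msgs)["extracted_skill"]["skill_name"])  # demo
print(parse_output([{"content": "not json"}]))              # {'extracted_skill': None}
```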
118 changes: 118 additions & 0 deletions creator/agents/group_chat.py
@@ -0,0 +1,118 @@
from typing import List, Dict, Any, Optional
import networkx as nx

from creator.agents.base import BaseAgent
from creator.utils import print

from langchain.chains.base import Chain
from langchain.callbacks.manager import CallbackManagerForChainRun


class GroupChat(Chain):
graph: Optional[nx.DiGraph] = None
max_consecutive_auto_reply: int = 3

@property
def input_keys(self) -> List[str]:
"""Keys expected to be in the chain input."""
return ["messages", "sender", "receivers"]

@property
def output_keys(self) -> List[str]:
"""Keys expected to be in the chain output."""
return ["messages", "sender", "receivers"]

def add_agent(self, agent: BaseAgent):
"""
Add an agent to the graph.

:param agent: The agent to be added.
"""
self.graph.add_node(agent.agent_name, agent=agent)

def add_agents(self, agents: List[BaseAgent]):
"""
Add a list of agents to the graph.

:param agents: The list of agents to be added.
"""
for agent in agents:
self.add_agent(agent)

def remove_agent(self, agent_name: str):
"""
Remove an agent from the graph.

:param agent_name: The name of the agent to be removed.
"""
if agent_name in self.graph:
self.graph.remove_node(agent_name)

def add_edge(self, agent1: BaseAgent, agent2: BaseAgent):
"""
Add an edge between two agents.

:param agent1: The first agent.
:param agent2: The second agent.
"""
self.graph.add_edge(agent1.agent_name, agent2.agent_name)

def remove_edge(self, agent1: BaseAgent, agent2: BaseAgent):
"""
Remove an edge between two agents.

:param agent1: The first agent.
:param agent2: The second agent.
"""
self.graph.remove_edge(agent1.agent_name, agent2.agent_name)

@classmethod
def from_mapping(cls, mapping: Dict[BaseAgent, List[BaseAgent]]):
graph = nx.DiGraph()
node_set = set()
for from_node, to_nodes in mapping.items():
if from_node.agent_name not in node_set:
node_set.add(from_node)
for node in to_nodes:
if node.agent_name not in node_set:
node_set.add(node)
graph.add_edge(from_node.agent_name, node.agent_name)
cls.graph = graph
return cls

def _call(
self,
inputs: Dict[str, Any],
run_manager: Optional[CallbackManagerForChainRun] = None,
) -> Dict[str, Any]:
return self.run_chat(inputs["messages"], inputs["sender"], inputs["receivers"])

def run_chat(self, messages: List[Dict], sender: str, receivers: List[str], run_manager: Optional[CallbackManagerForChainRun] = None):
"""Run a group chat."""
assert len(messages) > 0, "Input Messages Must Not Be Empty"
curr_cnt = 0
while curr_cnt < self.max_consecutive_auto_reply:
for receiver in receivers:
try:
receiver_agent = self.graph.nodes[receiver]
except KeyError:
print("> agent {receiver} not found", print_type="markdown")
raise KeyError
if not receiver_agent.share_memory:
messages = [messages[-1]]
output_messages = receiver_agent.with_config({"callbacks": run_manager.get_child()}).invoke({"messages": messages})
sender = receiver
try:
receiver = output_messages[-1]["receiver"]
except KeyError:
print("> agent {receiver} has no receiver", print_type="markdown")
raise KeyError

if not receiver_agent.share_memory:
messages = messages[:-1] + output_messages
else:
messages = output_messages

if receiver == "human":
return messages, sender, receiver
curr_cnt += 1
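`GroupChat` routes messages along a directed agent graph, and `from_mapping` boils down to adding an edge from each sender to each of its allowed receivers. Sketched here with a plain adjacency dict instead of `nx.DiGraph` (names are illustrative):

```python
def graph_from_mapping(mapping):
    # mapping: {sender_name: [receiver_name, ...]}
    # Equivalent in spirit to GroupChat.from_mapping's
    # repeated nx.DiGraph.add_edge calls.
    graph = {}
    for sender, receivers in mapping.items():
        graph.setdefault(sender, set())
        for receiver in receivers:
            graph[sender].add(receiver)
            graph.setdefault(receiver, set())  # ensure node exists
    return graph

g = graph_from_mapping({
    "CreatorAgent": ["SkillExtractorAgent", "CodeTesterAgent"],
    "CodeTesterAgent": ["CreatorAgent"],
})
print(sorted(g["CreatorAgent"]))  # ['CodeTesterAgent', 'SkillExtractorAgent']
```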