[#50] Clean up & document function calling demo. Add precis of demos to main README.
uogbuji committed Nov 26, 2023
1 parent 2c4a194 commit 6797b8e
Showing 4 changed files with 45 additions and 9 deletions.
14 changes: 13 additions & 1 deletion README.md
@@ -66,7 +66,19 @@ llm = ctrans_wrapper(model=model)
print(llm(prompt='Write a short birthday greeting for my star employee', max_new_tokens=100))
```

For more examples see the [demo directory](https://github.com/uogbuji/OgbujiPT/tree/main/demo)
For more examples see the [demo directory](https://github.com/uogbuji/OgbujiPT/tree/main/demo). Demos include:

* Basics:
* Use of basic LLM text completion to correct a data format (XML)
* Multiple simultaneous LLM queries via multiprocessing
* Chatbots/agents:
* Simple Discord bot
* Advanced LLM API features:
* OpenAI-style function calling
* Retrieval Augmented Generation (RAG):
* Ask LLM questions based on web site contents, on the command line
* Ask LLM questions based on uploaded PDF, via Streamlit interactive UI
* Use PostgreSQL/PGVector for extracting context which can be fed to LLMs

## A bit more explanation

13 changes: 13 additions & 0 deletions demo/README.md
@@ -14,6 +14,19 @@ Intermediate demo asking an LLM multiple simultaneous riddles on various topics,
running a separate, progress indicator task in the background, using asyncio.
Works even if the LLM framework doesn't support asyncio, thanks to ogbujipt.async_helper
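
The concurrency pattern described above can be sketched in plain asyncio. This is an illustration only, not the demo's actual code: the query coroutine is a stand-in for a real LLM call, and the riddle topics are invented.

```python
import asyncio

async def mock_llm_query(topic: str) -> str:
    # Stand-in for a real LLM call; just waits, then answers
    await asyncio.sleep(0.05)
    return f'Answer about {topic}'

async def progress_indicator(stop: asyncio.Event) -> int:
    # Background task that ticks until the main work signals completion
    ticks = 0
    while not stop.is_set():
        ticks += 1
        await asyncio.sleep(0.01)
    return ticks

async def main() -> list[str]:
    stop = asyncio.Event()
    progress = asyncio.create_task(progress_indicator(stop))
    # Launch several "LLM queries" simultaneously
    answers = await asyncio.gather(
        *(mock_llm_query(t) for t in ['sphinx', 'river', 'time']))
    stop.set()  # Tell the progress indicator to wind down
    await progress
    return answers

answers = asyncio.run(main())
print(answers)
```

The same shape applies when the underlying LLM client is synchronous: a helper such as ogbujipt.async_helper can wrap it so it cooperates with the event loop.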

## function_calling.py

[OpenAI-style function calling](https://openai.com/blog/function-calling-and-other-api-updates) allows the LLM to respond to a user's query with a structured response: the details of a function to be called to complete the response, or to take an action. Though developed by OpenAI, function calling is starting to become available through other LLM tools, though support is
just emerging.

* [In llama-cpp-python](https://llama-cpp-python.readthedocs.io/en/latest/server/#function-calling)

You might also be interested in the reAct approach, which Oori has covered in their blog:

* [reAct (reasoning Action)](https://www.oori.dev/blog/2023/10/react/)

See also: [Low-level experiments for agent/tool interaction with locally-hosted LLMs #42](https://github.com/OoriData/OgbujiPT/discussions/42)
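
To give a feel for the shape of the data involved (a minimal sketch, not the demo's code: the function name and fields here are invented), an OpenAI-style function spec is a JSON Schema fragment, and the model's response carries the function name plus JSON-encoded arguments that the caller must parse:

```python
import json

# Hypothetical function spec, in the JSON Schema style used by
# OpenAI-style function calling APIs
WEATHER_FUNC_SPEC = {
    'name': 'get_current_weather',
    'description': 'Get the current weather for a location',
    'parameters': {
        'type': 'object',
        'properties': {
            'location': {'type': 'string', 'description': 'City name'},
            'unit': {'type': 'string', 'enum': ['celsius', 'fahrenheit']},
        },
        'required': ['location'],
    },
}

# Instead of prose, the model returns a function_call payload;
# note that `arguments` arrives as a JSON string, not a dict
simulated_function_call = {
    'name': 'get_current_weather',
    'arguments': '{"location": "Boulder", "unit": "celsius"}',
}

args = json.loads(simulated_function_call['arguments'])
print(simulated_function_call['name'], args)
```

The caller is then expected to invoke the named function with those arguments and, typically, send the result back to the LLM for a final response.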

# Advanced

## qa_discord.py
Empty file modified demo/chat_web_selects.py (file mode 100755 → 100644)
27 changes: 19 additions & 8 deletions demo/function_calling.py
@@ -1,14 +1,16 @@
# SPDX-FileCopyrightText: 2023-present Oori Data <[email protected]>
# SPDX-License-Identifier: Apache-2.0
# ogbujipt/demo/function_calling.py
# demo/function_calling.py
'''
Demonstrate the use of OpenAI-style function calling with OgbujiPT
python demo/function_calling.py
python demo/function_calling.py --apibase=http://localhost:8000
Requires `OPENAI_API_KEY` in the environment
You can alternatively use OpenAI itself, via the --openai param
Hard-codes the function spec, but you can generate it from a Pydantic schema
```py
from typing import List
from pydantic import BaseModel
@@ -19,6 +21,11 @@ class ExecuteStepByStepPlan(BaseModel):
# Generate the function spec
FUNC_SPEC = ExecuteStepByStepPlan.schema()
```
What about non-OpenAI LLM hosts? There is ongoing work in several areas.
It requires properly fine-tuned models, the right system prompts, and support in the hosting code.
Useful discussion re llama-cpp-python: https://github.com/abetlen/llama-cpp-python/discussions/397
'''

from ogbujipt.llm_wrapper import openai_chat_api, prompt_to_chat
@@ -33,9 +40,10 @@ class ExecuteStepByStepPlan(BaseModel):
'required': ['headline', 'steps'],
}

llm_api = openai_chat_api(model='gpt-4')
# Requires OPENAI_API_KEY in environment
llm_api = openai_chat_api(model='gpt-3.5-turbo')

messages = prompt_to_chat('Explain how to poach an egg')
messages = prompt_to_chat('Explain how to poach an egg, step by step')

functions=[
{
@@ -48,7 +56,10 @@ class ExecuteStepByStepPlan(BaseModel):
function_call={'name': 'handle_steps_from_user_query'}

resp = llm_api(messages=messages, functions=functions, function_call=function_call)
# print(resp.choices[0].message.function_call)
fc = resp.choices[0].message.function_call

print('Function to be called: ' + resp.choices[0].message.function_call.name)
print('Function call arguments: ' + resp.choices[0].message.function_call.arguments)
if fc:
print('Function to be called: ' + fc.name)
print('Function call arguments: ' + fc.arguments)
else:
print('No function call issued')
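
The docstring in the diff above mentions generating the function spec from a Pydantic schema. A self-contained sketch of that idea follows; the Step model's field is invented for illustration, and the `.schema()` call is the Pydantic v1-style API (Pydantic v2 prefers `model_json_schema()` but still honors `.schema()`). Only the `required` field names ('headline', 'steps') come from the spec shown in the diff.

```python
from typing import List
from pydantic import BaseModel

class Step(BaseModel):
    # Hypothetical field; the real demo's model may differ
    description: str

class ExecuteStepByStepPlan(BaseModel):
    headline: str
    steps: List[Step]

# Generate the function spec's parameters section from the model
FUNC_SPEC = ExecuteStepByStepPlan.schema()
print(sorted(FUNC_SPEC['required']))
```

The resulting dict can be dropped into the `parameters` slot of an OpenAI-style function declaration, which keeps the spec in sync with the Python types.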
