Commit

Merge branch 'main' into pgvector_bigdata_insert_and_search
chimezie committed Nov 30, 2023
2 parents 8c5abe6 + 6797b8e commit e63f027
Showing 4 changed files with 91 additions and 1 deletion.
14 changes: 13 additions & 1 deletion README.md
@@ -66,7 +66,19 @@ llm = ctrans_wrapper(model=model)
print(llm(prompt='Write a short birthday greeting for my star employee', max_new_tokens=100))
```

For more examples see the [demo directory](https://github.com/uogbuji/OgbujiPT/tree/main/demo)
For more examples see the [demo directory](https://github.com/uogbuji/OgbujiPT/tree/main/demo). Demos include:

* Basics:
* Use of basic LLM text completion to correct a data format (XML)
* Multiple simultaneous LLM queries via multiprocessing
* Chatbots/agents:
* Simple Discord bot
* Advanced LLM API features:
* OpenAI-style function calling
* Retrieval Augmented Generation (RAG):
* Ask LLM questions based on web site contents, on the command line
* Ask LLM questions based on uploaded PDF, via Streamlit interactive UI
* Use PostgreSQL/PGVector for extracting context which can be fed to LLMs

## A bit more explanation

13 changes: 13 additions & 0 deletions demo/README.md
@@ -14,6 +14,19 @@ Intermediate demo asking an LLM multiple simultaneous riddles on various topics,
running a separate, progress indicator task in the background, using asyncio.
Works even if the LLM framework doesn't support asyncio, thanks to ogbujipt.async_helper
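
The pattern described above, running a progress indicator concurrently with a slow LLM request, can be sketched with plain asyncio. This is an illustrative reconstruction, not the demo's actual code; `fake_llm_call` and `indicator` are made-up names standing in for the real LLM query and progress task:

```python
import asyncio

async def fake_llm_call():
    # Stands in for a network round trip to an LLM host
    await asyncio.sleep(0.3)
    return 'riddle answered'

async def indicator(stop: asyncio.Event):
    # Progress indicator running in the background until told to stop
    ticks = 0
    while not stop.is_set():
        ticks += 1
        await asyncio.sleep(0.1)
    return ticks

async def main():
    stop = asyncio.Event()
    progress_task = asyncio.create_task(indicator(stop))
    answer = await fake_llm_call()   # progress ticks while this awaits
    stop.set()
    ticks = await progress_task
    return answer, ticks

answer, ticks = asyncio.run(main())
print(answer, ticks)
```

The same shape works when the underlying LLM client is synchronous, by wrapping the blocking call with `asyncio.to_thread` (or, as the demo notes, with helpers from `ogbujipt.async_helper`).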

## function_calling.py

[OpenAI-style function calling](https://openai.com/blog/function-calling-and-other-api-updates) allows the LLM to answer a user's query with a structured response: the details of a function to be called to complete the response, or to take an action. Though pioneered by OpenAI, function calling is becoming available through other LLM tools, but support elsewhere is still emerging.

* [In llama-cpp-python](https://llama-cpp-python.readthedocs.io/en/latest/server/#function-calling)

You might also be interested in the reAct approach, which Oori has covered in their blog:

* [ReAct (Reasoning + Acting)](https://www.oori.dev/blog/2023/10/react/)

See also: [Low-level experiments for agent/tool interaction with locally-hosted LLMs #42](https://github.com/OoriData/OgbujiPT/discussions/42)
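
The core mechanics are simple: the caller supplies a JSON Schema describing the function's parameters, and the model replies with the function name plus its arguments as a JSON-encoded string, which the caller decodes and dispatches to real code. A minimal sketch (function name, fields, and values here are invented for illustration, not taken from OgbujiPT):

```python
import json

# Illustrative function spec in the OpenAI style: a name, a description,
# and a JSON Schema for the parameters
function_spec = {
    'name': 'get_current_weather',
    'description': 'Look up current weather for a city',
    'parameters': {
        'type': 'object',
        'properties': {
            'city': {'type': 'string'},
            'units': {'type': 'string', 'enum': ['metric', 'imperial']},
        },
        'required': ['city'],
    },
}

# The model's side of the exchange: arguments arrive as a JSON string
raw_arguments = '{"city": "Boulder", "units": "metric"}'
args = json.loads(raw_arguments)

def get_current_weather(city, units='metric'):
    # Stand-in for a real weather lookup
    return f'{city}: 21C ({units})'

result = get_current_weather(**args)
print(result)
```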

# Advanced

## qa_discord.py
Empty file modified demo/chat_web_selects.py
100755 → 100644
Empty file.
65 changes: 65 additions & 0 deletions demo/function_calling.py
@@ -0,0 +1,65 @@
# SPDX-FileCopyrightText: 2023-present Oori Data <[email protected]>
# SPDX-License-Identifier: Apache-2.0
# demo/function_calling.py
'''
Demonstrate the use of OpenAI-style function calling with OgbujiPT
python demo/function_calling.py
Requires `OPENAI_API_KEY` in the environment
Hard-codes the function spec, but you can generate it from a Pydantic schema
```py
from typing import List
from pydantic import BaseModel
# Pydantic schema from which we'll generate the function spec
class ExecuteStepByStepPlan(BaseModel):
    headline: str
    steps: List[str]
# Generate the function spec
FUNC_SPEC = ExecuteStepByStepPlan.schema()
```
What about non-OpenAI LLM hosts? There is ongoing work in several areas.
It requires properly fine-tuned models, the right system prompts, and support in the host code
Useful discussion re llama-cpp-python: https://github.com/abetlen/llama-cpp-python/discussions/397
'''

from ogbujipt.llm_wrapper import openai_chat_api, prompt_to_chat

FUNC_SPEC = {
    'title': 'ExecuteStepByStepPlan',
    'type': 'object',
    'properties': {
        'headline': {'title': 'Headline', 'type': 'string'},
        'steps': {'title': 'Steps', 'type': 'array', 'items': {'type': 'string'}},
    },
    'required': ['headline', 'steps'],
}

# Requires OPENAI_API_KEY in environment
llm_api = openai_chat_api(model='gpt-3.5-turbo')

messages = prompt_to_chat('Explain how to poach an egg, step by step')

functions = [
    {
        'name': 'handle_steps_from_user_query',
        'description': 'Respond to a user query by specifying a series of steps',
        'parameters': FUNC_SPEC,
    }
]

function_call = {'name': 'handle_steps_from_user_query'}

resp = llm_api(messages=messages, functions=functions, function_call=function_call)
fc = resp.choices[0].message.function_call

if fc:
    print('Function to be called: ' + fc.name)
    print('Function call arguments: ' + fc.arguments)
else:
    print('No function call issued')
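
The `fc.arguments` value printed above is a JSON-encoded string matching `FUNC_SPEC`; a caller would typically decode it before acting on it. A hedged follow-on sketch, with a simulated arguments string standing in for a real model response (the step text is invented for illustration):

```python
import json

# Simulated function_call.arguments, shaped per FUNC_SPEC
arguments = ('{"headline": "How to poach an egg", '
             '"steps": ["Simmer water with a splash of vinegar", '
             '"Crack the egg into a small cup", '
             '"Slide the egg into the water and cook about 3 minutes", '
             '"Lift out with a slotted spoon"]}')

parsed = json.loads(arguments)
print(parsed['headline'])
for number, step in enumerate(parsed['steps'], start=1):
    print(f'{number}. {step}')
```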
