A Tool for Prompt Version Management in LLM-Related Projects
(An experimental project built from 'plastic' code and 'spaghetti-mountain' code)
- The project is built on Reflex (a Python web framework whose frontend compiles to React)
# Requires Python 3.8+ with Reflex
$ pip3 install -r requirements.txt
# set OPENAI_API_KEY for the ChatGPT API
$ cp prompt_go/.env_copy prompt_go/.env
# or
$ export OPENAI_API_KEY='xxxx'
$ cd prompt_go
$ reflex init
$ reflex db init
$ reflex run
- chat api: runs a prompt node; any LLM API can be used
- score api: rates a prompt's result; any LLM API can be used
- normal api: for preprocess and postprocess nodes such as PDF parsing, image OCR, and result reformatting (not implemented yet)
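As a rough sketch of how the score api above could work (helper names here are hypothetical, not the project's actual code): a scoring prompt asks any LLM to rate a result, and the numeric score is parsed out of the free-text reply.

```python
import re

# Hypothetical sketch: ask an LLM to rate a prompt result on a 1-10 scale,
# then pull the number out of the free-text reply. `call_llm` stands in for
# any chat-completion API (OpenAI, etc.) and is not part of this repo.
SCORE_PROMPT = "Rate the following answer from 1 to 10. Reply with the number only.\n\n{answer}"

def parse_score(reply: str, default: int = 0) -> int:
    """Extract the first integer from an LLM reply, clamped to 1-10."""
    match = re.search(r"\d+", reply)
    if not match:
        return default
    return max(1, min(10, int(match.group())))

def score_result(answer: str, call_llm) -> int:
    """Score one prompt result with whatever LLM callable is supplied."""
    reply = call_llm(SCORE_PROMPT.format(answer=answer))
    return parse_score(reply)
```

Parsing defensively matters because even with "number only" instructions, LLMs often wrap the score in extra words.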
- add a dataset for a specific node run
- build a single node with a specific dataset and get an AI score
- add a new version of an existing node
- build a chain of cascaded nodes to run as a pipeline
- review a node's run results
- compare two nodes' results, especially prompt nodes of different versions
- modify an item's score manually
- save a node's output as a new dataset
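The versioned-node and cascaded-pipeline features above can be sketched as a minimal data model (names are illustrative assumptions, not the project's actual schema, which lives in the Reflex-managed database):

```python
from dataclasses import dataclass
from typing import Callable, List

# Minimal sketch of versioned prompt nodes chained into a pipeline.
@dataclass
class PromptNode:
    name: str
    prompt: str          # template with an {input} placeholder
    version: int = 1

def new_version(node: PromptNode, prompt: str) -> PromptNode:
    """Add a new version of an existing node; the old version is kept."""
    return PromptNode(node.name, prompt, node.version + 1)

def run_pipeline(nodes: List[PromptNode], text: str,
                 llm: Callable[[str], str]) -> str:
    """Cascade nodes: each node's output feeds the next node's prompt."""
    for node in nodes:
        text = llm(node.prompt.format(input=text))
    return text
```

Keeping old versions immutable is what makes comparing two versions of the same prompt node meaningful.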
- code cleanup, e.g. duplicated functions and deeply nested if/else/for blocks
- add user profiles
- implement the functions still marked as not implemented in the Modules section
- AI-score prompt optimization
- fix the bug in the check UI when items are removed
- …
- Reflex is a good choice for Python developers who want to build AI demos quickly, though it lags native React in serving speed and in complex components. It is still worth recommending, and Reflex has many kind contributors.
- This project is an experimental demo; it will not be developed further.