# Create a conda environment (Python >= 3.10):
conda create --name llmrag python=3.10.14 -y
conda activate llmrag
# Install required Python packages:
pip install -r requirements.txt
# Pull and run the Llama 2 7B model with Ollama:
ollama run llama2:7b
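To confirm the model was pulled and is available locally (assuming a standard Ollama installation), you can list the downloaded models:
ollama list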
📦 Deploy Graph Database
- NebulaGraph Installation Guide
Step 1: Install docker-compose
Ensure that docker-compose is installed. If not, install it with the following command:
sudo apt install docker-compose
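To confirm the installation succeeded, you can check the installed version:
docker-compose --version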
Step 2: Clone the NebulaGraph Docker Compose Repository
In a directory of your choice, clone the NebulaGraph Docker Compose files:
git clone https://github.com/vesoft-inc/nebula-docker-compose.git
cd nebula-docker-compose
Step 3: Start NebulaGraph
In the nebula-docker-compose directory, run the following command to start NebulaGraph:
docker-compose up -d
Step 4: Check NebulaGraph Container Status
After starting, verify that the NebulaGraph containers are running:
docker ps
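If you want to narrow the output to the NebulaGraph services only, a name filter works (the filter string assumes the default compose service naming):
docker ps --filter "name=nebula"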
Step 5: Connect to NebulaGraph
To connect to NebulaGraph from inside the container, use the following command:
nebula-console -u <user> -p <password> --address=graphd --port=9669
# Replace <user> and <password> with the actual username and password. Port 9669 is the default port.
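With the default docker-compose deployment, authentication is usually disabled, so root with any password is typically accepted; verify this against your own configuration. Once connected, a quick health check is to list the storage hosts from the console prompt:
SHOW HOSTS;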
Step 6: Enable Data Persistence
To ensure that data persists after the container is restarted, mount persistent volumes: either modify the volumes section in the docker-compose.yaml file, or manually run the following command with your persistence paths:
docker run -d --name nebula-graph \
-v /yourpath/nebula/data:/data \
-v /yourpath/nebula/logs:/logs \
-p 9669:9669 \
vesoft/nebula-graphd:v2.5.0
# Replace /yourpath/nebula with your actual data persistence path.
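If you create the host directories up front, you keep control of their ownership (the path below is the same placeholder used above):
mkdir -p /yourpath/nebula/data /yourpath/nebula/logs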
- Neo4j (Installation optional for now)
# Configure temporary environment variables
# Example: export PYTHONPATH=$PYTHONPATH:/home/lipz/RAGWebUi/RAGWebUi_demo/backend
export PYTHONPATH=$PYTHONPATH:/your/path/backend
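If you prefer the setting to survive new shells, you can append the export to your shell profile instead (bash assumed; adjust the path to your checkout):
echo 'export PYTHONPATH=$PYTHONPATH:/your/path/backend' >> ~/.bashrc
source ~/.bashrc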
# Run a WebUI to display the frontend interface:
python webui_chat.py
# In another terminal, run the graph UI that displays the topology in the frontend:
python graph.py
# Run the backend (mainly for research purposes):
python backend_chat.py --dataset_name "rgb" --llm "llama2:7b" --func "Graph RAG" --graphdb "nebulagraph" --vectordb "MilvusDB"
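backend_chat.py selects the dataset, LLM, RAG mode, and databases entirely from command-line flags. The snippet below is only an illustrative argparse sketch of how such flags are typically consumed, not the repository's actual implementation; the flag names match the command above, while the defaults and help texts are assumptions.
# Illustrative only: a minimal argparse sketch for the flags shown above.
# The real backend_chat.py may define different defaults and choices.
import argparse

def parse_args():
    parser = argparse.ArgumentParser(description="Run the RAG backend (sketch)")
    parser.add_argument("--dataset_name", default="rgb", help="evaluation dataset, e.g. rgb")
    parser.add_argument("--llm", default="llama2:7b", help="local model name served by Ollama")
    parser.add_argument("--func", default="Graph RAG", help='retrieval mode, e.g. "Graph RAG"')
    parser.add_argument("--graphdb", default="nebulagraph", help="graph database backend")
    parser.add_argument("--vectordb", default="MilvusDB", help="vector database backend")
    return parser.parse_args()

if __name__ == "__main__":
    args = parse_args()
    print(vars(args))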
- .env file loading is deprecated; configuration now comes from client input, including the LLM name.
- The low_chat() method in ./llmragenv/llmrag_env.py is a simplified version in which the LLM name, database usage, etc. are hardcoded; web_chat() is the full version.
- LLM support: the llm_provider dictionary in llm_factory lists all currently supported local models (see the sketch after this list). Commercial model API keys are not enabled here due to cost, but users can purchase them separately and configure them in ./config/config-local.yaml.
- Frontend ports and database configurations can be modified in ./config/config-local.yaml (the vector DB and NebulaGraph settings are still hardcoded in the code and need refactoring).
- Code structure: includes graphrag and vectorrag.
- Known limitation: although web_chat() allows selecting a different LLM for each chat session, in practice only the first selection takes effect; subsequent interactions always use the initially chosen model.
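For readers unfamiliar with the factory pattern mentioned above, here is a hypothetical sketch of what a name-to-constructor registry like llm_provider can look like; the class, key names, and helper function are illustrative, not the repository's actual code.
# Hypothetical sketch of a provider registry such as llm_provider in llm_factory.
# Names and constructors below are illustrative, not the repository's actual code.
from typing import Callable, Dict


class OllamaClient:
    """Minimal stand-in for a client that talks to a locally served model."""

    def __init__(self, model_name: str):
        self.model_name = model_name

    def chat(self, prompt: str) -> str:
        # A real client would call the local inference server here.
        return f"[{self.model_name}] response to: {prompt}"


# Map model names to constructor callables; the factory looks names up here.
llm_provider: Dict[str, Callable[[], OllamaClient]] = {
    "llama2:7b": lambda: OllamaClient("llama2:7b"),
    "qwen:7b": lambda: OllamaClient("qwen:7b"),
}


def create_llm(name: str) -> OllamaClient:
    """Instantiate the requested model or fail with the list of supported names."""
    try:
        return llm_provider[name]()
    except KeyError:
        raise ValueError(f"Unsupported LLM '{name}'. Supported: {sorted(llm_provider)}")


if __name__ == "__main__":
    llm = create_llm("llama2:7b")
    print(llm.chat("hello"))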