This project serves as a demonstration of prompt injection techniques aimed at educating users on how to identify and exploit vulnerabilities in systems that utilize Large Language Models (LLMs). The demo is designed to be user-friendly and can be run locally or in a containerized environment, making it ideal for presentations and educational purposes.
- Interactive Demo: Explore various prompt injection techniques through an intuitive interface.
- Local Setup: Easily set up and run the demo on your local machine.
- Educational Resource: Learn about the implications of prompt injection in LLM systems.
Before you begin, ensure you have the following installed:
- Python 3.12 or higher
- uv installed
To set up the project locally, follow these steps:
-
Clone the repository:
git clone https://github.com/FloTeu/mr-injector.git cd mr-injector # creates virtual environment uv sync
- Setup environment file
Providing
cp .env.template .env # populate .env file with values.
OPENAI_API_KEY
is enough to enable most of the features.
If you run the app locally, setDEBUG
to True.
SetPRESENTATION_MODE
to True, if the ui should be more suitable for a lecture.
If like to have more models, you can also includeOPENROUTER_API_KEY
. - Activate virtual environment
# execute this in root of the project . .venv/bin/activate
- Run streamlit frontend
# execute this in root of the project streamlit run mr_injector/frontend/main.py
- Build docker image
docker build -t mr-injector . # or with global password docker build --build-arg STREAMLIT_PASSWORD=<password> -t mr-injector . # or with Azure OpenAI setup docker build --build-arg AZURE_OPENAI_ENDPOINT=<your-endpoint-url> --build-arg AZURE_OPENAI_API_KEY=<your-endpoint-api-key> -t mr-injector .
Hint: If you are using podman ensure that the right linux platform for spacy is used e.g. --platform linux/amd64
- Run docker container
docker run -p 8501:8501 mr-injector