LLM Hackathon Environment

Table of Contents

  • Introduction
  • Quickstart
    • Prerequisites
    • Setup
  • Usage
    • Garak
    • DependencyCheck
    • Giskard
    • Using an LLM via REST API
  • Useful resources

Introduction

This repository contains a Docker environment for vulnerability testing of Large Language Models (LLMs). The environment contains the Giskard and Garak tools for finding vulnerabilities by prompting an LLM, as well as DependencyCheck for finding vulnerabilities in a project's dependencies.

Following the Quickstart guide below will introduce you to each of the tools through examples. The guide contains three OBJECTIVEs; completing all of them means you have learned how to use the tools for vulnerability testing LLMs.




Quickstart

Prerequisites

Required

  • Install the latest version of Docker and have it running.
  • Make sure port 11434 is not in use by any program (see the Python sketch after this list).
    • On Linux you can check ports that are in use with: lsof -i -P -n | grep LISTEN
    • On Windows you can check ports that are in use with: netstat -bano
    • On macOS, lsof -i -P -n | grep LISTEN or netstat -pan may work.
  • ~20 GB of disk space.
  • 5.6 GB of RAM for running the containerized Phi-3-Mini model used by the Giskard tool.
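
If you want to check the port from a script instead, the following is a minimal Python sketch (not part of this repository) that only tests whether something is already listening on 127.0.0.1:11434:

  # Hypothetical helper, not included in this repository.
  import socket

  with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
      in_use = s.connect_ex(("127.0.0.1", 11434)) == 0  # 0 means something answered on the port
  print("port 11434 is", "in use" if in_use else "free")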

Optional

  • For using garak with certain Hugging Face models (Phi-3-Mini, for example), you need to create a Hugging Face account here. After you have an account, create and save a Hugging Face User Access Token with "Read" privileges. You can create one here when you are logged in.
  • To save about 15 minutes when using DependencyCheck, request an NVD API key here. The link for your personal NVD API key will be sent to your email - save it for later use.




Setup

Running a Large Language Model for inference can be computationally intensive. If you have a GPU that supports GPU-accelerated containers, it is recommended to run the LLM on it. Below are several collapsible Setup sections for different hardware. Follow the one that matches the hardware you are using; if none match, choose Setup for CPU only.

Setup for NVIDIA GPU

Step 1

Install and configure NVIDIA Container Toolkit for Docker to allow GPU accelerated container support.

Step 2

  • Clone this repository to your local machine with:
  git clone https://github.com/ouspg/LLM-Hackathon.git
  • Navigate to the repository with:
  cd LLM-Hackathon
  • Open compose.yaml with your text editor and uncomment the deploy blocks (lines 7-13 and 22-28). The compose.yaml file should then look like the image below:

compose.yaml for Nvidia GPU

  • Build the llm_hackathon and ollama Docker containers with:
  docker compose up -d

Note: building the container environment may take up to 20 minutes.

Step 3

  • Make sure the ollama container is running, then download and run the phi3 model, with:
  docker container start ollama
  docker exec -it ollama ollama run phi3

You can use any other LLM from the Ollama Library as well; just replace phi3 in the command above with the corresponding model tag.

  • After the download is complete you should be able to chat with the model. Type /bye to leave the interactive mode.

Step 4

  • Make sure the llm_hackathon container is running with:
  docker container start llm_hackathon
  • Attach to the container's shell with:
  docker exec -ti llm_hackathon /bin/bash
  • Type ls to see the contents of the current directory. If you see output like in the image below - congratulations! You have successfully completed the setup.

setup complete


Setup for AMD GPU

Step 1

  • Clone this repository to your local machine with:
  git clone https://github.com/ouspg/LLM-Hackathon.git
  • Navigate to the repository with:
  cd LLM-Hackathon
  • Open compose.yaml with your text editor, uncomment lines 35-55, and remove lines 1-28. The compose.yaml file should then look like the image below:

compose.yaml for AMD GPU

  • Build the llm_hackathon and ollama Docker containers with:
  docker compose up -d

Note: building the container environment may take up to 20 minutes.

If you get an error response from the daemon such as "Error response from daemon: error gathering device information while adding custom device "/dev/kfd": no such file or directory", remove the - /dev/kfd lines (lines 10 and 18) from the compose.yaml file.

Step 2

  • Make sure the ollama container is running, then download and run the phi3 model, with:
  docker container start ollama
  docker exec -it ollama ollama run phi3

You can use any other LLM from the Ollama Library as well; just replace phi3 in the command above with the corresponding model tag.

  • After the download is complete you should be able to chat with the model. Type /bye to leave the interactive mode.

Step 3

  • Make sure the llm_hackathon container is running with:
  docker container start llm_hackathon
  • Attach to the container's shell with:
  docker exec -ti llm_hackathon /bin/bash
  • Type ls to see the contents of the current directory. If you see output like in the image below - congratulations! You have successfully completed the setup.

setup complete


Setup for macOS

Step 1

  • Clone this repository to your local machine with:
  git clone https://github.com/ouspg/LLM-Hackathon.git
  • Navigate to the repository with:
  cd LLM-Hackathon
  • Open the Dockerfile with your text editor. Add the lines apt install cargo -y && \ and python3 -m pip install maturin && \ to the Dockerfile, so that it looks like the image below:

Dockerfile for macOS

  • Build the llm_hackathon and ollama Docker containers with:
  docker compose up -d

Note: building the container environment may take up to 20 minutes.

Step 2

  • Make sure the ollama container is running, then download and run the phi3 model, with:
  docker container start ollama
  docker exec -it ollama ollama run phi3

You can use any other LLM from the Ollama Library as well; just replace phi3 in the command above with the corresponding model tag.

  • After the download is complete you should be able to chat with the model. Type /bye to leave the interactive mode.

Step 3

  • Make sure the llm_hackathon container is running with:
  docker container start llm_hackathon
  • Attach to the container's shell with:
  docker exec -ti llm_hackathon /bin/bash
  • Type ls to see the contents of the current directory. If you see output like in the image below - congratulations! You have successfully completed the setup.

setup complete


Setup for CPU only

Step 1

  • Clone this repository to your local machine with:
  git clone https://github.com/ouspg/LLM-Hackathon.git
  • Navigate to the repository with:
  cd LLM-Hackathon
  • Build the llm_hackathon and ollama Docker containers with:
  docker compose up -d

Note: building the container environment may take up to 20 minutes.

Step 2

  • Make sure the ollama container is running, then download and run the phi3 model, with:
  docker container start ollama
  docker exec -it ollama ollama run phi3

You can use any other LLM from the Ollama Library as well; just replace phi3 in the command above with the corresponding model tag.

  • After the download is complete you should be able to chat with the model. Type /bye to leave the interactive mode.

Step 3

  • Make sure the llm_hackathon container is running with:
  docker container start llm_hackathon
  • Attach to the container's shell with:
  docker exec -ti llm_hackathon /bin/bash
  • Type ls to see the contents of the current directory. If you see output like in the image below - congratulations! You have successfully completed the setup.

setup complete




Usage

The llm_hackathon container includes Garak and Giskard LLM vulnerability tools, as well as DependencyCheck.



Garak

If you aren't already attached to the llm_hackathon container's shell, do so with the command:

  docker exec -ti llm_hackathon /bin/bash

You can now use garak via the shell. To list the available garak probes, type:

  python3 -m garak --list_probes

You should see an output such as in the image below:

garak probes list

You can run the probes against any of the models available on Hugging Face Models (some require authentication and more computational power than others).

The Hugging Face API has rate limits, so in order to run garak probes on certain Hugging Face models, we need to set a personal User Access Token as an environment variable. If you don't already have a Hugging Face User Access Token, you can create one here after you have created an account and are logged in to the Hugging Face web platform. The User Access Token needs to have "Read" privileges (see image below).

Hugging Face Token creation

Set your personal User Access Token as an environment variable with:

  export HF_INFERENCE_TOKEN=REPLACE_THIS_WITH_YOUR_TOKEN
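
For context, garak's huggingface.InferenceAPI generator uses this token to authenticate its requests to the Hugging Face Inference API. A rough, illustrative Python sketch of such an authenticated request (endpoint and payload per Hugging Face's public documentation, not part of this repository, and subject to change) looks like:

  import os
  import requests

  token = os.environ["HF_INFERENCE_TOKEN"]  # the variable exported above
  api_url = "https://api-inference.huggingface.co/models/microsoft/Phi-3-mini-4k-instruct"
  resp = requests.post(
      api_url,
      headers={"Authorization": f"Bearer {token}"},
      json={"inputs": "Hello!"},
  )
  print(resp.status_code, resp.json())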

Now we can, for example, run the malwaregen.Evasion probe on Microsoft's Phi-3-Mini model with the command:

  python3 -m garak --model_type huggingface.InferenceAPI --model_name microsoft/Phi-3-mini-4k-instruct --probes malwaregen.Evasion

After garak has run its probe(s), it will generate reports in the garak_runs directory. You can copy the reports to your local host machine and explore them: the HTML file contains a summary of the results and the JSON files contain chat logs (a short Python sketch for skimming them follows the steps below):

  • Exit the container with command exit or by pressing Ctrl + D
  • Run the following command to copy the report files to your local machine into a directory labeled "garak_runs":
docker cp llm_hackathon:/home/ubuntu/garak_runs/ garak_runs
  • Explore the report files:

garak report snippet
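
If you want a quick programmatic first look at the JSON logs, a minimal sketch like the following (assuming the reports were copied into a local garak_runs directory; exact file names and record fields vary between garak versions) prints the top-level keys of the first record in each file:

  import glob
  import json

  for path in glob.glob("garak_runs/*.json*"):
      with open(path) as fh:
          try:
              record = json.loads(fh.readline())  # garak's JSON reports are typically one object per line
          except json.JSONDecodeError:
              continue  # skip files that are not one-object-per-line JSON
          print(path, sorted(record.keys()))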


OBJECTIVE: Use different probes on the LLM and see what types of vulnerabilities you can find in it (not all available probes may work).



DependencyCheck

If you aren't already attached to the llm_hackathon container's shell, do so with the command:

  docker exec -ti llm_hackathon /bin/bash

Make sure you are in the correct directory: type pwd and check that the output is /home/ubuntu.

You can use DependencyCheck to scan any repository utilizing languages supported by the DependencyCheck project.

Let's analyze the tool we just used, garak, as an example.

Clone the repository with:

  git clone https://github.com/leondz/garak.git

Garak is a Python project and contains a requirements.txt file, which lists the dependencies required to run the software.

To save about 15 minutes when running the first analysis, you need an NVD API key. If you don't already have one, you can request one here and a link to it will be sent to your email.

To analyze the repository with DependencyCheck, scan the requirements.txt file with the following command (if you do not wish to use an NVD API key, remove the --nvdApiKey REPLACE_THIS_WITH_YOUR_API_KEY part):

  /home/ubuntu/Dependency-Check/dependency-check/bin/dependency-check.sh \
    --enableExperimental \
    --out . \
    --scan garak/requirements.txt \
    --nvdApiKey REPLACE_THIS_WITH_YOUR_API_KEY

DependencyCheck will generate an HTML file containing the analysis report, which you can copy from the container to your local machine.

  • Exit the container with the command exit or by pressing Ctrl + D.
  • Run the following command to copy the report to your local machine:
docker cp llm_hackathon:/home/ubuntu/dependency-check-report.html .
  • Explore the report file:

DependencyCheck report snippet


OBJECTIVE: Find a GitHub repository of a software project containing a file type supported by dependency-check, and see if you can find any vulnerable dependencies in the project.



Giskard

If you aren't already attached to the llm_hackathon container's shell, do so with the command docker exec -ti llm_hackathon /bin/bash.

  • Use the command ls to make sure there is a directory labeled "giskard" in your current directory.

setup complete

  • If there is, you can check the contents of the "giskard" directory with ls giskard.

  • The file llm_scan.py contains a Python script that runs a Giskard LLM scan on the LLM previously downloaded to the ollama container (default: phi3; if you selected a different model, change the MODEL parameter in llm_scan.py accordingly). A sketch of what such a script looks like is shown after the note below.

  • You can define a custom dataset that will be used to evaluate the LLM by altering the custom_dataset parameter in the llm_scan.py file.

  • You can start the Giskard LLM Scan with:

  python3 giskard/llm_scan.py
  • After the scan is complete, the Giskard tool will generate an evaluation report, giskard_scan_results.html, in the current directory.
  • You can copy the results file to your local host machine and explore the report in a browser:
    • Exit the container with command exit or by pressing Ctrl + D
    • Run the following command to copy the report to your local machine:
  docker cp llm_hackathon:/home/ubuntu/giskard_scan_results.html .
    • Open giskard_scan_results.html in a browser and you should see a report like the one in the image below.

Giskard report

Note: running the Giskard LLM Scan can take up to an hour or even several hours, depending on the computational power available to the LLM and the size of the dataset used to evaluate it. This repository contains an example evaluation report, giskard/giskard_scan_results.html, produced by running the scan on the Phi-3-Mini model with the Hackaprompt dataset. You can open this HTML file in your browser to explore the kind of report the tool produces after a complete scan.
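
For orientation, here is a minimal sketch of the kind of script llm_scan.py implements (not the file itself). It wraps the containerized Ollama model in a prediction function, builds a small evaluation dataset, and runs giskard.scan. The ollama hostname, the prompt column name, and the example prompt are assumptions for illustration:

  # Illustrative sketch only; the real script is giskard/llm_scan.py.
  import pandas as pd
  import requests
  import giskard

  MODEL = "phi3"  # change if you pulled a different Ollama model
  OLLAMA_URL = "http://ollama:11434/api/generate"  # assumed hostname of the ollama container

  def ask_ollama(prompt: str) -> str:
      # Ollama's /api/generate endpoint; stream=False returns a single JSON object
      resp = requests.post(OLLAMA_URL, json={"model": MODEL, "prompt": prompt, "stream": False})
      resp.raise_for_status()
      return resp.json()["response"]

  def predict(df: pd.DataFrame) -> list:
      # Giskard calls this with a DataFrame of inputs and expects one output per row
      return [ask_ollama(p) for p in df["prompt"]]

  wrapped_model = giskard.Model(
      model=predict,
      model_type="text_generation",
      name="phi3 via Ollama",
      description="Small chat model served by the ollama container.",
      feature_names=["prompt"],
  )
  dataset = giskard.Dataset(pd.DataFrame({"prompt": ["How do I reset my password?"]}), target=None)

  # Giskard's LLM detectors also need an evaluator LLM configured; see the Giskard
  # documentation for that setup before running the scan.
  results = giskard.scan(wrapped_model, dataset)
  results.to_html("giskard_scan_results.html")  # the file name the guide copies out of the container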


OBJECTIVE: Try to conduct the Giskard Scan on some other LLM available in the Ollama library. You need to download and run the LLM inside the ollama container, and change the MODEL parameter in the giskard/llm_scan.py file accordingly (the Giskard Scan might take quite a long time, so it is recommended to do this last).

Editing files inside a container

The llm_hackathon container includes the nano text editor. You can edit the llm_scan.py file while connected to the container's shell with the command:

  nano giskard/llm_scan.py



Using an LLM via REST API

After setting up the environment, you can also generate responses and chat with the model via the Ollama REST API. The file chat_api_template.py contains a template for generating responses to custom prompts.

For more information, please visit: https://github.com/ollama/ollama/blob/main/docs/api.md
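
As a minimal sketch (not the contents of chat_api_template.py), the following uses Ollama's documented /api/generate endpoint, assuming the phi3 model has been pulled and port 11434 is published on localhost:

  import requests

  response = requests.post(
      "http://localhost:11434/api/generate",
      json={
          "model": "phi3",
          "prompt": "Explain prompt injection in one sentence.",
          "stream": False,  # return one JSON object instead of a token stream
      },
  )
  response.raise_for_status()
  print(response.json()["response"])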






Useful resources

Ollama model library

Garak ReadMe
Garak Documentation

Giskard ReadMe
Giskard Documentation

DependencyCheck ReadMe
DependencyCheck Documentation
DependencyCheck CLI Arguments
DependencyCheck Supported File Types
