This is a Docker Compose definition to run the Vectra Automated Response framework in a container.
It uses the vectraautomatedresponse GitHub repository inside the container.
Below are the files that need to be modified in order to use this tool:
- docker-compose.yaml
- config.py
- Integration config file in the third_party_clients directory
-
docker-compose.yaml
This file handles the secrets that the container will use to interact with the Brain(s) and the selected EDR(s).
-
Brain
- Replace <brain_url> in the following variables with the URL of the corresponding Brain:
- <brain_url>_Client_ID
- <brain_url>_Secret_Key
- Note: These variables can be repeated for multiple Brains; see the sketch below this list.
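A minimal sketch of how these Brain secrets might appear in docker-compose.yaml, assuming they are passed as environment variables on the service; the Brain URL, service name, and values below are placeholders, so follow the variable names already present in the file:

```yaml
# Hypothetical excerpt from docker-compose.yaml -- URL and values are placeholders.
services:
  vectra-automated-response:
    image: vectra-automated-response:latest
    environment:
      brain1.example.com_Client_ID: "client-id-for-brain1"
      brain1.example.com_Secret_Key: "secret-key-for-brain1"
      # Repeat the pair for each additional Brain:
      # brain2.example.com_Client_ID: "client-id-for-brain2"
      # brain2.example.com_Secret_Key: "secret-key-for-brain2"
```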
-
Integrations
- For each integration configured in config.py, uncomment the related variables. If a variable has text surrounded by <>, that text identifies unique information that must be supplied for that variable, similar to the Brain configuration (see the sketch below).
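As a hedged sketch, uncommenting an integration's variables might look like the following; the variable names here are illustrative placeholders, not the actual keys defined in docker-compose.yaml:

```yaml
    environment:
      # Before: the integration's variables are commented out.
      # example_edr_URL: https://<edr_url>
      # example_edr_API_Token: <api_token>

      # After: uncommented, with the <...> portions replaced by real values.
      example_edr_URL: https://edr.example.com
      example_edr_API_Token: "placeholder-token"
```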
-
Debug Logs
Setting VAR_DEBUG: True turns on debug logging.
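For example, in the service's environment section of docker-compose.yaml (the placement shown here is a sketch):

```yaml
    environment:
      VAR_DEBUG: "True"   # turns on debug logging; leave unset otherwise
```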
-
config.py and integration config files
These are the same configuration files used in vectraautomatedresponse. Follow the guidance in that repository for configuration.
The container image needs to be built in the local registry. Run the following command to build it:
docker build -t vectra-automated-response:latest vectra_automated_response/. --no-cache
- To run the container after it is built, run docker compose up -d in the directory containing the docker-compose.yaml file.
- To stop the container, run docker compose down