You get the following out of the box when using these Docker images:
- Ubuntu
- CUDA 9 (GPU Version Only)
- CUDNN 7 (GPU Version Only)
- Pytorch 1.0
- Torchvision
- Torchgan
- TensorFlow (for Logging Purposes)
- TensorBoardX
- Visdom
- A few other libraries like numpy
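Once a container is running (see the run instructions below), you can sanity-check the bundled frameworks from inside it. A quick sketch; the exact version strings depend on what the image ships, and not every package necessarily exposes `__version__`:

```shell
# Inside the container: print the versions of the bundled libraries
python -c "import torch; print('torch', torch.__version__)"
python -c "import torchvision; print('torchvision', torchvision.__version__)"
python -c "import torchgan; print('torchgan', torchgan.__version__)"
python -c "import tensorflow as tf; print('tensorflow', tf.__version__)"
```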
- Install Docker following the installation guide for your platform: https://docs.docker.com/engine/installation/
- GPU Version Only: Install Nvidia drivers on your machine either from Nvidia directly or follow the instructions here. Note that you don't have to install CUDA or cuDNN. These are included in the Docker container.
- GPU Version Only: Install nvidia-docker: https://github.com/NVIDIA/nvidia-docker, following the instructions here. This will install a replacement for the docker CLI. It takes care of setting up the Nvidia host driver environment inside the Docker containers, among other things.
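Before pulling the image, you can confirm the prerequisites are in place. A minimal check; the last two commands apply to the GPU version only:

```shell
# Check the Docker installation
docker --version

# GPU Version Only: check that the Nvidia host driver is working
nvidia-smi

# GPU Version Only: check the nvidia-docker wrapper
nvidia-docker version
```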
You have two options to obtain the Docker image.
Docker Hub is a cloud-based repository of pre-built images. You can download the image directly from there, which is much faster than building it locally. The image is built from the Dockerfile in the GitHub repo.
CPU Version

```shell
docker pull avikpal/torchgan:cpu
```

GPU Version

```shell
docker pull avikpal/torchgan:gpu
```
Alternatively, you can build the images locally. Note that this will take a long time.
```shell
git clone https://github.com/torchgan/dockerfiles
cd dockerfiles
```

CPU Version

```shell
docker build -t torchgan/torchgan:cpu -f Dockerfile.cpu .
```

GPU Version

```shell
docker build -t torchgan/torchgan:gpu -f Dockerfile.gpu .
```
This will build a Docker image named torchgan/torchgan and tagged either cpu or gpu, depending on the tag you specify. Also note that the appropriate Dockerfile.<architecture> has to be used.
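Whichever route you take, you can confirm the image is available locally before spinning up a container:

```shell
# List local copies of the image; the cpu and/or gpu tag should appear
docker images torchgan/torchgan
```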
Once we have the image, all the frameworks we need are installed in it. We can now spin up one or more containers using this image, and you should be ready to go deeper.
CPU Version

```shell
docker run -it -p 8888:8888 -p 6006:6006 -p 8097:8097 -v /sharedfolder:/root/sharedfolder torchgan/torchgan:cpu bash
```

GPU Version

```shell
nvidia-docker run -it -p 8888:8888 -p 6006:6006 -p 8097:8097 -v /sharedfolder:/root/sharedfolder torchgan/torchgan:gpu bash
```
Note the use of nvidia-docker rather than plain docker for the GPU version.
| Parameter | Explanation |
|---|---|
| `-it` | This creates an interactive terminal you can use to interact with your container |
| `-p 8888:8888 -p 6006:6006 -p 8097:8097` | This exposes the ports inside the container so they can be accessed from the host. The format is `-p <host-port>:<container-port>`. The default Jupyter Notebook runs on port 8888, TensorBoard on 6006, and Visdom on 8097 |
| `-v /sharedfolder:/root/sharedfolder` | This shares the folder `/sharedfolder` on your host machine as `/root/sharedfolder` inside your container. Any data written to this folder by the container will be persistent. You can modify this to anything of the format `-v /local/shared/folder:/shared/folder/in/container/`. See Docker container persistence |
| `torchgan/torchgan:cpu` | This is the image that you want to run. The format is `image:tag`. In our case, we use the image `torchgan/torchgan` and tag `cpu` or `gpu` to spin up the appropriate image |
| `bash` | This provides the default command when the container is started. Even if this is not provided, `bash` is the default command and just starts a Bash session. You can modify this to be whatever you'd like executed when your container starts. For example, `docker run -it -p 8888:8888 -p 6006:6006 torchgan/torchgan:cpu jupyter notebook --allow-root` starts a Jupyter Notebook for you when the container starts |
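If you prefer to keep the container running in the background, a detached workflow like the following also works. A sketch only; the container name `torchgan-dev` is an illustrative choice, not anything the image requires:

```shell
# Start the container detached, with the same ports and shared folder
docker run -d --name torchgan-dev \
  -p 8888:8888 -p 6006:6006 -p 8097:8097 \
  -v /sharedfolder:/root/sharedfolder \
  torchgan/torchgan:cpu jupyter notebook --allow-root

# Open a shell inside the running container when needed
docker exec -it torchgan-dev bash

# Stop and remove the container when done
docker stop torchgan-dev && docker rm torchgan-dev
```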
NOTE: Visdom logging is disabled by default. If you choose to use it, please refer to the docs here.
Parts of this README and Dockerfiles have been borrowed from https://github.com/floydhub/dl-docker.