This work is licensed under a Creative Commons Attribution 4.0 International License.
I am sharing my experiences setting up my Deep Learning workstation. The main reason for this documentation is to be able to redo the installation and configuration if needed. Furthermore, I hope it helps others who are getting started with this topic. Feel free to contribute.
An advantage of using Docker for a Deep Learning workstation is that you only need to install a few components on your host system (essentially the GPU driver, Docker, and the NVIDIA Container Toolkit).
Everything else is installed inside the Docker containers. Using containers, you can work with different CUDA versions at the same time in different containers. Therefore, I believe they are the best way to develop Deep Learning models.
(source: https://github.com/NVIDIA/nvidia-docker)
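As a minimal sketch of the multi-CUDA point above (service names and image tags are illustrative, not taken from this repository), a `docker-compose.yml` can run two containers with different CUDA versions side by side on the same host driver:

```yaml
# Hypothetical example: two services pinned to different CUDA versions.
services:
  cuda-11:
    image: nvidia/cuda:11.8.0-base-ubuntu22.04   # example tag
    command: nvidia-smi
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
  cuda-12:
    image: nvidia/cuda:12.4.1-base-ubuntu22.04   # example tag
    command: nvidia-smi
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

Both containers share the same host driver; only the CUDA userspace inside each image differs.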
The `InstallationInstructions.md` file provides information on:

- docker compose
- examples with GPU support
The `DockerForBeginners.md` file provides information on some of my commonly used Docker commands.

The folder `./examples` contains multiple examples for running Docker containers with GPU support:
1. docker compose: indicates how a GPU can be made accessible within a container using docker compose. Afterwards, try out the PyTorch or TensorFlow examples.
2. TensorFlow, PyTorch: after having tried out the example mentioned in 1.), try out these, which customize images based on the `nvcr.io/nvidia/pytorch` and `nvcr.io/nvidia/tensorflow` images.
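A customized image of the kind described in 2.) can be sketched roughly as follows (the base image tag and the installed package are placeholders, not the repository's actual Dockerfile):

```dockerfile
# Hypothetical Dockerfile: extend an NVIDIA NGC PyTorch image.
FROM nvcr.io/nvidia/pytorch:24.01-py3

# Add extra tooling on top of the base image.
RUN pip install --no-cache-dir jupyterlab

WORKDIR /workspace
```

Building on an NGC base image keeps CUDA, cuDNN, and the framework itself preinstalled and matched, so only project-specific dependencies need to be added.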
ℹ️ I personally prefer using `docker-compose.yml`
files, as they offer a clean way to build images and start containers without the need for long and tedious CLI commands or hard-to-maintain bash scripts.
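For reference, a minimal `docker-compose.yml` that makes a GPU available to a container looks roughly like this (the image tag is a placeholder; the `deploy.resources.reservations.devices` block follows the Compose specification):

```yaml
services:
  pytorch:
    image: nvcr.io/nvidia/pytorch:24.01-py3   # example tag
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

Start it with `docker compose up`; inside the container, `nvidia-smi` should then list the reserved GPU.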
In 2018 I got a good deal on a used Lenovo ThinkStation P520 (30BE006X**) equipped as follows:
Modifications over time: