By providing self-contained execution environments without the overhead of a full virtual machine, containers have become an appealing proposition for deploying applications at scale. Much of the credit goes to Docker for making containers easy to use and hence popular. From enabling multiple engineering teams to experiment with their own configurations for development, to benchmarking, to deploying a scalable microservices architecture, containers are finding uses everywhere.
GPU-based applications, especially in the deep learning field, are rapidly becoming part of the standard workflow; deploying, testing, and benchmarking these applications in a containerized environment has quickly become the accepted convention. But the native implementation of Docker containers does not support NVIDIA GPUs yet, which is why we developed the nvidia-docker plugin. Here I'll walk you through how to use it.
Nvidia-docker
NVIDIA GPUs require kernel modules and user-level libraries to be recognized and used for computing. There is a work-around for this, but it requires installing the NVIDIA driver libraries inside the container and mapping in the character devices corresponding to the NVIDIA GPUs. However, if the host NVIDIA driver changes, the driver version installed inside the container is no longer compatible, which breaks the container on that host. This defeats the prime feature of containers: portability. With nvidia-docker, a wrapper around Docker, one can seamlessly provision a container with the GPU devices visible and ready to execute a GPU-based application.
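As a rough illustration of that work-around (and why it is brittle): each GPU character device had to be mapped explicitly at launch, while the image itself carried driver libraries matching one specific host driver version. The image name my-cuda-image below is hypothetical; the device node names are the standard ones the NVIDIA driver creates:
# Sketch of the manual work-around nvidia-docker replaces: map the GPU
# character devices in by hand and run an image that has one specific
# driver version baked in ("my-cuda-image" is hypothetical).
docker run -ti --rm \
    --device=/dev/nvidiactl \
    --device=/dev/nvidia-uvm \
    --device=/dev/nvidia0 \
    my-cuda-image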
NVIDIA's blog on nvidia-docker highlights the two critical components of a portable GPU container:
- Driver-agnostic CUDA images
- A Docker command line wrapper that mounts the user mode components of the driver and the GPUs (character devices) into the container at launch (a rough equivalent command is sketched below).
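Under the hood, nvidia-docker 1.0 essentially expands to a plain docker invocation. The sketch below approximates what gets run on a host with driver 384.81; the driver volume name and device list depend on your system, so treat it as illustrative:
# Approximately what `nvidia-docker run` adds to the docker command line
# (volume name and devices are system-dependent):
docker run --volume-driver=nvidia-docker \
    --volume=nvidia_driver_384.81:/usr/local/nvidia:ro \
    --device=/dev/nvidiactl \
    --device=/dev/nvidia-uvm \
    --device=/dev/nvidia0 \
    ...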
Getting started with Nvidia-docker
Installing NVIDIA Docker
Update the NVIDIA drivers for your system before installing nvidia-docker, and make sure Docker itself is installed. Once you've done that, follow the installation instructions here.
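For reference, on Ubuntu the nvidia-docker 1.0 installation was a single Debian package. The URL below is illustrative of that release; check the project's GitHub releases page for the current one:
# Illustrative nvidia-docker 1.0 install on Ubuntu; verify the latest
# release URL on the project's GitHub releases page first.
wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker_1.0.1-1_amd64.deb
sudo dpkg -i /tmp/nvidia-docker_1.0.1-1_amd64.deb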
The next step is to test nvidia-docker:
rtaneja@DGX:~$ nvidia-docker
Usage: docker COMMAND
A self-sufficient runtime for containers
Options:
--config string Location of client config files (default "/home/rtaneja/.docker")
-D, --debug Enable debug mode
--help Print usage
-H, --host list Daemon socket(s) to connect to (default [])
-l, --log-level string Set the logging level ("debug", "info", "warn", "error", "fatal") (default "info")
--tls Use TLS; implied by --tlsverify
--tlscacert string Trust certs signed only by this CA (default "/home/rtaneja/.docker/ca.pem")
--tlscert string Path to TLS certificate file (default "/home/rtaneja/.docker/cert.pem")
--tlskey string Path to TLS key file (default "/home/rtaneja/.docker/key.pem")
--tlsverify Use TLS and verify the remote
-v, --version Print version information and quit
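Behind the wrapper, a separate nvidia-docker-plugin service manages the driver volume. In version 1.0 the plugin exposes a small REST API on port 3476, so a quick sanity check that it is running looks like this (endpoint as documented for the 1.0 plugin):
# Query the nvidia-docker-plugin (version 1.0) for GPU information:
curl -s http://localhost:3476/v1.0/gpu/info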
Now, let's test whether you can pull the hello-world image from Docker Hub using the nvidia-docker command instead of the docker command:
rtaneja@DGX:~$ nvidia-docker run --rm hello-world
Using default tag: latest
latest: Pulling from library/hello-world
9a0669468bf7: Pull complete
Digest: sha256:cf2f6d004a59f7c18ec89df311cf0f6a1c714ec924eebcbfdd759a669b90e711
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
The message above shows that your installation appears to be working correctly.
Developing GPU applications
For CUDA development, you can start by pulling the nvidia/cuda image from Docker Hub:
rtaneja@DGX:~$ nvidia-docker run --rm -ti nvidia/cuda:8.0 nvidia-smi
8.0: Pulling from nvidia/cuda
16da43b30d89: Pull complete
1840843dafed: Pull complete
91246eb75b7d: Pull complete
7faa681b41d7: Pull complete
97b84c64d426: Pull complete
ce2347c6d450: Pull complete
f7a91ae8d982: Pull complete
ac4e251ee81e: Pull complete
448244e99652: Pull complete
f69db5193016: Pull complete
Digest: sha256:a73077e90c6c605a495566549791a96a415121c683923b46f782809e3728fb73
Status: Downloaded newer image for nvidia/cuda:8.0
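Since nvidia-smi now works inside the container, you can run other CUDA tools the same way. The devel variants of the nvidia/cuda image include the full CUDA toolkit, so checking the compiler version looks like this:
# The -devel image variants ship the CUDA toolchain, including nvcc:
rtaneja@DGX:~$ nvidia-docker run --rm nvidia/cuda:8.0-devel nvcc --version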
Building your own application
Now, instead of running nvidia-smi from the Docker command line, you can build a Docker image that uses CMD to run nvidia-smi once a container is launched. To build images, Docker reads instructions from a Dockerfile and assembles an image. An example Dockerfile based on the nvidia/cuda image from Docker Hub looks like this:
# FROM defines the base image
FROM nvidia/cuda:7.5

# RUN executes a shell command
# You can chain multiple commands together with &&
# A \ is used to split long lines to help with readability
RUN apt-get update && apt-get install -y --no-install-recommends \
        cuda-samples-$CUDA_PKG_VERSION && \
    rm -rf /var/lib/apt/lists/*

# CMD defines the default command to be run in the container
# CMD is overridden by supplying a command + arguments to
# `docker run`, e.g. `nvcc --version` or `bash`
CMD nvidia-smi
# end of Dockerfile
$ docker build -t my-nvidia-smi .   # builds an image named my-nvidia-smi; assumes the Dockerfile is in the current directory
$ nvidia-docker images              # or docker images; the my-nvidia-smi image should now be listed
Now we are ready to execute the image. By default, all GPUs on the host are visible inside the container. The NV_GPU environment variable tells nvidia-docker to provision the container with only the specified GPUs. For example, the command below lets the container see only one GPU from the host:
NV_GPU=1 nvidia-docker run --rm my-nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.81 Driver Version: 384.81 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A |Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage |GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla V100-SXM2... On |00000000:06:00.0 Off | 0 |
| N/A 37C P0 45W / 300W | 10MiB / 16152MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory|
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
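NV_GPU takes device indices as reported by nvidia-smi on the host, so you can also hand a container a specific set of GPUs. For example, to expose the first two devices:
# Expose host GPUs 0 and 1 to the container:
NV_GPU=0,1 nvidia-docker run --rm my-nvidia-smi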
Getting started with NVIDIA-optimized containers for deep learning
To get started with deep learning development, you can pull the NVIDIA DIGITS container from Docker Hub and launch the DIGITS web service. The command below maps port 8000 on the host to port 5000 in the container; after running it, you can access DIGITS at http://localhost:8000:
nvidia-docker run --name digits -ti -p 8000:5000 nvidia/digits
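In practice you will also want your datasets visible inside the container, which you can arrange by mounting a host directory with -v. The host path and the /data mount point below are just examples:
# Mount a hypothetical host dataset directory into the DIGITS container:
nvidia-docker run --name digits -ti -p 8000:5000 \
    -v /home/rtaneja/datasets:/data nvidia/digits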
Looking forward
A newer version of the nvidia-docker project (2.0), based on an alpha release of libnvidia-container, has been announced and will be the preferred way of deploying GPU containers in the future. Now that you have an overview of deploying GPU-based containerized applications, note that NVIDIA also provides an easy-to-use, fully optimized deep learning software stack through the newly announced NVIDIA GPU Cloud container registry service.
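Those NGC images are hosted in the nvcr.io registry. After creating an NGC account and generating an API key, pulling an optimized framework image looks roughly like this; the repository path and tag shown are illustrative:
# Log in with your NGC API key, then pull an optimized framework image
# (tag is illustrative; check the NGC registry for current versions):
docker login nvcr.io
docker pull nvcr.io/nvidia/tensorflow:17.10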
About the author
Rohit Taneja is a solutions architect at NVIDIA, working on supporting deep learning and data analytics applications.