How-To Setup NVIDIA Docker and NGC Registry on your Workstation – Part 5 Docker Performance and Resource Tuning



This should be the last post in this series dealing with the Docker setup for accessing the NVIDIA NGC Docker registry on your workstation.

The previous 4 posts have gone from a bare-metal Ubuntu install, through docker and nvidia-docker setup and user-namespaces configuration, to signing up for and accessing NGC. There are a couple of configuration tuning changes that you may want to make. These will improve performance and ensure that you have adequate system “user limit” resources to handle large application and job runs with docker.

The earlier How-To NGC posts in this series cover everything from the initial system install to accessing NGC.


Why Tune your Docker Setup

It is not uncommon to make a few runtime resource adjustments when running scientific programs. The adjustments are usually changes to limits on memory usage for various parts of an application’s runtime. Scientific programs can make heavy demands on a system and may mysteriously lock up or crash if the resource limits of the user running the program are set too restrictively.

In the NVIDIA documentation for using NGC on a workstation there is a table of recommended settings. The individual docker container pages on the NGC registry also list runtime flags that can alternatively be added to the docker command-line when starting a container to achieve the same effect.

Some of the default docker settings are too conservative for High Performance Computing applications like Machine Learning/AI in containers. The default settings are more appropriate for a typical docker “micro-service”.

Part of the reason I’m writing this post is that I don’t really like the way NVIDIA recommends making the user resource limit changes to the Docker configuration. They are adding a systemd drop-in file to do it. I’m trying to use only the Docker JSON configuration file for my setup (where possible). I recommend that you do not mess with systemd directly if you don’t have to!

The table of changes that NVIDIA has in their documentation is,

Option                        Explanation
--default-shm-size="1G"       Set the default shared memory size to 1G
--host=fd://                  Indicate systemd is starting the service, and to use socket activation
--storage-driver=overlay2     Use the overlay2 storage driver
LimitMEMLOCK=infinity         Prevent memory from being paged out
LimitSTACK=67108864           Increase the default stack size limit to 64MB (the value is in bytes)

Don’t make these changes the way the NVIDIA documentation suggests … but we do want most of these settings in place.


Discussion of option settings for resource/performance tuning

--default-shm-size="1G"

This is something that you should change since it is set ridiculously small by default in docker! This setting controls the tmpfs size for /dev/shm, which is basically a shared memory space for programs to exchange data. If it is too small, performance can take a big hit. On my workstation, df -h shows the following size for this,

tmpfs            63G  141M   63G   1% /dev/shm

63G is a lot really, but I have 128GB of memory in my system (the tmpfs default is half of system RAM). On another system with 32GB of memory I see 16G allocated for /dev/shm.

In a docker Ubuntu 16.04 container this is the allocation,

shm              64M     0   64M   0% /dev/shm

64M! That is way too small! That is the docker default. We will change it, and we can do it in the docker JSON configuration file so we don’t need to mess with systemd. The 1G setting suggested by NVIDIA is a much more reasonable value (I don’t think it really needs to be any bigger than that).
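If you want to see the effect of the flag on its own before touching any configuration, a one-off run like this should do it (ubuntu:16.04 is just the example image used throughout this post),

docker run --rm --shm-size=1g ubuntu:16.04 df -h /dev/shm

That should report a 1.0G shm mount instead of the 64M default.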

--host=fd://

By default on a Linux system docker starts with a normal unix domain socket for the docker daemon. There is nothing wrong with that. On systems that use systemd there are some possible advantages to letting systemd handle the socket, and that is what this option does. However, if you are using systemd then you can’t change the docker startup host socket in the JSON configuration file, since it is set on service startup by systemd. That means it would have to be overridden with a systemd drop-in file, which is something I’m trying to avoid in my current configuration. I’m going to leave it alone since I’m not convinced there is any serious advantage to changing it. [If I change my mind I’ll let you know!]
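If you are curious how your docker daemon is being launched, the systemd unit shows it. The exact ExecStart line varies with docker version and distro packaging, but this will display whatever is configured,

systemctl cat docker.service | grep ExecStart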

--storage-driver=overlay2

This is the default in current docker releases so there is no reason to set this.
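You can confirm what storage driver your daemon is already using with docker info,

docker info 2>/dev/null | grep -i "storage driver"

On a current docker release that should report overlay2 (assuming your filesystem supports it).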

LimitMEMLOCK=infinity

This is just the systemd way of setting “max locked memory”. It is equivalent to using ulimit -l. Docker has a "default-ulimits": {} option that can be added to the daemon.json file for configuring ulimit settings. On my system “max locked memory” is set to 64K; it is also set to 64K in an Ubuntu docker container. Setting “LimitMEMLOCK=infinity” is the same as “ulimit -l unlimited”. It seems reasonable to make this change, and we’ll set it in the daemon.json file instead of a systemd drop-in file.
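A quick way to spot-check the memlock limit a container actually gets is to run ulimit inside one (ulimit is a bash builtin, so it goes through bash),

docker run --rm ubuntu:16.04 bash -c 'ulimit -l'

Before the configuration change that prints 64 (KB); after the change it should print unlimited.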

LimitSTACK=67108864

Again, this is a systemd setting that can also be done with the “stack size” ulimit. The LimitSTACK value is in bytes, so LimitSTACK=67108864 is a 64MB stack, the same as “ulimit -s 65536” (ulimit -s takes KB). On my system the stack size is set to 8192K, which is also the default setting in an Ubuntu 16.04 docker container. 64MB is eight times that default. It’s annoying when you get a segmentation fault because of a process or recursion stack overflow (old crusty Fortran code is a classic offender), so I think it is OK to set a stack this large on a system that will be running serious HPC workloads. This is another option that can be set in the daemon.json file for docker, so we won’t need to mess with the systemd configuration.
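The same kind of spot check works for the stack limit,

docker run --rm ubuntu:16.04 bash -c 'ulimit -s'

That prints in KB: 8192 by default, and 65536 (64MB) once the new default is in place.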


Making changes at runtime

You can make some of these changes at runtime when you start a container, even after you have changed the docker configuration; any options you set at runtime take priority. Here’s an example of what starting up an Ubuntu 16.04 container would look like,

docker run --rm -it --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 ubuntu:16.04

That is a lot to type out, but you could create startup scripts for containers that you use regularly. Alternatively, since these are reasonable defaults for our purpose, you can add them to the docker configuration as I describe below.
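If you do go the startup-script route, here is a small sketch of what a wrapper could look like. The script name and image argument are placeholders for whatever containers you actually use,

#!/bin/bash
# run-hpc-container.sh : sketch of a wrapper that starts a container with the tuning flags
# usage: ./run-hpc-container.sh <image> [command ...]
IMAGE=${1:?"need an image name, e.g. ubuntu:16.04"}
shift

# add --runtime=nvidia here if the container needs GPU access
# (the nvidia runtime was configured in the earlier posts in this series)
docker run --rm -it \
    --shm-size=1g \
    --ulimit memlock=-1 \
    --ulimit stack=67108864 \
    "$IMAGE" "$@"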


Making the tuning changes in docker/daemon.json

The docker configuration file we will be editing is /etc/docker/daemon.json. This is where the NVIDIA runtime configuration is set and where we set up user-namespaces in the previous posts.

There are 3 docker tuning changes we will make,

  • default-shm-size
  • max locked memory (LimitMEMLOCK)
  • stack limit (LimitSTACK)

Note: You need to be root to edit (or even look at) /etc/docker/daemon.json. I suggest that you just execute the following to get a root shell, and then use the editor of your choice to make the changes.

sudo -s

default-shm-size

The line to add to /etc/docker/daemon.json is,

"default-shm-size": "1G"

ulimit settings for “max locked memory” and “stack limit”

These are set using

"default-ulimits": {
	"memlock": { "name":"memlock", "soft":  -1, "hard": -1 },
	"stack"  : { "name":"stack", "soft": 67108864, "hard": 67108864 }
}

The complete /etc/docker/daemon.json file with these changes, including the settings for the NVIDIA runtime and User-Namespace remap, will look like this on my system (with my user account for userns-remap),

{
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "userns-remap": "kinghorn",
    "default-shm-size": "1G",
    "default-ulimits": {
        "memlock": { "name": "memlock", "soft": -1, "hard": -1 },
        "stack": { "name": "stack", "soft": 67108864, "hard": 67108864 }
    }
}

Please don’t just copy and paste that without changing the "userns-remap": "kinghorn" part, substituting in your user name from the User-namespaces setup you did in an earlier post. … but I don’t have to remind you of that, do I….

After you have made the changes to /etc/docker/daemon.json you will need to restart the docker service.

sudo systemctl restart docker.service
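A malformed daemon.json will keep the docker service from starting, so it is worth a quick sanity check of the file and of the service after the restart. These are standard tools on an Ubuntu install,

sudo python3 -m json.tool /etc/docker/daemon.json
sudo systemctl status docker.service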

Checking the changes

Before:

On my system if I start an Ubuntu 16.04 container,

docker run --rm -it ubuntu:16.04

and then check the values for these settings, I see,

df -h | grep shm

shm              64M     0   64M   0% /dev/shm

ulimit -a

max locked memory       (kbytes, -l) 64
stack size              (kbytes, -s) 8192

After:

On my system after I made these tuning changes an Ubuntu 16.04 container shows,

df -h | grep shm

shm             1.0G     0  1.0G   0% /dev/shm

and

ulimit -a

max locked memory       (kbytes, -l) unlimited
stack size              (kbytes, -s) 65536

Success!
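If you’d rather not start an interactive container just to check, a one-liner like this shows all three values in one shot,

docker run --rm ubuntu:16.04 bash -c 'df -h /dev/shm; ulimit -l; ulimit -s'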


Keyword reference for ulimit settings

I’m not suggesting you change any settings other than the ones we did above, but I’ve included the user limit keywords here just for reference. (There is a short daemon.json syntax example after the list.)

  • core – limits the core file size (KB)
  • data – max data size (KB)
  • fsize – maximum filesize (KB)
  • memlock – max locked-in-memory address space (KB)
  • nofile – max number of open files
  • rss – max resident set size (KB)
  • stack – max stack size (KB)
  • cpu – max CPU time (MIN)
  • nproc – max number of processes
  • as – address space limit (KB)
  • maxlogins – max number of logins for this user
  • maxsyslogins – max number of logins on the system
  • priority – the priority to run user process with
  • locks – max number of file locks the user can hold
  • sigpending – max number of pending signals
  • msgqueue – max memory used by POSIX message queues (bytes)
  • nice – max nice priority allowed to raise to values: [-20, 19]
  • rtprio – max realtime priority
  • chroot – change root to directory (Debian-specific)
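As an example of the daemon.json syntax for one of these keywords, here is how a “nofile” (max open files) default could be expressed. This is only an illustration of the format; the numbers are placeholders, not a recommendation,

"default-ulimits": {
    "nofile": { "name": "nofile", "soft": 65536, "hard": 65536 }
}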

That finishes up 5 blog posts for a complete setup for Docker, NVIDIA-Docker, User-Namespaces, and Configuration Tuning for using the NVIDIA NGC docker registry on your workstation. Enjoy!

Happy computing –dbk
