How To Install CUDA 10.1 on Ubuntu 19.04

Introduction

Ubuntu 19.04 has entered beta as I write this and will be released in a few weeks. I decided to install it and give it a try. My initial impression is very positive. Subjectively, it feels like it has been optimized for performance. It is the first Linux distribution release using the new 5.0 kernel. Everything is up-to-date. There is a lot to like.

Even though this is a xx.04 release, it is not an LTS (long term support) release. It is a short-term release that will be supported for 9 months. The next LTS release will be 20.04, two years after the current LTS, Ubuntu 18.04. For a stable "production" install I still strongly recommend using Ubuntu 18.04.

I consider Ubuntu 19.04 an experimental release and that is exactly what I am doing with it, experimenting. I wanted to see if I could get some currently unsupported packages running. So far I have installed CUDA 10.1, docker 18.09.4 and NVIDIA-docker 2.0.3 and run TensorFlow 2 alpha with GPU support. They are all working fine. In this post I'll just go over how to get CUDA 10.1 running on Ubuntu 19.04. Fortunately, it was straightforward to get working.


“Teaser” info output from this Ubuntu 19.04 install

kinghorn@u19:~$ lsb_release -a

Distributor ID:	Ubuntu
Description:	Ubuntu Disco Dingo (development branch)
Release:	19.04
Codename:	disco
kinghorn@u19:~$ uname -a
Linux u19 5.0.0-7-generic #8-Ubuntu SMP Mon Mar 4 16:27:25 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
kinghorn@u19:~$ gcc --version
gcc (Ubuntu 8.3.0-3ubuntu1) 8.3.0
kinghorn@u19:~$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Fri_Feb__8_19:08:17_PST_2019
Cuda compilation tools, release 10.1, V10.1.105
kinghorn@u19:~$ docker run --runtime=nvidia -u $(id -u):$(id -g) --rm -it tensorflow/tensorflow:2.0.0a0-gpu-py3 bash

________                               _______________                
___  __/__________________________________  ____/__  /________      __
__  /  _  _ \_  __ \_  ___/  __ \_  ___/_  /_   __  /_  __ \_ | /| / /
_  /   /  __/  / / /(__  )/ /_/ /  /   _  __/   _  / / /_/ /_ |/ |/ /
/_/    \___//_/ /_//____/ \____//_/    /_/      /_/  \____/____/|__/


You are running this container as user with ID 1000 and group 1000,
which should map to the ID and group for your user on the Docker host. Great!

tf-docker / > python -c "import tensorflow as tf; print(tf.__version__)"
2.0.0-alpha0

Steps to install CUDA 10.1 on Ubuntu 19.04


Step 1) Get Ubuntu 19.04 installed!

The first thing I tried for installing Ubuntu 19.04 was the "Desktop" ISO installer. That failed! It hung during the install and I couldn't get it to work (I didn't try very hard to make it work since I have an easier method). In fairness, this was the "nightly" ISO build from March 26th, 2019, a few days before the beta release. By the time you read this the "beta" will be out (or the full release if you are reading this after mid-April); hopefully it will install from the "Desktop/Live" ISO without trouble.

I used my fallback "standard" method for installing Ubuntu: I use the server installer and the wonderful Ubuntu tool `tasksel` to install a desktop. I installed my favorite MATE desktop. You can read how to do this in the following post,

The Best Way To Install Ubuntu 18.04 with NVIDIA Drivers and any Desktop Flavor. That method almost always works and those instructions for 18.04 are still valid for 19.04. But, if you follow the guide linked above, please see the next step about the display driver.
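
If you go the server-install route, the desktop portion is short. Here is a minimal sketch of the tasksel part (my assumption is that you want the MATE desktop like I do; tasksel's menu lists the other desktop flavors as well),

sudo apt-get install tasksel
sudo tasksel

Select the desktop you want from the menu, let it install, and then reboot into the new desktop.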

Step 2) Get the NVIDIA driver installed

You will need the NVIDIA display driver version 410 or greater installed to work with CUDA 10.1. Otherwise you will get the dreaded "Status: CUDA driver version is insufficient for CUDA runtime version" error. I recommend using the most recent driver. The simplest way to install the driver is from the "graphics-drivers" ppa.

sudo add-apt-repository ppa:graphics-drivers/ppa

Install dependencies for the system to build the kernel modules,

sudo apt-get install dkms build-essential

Then install the driver. (418 was the most recent at the time of this writing. If you type the command below and hit tab after `nvidia-driver-` you will see a list of all the available driver versions in the ppa.)

sudo apt-get update
sudo apt-get install nvidia-driver-418

After the driver install go ahead and reboot.

sudo shutdown -r now
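
After the reboot, a quick sanity check (my addition, not strictly required) is to run nvidia-smi and confirm that it reports a driver version of 410 or greater,

nvidia-smi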

Step 3) Install CUDA “dependencies”

There are a few dependencies that get installed when you run the full CUDA deb file, but since we are not going to use the deb file you will want to install them separately. It's simple since everything needed comes from just four packages,

sudo apt-get install freeglut3 freeglut3-dev libxi-dev libxmu-dev

Those packages provide the needed GL, GLU, Xi, and Xmu libraries, along with several other libraries that get pulled in as dependencies.

Step 4) Get the CUDA “run” file installer (Use the Ubuntu 18.10 installer)

Go to the CUDA Zone and click the Download Now button. Then click through the selection buttons, choosing the Ubuntu 18.10 "runfile (local)" installer, until you get the following,

[Screenshot of the CUDA download page with the Ubuntu 18.10 runfile installer selected]

Download that.

Step 5) Run the “runfile” to install the CUDA toolkit and samples

This is where we get the CUDA developer toolkit and samples onto the system. We will not install the included display driver since the latest driver was installed in step 2). You can use `sh` to run the shell script (".run" file),

sudo sh cuda_10.1.105_418.39_linux.run

This is a new installer and it is much slower to start up than the older scripts (in case you have done this before).

You will be asked to accept the EULA, of course, after which you will be presented with a "selector". Un-check the "Driver" box, then select "Install" and hit "Enter".

┌──────────────────────────────────────────────────────────────────────────────┐
│ CUDA Installer                                                               │
│ - [ ] Driver                                                                 │
│      [ ] 418.39                                                              │
│ + [X] CUDA Toolkit 10.1                                                      │
│   [X] CUDA Samples 10.1                                                      │
│   [X] CUDA Demo Suite 10.1                                                   │
│   [X] CUDA Documentation 10.1                                                │
│   Install                                                                    │
│   Options                                                                    │
│                                                                              │
│                                                                              │
│                                                                              │
│                                                                              │
│                                                                              │
│                                                                              │
│                                                                              │
│                                                                              │
│                                                                              │
│                                                                              │
│                                                                              │
│                                                                              │
│                                                                              │
│ Up/Down: Move | Left/Right: Expand | 'Enter': Select | 'A': Advanced options │
└──────────────────────────────────────────────────────────────────────────────┘

This will do the "right thing". It will,

  • install the CUDA toolkit in /usr/local/cuda-10.1
  • create the symbolic link /usr/local/cuda pointing to that versioned directory
  • install the samples in /usr/local/cuda/samples and in your home directory under NVIDIA_CUDA-10.1_Samples
  • add a library path config file,

cat /etc/ld.so.conf.d/cuda-10-1.conf
/usr/local/cuda-10.1/targets/x86_64-linux/lib
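
If you want to double-check that layout (an optional step I like to do), list the install directory and the symlink,

ls -ld /usr/local/cuda /usr/local/cuda-10.1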

It does not set up your PATH for the toolkit. That's the next section.

Step 6) Setup your environment variables

There are two good ways to set up your environment variables so you can use CUDA.

  • Setup system environment
  • Setup user environment

In the past I would typically do a system-wide environment configuration. You can do that even for a single-user workstation, but you might prefer instead to create a small script that sets things up just for the terminal you are working in when you need it.

System-wide alternative

To configure the CUDA environment for all users (and applications) on your system, create the file (use sudo and a text editor of your choice)

/etc/profile.d/cuda.sh

with the following content,

export PATH=$PATH:/usr/local/cuda/bin
export CUDADIR=/usr/local/cuda

Environment scripts in /etc/profile.d/ get sourced automatically when you log in, so the settings will be in place for every shell you start after that. It's automatic.

The next time you log in, your shells will start with CUDA on your path and be ready to use. If you want to load that environment in a shell right now, without logging out, then just do,

source /etc/profile.d/cuda.sh
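
You can verify that the toolkit is now on your PATH (the same check that appears in the "teaser" output at the top of this post),

nvcc --version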

Note on LIBRARY PATH:

The cuda-toolkit install did add a .conf file to /etc/ld.so.conf.d, but what it added is not ideal and seems to not always work right. If you are doing a system-wide environment configuration I suggest the following:

Move the installed conf file out of the way,

sudo mv /etc/ld.so.conf.d/cuda-10-1.conf  /etc/ld.so.conf.d/cuda-10-1.conf-orig

Then create, (using sudo and your editor of choice), the file

/etc/ld.so.conf.d/cuda.conf

containing,

/usr/local/cuda/lib64

Then run

sudo ldconfig
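
To confirm that the runtime library is now found through the new path (an optional check on my part), you can query the linker cache,

ldconfig -p | grep libcudart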

This cuda.conf file in /etc/ld.so.conf.d/ points at the symbolic link in /usr/local, so it will still be correct even if you change which CUDA version the link points to. (This is my "normal" way of setting up system-wide environments for CUDA.)

User per terminal alternative

If you want to be able to activate your CUDA environment only when and where you need it, then this is a way to do that. You might prefer this method over a system-wide environment since it will keep your PATH cleaner and allow easy management of multiple CUDA versions. If you decide to use the ideas in this post to install another CUDA version, say 9.2, alongside your 10.1, this will make it easier to switch back and forth.

For a localized user CUDA environment create the following simple script. You don't need to use sudo for this and you can keep the script anywhere in your home directory. You will just need to "source" it when you want a CUDA dev environment.

I'll create the file with the name `cuda10.1-env`. Add the following lines to this file,

export PATH=$PATH:/usr/local/cuda-10.1/bin
export CUDADIR=/usr/local/cuda-10.1
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-10.1/lib64

Note: I explicitly used the full versioned path, i.e. `/usr/local/cuda-10.1`, rather than the symbolic link `/usr/local/cuda`. You can use the symbolic link path if you want. I just did this in case I want to install another version of CUDA and make another environment script pointing to that different version.

Now when you want your CUDA dev environment just do `source cuda10.1-env`. That will set those environment variables in your current shell. (You could copy that file to your working directory or give the full path to it when you use the `source` command.)
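
For example, assuming you kept the script in your home directory, a session might look like,

source ~/cuda10.1-env
which nvcc

which should report /usr/local/cuda-10.1/bin/nvcc.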

Step 7) Test CUDA by building the “samples”

Let's make sure everything is working correctly. You can use the copy of the samples that the installer put in your home directory under `NVIDIA_CUDA-10.1_Samples`, or copy the samples from `/usr/local/cuda/samples`.

cd  ~/NVIDIA_CUDA-10.1_Samples

source cuda10.1-env

make -j4

Running that make command will compile and link all of the source examples as specified in the Makefile. (The -j4 just means run 4 "jobs"; make can build objects in parallel, so you can speed up the build by using more processes.)

After everything finishes building you can `cd` to `bin/x86_64/linux/release/` and see all of the sample executables. All of the samples seem to have built without error even though this is an unsupported Ubuntu version. I ran several of the programs and they were working as expected, including the ones using OpenGL graphics.

Just because the samples built OK doesn't mean there aren't any problems with the install, but it is a really good indication that you can proceed with confidence in your development work!
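
As one quick spot check (my suggestion; deviceQuery is one of the standard samples), run deviceQuery from the release directory and confirm it lists your GPU and ends with "Result = PASS",

cd bin/x86_64/linux/release
./deviceQuery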

Extras not discussed … docker, nvidia-docker, TensorFlow

I have only talked about setting up CUDA 10.1 on Ubuntu 19.04. I also installed the latest docker and nvidia-docker. This was done using repo setups based on "bionic" i.e. Ubuntu 18.04. Those deb packages installed fine on 19.04. My basic procedure for installing and setting up docker is presented in a series of 5 posts from the beginning of 2018 (still relevant): How-To Setup NVIDIA Docker and NGC Registry on your Workstation – Part 5 Docker Performance and Resource Tuning. That post has links to the first 4.

After setting up docker and nvidia-docker I ran TensorFlow 2.0 alpha from Google's docker image on DockerHub. I could also have attempted to build TensorFlow 2 alpha against the CUDA 10.1 install here, but I'm not that brave. It would be best to stick with docker or an Ubuntu 18.04 setup for that.

[I did try installing TensorFlow from the pip package but ended up with a segmentation fault on a system library. I don't recommend trying this.]

Recommendation

Like I said at the beginning of the post, this is an experimental setup. Ubuntu 19.04 looks like it will be a good Linux platform and it has all the latest packages, which will be tempting for you (since you are reading this). My serious recommendation: do it if you want to experiment with a bleeding-edge dev environment; otherwise stick with Ubuntu 18.04. Your stable "production" platform should be Ubuntu 18.04. It will be supported for several more years, which means it will remain an attractive default Linux platform for software builds. It should stay stable and well supported.

I will be doing more posts on setting up Machine Learning / AI / Data Science / HPC etc. configurations. That includes setups for Windows 10 and Ubuntu 18.04. I probably won't do anything else about Ubuntu 19.04 unless I get talked into it, lol. It does look like a good release to me. Congratulations to Canonical and the Ubuntu team!

Happy computing! –dbk

