Set up a Deep Learning workstation with Ubuntu 22.04

Venky
5 min read · Jul 26, 2022

This article assumes you have basic-to-intermediate UNIX/Linux experience. Also make sure your machine has CUDA-enabled GPU devices so you can complete this setup and run the tests on them.

Get your system updates ready

sudo apt update && sudo apt upgrade -y
sudo apt install git curl vim build-essential gcc-9 g++-9 python-is-python3 python3-virtualenv

We do not need to reboot Ubuntu after the upgrade, but it is recommended to do so.
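
If you decide to reboot right away:

sudo reboot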

Ubuntu 22.04 comes with out-of-the-box driver support for NVIDIA graphics cards. So, we can just use the metapackages. Go to “Software & Updates” then select the “Additional Drivers” tab.

It should look like below. (Always choose the proprietary, tested entry with the latest driver number.)
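
If you prefer the command line to the GUI, here is an alternative sketch using the ubuntu-drivers tool (the recommended package name will vary with your GPU):

# list detected GPUs and the recommended driver package
ubuntu-drivers devices
# install the recommended proprietary driver
sudo ubuntu-drivers autoinstall
# after a reboot, confirm the driver is loaded
nvidia-smi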

Install CUDA:

# install dependencies
sudo apt install git curl vim build-essential gcc-9 g++-9 python-is-python3 python3-virtualenv
# download cuda 11.2 package
wget https://developer.download.nvidia.com/compute/cuda/11.2.0/local_installers/cuda_11.2.0_460.27.04_linux.run
# run the installer and follow instructions on screen
sudo sh cuda_11.2.0_460.27.04_linux.run

If the installer fails a gcc version check, run the command below to find out which gcc versions are available:

dpkg --list | grep compiler

Note: Make sure at least one gcc version below 10 is available, since CUDA 11.2 is only compatible with gcc < 10.

On my machine I have gcc 9 installed, so I set 9 as the preferred version:

# SET g++ compiler version
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-9 100
# SET gcc version; 100=priority
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-9 100
# RUN the installer again
sudo sh cuda_11.2.0_460.27.04_linux.run

You may want to read more about how update-alternatives priorities work in the update-alternatives man page.
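
As a side note, if you later need to switch between installed compiler versions, update-alternatives can also do it interactively:

# pick the active gcc/g++ from the registered alternatives
sudo update-alternatives --config gcc
sudo update-alternatives --config g++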

Note: Make sure to uncheck the Driver by pressing the ‘space’ key, as we already have the driver installed.
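
Alternatively, a hedged sketch of a non-interactive install that skips the bundled driver entirely, using the runfile installer's --silent and --toolkit options:

# install only the CUDA toolkit, skipping the packaged 460 driver
sudo sh cuda_11.2.0_460.27.04_linux.run --silent --toolkit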

Now, update the environment variables by adding the following lines to ~/.bashrc or ~/.zshrc:

export PATH=/usr/local/cuda-11.2/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-11.2/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
export CUDA_HOME=/usr/local/cuda
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
# source the config
source ~/.bashrc    # or: source ~/.zshrc
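
As a quick sanity check that the new variables are picked up:

echo $CUDA_HOME
which nvcc
# which nvcc should print /usr/local/cuda-11.2/bin/nvcc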

Test CUDA installation

The CUDA installation comes with some sample programs that are copied to $HOME, so let’s navigate to the samples and run the deviceQuery program:

cd ~/NVIDIA_CUDA-11.2_Samples/1_Utilities/deviceQuery
make all
make run

If the installation succeeded, you should see output similar to what is shown below; deviceQuery ends with Result = PASS.

Note: I have 2 GPUs installed, so both appear in the output; if you have a single device, you will see only that one.

Install cuDNN

You will have to register an NVIDIA developer account and verify the e-mail used for registration, then download the cuDNN 8.1.1 archive, which is compatible with CUDA 11.2 at the time of this installation. (If you are trying this setup at a later date, check the cuDNN support matrix for the correct archive version for your CUDA version.)

Once you have the correct archive downloaded, navigate to the directory where it is stored and run the commands below to proceed with the installation:

tar -zvxf cudnn-11.2-linux-x64-v8.1.1.33.tgz
sudo cp -P cuda/include/cudnn*.h /usr/local/cuda-11.2/include
sudo cp -P cuda/lib64/libcudnn* /usr/local/cuda-11.2/lib64/
sudo chmod a+r /usr/local/cuda-11.2/include/cudnn*.h /usr/local/cuda-11.2/lib64/libcudnn*

Verify the CUDA toolkit and cuDNN installation:

# check for nvidia cuda compiler (nvcc) version
nvcc -V
# you should see similar output
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Mon_Nov_30_19:08:53_PST_2020
Cuda compilation tools, release 11.2, V11.2.67
Build cuda_11.2.r11.2/compiler.29373293_0
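
To verify cuDNN itself, a small check assuming the headers were copied as above (cuDNN 8.x keeps its version macros in cudnn_version.h):

# print the cuDNN version macros; expect 8 / 1 / 1 for this archive
grep -A 2 '#define CUDNN_MAJOR' /usr/local/cuda-11.2/include/cudnn_version.h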

Set up dependencies:

sudo apt install build-essential pkg-config cmake cmake-qt-gui ninja-build valgrind
sudo apt install python3 python3-wheel python3-pip python3-venv python3-dev python3-setuptools
# create a virtual env for the gpu-specific setup
python3 -m venv ~/venv/torchgpu
# activate venv
source ~/venv/torchgpu/bin/activate
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113
pip install ipykernel
python -m ipykernel install --user --name TORCH-GPU --display-name "PyTorch GPU"
# if you see any warning/error w.r.t. openGL and glut during make all/run of any sample program, run:
sudo apt-get install freeglut3 freeglut3-dev

Quick PyTorch test:
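
With the torchgpu venv active, a minimal check that the cu113 wheel can actually see the GPUs (device count and names depend on your hardware):

# confirm torch sees the driver and can run a small op on the GPU
python -c "import torch; print(torch.__version__, torch.version.cuda)"
python -c "import torch; print(torch.cuda.is_available(), torch.cuda.device_count())"
python -c "import torch; x = torch.rand(3, 3).cuda(); print((x @ x).sum().item())"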

Additionally, run one of the CUDA sample simulation programs that exercises the GPU:

# navigate to sample program
cd $HOME/NVIDIA_CUDA-11.2_Samples/5_Simulations/smokeParticles
make all
make run
# a simulation window should appear, rendering at around 60 fps

While this simulation is running, open a new terminal session to see what’s happening with each GPU in the machine:

# run this command
nvidia-smi
# you should see output similar to the snapshot below

We can see that our sample program, smokeParticles, is running on the GPU devices.
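
If you want that view to refresh continuously while the simulation runs, wrap it in watch:

# refresh GPU utilization every second
watch -n 1 nvidia-smi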

That’s all for this post.

Thanks for checking this out, let me know how your setup goes in comments :-).
