Creating a Virtual Environment
As Tensorflow is very finicky and has a lot of dependencies, I highly recommend setting up a virtual environment for Tensorflow using Python version 3.6.12. You can set this up using python-venv or Anaconda. In this section I will show how to do this using Anaconda and how to link your new virtual environment to Jupyter.
First off, to create your virtual environment, run this in your terminal:
conda create --name tensorflow python=3.6
Note that I chose to name my virtual environment tensorflow, but you can name it whatever you would like. Go through the prompts and press y when asked. Now activate your virtual environment by typing:
conda activate tensorflow
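If you want to double-check that the environment is active and using the expected interpreter, a quick sanity check (assuming a standard conda setup) is:

```shell
# The active environment is marked with a * in the list.
conda info --envs
# Should report Python 3.6.x from inside the tensorflow environment.
python --version
```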
After this, we need to link our virtual environment to Jupyter. To do this, run the following:
pip install --user ipykernel
python -m ipykernel install --user --name=tensorflow
jupyter kernelspec list
At the end you should see that tensorflow is an available kernel for Jupyter!
To install Tensorflow all you need to do is this:
pip install tensorflow
That’s it, you now have Tensorflow installed!
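To sanity-check the install before moving on, you can try importing Tensorflow and printing its version (this part does not require a GPU):

```shell
# If this prints a version number without errors, the pip install worked.
python -c "import tensorflow as tf; print(tf.__version__)"
```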
Getting all the Nvidia Dependencies
First off you will need to get the Nvidia GPU drivers; I will be showing how to do this through the command line. To start, type the following in the command line:

ubuntu-drivers devices
You will get something that looks like this; the exact output may differ depending on your machine:
== /sys/devices/pci0000:00/0000:00:02.0/0000:03:00.0 ==
modalias : pci:v000010DEd00002204sv0000196Esd0000136Abc03sc00i00
vendor   : NVIDIA Corporation
manual_install: True
driver   : nvidia-driver-455 - distro non-free recommended
driver   : xserver-xorg-video-nouveau - distro free builtin
You will want to install the recommended driver; in this case you can do that by typing in the command line:
sudo apt install nvidia-driver-455
Note that in order for tensorflow to work you will need nvidia-driver-450 or newer.
Now reboot your system so the new driver loads.
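After the reboot, you can confirm the driver is loaded with nvidia-smi:

```shell
# Should print a table showing the driver version (455.x in this case)
# and your GPU model. If it errors out, the driver did not install correctly.
nvidia-smi
```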
Next we will need to get the CUDA Toolkit and cuDNN. We will be using CUDA 11.0 in this guide. To get the CUDA Toolkit run the following in the command line:
$ wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin
$ sudo mv cuda-ubuntu2004.pin /etc/apt/preferences.d/cuda-repository-pin-600
$ wget http://developer.download.nvidia.com/compute/cuda/11.0.2/local_installers/cuda-repo-ubuntu2004-11-0-local_11.0.2-450.51.05-1_amd64.deb
$ sudo dpkg -i cuda-repo-ubuntu2004-11-0-local_11.0.2-450.51.05-1_amd64.deb
$ sudo apt-key add /var/cuda-repo-ubuntu2004-11-0-local/7fa2af80.pub
$ sudo apt-get update
$ sudo apt-get -y install cuda
Now be sure to add CUDA to your PATH. Assuming the default install location of /usr/local/cuda-11.0, add these lines to your ~/.bashrc (then open a new terminal or run source ~/.bashrc):

export PATH=/usr/local/cuda-11.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-11.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
You can now check if CUDA was properly installed by typing this in the command line:

nvcc --version
You should get an output like this:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Thu_Jun_11_22:26:38_PDT_2020
Cuda compilation tools, release 11.0, V11.0.194
Build cuda_11.0_bu.TC445_37.28540450_0
Next up we will get the cuDNN SDK. To do this you will need to sign up for an account by clicking here. After you’ve made your account, you can go to the download page here. You will want to get the cuDNN Runtime and Developer libraries for Ubuntu 20.04 for CUDA 11.0. Once you’ve downloaded the libraries, go to the folder where you downloaded them and run the following on the command line:
sudo dpkg -i libcudnn8_8.x.x.x-1+cuda11.0_amd64.deb
sudo dpkg -i libcudnn8-dev_8.x.x.x-1+cuda11.0_amd64.deb

Replace 8.x.x.x with the exact cuDNN version in the filenames you downloaded.
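Once the packages finish installing, you can confirm they registered with the package manager:

```shell
# Lists the installed cuDNN packages and their versions; you should see
# both the runtime (libcudnn8) and developer (libcudnn8-dev) entries.
dpkg -l | grep libcudnn
```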
These might take a while so go grab a drink of water in the meantime.
Finally, the last piece of our install will be TensorRT. We will be using the TAR package of TensorRT 7.2.2 for CUDA 11.0, which you can download by clicking here. Once you’ve downloaded it, unpack the archive by typing in the command line:
tar -xvf TensorRT-7.2.2.x.Ubuntu-18.04.x86_64-gnu.cuda-11.0.cudnn8.0.tar.gz

Substitute 7.2.2.x with the exact version in the filename you downloaded.
Again, this might take a while, so go ahead and go get another drink of water.
After it’s finished unpacking, run the following in the command line:
cd TensorRT-7.2.2.x/python
python3.6 -m pip install tensorrt-7.2.2.x-cp36-none-linux_x86_64.whl

As before, substitute 7.2.2.x with the exact version from your download.
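You can verify that the TensorRT Python bindings installed correctly with a quick import. Note that because this is the TAR install, you may also need to add the unpacked TensorRT lib directory to LD_LIBRARY_PATH for the import to succeed (the exact path depends on where you unpacked the archive):

```shell
# Add the TensorRT shared libraries to the loader path (adjust the path
# to wherever you unpacked the archive), then check the import works.
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/TensorRT-7.2.2.x/lib
python3.6 -c "import tensorrt; print(tensorrt.__version__)"
```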
Testing out that Tensorflow can recognize your GPU
You can run the following code in Python to see if your GPU is visible to Tensorflow.
import tensorflow as tf
tf.config.experimental.list_physical_devices('GPU')
After you get an output showing that your GPU is recognized by Tensorflow, you’ll want to run the following to avoid out-of-memory errors when fitting models.
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)

Note that set_memory_growth takes a single device, so we loop over the list rather than passing it in directly.
After you’ve run these you can fit any type of neural network using Tensorflow on your Nvidia GPU!
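To put it all together, here is a minimal end-to-end sketch: it enables memory growth and fits a tiny model on toy data. The data and architecture are placeholders I've made up for illustration, not anything from the setup above; any Keras model would do.

```python
import numpy as np
import tensorflow as tf

# Enable memory growth on each detected GPU (a safe no-op on CPU-only machines).
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)

# Toy data: 100 samples, 4 features, binary labels.
x = np.random.rand(100, 4).astype("float32")
y = np.random.randint(0, 2, size=(100,))

# A tiny binary classifier, just to exercise the fit loop.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
history = model.fit(x, y, epochs=1, batch_size=32, verbose=0)
print(history.history["loss"])
```

If Tensorflow found your GPU, this training run will execute on it automatically, no extra code needed.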