TensorFlow 2.0 with GPU on Debian/sid

Some time ago I wrote about how to get TensorFlow (1.x) running on current Debian/sid. It turned out that those instructions no longer work and need an update, so here it is: getting the most up-to-date TensorFlow 2.0 with NVIDIA GPU support running on Debian/sid.

Step 1: Install CUDA 10.0

Follow more or less the instructions here and run:

wget -O- https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub | sudo tee /etc/apt/trusted.gpg.d/nvidia-cuda.asc
echo "deb http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/ /" | sudo tee /etc/apt/sources.list.d/nvidia-cuda.list
sudo apt-get update
sudo apt-get install cuda-libraries-10-0

Warning! Don’t install the 10-1 version since the TensorFlow binaries need 10.0.

This will install lots of libraries into /usr/local/cuda-10.0 and add that directory to the ld.so search path by creating the file /etc/ld.so.conf.d/cuda-10-0.conf.
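
To verify that the linker actually picks up the new directory, a quick check (my own addition, not part of NVIDIA's instructions) is:

sudo ldconfig
ldconfig -p | grep libcudart

The second command should show an entry for libcudart.so.10.0 pointing into /usr/local/cuda-10.0.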

Step 2: Install cuDNN for CUDA 10.0

One dependency that is difficult to satisfy is cuDNN. In our case we need the version 7 library for CUDA 10.0. Downloading these files requires an NVIDIA developer account, which is quick and painless to set up. After that, go to the cuDNN page, select Download cuDNN v7.N.N (xxxx NN, YYYY), for CUDA 10.0, and then cuDNN Runtime Library for Ubuntu18.04 (Deb).

As of today this downloads the file libcudnn7_7.6.4.38-1+cuda10.0_amd64.deb, which is then installed with dpkg -i libcudnn7_7.6.4.38-1+cuda10.0_amd64.deb.
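
A quick sanity check (again my own addition) that the library is visible system-wide:

dpkg -l libcudnn7
ldconfig -p | grep libcudnn

The Ubuntu package installs libcudnn.so.7 under /usr/lib/x86_64-linux-gnu, which is already on the default linker path.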

Step 3: Install Tensorflow for GPU

This is the easiest step and can be done as explained on the TensorFlow installation page using

pip3 install --upgrade tensorflow-gpu

This will install several other dependencies, too.
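
If you want to be sure pip picks exactly the 2.0 release (version number as of this writing), you can pin it explicitly and check the result:

pip3 install tensorflow-gpu==2.0.0
python3 -c "import tensorflow as tf; print(tf.__version__)"

The second command should print 2.0.0.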

Step 4: Check that everything works

Last but not least, make sure that TensorFlow can be loaded and finds your GPU. This can be done with the following one-liner, which in my case produces the output below:

$ python3 -c "import tensorflow as tf;print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
....(lots of output)
2019-10-04 17:29:26.020013: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3390 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
tf.Tensor(444.98087, shape=(), dtype=float32)
$
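
If you only care about GPU detection and want to skip the computation, the following one-liner, using the (in TensorFlow 2.0 still experimental) device-listing API, should print a non-empty list, something like [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]:

$ python3 -c "import tensorflow as tf; print(tf.config.experimental.list_physical_devices('GPU'))"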

I haven’t tried to get R working with the newest TensorFlow/Keras combination, though. Hope the above helps.

