Compiling TensorFlow with GPU support on a MacBook Pro

OK, so TensorFlow is the popular new computational framework from Google everyone is raving about (check out this year’s TensorFlow Dev Summit video presentations explaining its cool features). Of course, a fun way to learn TensorFlow is to play with it on your own laptop, so that you can iterate quickly and work offline (perhaps build a hot dog recognition app). In these cases a GPU is very useful for training models more quickly. There used to be a tensorflow-gpu package that you could install in a snap on MacBook Pros with NVIDIA GPUs, but unfortunately it’s no longer supported due to some driver issues. Luckily, it’s still possible to compile TensorFlow with NVIDIA GPU support manually. I’ve hunted through a lot of different tutorials (1, 2, 3, 4 – this last one helped me the most) to bring you this hopefully complete description of how to set everything up correctly and get deep into learning (and I know, in 2 months this will probably become just another entry in that list of outdated tutorials, but that’s life 🙂 ).

For the record, I’m using a MacBook Pro 10,1 with an NVIDIA GeForce GT 650M and OS X 10.12. Hopefully, though, it will work on a couple of other configurations as well. In any case, let’s start…

Prerequisites

You probably already have these, but if not, be sure to install them:

brew update
brew upgrade
brew install python3

Requirements

The more “uncommon” requirements:

brew install python3 coreutils swig llvm bazel
brew cask install java

Tap the drivers:

brew tap caskroom/drivers

Then install CUDA:

brew cask install cuda

You can check what got installed with:

brew cask info cuda

Now, I found the CUDA version installed from Homebrew to be unsupported (which led to some errors), so go to Apple – System Preferences – CUDA and click the “Install CUDA Update” button. For me, the latest version was 8.0.90.

Now add these lines to your ~/.bash_profile file (as suggested here), so that all the CUDA binaries are available to you on the command line:

export PATH=/Developer/NVIDIA/CUDA-8.0/bin${PATH:+:${PATH}}
export DYLD_LIBRARY_PATH=/Developer/NVIDIA/CUDA-8.0/lib\
                         ${DYLD_LIBRARY_PATH:+:${DYLD_LIBRARY_PATH}}
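
Open a new terminal tab (so that the updated profile gets sourced) and do a quick sanity check that the CUDA toolchain is now visible on the command line:

which nvcc          # should print /Developer/NVIDIA/CUDA-8.0/bin/nvcc
nvcc --version      # should report release 8.0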

OK, so far so good. Now comes the slightly more annoying process. In order to install NVIDIA’s cuDNN thingy, you need to register on their website.

  • register by clicking “Join” on the NVIDIA developer portal (when they ask you for the opt-in spam, it’s actually mandatory 😛 )
  • download cuDNN v6 for OS X by going here, clicking “Download”, checking “I agree…” and selecting “Download cuDNN v6.0 (April 27, 2017), for CUDA 8.0”

Once you have the cudnn-8.0-osx-x64-v6.0.tar file (or similarly named) downloaded, it’s time to extract it and move the files into place:

tar zxvf ~/Downloads/cudnn-8.0-osx-x64-v6.0.tar
sudo mv -v cuda/lib/libcudnn* /usr/local/cuda/lib
sudo mv -v cuda/include/cudnn.h /usr/local/cuda/include
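
As a quick sanity check (my own habit, not part of the official cuDNN instructions), verify that the files landed where TensorFlow will look for them and that the header really is version 6 (the exact #define layout may differ between cuDNN releases):

ls /usr/local/cuda/lib/libcudnn*
grep -A 2 "define CUDNN_MAJOR" /usr/local/cuda/include/cudnn.h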

The CUDA nvcc thingy

You need to download an older version of the Xcode Command Line Tools, the one for Xcode 8.2 (something doesn’t work with 8.3 – nvcc fatal : The version ('80100') of the host compiler ('Apple clang') is not supported). I found the solution here. You can get Xcode CLT version 8.2 from here, install it (double click and next, next…) and temporarily switch to it (when you’re done with everything, you can switch back to the latest dev tools with sudo xcode-select -r):

sudo xcode-select --switch /Library/Developer/CommandLineTools

My clang & CLT versions are

$ clang --version
Apple LLVM version 8.0.0 (clang-800.0.42.1)
Target: x86_64-apple-darwin16.7.0
Thread model: posix
InstalledDir: /Library/Developer/CommandLineTools/usr/bin
$ pkgutil --pkg-info=com.apple.pkg.CLTools_Executables
package-id: com.apple.pkg.CLTools_Executables
version: 8.2.0.0.1.1480973914
volume: /
location: /
install-time: 1505388312
groups: com.apple.FindSystemFiles.pkg-group

(note that I had a previous full installation of Xcode)

Now you should build something with this nvcc thing to check that it works. Add this line to your ~/.bash_profile and open a new Terminal tab so that it gets sourced:

export DYLD_LIBRARY_PATH="/usr/local/cuda/lib":$DYLD_LIBRARY_PATH

Run this:

cd /usr/local/cuda/samples
sudo make -C 1_Utilities/deviceQuery

If it builds – awesome! Run the resulting binary (see below) to check that the driver is at the correct version and note the CUDA compute capability of your GPU – CUDA Capability Major/Minor version number: 3.0 in my case. We’ll need this 3.0 number (or whatever it is for you) later.
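
The samples Makefile typically drops the binary into a bin subfolder of the samples directory (the exact path may differ on your setup), so you can run it with something like:

/usr/local/cuda/samples/bin/x86_64/darwin/release/deviceQuery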

Disable SIP

Some said this wasn’t necessary, but I had various linking errors before finally trying this out. It supposedly weakens the OS security a bit, but it’s needed to give the compiler access to all the required libraries. Here’s a tutorial on how to do this. In short – reboot, press and hold “Cmd + R” when you hear the booting sound, open Utilities – Terminal, issue csrutil disable and restart your computer again.
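
After the reboot, you can confirm that the change took effect:

csrutil status   # should report: System Integrity Protection status: disabled.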

Install Python dependencies

I’m gonna assume you’ll install TensorFlow in an environment called s in your home folder (I have a setup similar to this, where s is my “system” with all the scientific libraries that I share between data analysis projects). Feel free to change this if you know what you’re doing.

cd # assuming that s will live in your home folder, so /Users/username/s
python3 -m venv s
source ~/s/bin/activate
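
To double-check that the virtualenv is active (assuming the s environment from above):

which python3   # should point to ~/s/bin/python3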

Now install the Python packages TensorFlow depends on.

pip install numpy wheel six

Get the TensorFlow source code

This should be easy enough:

cd # again, assuming you keep your code in your home directory
git clone --recurse-submodules https://github.com/tensorflow/tensorflow
cd tensorflow
git checkout r1.3 # I wanted to compile the official 1.3.0 release
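
You can confirm that you’re on the release branch before building:

git rev-parse --abbrev-ref HEAD   # should print r1.3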

Building

It’s necessary to set some environment variables in the shell session where you’ll run the build.

export CUDA_HOME=/usr/local/cuda
export DYLD_LIBRARY_PATH=/usr/local/cuda/lib:/usr/local/cuda/extras/CUPTI/lib
export LD_LIBRARY_PATH=$DYLD_LIBRARY_PATH
export PATH=$DYLD_LIBRARY_PATH:$PATH
export flags="--config=cuda --config=opt"
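
A quick sanity check (my own, not an official step) that these variables point at real files before kicking off a long build:

ls $CUDA_HOME/lib/libcudart* $CUDA_HOME/lib/libcudnn*
ls /usr/local/cuda/extras/CUPTI/lib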

As mentioned here, comment out the one line in the file tensorflow/third_party/gpus/cuda/BUILD.tpl that contains something like

# linkopts = ["-lgomp"]
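
If you want to locate that line quickly, a grep from inside the repository (the same file as above, just relative to the repo root) will do:

grep -n "lgomp" third_party/gpus/cuda/BUILD.tpl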

Now run the configuration. Be sure to set TF_CUDA_COMPUTE_CAPABILITIES to the CUDA Capability Major/Minor version number that you found in the earlier steps.

PYTHON_BIN_PATH=$HOME/s/bin/python3 \
CUDA_TOOLKIT_PATH="/usr/local/cuda" \
CUDNN_INSTALL_PATH="/usr/local/cuda" \
TF_UNOFFICIAL_SETTING=1 \
TF_NEED_CUDA=1 \
TF_CUDA_COMPUTE_CAPABILITIES="3.0" \
TF_CUDNN_VERSION="6" \
TF_CUDA_VERSION="8.0" \
TF_CUDA_VERSION_TOOLKIT=8.0 \
./configure

The configure script asks a number of questions. I answered most of them with the default suggestions by pressing Enter (be sure to answer the clang question with the default “no” as well – nominally we’ll be using gcc, though in fact it’s clang under the hood – some Apple thing… 😛 ). The only questions I answered with “yes” were the Google Cloud Platform and Hadoop ones. Those sounded like they might come in handy…

Now run the build step and go for a cup of tea or something (seriously, it’s gonna take like 45 minutes).

bazel build $flags --verbose_failures --action_env PATH --action_env LD_LIBRARY_PATH --action_env DYLD_LIBRARY_PATH //tensorflow/tools/pip_package:build_pip_package

[Image: the mandatory XKCD “Compiling” comic]

In the end, if there are no errors, you can almost party… just two more commands:

bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
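
The wheel ends up in /tmp/tensorflow_pkg – list the directory to see the exact filename you’ll need in the next step:

ls /tmp/tensorflow_pkg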

and finally to install the package (your exact path might vary):

pip install /tmp/tensorflow_pkg/tensorflow-1.3.0-cp36-cp36m-macosx_10_12_x86_64.whl
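
As a quick smoke test, check that the package imports and reports the expected version. Run this outside the tensorflow source directory, otherwise Python may pick up the local source tree instead of the installed package (a reader in the comments below ran into this as well):

cd /tmp
python3 -c "import tensorflow as tf; print(tf.__version__)"   # should print 1.3.0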

Hopefully, you have a GPU-enhanced TensorFlow on your MacBook Pro now 🙂

To continue using TensorFlow in other folders / terminal sessions, this is what the TensorFlow-related part of my ~/.bash_profile (or its zsh equivalent) looks like (my full dotfiles can be found here):

export PATH=/Developer/NVIDIA/CUDA-8.0/bin${PATH:+:${PATH}}
export DYLD_LIBRARY_PATH=/Developer/NVIDIA/CUDA-8.0/lib:/usr/local/cuda/lib:/usr/local/cuda/extras/CUPTI/lib${DYLD_LIBRARY_PATH:+:${DYLD_LIBRARY_PATH}}
export LD_LIBRARY_PATH=$DYLD_LIBRARY_PATH:$LD_LIBRARY_PATH

Go ahead and run the test script:

import tensorflow as tf

# Creates a graph.
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)

# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))

# Runs the op.
print(sess.run(c))
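
You can save the snippet above as, say, gpu_test.py (any filename will do) and run it from outside the tensorflow source directory:

python3 gpu_test.py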

(for me it errored out the first time for some reason – yay, stable software… – so give it another go if it doesn’t work right away)

If it works, you’ll see lots of mentions of your graphics card (GeForce GT 650M in my case) and a matrix as the end result. You’ll suddenly be filled with joy! Go have another cup of tea, you’ve earned it 🙂


10 thoughts on “Compiling TensorFlow with GPU support on a MacBook Pro”

  1. Good detailed article! I see you might have a typo on the line after “Now run the configuration.” By the way, were you able to get it to work without errors using cuDNN 6.0 (vs 5.0)?

    1. Thanks, Phil. You mean the double `//` ? Because that seems to be correct based on the official steps.

      Yes, I got it to work. I was using it for Kaggle yesterday, but then realised that unfortunately my 1 GB GPU is probably not suitable for most practical cases – you need a beefier card or cloud VMs is what I was told. Still, it’s handy to be able to test some smaller examples locally.

  2. Hey, thanks for that helpful tutorial.

    Do you know if this also works with CUDA 9.0? I’m basically stuck at the build step, where several errors occur. From my own research it seems there might be a problem with NCCL (?). I think my problem relates to this issue: https://github.com/tensorflow/tensorflow/issues/12489

    where a fix is suggested but I’m not sure how to do that exactly.

    Do you happen to know a solution?

  3. Thanks, this now appears to work for a Mac Pro 3,1 running Sierra with an NVIDIA GeForce GT 730.

    Exiting the tensorflow directory after compilation was needed for the tf example to work properly.
