Compiling TensorFlow with GPU support on a MacBook Pro

OK, so TensorFlow is the popular new computational framework from Google that everyone is raving about (check out this year’s TensorFlow Dev Summit video presentations explaining its cool features). A fun way to learn TensorFlow is to play with it on your own laptop, so that you can iterate quickly and work offline (perhaps build a hot dog recognition app). For that, a GPU is very useful for training models quickly. There used to be a tensorflow-gpu package that you could install in a snap on MacBook Pros with NVIDIA GPUs, but unfortunately it’s no longer supported due to driver issues. Luckily, it’s still possible to compile TensorFlow with NVIDIA GPU support manually. I’ve hunted through a lot of different tutorials (1, 2, 3, 4 – this last one helped me the most) to bring you this hopefully complete description of how to set everything up correctly and get deep into learning (which, I know, will in two months probably become just another entry in that list of outdated tutorials, but that’s life 🙂 ).
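
Once the manual build described in this post is installed, a quick sanity check confirms that TensorFlow actually sees the GPU. Here is a minimal sketch using the TF 1.x-era session API (current at the time of writing); it pins a small matrix multiplication to the GPU and will fail loudly if GPU support is missing:

    import tensorflow as tf

    # Force a small computation onto the GPU; this raises an error
    # if the build has no usable CUDA device.
    with tf.device('/gpu:0'):
        a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
        b = tf.constant([[1.0, 1.0], [0.0, 1.0]])
        c = tf.matmul(a, b)

    # log_device_placement prints which device each op actually ran on,
    # so you can verify the matmul landed on /gpu:0.
    with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
        print(sess.run(c))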

For the record, I’m using a MacBook Pro 10,1 with an NVIDIA GT 650M and macOS 10.12 (Sierra). Hopefully it will work on other similar configurations as well. In any case, let’s start…