Building TensorFlow on the NVIDIA Jetson TX1 is a little more complicated than some of the installations we have done in the past. Looky here:
TensorFlow is one of the major deep learning systems. Created at Google, it is an open-source software library for machine intelligence. The Jetson TX1 ships with TensorRT, NVIDIA’s deep learning inference runtime. TensorRT is what is called an “Inference Engine”, the idea being that large machine learning systems can train models elsewhere, which are then transferred over and “run” on the Jetson.
However, some people would like to use the entire TensorFlow system on a Jetson. This has been difficult for a few reasons. First, TensorFlow binaries aren’t generally available for ARM-based processors like the Tegra TX1. Second, actually compiling TensorFlow takes more system resources than are normally available on the Jetson TX1. Third, TensorFlow itself is changing rapidly (it’s only a year old), and the experience has been a little like building on quicksand.
In this article, we’ll go over the steps to build TensorFlow r0.11 on the Jetson TX1. The build takes three hours or so.
Note: You may want to read through this article and then read the secret article: Install TensorFlow on TX1. Just a thought. But you didn’t hear it from me.
Note: Jan. 17, 2017 – Some issues have been addressed as the installation has changed over the last few weeks. Following the instructions in this article incorporates the changes. You can read about the changes here: Building TensorFlow Update
This article assumes that JetPack 2.3.1 is used to flash the Jetson TX1. Install:
- L4T 24.2.1, an Ubuntu 16.04 64-bit variant (aarch64)
- CUDA 8.0
- cuDNN 5.1.5
Note that the library locations when installed by JetPack may not match a manual installation. TensorFlow will use CUDA and cuDNN in this build.
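Before starting, it is worth verifying what JetPack actually installed. The cudnn.h path below is an assumption based on a default JetPack flash; adjust it if your install differs:

```shell
$ nvcc --version                              # should report release 8.0
$ grep CUDNN_MAJOR -A 2 /usr/include/cudnn.h  # header path assumed from a JetPack install
```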
In order to get TensorFlow to compile on the Jetson TX1, a swap file is needed for virtual memory. A good amount of free disk space (> 5.5 GB) is also needed to actually build the program. If you’re unfamiliar with how to set the Jetson TX1 up like that, see a previous article: Jetson TX1 Swap File and Development Preparation.
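As a rough sketch of the swap setup from that article; the 8 GB size and the /mnt/swapfile location here are assumptions, not requirements:

```shell
$ sudo fallocate -l 8G /mnt/swapfile   # allocate an 8 GB file for swap (size is an assumption)
$ sudo chmod 600 /mnt/swapfile         # restrict permissions, as mkswap expects
$ sudo mkswap /mnt/swapfile            # format the file as swap space
$ sudo swapon /mnt/swapfile            # enable it
$ swapon -s                            # verify the swap space is active
```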
There is a repository on the JetsonHacks account on Github named installTensorFlowTX1. Clone the repository and switch over to that directory.
$ git clone https://github.com/jetsonhacks/installTensorFlowTX1
$ cd installTensorFlowTX1
Next, tell the dynamic linker to use /usr/local/lib.
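A minimal sketch of that step, assuming the standard ld.so.conf.d mechanism:

```shell
$ sudo sh -c "echo '/usr/local/lib' > /etc/ld.so.conf.d/local.conf"  # add the search path
$ sudo ldconfig                                                      # rebuild the linker cache
```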
There is a convenience script which will install the required prerequisites such as Java, along with Protobuf, grpc-java and Bazel. The script also patches the source files appropriately for ARM 64. Bazel and grpc-java each require a different version of Protobuf, so that is also taken care of in the script.
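The script name below is an assumption based on the repository’s layout; check the repository if it has changed:

```shell
$ ./installPrerequisites.sh
```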
From the video, installing the prerequisites takes a little over 30 minutes, though this will depend on your internet connection speed.
First, clone the TensorFlow repository and patch it for ARM 64 operation:
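In the repository this step is wrapped in a helper script (the name is assumed from the repository layout):

```shell
$ ./cloneTensorFlow.sh
```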
Then set up the TensorFlow environment variables. This is a semi-automated way to run the TensorFlow configure script. Note that most of the library locations are set in this script. As stated before, the library locations are determined by the JetPack installation.
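Again assuming the repository’s script naming:

```shell
$ ./setTensorFlowEV.sh
```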
We’re now ready to build TensorFlow:
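The build is driven by a repository script (the name is an assumption), which wraps the underlying Bazel build of the pip package:

```shell
$ ./buildTensorFlow.sh
```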
This will take a couple of hours. After TensorFlow is finished building, we package it into a ‘wheel’ file:
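Assuming the repository’s naming, the packaging step is:

```shell
$ ./packageTensorFlow.sh
```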
The wheel file will be in the $HOME directory: tensorflow-0.11.0-py2-none-any.whl
Pip can be used to install the wheel file:
$ pip install $HOME/tensorflow-0.11.0-py2-none-any.whl
Then run a simple TensorFlow example for the initial sanity check:
$ cd $HOME/tensorflow
$ time python tensorflow/models/image/mnist/convolutional.py
So there you have it. Building TensorFlow is quite a demanding task, but hopefully some of these scripts make the job a little bit simpler.