TensorFlow on NVIDIA Jetson TX1 Development Kit

Building TensorFlow on the NVIDIA Jetson TX1 is a little more complicated than some of the installations we have done in the past. Looky here:


TensorFlow is one of the major deep learning systems. Created at Google, it is an open-source software library for machine intelligence. The Jetson TX1 ships with TensorRT, NVIDIA's runtime for deploying trained models. TensorRT is what is called an “Inference Engine”: the idea is that large machine learning systems train models elsewhere, which are then transferred over and “run” on the Jetson.

However, some people would like to use the entire TensorFlow system on a Jetson. This has been difficult for a few reasons. First, TensorFlow binaries aren’t generally available for ARM-based processors like the Tegra X1. Second, actually compiling TensorFlow takes more system resources than are normally available on the Jetson TX1. Third, TensorFlow itself is rapidly changing (it’s only a year old), and the experience has been a little like building on quicksand.

In this article, we’ll go over the steps to build TensorFlow r0.11 on the Jetson TX1. The build will take about three hours.

Note: You may want to read through this article and then read the secret article: Install TensorFlow on TX1. Just a thought. But you didn’t hear it from me.


Note: Jan. 17, 2017 – Some issues have been addressed as the installation has changed over the last few weeks. Following the instructions in this article incorporates the changes. You can read about the changes here: Building TensorFlow Update

This article assumes that JetPack 2.3.1 is used to flash the Jetson TX1. Install:

  • L4T 24.2.1, an Ubuntu 16.04 64-bit variant (aarch64)
  • CUDA 8.0
  • cuDNN 5.1.5

Note that the library locations when installed by JetPack may not match a manual installation. TensorFlow will use CUDA and cuDNN in this build.

In order to get TensorFlow to compile on the Jetson TX1, a swap file is needed for virtual memory. A good amount of free disk space (> 5.5 GB) is also needed to actually build the program. If you’re unfamiliar with how to set the Jetson TX1 up like that, see a previous article: Jetson TX1 Swap File and Development Preparation.
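
Before kicking off a multi-hour build, it can save a failed run to check those two prerequisites programmatically. The sketch below is a hypothetical helper, not part of the JetsonHacks scripts: it reads /proc/meminfo to confirm swap is active and uses filesystem statistics to confirm enough free disk space; the 5.5 GB threshold comes from this article.

```python
import os

def read_meminfo():
    """Parse /proc/meminfo into a dict of integer values (mostly kB)."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, _, rest = line.partition(":")
            info[key] = int(rest.split()[0])
    return info

def build_prereqs_ok(path=".", min_disk_gb=5.5):
    """Return (swap_ok, disk_ok) for the TensorFlow build prerequisites."""
    info = read_meminfo()
    swap_ok = info.get("SwapTotal", 0) > 0          # any swap configured?
    st = os.statvfs(path)
    free_gb = st.f_bavail * st.f_frsize / (1024 ** 3)
    disk_ok = free_gb > min_disk_gb                 # enough room to build?
    return swap_ok, disk_ok

if __name__ == "__main__":
    swap_ok, disk_ok = build_prereqs_ok()
    print("swap active:", swap_ok, "| enough disk:", disk_ok)
```

If either check comes back False, set up the swap file and free disk space first, per the preparation article above.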

There is a repository on the JetsonHacks account on GitHub named installTensorFlowTX1. Clone the repository and switch over to that directory.

$ git clone https://github.com/jetsonhacks/installTensorFlowTX1
$ cd installTensorFlowTX1

Next, tell the dynamic linker to use /usr/local/lib:

$ ./setLocalLib.sh
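
In effect, setLocalLib.sh arranges for /usr/local/lib to be among the directories the dynamic linker searches, which is typically done through the ld.so.conf drop-in files. As a sanity check, this sketch (a hypothetical helper, not part of the repository) lists the directories currently configured in those drop-ins:

```python
import glob

def configured_linker_dirs(conf_dir="/etc/ld.so.conf.d"):
    """Collect directories listed in the ld.so.conf drop-in files."""
    dirs = []
    for conf in sorted(glob.glob(conf_dir + "/*.conf")):
        with open(conf) as f:
            for line in f:
                line = line.strip()
                # skip blanks, comments, and 'include' directives
                if line and not line.startswith("#") and not line.startswith("include"):
                    dirs.append(line)
    return dirs

if __name__ == "__main__":
    dirs = configured_linker_dirs()
    print("/usr/local/lib configured:", "/usr/local/lib" in dirs)
```

After the script runs (and ldconfig refreshes the cache), /usr/local/lib should show up in that list.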


There is a convenience script which will install the required prerequisites such as Java, along with Protobuf, grpc-java and Bazel. The script also patches the source files appropriately for ARM 64. Bazel and grpc-java each require a different version of Protobuf, so that is also taken care of in the script.

$ ./installPrerequisites.sh

From the video, installation of the prerequisites takes a little over 30 minutes, but this will depend on your internet connection speed.

Building TensorFlow

First, clone the TensorFlow repository and patch it for ARM 64 operation:

$ ./cloneTensorFlow.sh

Then set up the TensorFlow environment variables. This is a semi-automated way to run the TensorFlow ‘configure’ script. Note that most of the library locations are configured in this script. As stated before, the library locations are determined by the JetPack installation.

$ ./setTensorFlowEV.sh
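
For reference, a non-interactive run of TensorFlow's configure script is driven by environment variables along these lines. The exact variable set below is an assumption based on the r0.11-era configure script, not a copy of setTensorFlowEV.sh; the paths reflect a JetPack 2.3.1 install, and 5.3 is the Tegra X1's CUDA compute capability.

```python
import os

# Assumed r0.11-era configure answers; paths reflect a JetPack 2.3.1 install.
TF_CONFIGURE_ENV = {
    "TF_NEED_CUDA": "1",
    "TF_NEED_GCP": "0",
    "GCC_HOST_COMPILER_PATH": "/usr/bin/gcc",
    "TF_CUDA_VERSION": "8.0",
    "CUDA_TOOLKIT_PATH": "/usr/local/cuda-8.0",
    "TF_CUDNN_VERSION": "5.1.5",
    "CUDNN_INSTALL_PATH": "/usr/lib/aarch64-linux-gnu",  # where JetPack places cuDNN (assumed)
    "TF_CUDA_COMPUTE_CAPABILITIES": "5.3",  # Tegra X1 GPU is compute capability 5.3
}

def configure_environment(base=None):
    """Return a copy of the environment with the configure answers applied."""
    env = dict(base if base is not None else os.environ)
    env.update(TF_CONFIGURE_ENV)
    return env

if __name__ == "__main__":
    env = configure_environment({})
    print(sorted(k for k in env if k.startswith("TF_")))
```

If your CUDA or cuDNN lives somewhere else (e.g. a manual install), these are the values to adjust before running configure.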

We’re now ready to build TensorFlow:

$ ./buildTensorFlow.sh

This will take a couple of hours. After TensorFlow is finished building, we package it into a ‘wheel’ file:

$ ./packageTensorFlow.sh

The wheel file will be in the $HOME directory: tensorflow-0.11.0-py2-none-any.whl
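
That filename is not arbitrary: wheel names follow the PEP 427 convention of distribution, version, and python/abi/platform tags, so “py2-none-any” means a Python 2 build with no ABI or platform constraint recorded. A small sketch of how the name decomposes (illustrative helper, not part of the build scripts):

```python
def parse_wheel_name(filename):
    """Split a simple wheel filename into its PEP 427 tag components.

    Assumes the common name-version-python-abi-platform form with no
    optional build tag and no dashes inside the fields.
    """
    stem = filename[:-len(".whl")]
    name, version, python_tag, abi_tag, platform_tag = stem.split("-")
    return {
        "name": name, "version": version,
        "python": python_tag, "abi": abi_tag, "platform": platform_tag,
    }

if __name__ == "__main__":
    print(parse_wheel_name("tensorflow-0.11.0-py2-none-any.whl"))
```

Because the platform tag is “any”, pip will accept the wheel on the Jetson even though the binaries inside are aarch64-specific, so take care to install it only on the machine class it was built for.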


Pip can be used to install the wheel file:

$ pip install $HOME/tensorflow-0.11.0-py2-none-any.whl


Then run a simple TensorFlow example for the initial sanity check:

$ cd $HOME/tensorflow
$ time python tensorflow/models/image/mnist/convolutional.py


So there you have it. Building TensorFlow is quite a demanding task, but hopefully some of these scripts make the job a little bit simpler.

16 Comments on TensorFlow on NVIDIA Jetson TX1 Development Kit

  1. Hi Jim, thanks for this. Two days ago I tried to do it on my own following posts on Stack Overflow and GitHub, but completely ran out of space in the middle of the process. Had to perform some memory gymnastics to even boot the device properly 🙂

    Judging by the terminal output from that test at the end, TensorFlow is compiled with CUDA support? Also, is there any reason why this wouldn’t work with TF version 0.12?

  2. Hi Marko,
    I found the whole TensorFlow build process rather trying. Looking at the GitHub and Stack Overflow threads, it looks like people far more determined and smarter than I are having a bunch of issues building it on the TX1. I attempted to build everything in an entirely automated fashion, but ran into numerous roadblocks. Eventually I gave in and published what I had, admitting defeat. Hopefully others can follow along with my attempt. On a TX1 directly after flashing, with enough memory and disk space, I did NOT find that the solutions posted actually built 0.11 correctly.

    The script file configureTensorFlow.sh sets all of the environment variables before calling the TF script named ‘configure’. configureTensorFlow sets TensorFlow to use CUDA 8.0 and cuDNN 5.1.5. The environment variables use the JetPack configuration, so if someone set their TX1 up manually, they may have to adjust those environment variables in the script file, or just run the TensorFlow configure script directly.

    As for TensorFlow 0.12, the quick answer is that I haven’t tried it. Personally, I was just happy to get 0.11 to build consistently.

    Directions for getting the pre-built wheel file for 0.11: http://wp.me/p7ZgI9-II

  3. Dear Marko and kangalow,
    it took me about 40 hours in total till I finished the build of TensorFlow.
    The scripts definitely gave a good reference.
    As you can see from the output of the example mentioned at the end of the article, TensorFlow is using the TX1 GPU.

    ubuntu@jetson:~/tensorflow$ time python tensorflow/models/image/mnist/convolutional.py
    I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcublas.so.8.0 locally
    I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcudnn.so.5.1.5 locally
    I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcufft.so.8.0 locally
    I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcuda.so.1 locally
    I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcurand.so.8.0 locally
    Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
    Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
    Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
    Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
    Extracting data/train-images-idx3-ubyte.gz
    Extracting data/train-labels-idx1-ubyte.gz
    Extracting data/t10k-images-idx3-ubyte.gz
    Extracting data/t10k-labels-idx1-ubyte.gz
    I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:892] ARM has no NUMA node, hardcoding to return zero
    I tensorflow/core/common_runtime/gpu/gpu_device.cc:951] Found device 0 with properties:
    name: NVIDIA Tegra X1
    major: 5 minor: 3 memoryClockRate (GHz) 0.072
    pciBusID 0000:00:00.0
    Total memory: 3.90GiB
    Free memory: 1.40GiB
    I tensorflow/core/common_runtime/gpu/gpu_device.cc:972] DMA: 0
    I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] 0: Y
    I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Creating TensorFlow device (/gpu:0) -> (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0)
    Step 0 (epoch 0.00), 42.6 ms
    Minibatch loss: 12.054, learning rate: 0.010000
    Minibatch error: 90.6%
    Validation error: 84.6%
    Step 100 (epoch 0.12), 56.8 ms
    Minibatch loss: 3.283, learning rate: 0.010000
    Minibatch error: 6.2%
    Validation error: 7.1%
    Step 200 (epoch 0.23), 55.0 ms

    Because this was running for so long, and maybe I was just lucky that it didn’t crash in some way, I uploaded the wheel file to Drive so one can just download it 😉


    best regards and thanks for your work,

  4. I’m having the following issue when running the tensorflow configure script:

    ERROR: /home/ubuntu/tensorflow/tensorflow/core/platform/default/build_config/BUILD:56:1: no such package ‘@jpeg_archive//’: Error downloading from http://www.ijg.org/files/jpegsrc.v9a.tar.gz to /home/ubuntu/.cache/bazel/_bazel_ubuntu/ad1e09741bb4109fbc70ef8216b59ee2/external/jpeg_archive: Error downloading http://www.ijg.org/files/jpegsrc.v9a.tar.gz to /home/ubuntu/.cache/bazel/_bazel_ubuntu/ad1e09741bb4109fbc70ef8216b59ee2/external/jpeg_archive/jpegsrc.v9a.tar.gz: Connection reset and referenced by ‘//tensorflow/core/platform/default/build_config:platformlib’.
    ERROR: Evaluation of query “deps((//tensorflow/… union @bazel_tools//tools/jdk:toolchain))” failed: errors were encountered while computing transitive closure.

    Has anyone seen this error before?

  5. Hi, I’m trying to install tensorflow on TX1 as this post.
    However, every time I run ./cloneTensorFlow.sh
    a prompt is shown as below.
    Reversed (or previously applied) patch detected! Assume -R? [n]
    Did I do something wrong?

  6. After successfully setting up the environment, TensorFlow has been building for 5-6 days. It seems to have been stuck on the following step of building protobuf:

    INFO: From Compiling external/protobuf/src/google/protobuf/compiler/java/java_enum_lite.cc:
    external/protobuf/src/google/protobuf/compiler/java/java_enum_lite.cc:53:6: warning: ‘bool google::protobuf::compiler::java::{anonymous}::EnumHasCustomOptions(const google::protobuf::EnumDescriptor*)’ defined but not used [-Wunused-function]
    bool EnumHasCustomOptions(const EnumDescriptor* descriptor) {

    I ran the jetson_clocks.sh script before starting the build process, but it still seems to be taking longer than expected and than the times mentioned online.

    Anyone had the same behavior or have a recommendation on how to debug/speed up the process?

    • What version of L4T are you using? I do know that the build process is very sensitive to the ./setLocalLib.sh step. If you don’t have to actually build it yourself, consider using the pre-built wheel file available through: http://wp.me/p7ZgI9-II

      The build should only take a few hours. Certainly if it gets stuck you should consider that something is wrong, and restart the build. The steps to build protobuf should take less than 15 minutes; most of the build time is spent on TensorFlow itself.

  7. Hello, after using “./buildTensorFlow.sh” it gets stuck at “external/protobuf/python/google/protobuf/pyext/message.cc:554:20: warning: ISO C++ forbids converting a string constant to ‘char*’ [-Wwrite-strings]”. More than 48 hours have passed by now. Any suggestion?
