JetsonHacks

Developing on NVIDIA® Jetson™ for AI on the Edge

TensorFlow on NVIDIA Jetson TX1 Development Kit

Building TensorFlow on the NVIDIA Jetson TX1 is a little more complicated than some of the installations we have done in the past. Looky here:

Background

TensorFlow is one of the major deep learning systems. Created at Google, it is an open-source software library for machine intelligence. The Jetson TX1 ships with TensorRT, NVIDIA’s inference runtime, which can run models built with frameworks such as TensorFlow. TensorRT is what is called an “Inference Engine”: the idea is that large machine learning systems train models elsewhere, and those models are then transferred over and “run” on the Jetson.

However, some people would like to use the entire TensorFlow system on a Jetson. This has been difficult for a few reasons. First, TensorFlow binaries aren’t generally available for ARM-based processors like the Tegra X1. Second, actually compiling TensorFlow takes more system resources than are normally available on the Jetson TX1. Third, TensorFlow itself is changing rapidly (it’s only a year old), so the experience has been a little like building on quicksand.

In this article, we’ll go over the steps to build TensorFlow r0.11 on the Jetson TX1. This will take about three hours or so to build.

Note: You may want to read through this article and then read the secret article: Install TensorFlow on TX1. Just a thought. But you didn’t hear it from me.

Preparation

Note: Jan. 17, 2017 – Some issues have been addressed as the installation has changed over the last few weeks. Following the instructions in this article incorporates the changes. You can read about the changes here: Building TensorFlow Update

This article assumes that JetPack 2.3.1 is used to flash the Jetson TX1. Install:

  • L4T 24.2.1, an Ubuntu 16.04 64-bit variant (aarch64)
  • CUDA 8.0
  • cuDNN 5.1.5

Note that the library locations when installed by JetPack may not match a manual installation. TensorFlow will use CUDA and cuDNN in this build.

In order to get TensorFlow to compile on the Jetson TX1, a swap file is needed for virtual memory. Also, a good amount of disk space (> 5.5 GB) is needed to actually build the program. If you’re unfamiliar with how to set the Jetson TX1 up like that, see a previous article: Jetson TX1 Swap File and Development Preparation.
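
If you just want a rough idea of what that setup involves, a minimal sketch looks something like this; the swap file size and path are placeholders, so follow the linked article for the full procedure:

# Create and enable a swap file (size and location are examples only)
$ sudo fallocate -l 8G /mnt/swapfile
$ sudo chmod 600 /mnt/swapfile
$ sudo mkswap /mnt/swapfile
$ sudo swapon /mnt/swapfile
$ swapon -s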

There is a repository on the JetsonHacks account on GitHub named installTensorFlowTX1. Clone the repository and switch over to that directory.

$ git clone https://github.com/jetsonhacks/installTensorFlowTX1
$ cd installTensorFlowTX1

Next, tell the dynamic linker to use /usr/local/lib:

$ ./setLocalLib.sh
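
The script is short; roughly speaking, it adds /usr/local/lib to the dynamic linker configuration and refreshes the cache, something along these lines (an approximation; check setLocalLib.sh itself for the exact steps):

# Rough equivalent of setLocalLib.sh (illustrative only)
$ echo "/usr/local/lib" | sudo tee /etc/ld.so.conf.d/local.conf
$ sudo ldconfig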

Prerequisites

There is a convenience script which will install the required prerequisites such as Java, along with Protobuf, grpc-java and Bazel. The script also patches the source files appropriately for ARM 64. Bazel and grpc-java each require a different version of Protobuf, so that is also taken care of in the script.

$ ./installPrerequisites.sh

From the video, installing the prerequisites takes a little over 30 minutes, but this will depend on your internet connection speed.
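
When the script finishes, a quick sanity check is to confirm that the tools are on the path and report their versions (the exact version numbers will vary):

# Verify the prerequisite tools installed by the script
$ java -version
$ protoc --version
$ bazel version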

Building TensorFlow

First, clone the TensorFlow repository and patch it for ARM 64 operation:

$ ./cloneTensorFlow.sh

Then set up the TensorFlow environment variables. This is a semi-automated way to run the TensorFlow configure script. Note that most of the library locations are configured in this script. As stated before, the library locations are determined by the JetPack installation.

$ ./setTensorFlowEV.sh
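
If your CUDA or cuDNN installation does not match the JetPack defaults, the values to adjust are the environment variables the script exports before calling configure. The sketch below shows the kind of variables the TensorFlow configure script of this era reads; the paths are illustrative and should be changed to match your system:

# Illustrative configure settings for a JetPack-style install (adjust paths as needed)
$ export TF_NEED_CUDA=1
$ export TF_CUDA_VERSION=8.0
$ export CUDA_TOOLKIT_PATH=/usr/local/cuda-8.0
$ export TF_CUDNN_VERSION=5.1.5
$ export CUDNN_INSTALL_PATH=/usr/lib/aarch64-linux-gnu
$ export TF_CUDA_COMPUTE_CAPABILITIES=5.3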

We’re now ready to build TensorFlow:

$ ./buildTensorFlow.sh
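
Under the hood, the script essentially wraps the standard Bazel build of the pip package target; the core invocation looks roughly like this (the script may add resource limits and other flags):

# Core Bazel build command for the GPU-enabled pip package (flags are illustrative)
$ bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_package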

This will take a couple of hours. After TensorFlow is finished building, we package it into a ‘wheel’ file:

$ ./packageTensorFlow.sh

The wheel file will be in the $HOME directory: tensorflow-0.11.0-py2-none-any.whl
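
For reference, the packaging step boils down to the standard TensorFlow build_pip_package invocation; a sketch, assuming the destination directory is $HOME as noted above:

# Build the .whl from the Bazel output and place it in $HOME
$ bazel-bin/tensorflow/tools/pip_package/build_pip_package $HOME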

Installation

Pip can be used to install the wheel file:

$ pip install $HOME/tensorflow-0.11.0-py2-none-any.whl
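
A quick way to confirm the installation is to import the module and print its version, which should report 0.11.0 for this build:

# Verify that Python can import the newly installed TensorFlow
$ python -c "import tensorflow as tf; print(tf.__version__)"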

Test

Then run a simple TensorFlow example for the initial sanity check:

$ cd $HOME/tensorflow
$ time python tensorflow/models/image/mnist/convolutional.py
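
As a further check that the GPU is actually being used, you can ask TensorFlow to log device placement while running a trivial graph; this one-liner is a sketch, and the log should show the Tegra X1 being mapped to /gpu:0:

# Log device placement to confirm TensorFlow sees the Tegra X1 GPU
$ python -c "import tensorflow as tf; sess = tf.Session(config=tf.ConfigProto(log_device_placement=True)); print(sess.run(tf.constant('Hello from the Jetson TX1')))"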

Conclusion

So there you have it. Building TensorFlow is quite a demanding task, but hopefully these scripts make the job a little bit simpler.


36 Responses

  1. Hi Jim, thanks for this. Two days ago I tried to do it on my own following posts on Stack Overflow and GitHub, but completely ran out of space in the middle of the process. Had to perform some memory gymnastics to even boot the device properly 🙂

    Judging by the terminal output from that test at the end, TensorFlow is compiled with CUDA support? Also, is there any reason why this wouldn’t work with the 0.12 version of TF?

  2. Hi Marko,
    I found the whole TensorFlow build process rather trying. Looking at the GitHub and Stack Overflow threads, it looks like people far more determined and smarter than I are having a bunch of issues building it on the TX1. I attempted to build everything in an entirely automated fashion, but ran into numerous roadblocks. Eventually I gave in and published what I had, admitting defeat. Hopefully others can follow along with my attempt. On a TX1 directly after flashing, with enough memory and disk space, I did NOT find that the solutions posted actually built 0.11 correctly.

    The script file configureTensorFlow.sh sets all of the environment variables before calling the TF script named ‘configure’. configureTensorFlow sets TensorFlow to use CUDA 8.0 and cuDNN 5.1.5. The environment variables use the JetPack configuration, so if someone set their TX1 up manually, they may have to adjust those environment variables in the script file, or just run the TensorFlow configure script directly.

    As for TensorFlow 0.12, the quick answer is that I haven’t tried it. Personally, I was just happy to get 0.11 to build consistently.

    Directions for getting the pre-built wheel file for 0.11: http://wp.me/p7ZgI9-II

  3. Dear Marko and kangalow,
    it took me about 40 hours in total until I finished the build of TensorFlow.
    The scripts definitely gave a good reference.
    As you can see from the output of the example mentioned at the end of the article, TensorFlow is using the TX1 GPU.

    ubuntu@jetson:~/tensorflow$ time python tensorflow/models/image/mnist/convolutional.py
    I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcublas.so.8.0 locally
    I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcudnn.so.5.1.5 locally
    I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcufft.so.8.0 locally
    I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcuda.so.1 locally
    I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcurand.so.8.0 locally
    Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
    Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
    Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
    Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
    Extracting data/train-images-idx3-ubyte.gz
    Extracting data/train-labels-idx1-ubyte.gz
    Extracting data/t10k-images-idx3-ubyte.gz
    Extracting data/t10k-labels-idx1-ubyte.gz
    I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:892] ARM has no NUMA node, hardcoding to return zero
    I tensorflow/core/common_runtime/gpu/gpu_device.cc:951] Found device 0 with properties:
    name: NVIDIA Tegra X1
    major: 5 minor: 3 memoryClockRate (GHz) 0.072
    pciBusID 0000:00:00.0
    Total memory: 3.90GiB
    Free memory: 1.40GiB
    I tensorflow/core/common_runtime/gpu/gpu_device.cc:972] DMA: 0
    I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] 0: Y
    I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Creating TensorFlow device (/gpu:0) -> (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0)
    Initialized!
    Step 0 (epoch 0.00), 42.6 ms
    Minibatch loss: 12.054, learning rate: 0.010000
    Minibatch error: 90.6%
    Validation error: 84.6%
    Step 100 (epoch 0.12), 56.8 ms
    Minibatch loss: 3.283, learning rate: 0.010000
    Minibatch error: 6.2%
    Validation error: 7.1%
    Step 200 (epoch 0.23), 55.0 ms

    Because this was running so long, and maybe I was just lucky that it didn’t crash in some way, I uploaded the wheel file to Drive so one can just download it 😉

    https://drive.google.com/file/d/0B_s_RYUahuaJZUhqcWxlQmY3QU0/view?usp=sharing

    best regards and thanks for your work,
    Arne

  4. I’m having the following issue when running the tensorflow configure script:

    ERROR: /home/ubuntu/tensorflow/tensorflow/core/platform/default/build_config/BUILD:56:1: no such package ‘@jpeg_archive//’: Error downloading from http://www.ijg.org/files/jpegsrc.v9a.tar.gz to /home/ubuntu/.cache/bazel/_bazel_ubuntu/ad1e09741bb4109fbc70ef8216b59ee2/external/jpeg_archive: Error downloading http://www.ijg.org/files/jpegsrc.v9a.tar.gz to /home/ubuntu/.cache/bazel/_bazel_ubuntu/ad1e09741bb4109fbc70ef8216b59ee2/external/jpeg_archive/jpegsrc.v9a.tar.gz: Connection reset and referenced by ‘//tensorflow/core/platform/default/build_config:platformlib’.
    ERROR: /home/ubuntu/tensorflow/tensorflow/core/platform/default/build_config/BUILD:56:1: no such package ‘@jpeg_archive//’: Error downloading from http://www.ijg.org/files/jpegsrc.v9a.tar.gz to /home/ubuntu/.cache/bazel/_bazel_ubuntu/ad1e09741bb4109fbc70ef8216b59ee2/external/jpeg_archive: Error downloading http://www.ijg.org/files/jpegsrc.v9a.tar.gz to /home/ubuntu/.cache/bazel/_bazel_ubuntu/ad1e09741bb4109fbc70ef8216b59ee2/external/jpeg_archive/jpegsrc.v9a.tar.gz: Connection reset and referenced by ‘//tensorflow/core/platform/default/build_config:platformlib’.
    ERROR: Evaluation of query “deps((//tensorflow/… union @bazel_tools//tools/jdk:toolchain))” failed: errors were encountered while computing transitive closure.

    Has anyone seen this error before?

  5. Hi, I’m trying to install TensorFlow on the TX1 following this post.
    However, every time I run ./cloneTensorFlow.sh,
    a prompt is shown as below.
    Reversed (or previously applied) patch detected! Assume -R? [n]
    Did I do something wrong?

    1. When you run cloneTensorFlow.sh, it patches files that are outside the tensorflow tree. If you run cloneTensorFlow.sh again, those patches are already applied. Assuming that you do not wish to revert the patches, the default answer [n] is correct. Hope this helps.

  6. After successfully setting up the environment, TensorFlow has been building for 5-6 days. It seems to be stuck on the following step of building protobuf:

    INFO: From Compiling external/protobuf/src/google/protobuf/compiler/java/java_enum_lite.cc:
    external/protobuf/src/google/protobuf/compiler/java/java_enum_lite.cc:53:6: warning: ‘bool google::protobuf::compiler::java::{anonymous}::EnumHasCustomOptions(const google::protobuf::EnumDescriptor*)’ defined but not used [-Wunused-function]
    bool EnumHasCustomOptions(const EnumDescriptor* descriptor) {

    I ran jetson_clocks.sh before starting the build process, but it still seems to be taking longer than expected and longer than the times mentioned online.

    Anyone had the same behavior or have a recommendation on how to debug/speed up the process?

    1. What version of L4T are you using? I do know that the build process is very sensitive to the ./setLocalLib.sh step. If you don’t have to actually build it yourself, consider using the pre-built wheel file available through: http://wp.me/p7ZgI9-II

      The build should only take a few hours. Certainly if it gets stuck you should consider that there is something wrong, and restart the build. The steps to build protobuf should take less than 15 minutes; most of the build time is spent on TensorFlow itself.

  7. Hello, after running “./buildTensorFlow.sh” it gets stuck at “external/protobuf/python/google/protobuf/pyext/message.cc:554:20: warning: ISO C++ forbids converting a string constant to ‘char*’ [-Wwrite-strings]”. More than 48 hours have passed so far. Any suggestions?
    Thanks.

    1. To give you a sense of time, the whole build should take around 4 hours. The bulk of the time is spent compiling TensorFlow itself. Unless you have some reason to actually build TensorFlow yourself, consider that the easiest way to install is from a pre-built wheel file: https://jetsonhacks.com/2016/12/30/install-tensorflow-on-nvidia-jetson-tx1-development-kit/

      There isn’t much information to go on; given just the warning you listed, it’s difficult to tell where in the compilation process it occurs. You may want to read:
      https://jetsonhacks.com/2017/01/15/tensorflow-build-update-jetson-tx1/

      and make sure that you have sufficient swap space available. Good luck!
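
      For a quick check of the current swap and memory situation:

      # Show active swap devices/files and free memory
      $ swapon -s
      $ free -m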

  8. Thank you for all the info and scripts. I have been able to install TensorFlow on a TX1 with an SSD drive. I’m using the SSD drive as the root filesystem along with a 10 GB swap file. It took several days of work, but there is plenty of feedback from the compiler and linker when something is wrong, and the proper file versions needed for the TX1 were among the files; some just had to be renamed. Running the tutorials that came with TensorFlow gives the same results. Thanks again, it does work. Running a RealSense R200 camera, ROS, and TensorRT, and now looking forward to training.

  9. Hi,
    Have you tried installing TensorFlow on the TX2 development kit? Is the process the same? How do you use the TensorFlow Python interface on the TX kit? Does it have a Python interpreter installed, like Anaconda?

    1. I haven’t tried it yet on the TX2, though I’ve heard people have had success installing v1.0. The TX2 nvcc compiler update fixes a bug that prevented v1.0 on the TX1. The Jetson Dev kits are pretty much standard Ubuntu desktops, so it’s pretty much the standard TensorFlow dev environments that you want to use. Thanks for reading!

  10. Hi,
    Following your video, I have successfully installed TensorFlow r0.11 on the Jetson TX1. Now I am trying to install TensorFlow r1.0, but it has not been successful following your video. Have you installed TensorFlow r1.0? Is there anything I should pay attention to?

    1. v1.0 does not currently build on the Jetson TX1 because of an issue with some instructions in the nvcc compiler. I’ve heard that some people have workarounds, but for me it’s not worth the trouble. The next release of JetPack should bring L4T R27.2, which fixes the issue. Thanks for reading!

  11. Hello, this article was very helpful.
    But when I was following this article, I noticed the URL for avro specified in workspace.bzl is invalid, so we should specify avro-1.8.1 in it.

    1. I do not know. Did they change the workspace.bzl file recently, or has avro changed? You can try avro-1.8.1 and see if it works. Thanks for reading!

  12. when I run [puvsgpy.py](https://github.com/tensorflow/tensorflow/files/368246/CPUvsGPU.testing.zip)

    output :

    I tensorflow/core/common_runtime/gpu/gpu_device.cc:972] DMA: 0
    I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] 0: Y
    I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Creating TensorFlow device (/gpu:0) -> (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0)
    [1.0, 1.0, 1.0, 1.0, 1.0]
    DESCRIPTION: Reinforcement Learning (DeepQ) Batch: 200 RandProb: 0.01 LR: 1e-05 DR: 0.25 Brain: [140, 120, 100, 80]/[1.0, 1.0, 1.0, 1.0]/[‘tanh’, ‘tanh’, ‘tanh’, ‘tanh’]
    Timeline Sample 1000
    I tensorflow/stream_executor/dso_loader.cc:105] Couldn’t open CUDA library libcupti.so.8.0. LD_LIBRARY_PATH: /usr/local/cuda-8.0/lib64:
    F tensorflow/core/platform/default/gpu/cupti_wrapper.cc:58] Check failed: f != nullptr could not find cuptiActivityRegisterCallbacksin libcupti DSO; dlerror: /home/ubuntu/py2/local/lib/python2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so: undefined symbol: cuptiActivityRegisterCallbacks
    Aborted

    why?

  13. Hi Kangalow!
    I can’t install TensorFlow on the Jetson TX1. When I run ./installPrerequisites.sh, it tells me it cannot find output/bazel. My TX1 has JetPack 3.0, but your video uses JetPack 2.3.1. Is this the problem?

  14. Hi kangalow!
    When I run “./setTensorFlowEV.sh”, it tells me:
    /usr/local/lib/python2.7/dist-packages
    Problem with java installation: couldn’t find/access rt.jar in /usr/jdk-8
    Problem with java installation: couldn’t find/access rt.jar in /usr/jdk-8
    Configuration finished
    How can I fix this?

    1. Hi kangalow!
      In your video, when you ran “$ ./installPrerequisites.sh”, a “configuring oracle-java8-installer” dialog popped up, but I don’t get one. Could you please tell me why? Thanks very much!

  15. I do not know why it doesn’t work as expected. You can try running the installPrerequisites shell file manually, line by line, and see if any issues arise.
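
    A related low-effort approach is to run the script with shell tracing so each command is echoed as it executes (the log file name here is just an example):

    # Trace each command and capture the output for inspection
    $ bash -x ./installPrerequisites.sh 2>&1 | tee prereq.log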

  16. When I execute $ ./setTensorFlowEV.sh, it shows that the avro-1.8.0/cpp download failed.
    http://www-us.apache.org/dist/avro/avro-1.8.0/cpp/avro-cpp-1.8.0.tar.gz is not found.
    When I checked http://www-us.apache.org/dist/avro, I found that avro has been updated to 1.8.2. How do I solve this problem?
    Thank you!

  17. Hi,

    After I ran installPrerequisites.sh, everything was going well until it tried to build Bazel. Here’s the output:

    gcc: error trying to exec ‘cc1plus’: execvp: No such file or directory
    Target //src:bazel failed to build
    INFO: Elapsed time: 20.973s, Critical Path: 0.33s
    cp: cannot stat ‘output/bazel’: No such file or directory

    Any thoughts?
