
Build TensorFlow on NVIDIA Jetson TX Development Kits

We build TensorFlow 1.6 on the Jetson TX with some new scripts written by Jason Tichy over at NVIDIA. Looky here:

Background

TensorFlow is one of the major deep learning systems. Created at Google, it is an open-source software library for machine intelligence. The Jetson TX2 ships with TensorRT. TensorRT is what is called an “Inference Engine”, the idea being that large machine learning systems can train models which are then transferred over and “run” on the Jetson.

In the vast majority of cases, you will want to install the associated .whl files for TensorFlow and not build from source. You can find the latest set of .whl files in the NVIDIA Jetson Forums.
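
If you go that route, installation is just a pip command once you have downloaded a wheel that matches your Python version and L4T release. The filename below is only an illustration; use the one posted in the forums:

$ sudo pip install tensorflow-1.6.0-cp27-cp27mu-linux_aarch64.whl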

Note: We previously built TensorFlow for both the Jetson TX2 and Jetson TX1 for L4T 28.1. Because of changes to the Java environment, these have been deprecated.

Some people would like to use the entire TensorFlow system on a Jetson. In this article, we’ll go over the steps to build TensorFlow r1.6 on a Jetson TX Dev Kit from source. These scripts work on both the Jetson TX1 and Jetson TX2. This should take about three hours to build on a Jetson TX2, longer on a Jetson TX1.

You will need ~10GB of free space in your build area. Typically the smart move is to freshly flash your Jetson with L4T 28.2, CUDA 9.0 Toolkit and cuDNN 7.0.5 and then start your build.
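
A quick sanity check before starting might look something like this (the paths assume a standard L4T install; adjust as needed): df reports free space, /etc/nv_tegra_release identifies the L4T release, and the CUDA version file confirms the toolkit:

$ df -h $HOME
$ head -n 1 /etc/nv_tegra_release
$ cat /usr/local/cuda/version.txt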

Installation

The TensorFlow scripts are located in the JasonAtNvidia account on GitHub in the JetsonTFBuild repository. You can simply clone the entire repository:

$ git clone https://github.com/JasonAtNvidia/JetsonTFBuild.git

which will clone the repository, including the TensorFlow .whl files. The .whl files take up several hundred megabytes of space, so you may want to delete them after cloning.
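
If space is tight, removing the bundled wheels afterwards is a one-liner (this assumes you cloned into your home directory):

$ rm -rf ~/JetsonTFBuild/wheels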

As an alternative, here’s a script which will download the repository without the wheels directory:


#!/bin/bash
cd $HOME
# Get TensorFlow build scripts from JasonAtNvidia JetsonTFBuild repository
git clone --no-checkout https://github.com/JasonAtNvidia/JetsonTFBuild.git
cd JetsonTFBuild
# Sparse checkout tells git not to checkout the wheels directory
# where all of the .whl files are kept
git config core.sparsecheckout true
# Do not checkout the wheels directory
echo '!wheels/*' >> .git/info/sparse-checkout
# But checkout everything else
echo "/*" >> .git/info/sparse-checkout
git checkout
echo "JetsonTFBuild checked out"

Save the script above to a file (for example, getJetsonTFBuild.sh) and then execute it. For example:

$ bash getJetsonTFBuild.sh

This will download everything except the wheels directory.

Next, switch over to the repository directory:

$ cd JetsonTFBuild

Building

To execute the build script:

$ sudo bash BuildTensorFlow.sh

There are three parameters which you may pass to the script:

  • -b | --branch <branchname> GitHub branch to clone, e.g. r1.6 (default: master)
  • -s | --swapsize <size> Size of the swap file (in GB) to create to assist the build process, e.g. 8
  • -d | --dir <directory> Directory to download files to and use for the build process (default: pwd/TensorFlow_install)

Because the Jetson TX1 and Jetson TX2 do not have enough physical memory to build TensorFlow, a swap file is used.
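
For reference, creating a swap file by hand looks roughly like the sketch below; the script handles this for you, so the size and path here are just placeholders:

$ sudo fallocate -l 8G /mnt/8GB.swap
$ sudo mkswap /mnt/8GB.swap
$ sudo swapon /mnt/8GB.swap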

Note: On a Jetson TX1, make sure that you set the directory to point to a device which has enough space for the build. The TX1 does not have enough eMMC memory to hold the swap file. The faster the external memory the better. The Jetson TX2 eMMC does have enough extra room for the build.
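
For example, on a Jetson TX1 with an external SATA or USB drive mounted (the mount point below is hypothetical), you might direct the build there and allocate an 8GB swap file:

$ sudo bash BuildTensorFlow.sh -b r1.6 -s 8 -d /media/ubuntu/ssd/TensorFlow_install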

For example, to compile TensorFlow release 1.6 on a Jetson TX2 (as shown in the video):

$ sudo bash BuildTensorFlow.sh -b r1.6

After the TensorFlow build (which will take between 3 and 6 hours), you should do a validation check.

Validation

You can go through the procedure on the TensorFlow installation page: TensorFlow: Validate your installation

Validate your TensorFlow installation by doing the following:

  • Start a Terminal.
  • Change directory (cd) to any directory on your system other than the tensorflow subdirectory from which you invoked the configure command.
  • Invoke python or python3 as appropriate; for Python 2.x, for example:

$ python

Enter the following short program inside the python interactive shell:

>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
>>> print(sess.run(hello))

If the Python program outputs the following, then the installation is successful and you can begin writing TensorFlow programs.

Hello, TensorFlow!

This is not very thorough, of course, but it does show that what you built is installed.
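
For a slightly stronger check, the one-liners below (run from outside the TensorFlow source tree) print the installed version and the devices TensorFlow can see; a successful CUDA build should list a GPU alongside the CPU:

$ python -c "import tensorflow as tf; print(tf.__version__)"
$ python -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"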

Conclusion

This is a pretty straightforward process for building TensorFlow. At the same time, you should spend some time reading through the scripts to get an understanding of how they operate.

Make sure to report any issues on the JasonAtNvidia account in the JetsonTFBuild repository.

Special thanks again to Jason Tichy over at NVIDIA for the repository!

Notes

  • The install in the video was performed directly after flashing the Jetson TX2 with JetPack 3.2
  • The install is lengthy; however, it certainly should take much less than 4 hours on a TX2 and less than 6 hours on a TX1 once all the files are downloaded. If it takes longer, something is wrong.
  • In the video, TensorFlow 1.6.0 is installed

20 Responses

    1. David, you would need to cross-compile them for the aarch64 architecture on your desktop for that to work… otherwise, building them on your desktop would build x86-64 wheels that won't run on the ARM processor on the Jetsons.

    2. I think it depends on your definition of “terribly difficult”. There are several parts. You have to set up a cross compilation environment. You need to have a version of Bazel, the Google build tool, that runs using Java. The scripts here build a version of Bazel for this version of TensorFlow, as Bazel is still in development. Of course you then have the cross compilation issue with the version of Python that you’re using if you are building the .whl files (this is easier if you’re just building C/C++ libraries). Me, I’m not smart enough to figure all that stuff out so I just compile on the device.

      In any case, there’s really no need to compile TensorFlow. Enough people have done it, including people at NVIDIA, that you can just download the .whl files and install.

  1. Faced problems with TensorRT. Had to put TF_NEED_TENSORRT=1 in the helper script and build against master. Everything went well.

  2. admin problem, not topic specific. Only way to get you guys the message is this way. Sorry. I tried to get a new password sent to my email box but it has been several hours now and it still hasn’t shown up. Is this normal?

  3. Hi, I'm using this tutorial to install TF on my Jetson TX2. I have just received this error.

    ERROR: Skipping '//tensorflow/tools/pip_package:build_pip_package': error loading package 'tensorflow/tools/pip_package': Encountered error while reading extension file 'build_defs.bzl': no such package '@local_config_tensorrt//': Traceback (most recent call last):
    File "/home/nvidia/JetsonTFBuild/TensorFlow_Install/tensorflow/third_party/tensorrt/tensorrt_configure.bzl", line 160
    auto_configure_fail("TensorRT library (libnvinfer) v…")
    File "/home/nvidia/JetsonTFBuild/TensorFlow_Install/tensorflow/third_party/gpus/cuda_configure.bzl", line 210, in auto_configure_fail
    fail(("\n%sCuda Configuration Error:%…)))

    Cuda Configuration Error: TensorRT library (libnvinfer) version is not set.
    WARNING: Target pattern parsing failed.
    ERROR: error loading package 'tensorflow/tools/pip_package': Encountered error while reading extension file 'build_defs.bzl': no such package '@local_config_tensorrt//': Traceback (most recent call last):
    File "/home/nvidia/JetsonTFBuild/TensorFlow_Install/tensorflow/third_party/tensorrt/tensorrt_configure.bzl", line 160
    auto_configure_fail("TensorRT library (libnvinfer) v…")
    File "/home/nvidia/JetsonTFBuild/TensorFlow_Install/tensorflow/third_party/gpus/cuda_configure.bzl", line 210, in auto_configure_fail
    fail(("\n%sCuda Configuration Error:%…)))

    Cuda Configuration Error: TensorRT library (libnvinfer) version is not set.
    INFO: Elapsed time: 2.895s
    FAILED: Build did NOT complete successfully (0 packages loaded)
    currently loading: tensorflow/tools/pip_package

    Can anyone give me some advice on how to resolve this problem please? Thanks.

  4. Hello,

    I have errors, and I think it is due to my “sources.list” file. Could someone who succeeded to install tensorflow 1.6 copy-paste the content of this file?

    I freshly flashed my Jetson TX2 with L4T 28.2, CUDA 9.0 Toolkit and cuDNN 7.0.5

    Thank you very much in advance!

    1. I flashed my Jetson TX2 with Tegra186_Linux_R28.2.1_aarch64.
      Then I installed JetPack 3.3.
      There were lots of compatibility problems during the TensorFlow installation, so I searched for the best wheel for it: TensorFlow 1.10-rc1.
      I chose to install this wheel because it is the only one that seems to work with CUDA 9.0 and cuDNN v7.1.5.

      For more info, I suggest you follow the instructions at this link: https://github.com/peterlee0127/tensorflow-nvJetson

  5. I had an error when loading TensorFlow. I am running a TX2, freshly flashed with JetPack 3.2.1. Here are the errors I am seeing after following the instructions; this occurred after executing 'sudo bash BuildTensorflow.sh':
    ……………………………
    WARNING: Config value opt is not defined in any .rc file
    ERROR: /home/nvidia/JetsonTFBuild/TensorFlow_Install/tensorflow/WORKSPACE:21:1: Traceback (most recent call last):
    File "/home/nvidia/JetsonTFBuild/TensorFlow_Install/tensorflow/WORKSPACE", line 21
    check_bazel_version_at_least("0.15.0")
    File "/home/nvidia/JetsonTFBuild/TensorFlow_Install/tensorflow/tensorflow/version_check.bzl", line 47, in check_bazel_version_at_least
    fail("\nCurrent Bazel version is {}, …))

    Current Bazel version is 0.13.0- (@non-git), expected at least 0.15.0
    ERROR: Error evaluating WORKSPACE file
    ERROR: error loading package '': Encountered error while reading extension file 'android.bzl': no such package '@local_config_android//': error loading package 'external': Could not load //external package
    ERROR: error loading package '': Encountered error while reading extension file 'android.bzl': no such package '@local_config_android//': error loading package 'external': Could not load //external package
    INFO: Elapsed time: 3.549s
    INFO: 0 processes.
    FAILED: Build did NOT complete successfully (0 packages loaded)
    @AdamAmp


  6. Hi. I'm trying this tutorial but I got some compile errors.
    Would you give me some advice?

    I've installed the following libraries and packages:
    JetPack 3.2
    CUDA 9.0
    cuDNN 7.0.5

    ————————————–
    nvidia@tegra-ubuntu:~/JetsonTFBuild$ sudo -H bash BuildTensorflow.sh -b r1.6
    Reading package lists… Done
    Building dependency tree
    Reading state information… Done
    build-essential is already the newest version (12.1ubuntu2).
    unzip is already the newest version (6.0-20ubuntu1).
    zip is already the newest version (3.0-11).
    openjdk-8-jdk is already the newest version (8u181-b13-0ubuntu0.16.04.1).
    python-dev is already the newest version (2.7.12-1~16.04).
    The following packages were automatically installed and are no longer required:
    apt-clone archdetect-deb dmeventd dmraid dpkg-repack gir1.2-timezonemap-1.0
    gir1.2-xkl-1.0 grub-common gstreamer1.0-plugins-bad-videoparsers kpartx
    kpartx-boot libappstream3 libass5 libavresample-ffmpeg2 libbs2b0
    libdebian-installer4 libdevmapper-event1.02.1 libdmraid1.0.0.rc16 libflite1
    libgstreamer-plugins-bad1.0-0 libllvm5.0 liblockfile-bin liblockfile1
    liblvm2app2.2 liblvm2cmd2.02 libmircommon5 libparted-fs-resize0 libqmi-glib1
    libqpdf17 libreadline5 libsodium18 libzmq5 lockfile-progs lvm2 os-prober
    pmount python3-icu python3-pam rdate ubiquity-casper ubiquity-ubuntu-artwork
    ubuntu-core-launcher
    Use 'sudo apt autoremove' to remove them.
    0 upgraded, 0 newly installed, 0 to remove and 254 not upgraded.
    Reading package lists… Done
    Building dependency tree
    Reading state information… Done
    python-numpy is already the newest version (1:1.11.0-1ubuntu1).
    python-scipy is already the newest version (0.17.0-1).
    python-wheel is already the newest version (0.29.0-1).
    python-pip is already the newest version (8.1.1-2ubuntu0.4).
    The following packages were automatically installed and are no longer required:
    apt-clone archdetect-deb dmeventd dmraid dpkg-repack gir1.2-timezonemap-1.0
    gir1.2-xkl-1.0 grub-common gstreamer1.0-plugins-bad-videoparsers kpartx
    kpartx-boot libappstream3 libass5 libavresample-ffmpeg2 libbs2b0
    libdebian-installer4 libdevmapper-event1.02.1 libdmraid1.0.0.rc16 libflite1
    libgstreamer-plugins-bad1.0-0 libllvm5.0 liblockfile-bin liblockfile1
    liblvm2app2.2 liblvm2cmd2.02 libmircommon5 libparted-fs-resize0 libqmi-glib1
    libqpdf17 libreadline5 libsodium18 libzmq5 lockfile-progs lvm2 os-prober
    pmount python3-icu python3-pam rdate ubiquity-casper ubiquity-ubuntu-artwork
    ubuntu-core-launcher
    Use 'sudo apt autoremove' to remove them.
    0 upgraded, 0 newly installed, 0 to remove and 254 not upgraded.
    Reading package lists… Done
    Building dependency tree
    Reading state information… Done
    E: Unable to locate package python-enum32
    Reading package lists… Done
    Building dependency tree
    Reading state information… Done
    python3-dev is already the newest version (3.5.1-3).
    python3-numpy is already the newest version (1:1.11.0-1ubuntu1).
    python3-scipy is already the newest version (0.17.0-1).
    python3-wheel is already the newest version (0.29.0-1).
    python3-pip is already the newest version (8.1.1-2ubuntu0.4).
    The following packages were automatically installed and are no longer required:
    apt-clone archdetect-deb dmeventd dmraid dpkg-repack gir1.2-timezonemap-1.0
    gir1.2-xkl-1.0 grub-common gstreamer1.0-plugins-bad-videoparsers kpartx
    kpartx-boot libappstream3 libass5 libavresample-ffmpeg2 libbs2b0
    libdebian-installer4 libdevmapper-event1.02.1 libdmraid1.0.0.rc16 libflite1
    libgstreamer-plugins-bad1.0-0 libllvm5.0 liblockfile-bin liblockfile1
    liblvm2app2.2 liblvm2cmd2.02 libmircommon5 libparted-fs-resize0 libqmi-glib1
    libqpdf17 libreadline5 libsodium18 libzmq5 lockfile-progs lvm2 os-prober
    pmount python3-icu python3-pam rdate ubiquity-casper ubiquity-ubuntu-artwork
    ubuntu-core-launcher
    Use 'sudo apt autoremove' to remove them.
    0 upgraded, 0 newly installed, 0 to remove and 254 not upgraded.
    Reading package lists… Done
    Building dependency tree
    Reading state information… Done
    python3-h5py is already the newest version (2.6.0-1).
    python3-mock is already the newest version (1.3.0-2.1ubuntu1).
    The following packages were automatically installed and are no longer required:
    apt-clone archdetect-deb dmeventd dmraid dpkg-repack gir1.2-timezonemap-1.0
    gir1.2-xkl-1.0 grub-common gstreamer1.0-plugins-bad-videoparsers kpartx
    kpartx-boot libappstream3 libass5 libavresample-ffmpeg2 libbs2b0
    libdebian-installer4 libdevmapper-event1.02.1 libdmraid1.0.0.rc16 libflite1
    libgstreamer-plugins-bad1.0-0 libllvm5.0 liblockfile-bin liblockfile1
    liblvm2app2.2 liblvm2cmd2.02 libmircommon5 libparted-fs-resize0 libqmi-glib1
    libqpdf17 libreadline5 libsodium18 libzmq5 lockfile-progs lvm2 os-prober
    pmount python3-icu python3-pam rdate ubiquity-casper ubiquity-ubuntu-artwork
    ubuntu-core-launcher
    Use 'sudo apt autoremove' to remove them.
    0 upgraded, 0 newly installed, 0 to remove and 254 not upgraded.
    Reading package lists… Done
    Building dependency tree
    Reading state information… Done
    mlocate is already the newest version (0.26-1ubuntu2).
    The following packages were automatically installed and are no longer required:
    apt-clone archdetect-deb dmeventd dmraid dpkg-repack gir1.2-timezonemap-1.0
    gir1.2-xkl-1.0 grub-common gstreamer1.0-plugins-bad-videoparsers kpartx
    kpartx-boot libappstream3 libass5 libavresample-ffmpeg2 libbs2b0
    libdebian-installer4 libdevmapper-event1.02.1 libdmraid1.0.0.rc16 libflite1
    libgstreamer-plugins-bad1.0-0 libllvm5.0 liblockfile-bin liblockfile1
    liblvm2app2.2 liblvm2cmd2.02 libmircommon5 libparted-fs-resize0 libqmi-glib1
    libqpdf17 libreadline5 libsodium18 libzmq5 lockfile-progs lvm2 os-prober
    pmount python3-icu python3-pam rdate ubiquity-casper ubiquity-ubuntu-artwork
    ubuntu-core-launcher
    Use 'sudo apt autoremove' to remove them.
    0 upgraded, 0 newly installed, 0 to remove and 254 not upgraded.
    Requirement already satisfied: keras_applications==1.0.4 in /usr/local/lib/python3.5/dist-packages (1.0.4)
    Requirement already satisfied: keras_preprocessing==1.0.2 in /usr/local/lib/python3.5/dist-packages (1.0.2)
    Requirement already satisfied: keras_applications==1.0.4 in /usr/local/lib/python3.5/dist-packages (1.0.4)
    Requirement already satisfied: keras_preprocessing==1.0.2 in /usr/local/lib/python3.5/dist-packages (1.0.2)
    Requirement already satisfied: enum34 in /usr/local/lib/python3.5/dist-packages (1.1.6)
    swapon: /home/nvidia/JetsonTFBuild/swapfile.swap: swapon failed: Device or resource busy
    Looks like Swap not desired or is already in use
    dirname: missing operand
    Try 'dirname --help' for more information.
    dirname: missing operand
    Try 'dirname --help' for more information.
    PYTHON_BIN_PATH=/usr/bin/python2
    GCC_HOST_COMPILER_PATH=/usr/bin/gcc
    CUDA_TOOLKIT_PATH=
    TF_CUDA_VERSION=9.0
    TF_CUDA_COMPUTE_CAPABILITIES=5.3,6.2
    CUDNN_INSTALL_PATH=/usr/lib/aarch64-linux-gnu
    TF_CUDNN_VERSION=7.0.5
    ./tf_build.sh: line 20: /home/nvidia/JetsonTFBuild: Is a directory
    /usr/bin/python2: can't open file 'configure.py': [Errno 2] No such file or directory
    error: patch failed: tensorflow/contrib/lite/kernels/internal/BUILD:21
    error: tensorflow/contrib/lite/kernels/internal/BUILD: patch does not apply
    INFO: Options provided by the client:
    Inherited 'common' options: --isatty=1 --terminal_columns=80
    INFO: Reading rc options for 'build' from /home/nvidia/JetsonTFBuild/tensorflow/tools/bazel.rc:
    'build' options: --define framework_shared_object=true --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --define=grpc_no_ares=true --spawn_strategy=standalone --genrule_strategy=standalone -c opt
    ERROR: Config value opt is not defined in any .rc file
    Tue Oct 9 20:34:03 JST 2018 : === Using tmpdir: /tmp/tmp.swhwrsSoaA
    cp: cannot stat 'bazel-bin/tensorflow/tools/pip_package/build_pip_package.runfiles/org_tensorflow/tensorflow': No such file or directory

    ———————————-

    1. It’s not clear which Jetson you are running this on. Is it a TX1?
      There appear to have been many updates to the repository mentioned. If you *really* need version 1.6, you should revert to a GitHub commit that targeted that version.
      The swap file does not appear to be accessible.
      I do not know what command line sequence you called, so it's hard to guess the issue.

