JetsonHacks

Developing on NVIDIA® Jetson™ for AI on the Edge

Intel RealSense Package for ROS on NVIDIA Jetson TX2

Intel provides an open source ROS package for its RealSense cameras. Let’s install the package on the Jetson TX2. Looky here:

Background

Intel is investing heavily in computer vision hardware, and 3D vision is one of the areas of focus. There have been several generations of RealSense devices; in the video we demonstrate a RealSense R200. An R400 has recently been announced and should be available soon.

The small size and light weight of the R200 make it a camera worth considering for 3D vision and robotics applications.

Installation

There are two prerequisites for installing the realsense_camera package on the Jetson TX2. The first is to install the camera driver library, called librealsense, on the Jetson TX2. We covered this installation in an earlier article.
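As a quick sanity check that librealsense is in place, you can look for the library itself. This assumes the default /usr/local install prefix used by the librealsense build; adjust the path if you installed elsewhere:

$ ls /usr/local/lib/librealsense*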

The second prerequisite, of course, is to install the Robot Operating System (ROS). A short article on how to install ROS on the Jetson TX2 is also available.
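To verify the ROS installation, rosversion will report the installed distribution, which should be kinetic for the setup described here:

$ rosversion -d
kinetic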

Install RealSense Package for ROS

There are convenience scripts to install the RealSense ROS package on the JetsonHacks GitHub account. After the prerequisites mentioned above have been installed:

$ git clone https://github.com/jetsonhacks/installRealSenseROSTX2
$ cd installRealSenseROSTX2
$ ./installRealSenseROSTX2 <catkin workspace name>

Where catkin workspace name is the name of the catkin workspace in which to place the RealSense ROS package. In the video, the workspace is named jetsonbot.
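For example, to match the workspace used in the video:

$ ./installRealSenseROSTX2 jetsonbot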

You can then launch an R200 node:

$ cd <catkin workspace name>
$ source devel/setup.bash
$ roscd realsense_camera
$ roslaunch realsense_camera r200_nodelet_rgbd.launch
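Once the nodelet is running, you can check that the camera is actually publishing. The exact topic names depend on the launch file; the /camera/color/image_raw topic shown here is typical of the realsense_camera package, but check rostopic list first:

$ rostopic list | grep camera
$ rostopic hz /camera/color/image_raw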

The RealSense ROS package contains configuration files for launching Rviz. If you have Rviz installed on the Jetson TX2:

$ cd <catkin workspace name>
$ source devel/setup.bash
$ roscd realsense_camera
$ rviz -d rviz/realsense_rgbd_pointcloud.rviz

Notes

There is a file called Notes.txt in the installRealSenseROSTX2 directory with short notes on installing Rviz and rqt_reconfigure, which help visualize the output from the RealSense camera and adjust camera parameters on the Jetson TX2.
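If you do not have those tools yet, the Notes.txt instructions boil down to something like the following (package names here assume ROS Kinetic):

$ sudo apt-get install ros-kinetic-rviz ros-kinetic-rqt-reconfigure
$ rosrun rqt_reconfigure rqt_reconfigure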

The installation above was performed on a Jetson TX2 running L4T 27.1, installed using JetPack 3.0. The scripts install Intel RealSense ROS package version 1.8.0.

The JetsonHacks script installs ros-kinetic-librealsense. In effect, this means that there are two installations of librealsense on the Jetson. The reason that librealsense needs to be built on the Jetson instead of installed from the ROS repository is that a kernel module named uvcvideo must be modified to recognize the RealSense camera formats. The JetsonHacks librealsense install covers how to build the module. You can remove the original installation if desired:

$ cd librealsense
$ sudo make uninstall
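You can confirm which copies are present; dpkg reports the ROS-packaged version, while the source build from the earlier article lives under /usr/local:

$ dpkg -l | grep librealsense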

Also, the ros-kinetic-librealsense package installs linux-headers in the /usr/src directory. These headers DO NOT match the Jetson TX2 kernel, so you should consider deleting them, along with the uvcvideo realsense directory:

$ cd /usr/src
$ sudo rm -r linux-headers-4.4.0-70
$ sudo rm -r linux-headers-4.4.0-70-generic
$ sudo rm -r uvcvideo-1.1.1-3-realsense
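To convince yourself the headers really do not match, compare against the running kernel:

$ uname -r

This reports a -tegra kernel version on the Jetson, not the 4.4.0-70 generic kernel those headers describe.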

The ROS repository also holds a ros-kinetic-realsense-camera package. That package is version 1.7.2 (as of March 2017). There is an issue with that particular version: the auto-exposure setting (lr_auto_exposure) does not work correctly, which makes the camera considerably less effective in varied lighting conditions. Therefore the script builds the package from source (version 1.8.0), where the issue has been addressed.
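After the script completes, you can confirm which version of the package was built:

$ rosversion realsense_camera
1.8.0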

Unlike most of the articles and videos on JetsonHacks, installing the Intel RealSense ROS package requires some prerequisites. While previous articles cover the steps involved, be aware that this is a little more complicated than most of the software installations on this site.

As always, the scripts made available from JetsonHacks only provide a guide for how to complete the described task; you may have to modify them to suit your needs.


25 Responses

  1. Thank you, this was really a huge help. I was able to get everything installed and running with the help of your tutorials here. Woot, Woot!

  2. Very helpful article and very good tutorials!
    I’ve been playing with ROS and an Intel R200 for several months on a laptop with an i7-4700HQ (and a decent graphics card, though I’m still not using it).
    I’m wondering about the performance difference between the TX2 and a desktop CPU+GPU (maybe an i7-6600k and a GTX 1050).
    I know there is a huge difference in terms of power dissipation, so the comparison is not fully fair, but I don’t know if the TX2 is worth the money compared to an open architecture for prototyping purposes (not production, of course).
    Did you have the opportunity to make such comparisons? (For ROS, at least with a given SLAM algorithm.) Thank you.

    1. There’s no good comparison between the two. The Jetson TX2 is built for embedded/mobile applications such as automotive, vehicles, robots, etc. The desktop is not. Different purposes. The RealSense camera has quite a bit of processing power itself and provides depth maps natively.

      So it depends on what you’re prototyping. If you’re building something for several years down the road, choosing the desktop might be a good selection. If you need to deliver a solution real soon now that is mobile and/or runs on batteries, the Jetson has enough CPU and GPU power for most tasks.

  3. Thank you kangalow.
    What you said makes sense.
    From a pure computational power point of view, should I take into account the GFLOPS value only (750 for the TX2 and 1981 for the GTX 1050 Ti in FP32)?
    Actually I don’t know if I can make the most of GPGPU power for ROS tasks (mainly SLAM, but also costmaps, exploration, etc.).

    1. Typically people prototype what they’re trying to accomplish so that they can explore the ideas associated with their project. After they get a feel for the requirements, they can make a more informed decision about their hardware solution. It sounds like in your case you would benefit from doing some prototyping on your PC to help understand the problem space.

  4. Suggestions for how to do that prototyping on a PC? For example, can we load ROS on top of, say, Debian 9? Maybe in a virtual environment to be safe? I’d love to do the image and neural net training on the PC rather than the TX2, then use the trained models on the TX2 while playing in the field. Does that sound practical?
    Thanks.
    Is there anyone else doing RealSense and TX2 with blogs or vlogs as good as yours?

  5. Thanks for the very helpful article series, Kangalow!

    I did everything exactly like you described, with an SR300 camera behind a powered USB 3.0 hub, and it works – at first.
    I got a not-so-flattering 3D image of myself displayed in rviz :-).

    However, after a while running rviz (maybe 15 minutes), some or all of the camera feeds freeze without any error in the roslaunch output.
    If I then try to launch the librealsense examples, I run into LIBUSB_ERROR_IOs. A few times, I also saw “rs.warn: interrupt e.p. timeout”.
    I’ve double-checked that USB auto suspend is off.
    To fix it, I have to unplug and replug the camera or the hub, but the issue always seems to come back.
    The contacts and cables seem to be good and the power supply is powerful enough; I think it’s a software issue.

    Is everything super stable on your end?
    I don’t have much experience with usb or low level linux, could you please help me track it down?
    Also, what brand/model/chipset of USB hub are you using?

  6. I’m using a different model camera, so there’s not a lot of help I can offer. My experience has been that this is all relatively low level USB stuff. The USB stack can be a little cantankerous at times.

    We’ve been using the AmazonBasics USB 3.0 7-port hub: http://amzn.to/2qzbQDL

    Thanks for reading.

    1. It’s the new RealSense dev kit Intel is selling. Now it’s either SR300 or ZR300, depending on the range you need. They are both supported out of the box by librealsense, and seem to be closely related to the R200, as many of their configurable parameters still start with r200_… . I got both; both work with the librealsense examples after your install process.

      I haven’t gotten the ZR300 to work with ROS yet, although I’m optimistic it will soon. It’s missing the _rgbd.launch config.

      Thanks, I’m getting the same hub you are; it will either fix the problem or prove that my hub was not at fault.

      Hopefully you’ll soon be able to update your article to mention both SR300 and ZR300 also work :-).

  7. Maybe this can be of use to someone else:
    – This also works well with the Intel ZR300 and SR300 RealSense cameras. Just take the more recent librealsense commit or manually add zr300_nodelet_rgbd.launch.
    – I’d recommend using the same AmazonBasics USB 3.0 7-port hub as Kangalow.
    Vantec USB 3.0 hubs, both the 7-port and the 4-port versions, work unreliably for this (timing issues; I haven’t tried to debug them).
    Switching from these hubs to the AmazonBasics one fixed the problems I was experiencing.

  8. Hi Kangalow,
    I’ve just got the RealSense working with ROS on the TX1 and TX2 with L4T 28.1, but it only started working when I removed ros-kinetic-librealsense with apt-get.

    Prior to removing ros-kinetic-librealsense I would get an error “/camera/driver – No cameras detected!” when running roslaunch realsense_camera r200_nodelet_rgbd.launch

    If I instead uninstall the original, non-ROS installation of librealsense, rather than removing ros-kinetic-librealsense, I still get the above error.

    Anyway, I’m not sure exactly why this works but any insight would be much appreciated!
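    For reference, the removal was just:

    $ sudo apt-get remove ros-kinetic-librealsense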

    Current working config:

    | Version | Your Configuration |
    |:--------|:-------------------|
    | Operating System | Ubuntu 16.04 LTS |
    | Kernel | 4.4.38-tegra |
    | ROS | kinetic |
    | ROS RealSense | 1.8.0 |
    | librealsense | Cannot locate [librealsense] |
    | Camera Type | Intel RealSense R200 |
    | Camera Firmware | 1.0.71.06 |

    (Before removing ros-kinetic-librealsense, the librealsense row read 1.12.1.)

  9. Hi Jim,
    I plan to swap my Orbbec Astra camera for an R200 or higher.

    Did you do:
    power measurements with amcl,
    TX2 CPU load,
    quality of the color camera in low/high luminosity (important in tracking and recognition)?

    And, more than anything, what do YOU think of both cameras?
    JetsonHacks is my new bible.
    Thanks, Jim

    1. Hi Vincent,
      The R200 is now out of production; Intel is bringing out the D400 series. It’s supposed to have much better performance. We shall see.
      As to all of the performance bits, it depends on the application. The R400 streams the depth map and images directly from the camera, so what the Jetson does with them will determine its CPU load. I would say that the RGB camera is pretty noisy in low light; it’s probably similar in performance to a webcam. I don’t know anything about the Astra. Thanks for reading!

  10. I’m having an error where the RealSense camera node works only one time. If I Ctrl-C the node and relaunch it, nothing works, and a SIGTERM flag is raised. Any subsequent launches of the RealSense node bring up pages full of red errors. I have not found a fix in the forums.


Disclaimer

Some links here are affiliate links. If you purchase through these links I will receive a small commission at no additional cost to you. As an Amazon Associate, I earn from qualifying purchases.
