# Intel RealSense Camera Installation – NVIDIA Jetson TX1

NOTE: This article has been updated for use with L4T 28.1+. Please see: Intel RealSense Camera librealsense – NVIDIA Jetson TX Dev Kits

Intel RealSense cameras can use an open source library called librealsense as a driver for the Jetson TX1 development kit. Looky here:

## Background

In earlier articles we talked about the Intel RealSense R200 Camera, which is a relatively inexpensive RGBD device in a compact package. The camera uses USB 3.0 to communicate with a computer.

Intel has made available an open source library, librealsense, on GitHub. librealsense is a cross-platform library which allows developers to interface with the RealSense family of cameras, including the R200. Support is provided for Windows, Macintosh, and Linux.

There are two major parts to getting the R200 camera to work with the Jetson. First, operating system level files must be modified to recognize the camera video formats. When doing development on Linux-based machines you will frequently hear the terms “kernel” and “modules”. The kernel is the code at the base of the operating system, the interface between the hardware and the application code.

A kernel module is code that can be accessed from the kernel on demand, without having to modify the kernel. These modules provide ancillary support for different types of devices and subsystems.

A module is compiled code which is stored as a file separately from the kernel, typically with a .ko extension. The advantage of having a module is that it can be easily changed without having to modify the entire kernel. We will be building a module called uvcvideo to help interface with the RealSense camera. Normally uvcvideo is built into the kernel; we will designate it as a module as part of our modification. We will modify uvcvideo to recognize the RealSense camera data formats.
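After the modified kernel and modules are installed, you can check whether uvcvideo is present. This is a hedged sketch, not part of the JetsonHacks scripts; it assumes a standard Linux /proc/modules, as L4T provides:

```shell
#!/bin/sh
# Sketch: check whether the uvcvideo kernel module is currently loaded.
# Assumes a Linux system that exposes /proc/modules, as L4T does.
if grep -q '^uvcvideo' /proc/modules 2>/dev/null; then
    status="loaded"
else
    status="not loaded"   # on the Jetson, try: sudo modprobe uvcvideo
fi
echo "uvcvideo is $status"
```

lsmod presents the same information in tabular form.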

The second part of getting the R200 to work with the Jetson TX1 is to build and install librealsense.

## Kernel and Module Building

Note: In the video above, the installation was performed on a newly flashed L4T 24.2.1 TX1 using JetPack 2.3

We have covered building the kernel for the Jetson TX1 in a previous article. Here are the major steps involved:

The script files to build the kernel on the Jetson TX1 are available on the JetsonHacks Github account in the buildJetsonTX1Kernel repository.

$ git clone https://github.com/jetsonhacks/buildJetsonTX1Kernel.git
$ cd buildJetsonTX1Kernel

The script getKernelSources.sh gets the kernel sources from the NVIDIA developer website, then unpacks the sources into /usr/src/kernel.

$ ./getKernelSources.sh

After the sources are installed, the script opens an editor on the kernel configuration file. In the video, the local version of the kernel is set. The stock kernel uses -tegra as its local version identifier. Make sure to save the configuration file when done editing.

Next, the script patchAndBuildKernel.sh patches one of the source files so that the kernel compiles more easily:

$ ./patchAndBuildKernel.sh

and proceeds to build the kernel and modules using make. The modules are then installed in /lib/modules/3.10.96[local version name]
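Because the modules land in a directory named after the running kernel's local version, it is worth confirming that the two agree before going further. A small sketch, assuming the standard /lib/modules layout:

```shell
#!/bin/sh
# Sketch: verify that a module directory exists for the running kernel.
# If the local version set in the kernel config does not match, modules
# such as uvcvideo.ko will not be found at boot.
kver=$(uname -r)
if [ -d "/lib/modules/$kver" ]; then
    echo "module directory found for $kver"
else
    echo "no /lib/modules/$kver - check the local version in the kernel config"
fi
```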

## Install librealsense

A convenience script has been created to help with this task in the installLibrealsense repository on the JetsonHacks Github account.

$ cd $HOME
$ git clone https://github.com/jetsonhacks/installLibrealsenseTX1.git
$ cd installLibrealsenseTX1
$ ./installLibrealsense.sh

This will build the librealsense library and install it on the system. It will also set up udev rules for the RealSense device so that the permissions are set correctly and the camera can be accessed from user space.

## USB Video Class Module

The third major step is to build the USB Video Class (UVC) driver as a kernel module.

Note: This step assumes that the kernel is located in /usr/src/kernel and that the kernel is to be installed on the board. If this is not your intent, modify the script accordingly.

The script applyUVCPatch.sh has the command to patch the UVC driver with the RealSense camera formats. The module can be built using the script:

$ ./buildPatchedKernel.sh

The buildPatchedKernel script modifies the kernel .config file to indicate that the UVC driver should be built as a module. Next, the script patches the UVC driver to recognize the RealSense camera formats. Finally, the script builds the kernel and modules, installs the modules, and copies the kernel image to the /boot directory.
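The .config change can be sketched as follows. CONFIG_USB_VIDEO_CLASS is the stock kernel symbol for the UVC driver, but the script's exact mechanics may differ; a scratch file is used here rather than the real /usr/src/kernel/.config:

```shell
#!/bin/sh
# Sketch of the kind of edit buildPatchedKernel performs: switch the UVC
# driver from built-in (=y) to module (=m) in the kernel config.
config=config.example
printf 'CONFIG_USB_VIDEO_CLASS=y\n' > "$config"
sed -i 's/^CONFIG_USB_VIDEO_CLASS=y$/CONFIG_USB_VIDEO_CLASS=m/' "$config"
grep '^CONFIG_USB_VIDEO_CLASS' "$config"   # CONFIG_USB_VIDEO_CLASS=m
```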

Note: The kernel and modules should have already been compiled once before performing this step. Building the kernel from scratch as described in the ‘Kernel and Module Building’ section above pulls a few shenanigans to get things to build properly the first time through.

One more minor point of bookkeeping. In order to save power, the Jetson TX1 will auto-suspend USB ports when not in use for devices like web cams. This confuses the RealSense camera. In order to turn auto-suspend off, run the following script:
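What such a script likely does can be sketched along these lines; this is an assumption about the mechanism, and setupTX1.sh itself is the authoritative source. Writing -1 disables autosuspend entirely:

```shell
#!/bin/sh
# Hedged sketch of disabling USB autosuspend via sysfs; setupTX1.sh may
# instead set usbcore.autosuspend=-1 on the kernel command line.
param=/sys/module/usbcore/parameters/autosuspend
if [ -w "$param" ]; then
    echo -1 > "$param"
    echo "USB autosuspend disabled"
else
    echo "cannot write $param (need root, or set usbcore.autosuspend=-1 at boot)"
fi
```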

$ ./setupTX1.sh

Once finished, reboot the machine for the new kernel and modules to be loaded.

## Conclusion

So there you have it. This has been a little bit more involved than some of our other projects here, but if you are interested in this kind of device, well worth it.

## Notes

There are several notes on this project:

• In the video above, the installation was done on a Jetson TX1 running L4T 24.2.1 immediately after being flashed by JetPack 2.3
• These scripts only support the 64-bit L4T series, 24.X
• One difference between L4T 24.2 and L4T 24.2.1 is that a soft link issue with Mesa drivers has been resolved. If you are using L4T 24.2, you may have to:

$ cd /usr/lib/aarch64-linux-gnu
$ sudo rm libGL.so
$ sudo ln -s /usr/lib/aarch64-linux-gnu/tegra/libGL.so libGL.so

• QtCreator and Qt 5 are installed as dependencies in the librealsense part of the install. There are QtCreator project files located in the librealsense.qt directory. The project files build the library and example files. If you do not use QtCreator, consider modifying the installer script to take QtCreator out.
• The librealsense examples are located in librealsense/bin after everything has been built.
• These scripts install librealsense version v1.11.0 (last commit 74ff66da50210e6b9edc3157411bad95c209740f)
• The RealSense R200 is the only camera tested at this time.
• Intel RealSense Stereoscopic Depth Cameras is a comprehensive overview of the stereoscopic Intel RealSense RGBD imaging systems

• You are welcome, and thanks for trying it out! Looking forward to seeing your robots in action.

1. Hi,

We have successfully got the Intel RealSense SR300 camera streaming in L4T R24.1 by modifying the script, and in L4T R24.2 using the steps given above.

Thanks kangalow for the detailed steps provided.

• You are welcome, and thanks for reading!

2. Firstly, thanks for the detailed explanation!

However, when I follow the steps and connect the Intel RealSense R200 to the USB 3.0 port of the Astro carrier board for the Jetson TX1, dmesg shows “can not enable port 2. Maybe USB cable bad?” The board can recognize a normal USB webcam, but not the RealSense.

Since it works well on Windows, I think it is not a cable problem. Any suggestions are appreciated!!

3. Hi,
Unfortunately I do not have any experience with the Astro Carrier Board. If you have it plugged into the USB 3.0 port, it should enumerate properly assuming that USB 3.0 is enabled. Hope you get it figured out!

I found that in the video uvcvideo is auto-loaded at startup. However, in my case, I have to run “sudo modprobe uvcvideo” manually after reboot.

(I am using the method stated in http://qiita.com/furushchev/items/a95acd598b7106fd52ba to boot from a SATA SSD. It seems that the Jetson TX1 will first examine the extlinux config inside the bootable eMMC partition, so I have to manually copy the generated kernel to /media/ubuntu/xxxxx/boot to boot the correct kernel.)

Is it normal that we should have to manually load the uvcvideo driver?

• In the method above, the UVC Video driver is marked as an external module and built. After it is built, it is copied to the correct place in the driver file hierarchy. As you note, it is a file called: ‘uvcvideo.ko’. The local version of the OS determines placement, so it’s important that the uvcvideo module be built using the same local version as the kernel that is being used.

With that said, is it possible that you have uvcvideo.ko in the wrong place? I’m not sure where all of the drivers are loaded from when booting from the SSD, but it may be that uvcvideo.ko is not seen during the booting process. When booting from the internal flash as shown in the examples, it just loads like all of the other modules.

• I’m getting the same issue, but I am using the TX1 breakout board. I followed the steps exactly as you laid out here, but the module is not loaded at boot, so the R200 does not activate. I double-checked the module location, and it is in drivers/media/usb/uvc/, which looks right. Is there something obvious that I might have overlooked?

• What is the result of:
$ uname -a

Is the camera plugged in when you boot the machine?

• Thanks for the response! The camera is indeed plugged in at boot. I wound up trying to use JetPack 2.3 (instead of 2.3.1, the latest) and it appeared that the output of my build was the same as yours. I occasionally get different errors (in addition to the one above) now, but it appears that whether the uvcvideo module loads at boot is not consistent.

uname -a: Linux tegra-ubuntu 3.10.96- #2 SMP PREEMPT Sun Feb 26 02:05:44 UTC 2017 aarch64 aarch64 GNU/Linux

When the module decides to load, it recognizes the camera as the R200, but fails when it attempts to create a debugfs directory. I verified that the udev was set correctly, and the rule was added to rules.d properly (though the executable that was added does not exist in /usr/local/bin, and adding it does not change anything).

• It looks like I should have used a different delimiter for the local version; the output of uname -a above should read 3.10.96-(local-version).

• I would advise having a local delimiter of some sort; remember that it must begin with a dash ( ‘-‘ ). Have you turned off USB autosuspend? If USB autosuspend is on, things tend to get tricky. What executable are you looking for in /usr/local/bin? Do you get the same results when you do a plug/replug on the R200, and what type of USB hub are you using?

• Exactly, I used angle brackets to denote the local version, but they got swallowed up by the commenting system. The local version actually looks like 3.10.96-test, or something similar. I turned off USB autosuspend using setupTX1.sh, and looking at the file in /boot/, it is set to -1. I have also tried setting the usb_owner_info to 2 instead of 0, but that did not appear to change anything. The executable is /usr/local/bin/usb-R200-in, which is what is specified by the rule in 99-realsense. It might be the other one; I’m going from memory on this. This is from the librealsense script install-r200-udev-fix.sh.

I’m not using a hub; I’ve connected the camera directly into the USB port, and I have a mouse+keyboard combo going into the other USB port. When I do a plug/replug, it does do the same thing (can not enable port 2. Maybe USB cable bad), but the error also varies between the debugfs directories, “device not accepting address”, and the one above. However, the debugfs error only appears when the uvcvideo module loads at boot, which it does not always do. When the above error occurs, lsmod almost always shows no uvcvideo loaded.

• Ah, the html thingy. I hate that about comments on here. As for the /usr/local/bin executables, I didn’t copy them over and don’t use them. I seem to recall there being issues if they were installed. I rebuilt a kernel yesterday following the instructions above. Fortunately it’s been long enough that I’ve forgotten the process and need to do it step by step. It worked fine here, so I don’t know how to fix the issue that you’re having. I did use a USB 3.0 hub yesterday; unfortunately I had to reflash the system for another project or I would have tried it straight into the USB plug now. I seem to remember that it worked when plugged directly into the port, but it’s been a while since I actually wrote this article and thankfully I forget such things rather easily. At this point I can’t think of anything else that may be causing the issue. Hopefully you can hunt it down on your own and let us know if you find a fix.

• No worries, thank you for talking it through with me! I’ll try the step-by-step instructions again and see what happens. One last thing that I thought of: do you know of any firmware updates on the R200 itself that might cause incompatibility with the older versions of the kernel (since the steps call for JetPack 2.3 instead of 2.3.1)?

The only other thing I can think of is that the scripts grab 24.2 even though they are built with 24.2.1 (and the GitHub repo suggests that the commands should be run after flashing with 24.2 instead of 24.2.1 as in the video). Would that be anything to worry about?

• Glad to hear you got it working!

• I have not updated the R200 firmware since I received it, so that may be a difference. I should update the kernel script, but since both 24.2 and 24.2.1 report the same kernel name I always just thought that they were the same. I’ve used both .2 and .2.1 and not seen any difference with the 24.2 kernel. Unfortunately this whole process is a lot like the children’s show ‘Sesame Street’, where you play “One of these things is not like the other, one of these things just doesn’t belong”. Getting this to work in the first place was a multi-week process; I’ve been hesitant to change or update it since then, since it’s so much work.

• Absolutely agree with the analogy! I will keep poking around, and report back if I find anything that could contribute to why it won’t work (with the exact same setup). Thank you for your help!

• For anyone else who might have this problem: try using a powered USB 3.0 hub. After doing some digging around the internet, some have written that the R200 required the addition of an external hub (with its own power supply). That seemed to do the trick for me!

4. When I run buildJetsonTX1Kernel/scripts/installKernelSources.sh, at the end it doesn’t open the terminal. I have done this before and it worked. Any idea on how to open this manually? Thanks heaps for the videos.

• It’s make xconfig that’s not working.

5. All good, I hadn’t run sudo apt-get install g++.

• Glad you got it to work! Thanks for watching the videos.

6. Hi, I have just started working on the TX1 and I’m trying to use the R200. My TX1 has L4T 23.2 with 32-bit Ubuntu. Should I also follow the same procedure as you have outlined for 24.2.1?
Thanks

• Hi Dhiraj, A major issue is how to handle the UVC video module. I have not attempted to build that kernel onboard a Jetson because of the mixing of architectures (64-bit kernel with 32-bit user space). The ideas are the same, but the procedure would probably be quite different. Thanks for reading!

7. Hi, thank you for your video. But I met an error: when I ran “./buildPatchedKernel.sh”, the terminal showed “kernel changes cancelled”. Can you tell me what to do about it? Thank you very much.

8. Hello, I am stuck right now because, although there was no error while installing the library, I could not see uvcvideo after typing the ‘lsmod’ command. While trying to catch the error, I found a message that might be relevant to uvcvideo not being installed. After typing ./installLibRealSense.sh or ./, I got the following:

Err:27 http://archive.ubuntu.com/ubuntu trusty/main arm64 Packages 404 Not Found [IP: 91.189.88.152 80]

Could it be because the necessary package was not installed? Please help me with this!

9. This is not enough information to provide any useful help.

10. After following the directions exactly, the uvcvideo module does not work for me. When I run:

$ lsmod
the module does not come up. So I ran:
$ dmesg | grep uvcvideo

This gives me an “Unknown video format” error. After many hours of trying to figure it out, I think the problem may be that the RealSense video formats are not being patched correctly, but I’m not sure how to solve that problem. Some more information:

$ lsusb shows that the device is being recognized, so it is not a USB 3.0 problem.
I followed your videos on flashing the Jetson TX1 and building the modified kernel
I tried with Jetpack 3.0, 2.3.1, and 2.3 – Same error in all situations
I tried modifying the script to run the realsense-camera-formats_ubuntu16.patch instead of realsense-camera-formats.patch

Any guidance would be much appreciated!

There isn’t enough information to help. What version of L4T are you using? How did you compile the kernel? Did you cross compile the kernel, or compile it on board? What is the result of:

$ uname -r

Is that result the same as the local version of the kernel that you compiled?
That you cannot lsmod the module indicates that it is not loaded. Is the camera plugged in? If it is not, then the module does not load. Each version of JetPack has different source code for the kernel; did you modify your install to take that into account?

11. I’m having a lot of trouble saving infrared and depth files (extension is .avi) using opencv2 and Intel’s RealSense.

Here are the hardware specifications:

embedded platform : Nvidia Jetson TX1

camera : Intel RealSense R200

libraries used : opencv2, librealsense, ROS(robot operating system)

This is the part of the code

// 2. get the intrinsic values from the device
// dev is an instance of the class device from librealsense
// any instances ending with _intrin are of the class intrinsics, also from librealsense
_infrared_intrin = dev.get_stream_intrinsics( rs::stream::infrared );
_depth_intrin = dev.get_stream_intrinsics( rs::stream::depth );
_color_intrin = dev.get_stream_intrinsics( rs::stream::color );

// 3. Create the Mat frame for infrared, depth, and color
// Create infrared image
cv::Mat infrared( _infrared_intrin.height,
_infrared_intrin.width,
CV_8SC1,
(uchar *)dev.get_frame_data( rs::stream::infrared ) );

// Create depth image
cv::Mat depth16( _depth_intrin.height,
_depth_intrin.width,
CV_16SC1,
(uchar *)dev.get_frame_data( rs::stream::depth ) );

// Create color image
cv::Mat rgb( _color_intrin.height,
_color_intrin.width,
CV_8UC3,
(uchar *)dev.get_frame_data( rs::stream::color ) );

cv::Mat depth8u = depth16;

// 4. Create the display window for each
// isColor1 = (infrared.type() == CV_8UC3);
//cv::cvtColor( infrared, rgb, cv::COLOR_BGR2RGB );
cv::imshow( "Infrared Image", infrared );
cvWaitKey( 1 );

// isColor2 = (depth8u.type() == CV_16U);
// isColor2 = True;
depth8u.convertTo( depth8u, CV_8UC1 /*,255.0/1000*/ );
cv::imshow( "Depth Image", depth8u );
cvWaitKey( 1 );

isColor3 = (rgb.type() == CV_8UC3);
cv::cvtColor( rgb, rgb, cv::COLOR_BGR2RGB );
cv::imshow( "RGB Image", rgb );
cvWaitKey( 1 );

// if a flag is triggered, then the program will save the recording using VideoClass.
if (flag == 1){

// to do : we need to implement such that only ONE folder is created
// every time we press the record button
//current_time();

if (!writer1.isOpened()){
ROS_INFO("Open the writer Infrared....");
std::string filename1 = real_filename + "/infrared.avi";
writer1.open(filename1, codec, 10, infrared.size(), isColor1);
}
if (writer1.isOpened()){
ROS_INFO("Starts Recording Video_Infrared....");
writer1.write(infrared);
cvWaitKey( 10 );
}

if (!writer2.isOpened()){
ROS_INFO("Open the writer Video_DEPTH....");
std::string filename2 = real_filename + "/depth.avi";
writer2.open(filename2, codec, 10, depth8u.size(), isColor2);
}
if (writer2.isOpened()){
ROS_INFO("Starts Recording Video_DEPTH....");
writer2.write(depth8u);
cvWaitKey( 10 );
}

if (!writer3.isOpened()){
ROS_INFO("Open the writer Video_RGB....");
std::string filename3 = real_filename + "/rgb.avi";
writer3.open(filename3, codec, 10, rgb.size(), isColor3);
}
if (writer3.isOpened()){
ROS_INFO("Starts Recording Video_RGB....");
writer3.write(rgb);
cvWaitKey( 10 );
}
}

VideoCapture was working fine for the three streams (Infrared, Depth, and RGB), but when VideoWriter was used to save the files and the videos were played back, the infrared and depth files were corrupt. Any help will be appreciated, and if you are confused about my question, please let me know.

If you would like to see how videos are played, I will send you the screenshot of it.

12. What do you think are the best value stereo (depth) cameras for the TX1 for typical computer vision applications like object (autonomous navigation) and facial recognition/detection?

Logitech C922?
Intel Realsense?

Thanks.

• Depends on the application. To get started, here’s an article: https://stimulant.com/depth-sensor-shootout-2/

The RealSense cameras are being refreshed this month (the D400 series), many people are interested in seeing how well they perform.