MS Kinect V2 on NVIDIA Jetson TX1

With a USB firmware patch and an updated libfreenect2, the Microsoft Kinect V2 now runs on the Jetson TX1 Development Kit. Looky here:


For a stretch there, it was not possible to run the open source Kinect V2 driver, libfreenect2, on the Jetson TX1 because of an issue with the USB firmware. Fortunately, NVIDIA has issued a firmware patch (see the Jetson TX1 Forum, USB 3 Transfer Failures) which fixes the issue. As you might recall, Microsoft now offers the Kinect V2 as a two part kit: an Xbox One Kinect Sensor Bar along with a Kinect Adapter for Windows. You will need both an Xbox One Kinect sensor and the adapter for use with the Jetson, or the discontinued Kinect for Windows. The Kinect Adapter for Windows converts the output from the Kinect to USB 3.0. The advantage of this setup is that you can use the Kinect sensor from your Xbox One, or at least have an excuse to get an Xbox One + Kinect for “research” purposes.


The installLibfreenect2 repository on the JetsonHacks Github account contains convenience scripts for installing libfreenect2 and the USB firmware patch. First, get the repository:

$ git clone

Second, install libfreenect2 and compile the library and examples:

$ cd installLibfreenect2
$ ./installLibfreenect2

Third, you will need to patch the USB firmware:

$ ./

After installing the USB firmware patch, it is necessary to reboot the machine in order for the firmware changes to take effect.

When the machine reboots, you can run the example:

$ cd ~/libfreenect2/build/bin
$ ./Protonect

Some Notes

The installation of libfreenect2 in the video is on L4T 24.1, flashed by JetPack 2.2. CUDA is required. Both 32-bit and 64-bit versions of L4T are shown in the video; installation of libfreenect2 and the firmware patch is the same in both cases.

The Mesa libraries installed as libfreenect2 dependencies overwrite the Tegra OpenGL library, which causes issues. The fix as of this writing (July, 2016) is to link back to the Tegra version.

The repository contains a script which sets the CPU and GPU to maximum clock values. This will increase the Jetson TX1 performance at the cost of power consumption.

L4T 23.X Notes

The JPEG decompressor under L4T 23.X produces RGBA format, whereas the Protonect viewer consumes BGRA format. This makes the video appear with a purple hue.

Update 8-28-16: In the NVIDIA Jetson Dev Forum, user kassinen wrote:

Actually you can fix it by changing line 56 of viewer.h from:

typedef ImageFormat<4, GL_RGBA, GL_BGRA, GL_UNSIGNED_BYTE> F8C4;

to:

typedef ImageFormat<4, GL_RGBA, GL_RGBA, GL_UNSIGNED_BYTE> F8C4;

End Update.

The repository contains a patch which adds a simplistic algorithm to rearrange the bytes appropriately. If you intend to use this script with L4T 23.X, you will need to uncomment the line:

# patch -p 1 -i $PATCHDIR/bgra.patch

Also, if you plan to use this library in production L4T 23.X code, consider writing specialized code to perform the RGBA→BGRA conversion more efficiently.
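For illustration, the conversion amounts to swapping the red and blue bytes of each 4-byte pixel. The sketch below is a hypothetical, unoptimized scalar version of that swap, not the repository's actual bgra.patch; a production implementation would likely vectorize it (e.g. with NEON on the TX1) or fold it into the CUDA decode path.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative sketch only: swap the red and blue channels of a tightly
// packed RGBA buffer in place, yielding BGRA. Green and alpha are untouched.
void rgba_to_bgra(std::uint8_t *pixels, std::size_t pixel_count) {
    for (std::size_t i = 0; i < pixel_count; ++i) {
        std::uint8_t *p = pixels + 4 * i;
        std::uint8_t tmp = p[0];  // save R
        p[0] = p[2];              // B moves to the first byte
        p[2] = tmp;               // R moves to the third byte
    }
}
```

Note that a 1920×1080 color frame is roughly two million pixels, so at 30 fps this scalar loop runs some sixty million swaps per second on the CPU, which is why doing it more efficiently matters.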

For L4T 24.1, there is no patch applied as the JPEG decompressor produces BGRA format natively.


    • Hi Frank,
      Short answer is my usual waffle, it depends.

      It depends on what you measure. In the video above, the example app Protonect runs 4 panes. Pane 1 is the IR stream, Pane 2 is a color RGB stream, Pane 3 is a Depth frame, and Pane 4 is a registered color/depth frame.

      When data is received from the Kinect, it is decompressed with a JPEG decoder. On the TK1 and TX1 I believe that this is done in CUDA code. Both decompress frames faster than “real time”. The actual Kinect camera provides all streams at up to 30 fps, so both the JTK1 and JTX1 consume and decompress the streams with a minimum of lag.

      If all you were to do is collect the data, both the JTK1 and JTX1 are equivalent speed-wise, as they read and decode the data in parallel.

      If you want to manipulate the data, or display a complex representation, then in practice the JTX1 has about a 50%-100% performance gain. As an example, the registration of the color and depth maps is done in software in the Protonect example; you can see the difference in speed that cranking the CPU and GPU clocks on the JTX1 makes. While I haven’t tested the JTK1 using the same test, I would guess the frame rate on a cranked JTK1 would be a little less than the normally clocked JTX1.

      One important point is that the JTX1 has more computational reserve left while running the Kinect; that’s the advantage of having more CUDA cores and a different processor architecture. A lot of use cases gather data from the Kinect and process it on board. The Jetson then passes it to another control module. That would be things like identifying people, objects, hands, and so on. So while the display of something like a point cloud is great eye candy, in practice a lot of applications will build the point cloud, display the point cloud, but not do both at the same time. Or do something altogether different. Like I wrote, it depends 😉
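To make the software registration step discussed above concrete, here is a minimal pinhole-camera sketch of mapping one depth pixel into the color image. The intrinsics and the translation-only extrinsics below are invented for illustration; real values come from each Kinect's factory calibration, and libfreenect2's Registration class performs this mapping for you, including lens distortion correction that this sketch omits.

```cpp
#include <cassert>
#include <cmath>

// Toy pinhole intrinsics; real values come from the device calibration.
struct Intrinsics { float fx, fy, cx, cy; };

// Project a depth pixel (u, v) with depth z (meters) into the color image,
// assuming the color camera is offset by (tx, ty, tz) meters from the depth
// camera with no rotation -- a simplification of real registration.
void register_pixel(const Intrinsics &depth_cam, const Intrinsics &color_cam,
                    float u, float v, float z,
                    float tx, float ty, float tz,
                    float &cu, float &cv) {
    // Back-project the depth pixel to a 3D point in the depth camera frame.
    float x = (u - depth_cam.cx) * z / depth_cam.fx;
    float y = (v - depth_cam.cy) * z / depth_cam.fy;
    // Translate the point into the color camera frame.
    float xc = x + tx, yc = y + ty, zc = z + tz;
    // Project the 3D point into the color image plane.
    cu = color_cam.fx * xc / zc + color_cam.cx;
    cv = color_cam.fy * yc / zc + color_cam.cy;
}
```

Doing this for every pixel of a 512×424 depth frame at 30 fps, plus sampling the color image, is the per-frame CPU cost that the clock speeds above directly affect.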

  1. Hi kangalow

    Once we got Protonect working with the Jetson TK1 and Kinect V2, the next logical step was to find a driver and the tools needed to receive data from the Kinect V2 sensor in a way useful for robotics. Specifically, a data bridge between libfreenect2 and ROS. As a start, we just want the Kinect V2 to send data to a mobile base (i.e., Kobuki) to do SLAM. The great work done by Thiemo Wiedemeyer on the “IAI Kinect2” package seemed like a good next step (and perhaps the only next step). After several attempts to get the “kinect2_bridge” tool to work, I reached a stalemate, as follows:

    1. The current release of JetPack for the Jetson TK1 and TX1 does not support OpenCL, primarily due to issues related to installing it on the ARM7 architecture, and OpenCL seems to be a prerequisite for the “kinect2_bridge” tool (on a Jetson TK1 anyway). NVIDIA is understandably pushing CUDA for both the TK1 and the TX1.

    2. The current release of the “IAI Kinect2” package does not support CUDA (as implemented on the Jetson TK1 anyway). Several forks of the “IAI Kinect2” package have attempted to get the Jetson working, but ultimately there were issues in the root code, and Thiemo said: “Since I don’t own a Jetson, I can’t provide a release, therefore someone else has to do and maintain it. For now the answer would be: no/maybe, if someone volunteers.”

    So I am hoping you might have some insight into the following. Are you aware of any:

    1. new efforts to support OpenCL on the Jetson TK1?
    2. alternative drivers and the tools that would allow a bridge between libfreenect2 and ROS, using a Jetson TK1 or TX1?
    3. new efforts to get Thiemo’s “kinect2_bridge” tool working with the Jetson TK1 version of CUDA?

    The Jetson TK1 and TX1 show so much promise for robotics and the Kinect V2 is arguably the best depth sensor on the market, with respect to availability, cost and resolution. But if we don’t have a bridge between libfreenect2 and ROS, we can never realize that potential.

  2. Thank you for the reference to the OpenPTrack project and the potential ROS bridge! Sometimes these jewels of info just seem to evade one’s effort to find them. 🙂

    • You’re welcome. If you’re planning on using the Kinect V2 for robotics, you may want to also consider the Intel RealSense cameras. Here’s some info on the R200 and the Jetson:
      There’s a ROS bridge for it. The libraries are not quite mature yet on the Jetson (but mostly because the library is new), and the cameras need some tuning to work in any given environment. However, the camera does work outdoors and with ROS. The depth map processing and color registration are done in hardware onboard the camera itself, and the packaging physically lends itself to robotics. The packaging and power wiring alone will save you quite a bit of headache when mounting a camera on a robot.

  3. Has anyone succeeded in running Kinect V2 on the latest L4T 24.2? Compilation of Protonect leads to an ‘undefined reference to drm…’ error, and replacing doesn’t help.
