kangalow

849 Comments on kangalow

  1. I get an error:

    Kinect camera test
    Number of devices found: 1
    Could not open audio: -4
    Failed to open motor subddevice or it is not disabled.Failed to open audio subdevice or it is not disabled.Could not open device

    • Hi Alessandro

      Try disabling FREENECT_DEVICE_MOTOR and leaving only FREENECT_DEVICE_CAMERA. You have to do this in each place where the Kinect device is initialized, for instance in:

      wrappers/c_sync/libfreenect_sync.c if you want to use c_sync wrapper
      examples/glview.c

      In particular, to disable the motor subsystem, remove FREENECT_DEVICE_MOTOR from:

      freenect_select_subdevices(ctx, (freenect_device_flags)(FREENECT_DEVICE_MOTOR | FREENECT_DEVICE_CAMERA));

      so the line becomes:

      freenect_select_subdevices(ctx, (freenect_device_flags)(FREENECT_DEVICE_CAMERA));

      Of course this is only a workaround; it does not allow use of the motor, at least on the 1473 Kinect model.

      good luck!

      -Alex

  2. Hi, I’m following your guide to install OpenNI2 on my Jetson TK1 to be used with the Asus Xtion Pro Live on my robot.
    I have a doubt: do I need the libfreenect driver? I thought it was needed only for the Microsoft Kinect… isn’t it?

    Anyway, good job… this is a really clear guide!!!

    • Hi Myzhar,
      The libfreenect driver is a device driver for PrimeSense based devices. The Kinect, the Asus XTion Pro Live, and the Occipital Structure Sensor are all based on the PrimeSense chipsets. It is my understanding that you can use libfreenect as the driver, and must configure it as an OpenNI2 driver as noted above. Of course you could always try it without it, and see what happens.

      When they first started, libfreenect and OpenNI were two “competing” projects and were not at all compatible. Over the last couple of years, people started integrating OpenNI with libfreenect as the device driver. It is unclear to me what the minimal configuration is. I believe that ROS also has some integration with depth cameras too, including the ASUS.

      Your robot looks like it’s coming along; it was great to see the videos from Maker Faire!

    • Hi moreteavicar,
      This is just using mostly the ARM cores. I got sidetracked on to other things, but would like to revisit the Kinect project using the GPU for mapping the scene in GLSL.

  3. Thanks for your instructions.
    After upgrading there is no audio
    device anymore. Do you have any ideas?
    The host is 12.04.

    Best Regards
    Frank

    • Hi Frank,

      You are welcome. For the audio problem:

      I have upgraded my two Jetsons here, and both have working audio after the upgrade. I don’t think it has anything to do with the actual process of upgrading. However, it is always hard to tell what changes during the production runs on these boards. Both of my boards were relatively early, and built at around the same time.

      I have noticed that someone else is having troubles on the Jetson Forum:

      https://devtalk.nvidia.com/default/topic/789386/embedded-systems/21-1-vanishing-audio/

      so hopefully we can get help there. Don’t hesitate to post on the forum and try to get help, there are a lot of good people on there.

      I would have felt better if the boards I have were having audio issues, that way I could work on fixing the issue. As it is, I don’t know how to replicate the issue.

      Thanks for watching the video!

  4. Hi. I have a question for you. Does increasing the swap memory and installing an SSD increase the performance in terms of data logging to disk?

    I’m trying to log data at 60 frames per second from a camera (640×480). But, it doesn’t go more than 20-25 frames per second even though the camera supports it and the program supports it.

  5. Hi Rahul,
    Although it’s hard to tell for any given application without actually testing it out, the installation of the SSD will definitely make your disk access and writing much faster. For a data acquisition program, the swap memory probably won’t get you much more performance, since the machine is actually writing to disk when it’s swapping memory in and out. Swapping memory introduces a little bit of a performance hit as the pages get swapped in and out of RAM.

    With that said, the first thing to test in your application is whether you are actually getting the requested frame rates. Capture a few seconds of video to memory, then save it to a file. Uncompressed, one second of a 640×480, 60 fps stream is about 74 MB (roughly 640×480 pixels × 4 bytes × 60 frames). Examine the file to make sure that it is actually saving 60 fps.

    SATA is currently level II on the Jetson, 2.5 Gb/s, which is about 400-500 MB/s give or take, so you can certainly fit the stream through the pipe. The SSD is built for SATA III, so it shouldn’t have any trouble keeping up with the writing.

    However, remember that you may have to thread the program properly to write to disk while you are capturing, so that you use more than one CPU core (a rough sketch follows below).

    Hope this helps
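
    As a rough illustration of the threading idea above, here is a minimal C++11 producer/consumer sketch; the frame size and the grab/write calls are placeholders, not from any particular camera API:

    #include <condition_variable>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    typedef std::vector<unsigned char> Frame;    // placeholder frame buffer

    std::queue<Frame> pending;                   // frames waiting to be written
    std::mutex m;
    std::condition_variable ready;
    bool done = false;

    void captureLoop() {                         // runs on one core
        for (int i = 0; i < 600; ++i) {          // e.g. 10 seconds at 60 fps
            Frame f(640 * 480 * 4);              // a grabFrame(f) call would go here
            {
                std::lock_guard<std::mutex> lock(m);
                pending.push(f);
            }
            ready.notify_one();
        }
        { std::lock_guard<std::mutex> lock(m); done = true; }
        ready.notify_one();
    }

    void writerLoop() {                          // drains the queue to disk on another core
        for (;;) {
            std::unique_lock<std::mutex> lock(m);
            ready.wait(lock, []{ return !pending.empty() || done; });
            if (pending.empty() && done) break;
            Frame f = pending.front();
            pending.pop();
            lock.unlock();
            // a writeToDisk(f) call would go here
        }
    }

    int main() {
        std::thread capture(captureLoop), writer(writerLoop);
        capture.join();
        writer.join();
        return 0;
    }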

  6. Hi, I have installed L4T 21.1 as you did. I also installed OpenCV following your steps. But I’m getting errors trying to compile a program.

    /usr/bin/ld: cannot find -lopencv_nonfree
    collect2: error: ld returned 1 exit status
    ubuntu@tegra-ubuntu:~/Downloads$

    Does it come with the opencv_nonfree library? Do you know how to install it?

    Thanks in advance.

    • The NVIDIA Jetson does not officially support the opencv_nonfree library, which means that algorithms such as SIFT and SURF are not available without building them yourself.

  7. Hi, I’m having trouble with the opencv_nonfree library in L4T R21.1. I can’t compile some programs. Do you know how to solve it? Thanks in advance.

    • The NVIDIA Jetson does not officially support the opencv_nonfree library, which means that algorithms such as SIFT and SURF are not available without building them yourself.

      • I might be missing something, but I’m trying to compile OpenCV following the guidelines from http://elinux.org/Jetson/Installing_OpenCV. However, I’m getting the following error.

        .
        .
        .
        [ 75%] Building CXX object apps/traincascade/CMakeFiles/opencv_traincascade.dir/imagestorage.cpp.o
        Linking CXX executable ../../bin/opencv_traincascade
        [ 75%] Built target opencv_traincascade
        make[1]: *** [modules/gpu/CMakeFiles/opencv_gpu.dir/all] Error 2
        make: *** [all] Error 2
        ubuntu@tegra-ubuntu:~/opencv-2.4.9/build$

        In both L4T 21.1 and L4T 21.2 I’m getting the same error, although in L4T 21.2 the error appears at around 85%. I’m compiling the library because I’m using some modules that are not in the prebuilt version.

        Any help will be highly appreciated.

        Thank you in advance, folks!

  8. I’m trying to follow your steps. I ran the first command, chmod +x JetPackTK1-1.0-cuda6.5-linux-x64.run, but when I’m about to run it, two extra files appear with the name ‘invalid encoding’. Any idea what I am missing?

    Thank you.

  9. I think I lost you on the comparison with Big Iron. The TK1 takes ~200ms per image and the K40 takes ~2ms per image. These are 2 orders of magnitude of difference, not one. But, perhaps, I’m missing something.

  10. Hi,
    Any directions on how one can debug OpenCV code and place step-by-step breakpoints to debug I/O pins?
    You said JTAG is not required, but then what is the alternative?

    BR
    General_heat

    • Hi General_heat,

      If you have to debug hardware, then JTAG is usually the easiest way to go. The blog post was a joke about how difficult it is to use JTAG (or hardware debuggers in general) when you are writing high-level software. In the vast majority of cases, if you are writing OpenCV code you are not worrying about what is showing up on I/O pins. JTAG is like any other tool; it is appropriate to use when needed. In my experience, when working with hardware engineers, that is one of the relatively few ‘software’ tools that they use, so they tend to use hardware debuggers even when software-based debuggers are more appropriate. In most cases, low-level interface libraries/drivers should be written as an abstraction that connects the hardware (such as I/O pins) to much higher-level constructs such as OpenCV.

  11. I’m having a couple of issues with the cuDNN install for Caffe, and they all seem related to the MDB map size.

    The make all reported a large integer truncation warning for convert_mnist_data.cpp, and it also reported the same warning for db.cpp

    Then the make runtest errored out on db.hpp with a MDB_MAP_FULL: Environment mapsize limit reached.

    I believe the solution for both is to lower the mapsize to 1GB (1073741824).

    There is a setting for this in both cpp files.

    I’m not sure how this happened. I’m positive I downloaded the correct R1 version for the Jetson TK1, and it uses a 32bit version of Ubuntu.

    • Interesting. I encountered the truncation warning also and didn’t think too much about it. However in my case, ‘make runtest’ did not have any issues and performed all of the tests as seen in the video.

      Not much help, but the compilation and tests were done immediately after a clean install of L4T 21.2, CUDA 6.5 and OpenCV using JetPack 1.0. Caffe and cuDNN were installed using the gist scripts on Github noted in the blog post.

      • I haven’t been able to replicate the problem.

        I am going to try cuDNN RC 3 with the new L4T 21.3 release and see if there are any issues, but it’s hard to address an issue I haven’t replicated.

      • Here’s something that might be causing the issue that people have reported on Github: “I think this issue is due to the Jetson being a 32-bit (ARM) device, and the constant LMDB_MAP_SIZE in src/caffe/util/db.cpp being too big for it to understand. Here’s the whole line:

        const size_t LMDB_MAP_SIZE = 1099511627776; // 1 TB

        The solution suggested by Боголюбский Алексей of using 2^29 (536870912) instead works at least well enough to get all the tests to run successfully”.
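
        For reference, with that suggestion applied, the line in src/caffe/util/db.cpp would read something like this (a sketch; the surrounding code is unchanged):

        const size_t LMDB_MAP_SIZE = 536870912;  // 2^29 = 512 MB, fits in a 32-bit size_t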

  12. Hey, I was wondering whether your OpenCV installation resulted in the OpenGL-related functions in OpenCV actually working? When I tested for OpenGL in the samples it came up as “BUILT WITHOUT OPENGL”.

    How do I build this module without wasting an hour reinstalling the whole library?

  13. I have not tried OpenCV in those circumstances. Typically I don’t use the OpenGL related functions as I tend to separate out those functions into my own render loops. Sorry I can’t be of more help.

  14. Hi kangalow,
    I faced Unity lockups using Chromium with the latest 19.4 release of L4T, so I decided to continue using Firefox with v21.2 as well, even though I would prefer to use Chrome… do you think that it is working better now?

    Thank you for the scripts, I will modify the guide on my blog replacing my “manual” tweak installation with your scripts ^_^

    • Hi Myzhar,

      I’ve found that the system as a whole is a lot more stable when the CPU is set into performance mode. For me, I’ve noticed that as long as there aren’t a lot of windows/tabs open in Chromium, the system is relatively stable. However, it still does crash from time to time but it’s probably related to Jetson rather than Chromium itself, probably some type of driver issue. Browser is a personal preference, but not being able to download a .ZIP file in Firefox is a big minus for me as I tend to download a lot of packages off of Github using that feature.

      It’s probably a good idea to take the scripts in the repository and modify them to your taste. You’ll probably want to run different configurations and applications when placing them on your robot. I tend to flash the boards a lot, so I thought the scripts might be useful for setup for people when they’re just starting out.

      • Hi,
        In the last few days I have noticed a really strange behaviour on my Jetson.
        Your script to maximize performance has always worked very well, but for a couple of days my Jetson has been starting randomly with only one or two cores active.
        If I run the “maxPerformance.sh” script with “sudo” I get an error accessing “/sys/devices/system/cpu/cpuX/online”.
        To make the script work correctly I have to run it under “sudo su”.

        Really strange!

        Have you ever faced a similar issue?

  15. Hi, I was a little bit rough when I disassembled my PS3 Eye; I accidentally scratched the board with my screwdriver and knocked the resistor R29 (down left) out into nowhere…. Could you please measure and tell me the value of that resistor? I don’t want to buy a new camera because of that 2-cent mistake.
    Thank you and Greetings

  16. Hello. I’m still rather noob-ish at using Linux.

    I ran the Gist script and followed it to ensure there were no errors along the way. It was completed problem-free, but when I restart my Jetson board, it brings me back to Ubuntu. I’m not sure what I did wrong.

    • Two questions. Which version of L4T are you running? You must be running a version > 21.1.

      The second question, is your SD card in the slot when you boot the machine up?

      • I feel sheepish. I didn’t notice that I had to be running a certain version to do this.

        I can’t find version info as system settings becomes unresponsive after I open it up, but I am pretty certain I should need an update. I haven’t connected it to the internet since I first got the board running.

        The SD card remains in its place at all times, but I will update L4T before I try again.

        • No worries. The board is shipped with a version of L4T 19.x. The newer version (L4T 21.x) uses a different bootloader which allows for the Jetson to boot from the SD card. This in turn allows one to boot straight into Android when the SD card is in the slot, as the newer boot loader looks to boot from the SD card first.

          Good luck!

  17. I’m trying to get something similar working, but my syba pcie card doesn’t seem to be detected. Did you have to do anything special, or was it just plug-and-play? e.g. lspci doesn’t show anything beyond the TK1’s builtins, and there is an error message about link 0 being down during probing.

    • I did not have to do anything special. Even if it doesn’t show up as a USB hub, it should show up in lspci. I’m assuming that you have reseated the card a couple of times. I am running L4T 21.2, if that’s of any help.

      • Ok, maybe I have a bad card or board. Yes, I did try different combinations of replugging and power cycling, to no avail. I tried both 21.2 and the new 21.3. Thanks all the same =)

  18. Well done, enjoy the sense of humor. Maybe not the most important ingredient, but definitely a dependency. Takes me back to the days of hacking the ACSI interface so I could sink about $300, and untold hours, into connecting a 20 MB hard drive to my Atari ST. Trust me, drugs wouldn’t have helped. So my dilemma is: do I spend the hours hammering out a driver for my Broadcom BCM4352, or just drop $20 on one that already works, and live without 802.11ac, which probably wouldn’t work anyway. That is, until the next upgrade.

    • The drugs would have helped me forget. The Intel 7260 has 802.11ac, and has built in support in the L4T 21.x releases. The new releases just require you to download the 7260 firmware:

      $ sudo apt-get install linux-firmware

      and after that it is up and running. I’ve had issues with not being able to see access points through the pull down, but if the SSID is known you can just type it in. If you recompile the module, then everything works as normal. I haven’t tested L4T 21.3 yet to see if they have fixed that particular bug.

      I think you’ve run into the dilemma that all dev kits have, or maybe Linux in general. How much time do you spend on stuff that ultimately is a $20 problem, and doesn’t really affect your project or enjoyment? Personally, I had been away from Linux long enough to need to relearn the module building skills, so I bit the bullet and learned how to do it. The unfortunate part is it’s one of those things where you need to ‘know’ so many parts of the architecture to jam it in there that you get overwhelmed by the nomenclature. For the Intel wireless, there were the Module, Driver, and Firmware you needed to know about, but no easily accessible map telling you what each one of those was named or its function (or even that you needed all three). One can make educated guesses, but things like ‘iwlwifi’ don’t mean a whole bunch to the noob.

      With the above guide, it shouldn’t take too long to get the BCM4352 up and running, but certainly if you don’t enjoy that type of thing (or value your time at all), it’s probably easier and cheaper to get another wireless card.

      • Completely agree. There is no overall map. My entire life with Linux has been “do the next step, fix the error, rinse and repeat”. And this is not only my own development projects, but the FDA-regulated servers I work on for my day job as well. Just about every time, even though it eats up a ridiculous amount of time, I’m amazed at how much I learn.

        Your guide is tempting me back in. It’s just that when I start, I can’t seem to quit until it works. The older I get, the less I believe that sleep is overrated.

        Oh, and as bad as the terminal program is, I absolutely refuse to ever touch vi again. Lucky for me there are many alternatives. If I’m given the choice of begging in the street to feed my kid or use vi, I’m getting the organ grinder and tin cup out.

        I may have forgotten to say it, but thanks for taking the time to post this. Very well done, and helpful.

  19. Hey,

    I was there as well, it was an awesome experience. I got to bring a Jetson TK1 board home with me 🙂 I look forward to using it for some projects I have in mind.

    I was at the NVIDIA part of the EXHIBIT but too bad we didn’t meet, maybe next year? 🙂

  20. I spoke a lot with the guys from Stereolabs. That camera seems really amazing. My only concern is about the distance between the two cameras. The cameras are 12 cm apart, so the minimum measurable distance is over 1 meter. Really good for outdoor use, but not useful for indoor operation when used standalone.
    What is really amazing is the USB3.0 and the 120 fps at 640×480!!!

  21. I was just able to handle it for a few minutes, it seemed to be well packaged. It struck me that it’s designed for outdoor use. In an overview of the sensors at the show, it seems that there is currently no ‘all in one’ solution that works both indoors and out for close sensing and far sensing. That’s not surprising, that’s why a wide variety of sensors have been created over the years, each with their own niche.

    It is a little surprising, though the market is still very young, that there isn’t a sensor package that combines a couple of different technologies to provide both near and far sensing indoors.

    • What’s the accuracy or axial resolution of this camera?
      In other words, if someone uses this camera as a motion tracking device, what’s the smallest motion that can be detected by this device?

      • Hi Milad,
        It’s not easy to speak about “axial resolution” in a stereo system because it’s a parameter that depends on the complexity of the environment. What I can say is that you cannot reach the resolution of the ZED in any other stereo system, because it’s the only one that has full HD at that high a frame rate.
        Furthermore, the algorithms used by the Stereolabs developers generate a depth map more well defined than I have ever seen in any other stereo vision system I used in my past research.
        Finally, you stimulated my curiosity, so maybe in the next few days I will run a full test to understand the real definition capabilities of the camera.
        I will put the results on my blog, and I’m sure that Kangalow will relay them here, since I will use a Jetson TK1 and a Jetson TX1.
        Stay tuned
        Walter

    • The CTO spoke extensively about implementing their algorithms on the Jetson TK1. However, note that this is not a currently shipping product. I’m not sure what platforms they plan on supporting. I’ll note that representatives from STEREOLABS were speaking with the Jetson marketing team, but I’m not sure that means that they will be releasing a Linux version.

      • Yeah, when I talked to them at their booth, they mentioned that they didn’t support Linux yet, but they’d support it in a few months.
        Either way, it’s an awesome camera!

  22. >the Kinect needs to be initialized with a Windows 8.1 machine.

    Please tell me how to do it. Just install the drivers and plug in the Kinect?

    • This was several months ago for me; I don’t recall what the exact procedure was. Please note that Microsoft has discontinued this particular product. The replacement is a Kinect for Xbox One with an add-on adapter.

      As I remember, you plug the Kinect for Windows into the Windows 8.1 box. It is a plug and play device. You can then download the SDK samples from Microsoft (the links are available from the packaging material included with the Kinect), and run sample programs. The camera is then ready to be used on the Jetson.

      Hope this helps.

      • kangalow, Thanks for your response

        >Please note that Microsoft has discontinued this particular product. The replacement is a Kinect for Xbox One with an add-on adapter.
        I am trying to bring a “Kinect for Xbox One with an add-on adapter” up right now.
        And now I see why the first one was out of stock everywhere.

        Now I am running L4T 21.3 and get this:

        ubuntu@tegra-ubuntu:~/tmp/libfreenect2/examples/protonect/bin$ ./Protonect
        terminate called after throwing an instance of ‘std::runtime_error’
        what(): JPEG parameter struct mismatch: library thinks size is 560, caller expects 536
        Aborted

        It seems there are lots of troubles with USB 3.0 controllers and hubs even for ordinary Windows PC:
        https://social.msdn.microsoft.com/Forums/en-US/bb379e8b-4258-40d6-92e4-56dd95d7b0bb/confirmed-list-of-usb-30-pcie-cardslaptopsconfigurations-which-work-for-kinect-v2-during?forum=kinectv2sdk.

        Because of that I haven’t had a chance yet to “initialize” my brand-new Kinect. And I thought that could be the reason.

        • Hi laborer,
          Certainly the USB card has been an issue for a while; from what I’ve seen, some of those issues have been resolved with later updates. USB 3.0 in isochronous mode is still relatively untested out in the real world. Later versions of the Linux kernel (3.16+, I believe) address this specifically, but I don’t know what the Windows boxes have to do to get it to work.

          The JPEG library mismatch is probably an issue with paths and the two different libJPEGs that are being used. On the Jetson, if you run ‘ldd’ you should see that two libjpegs are linked: one is the Tegra version, one is the non-accelerated version. Lingzhu Xiang, who I forked on Github, also has instructions: https://github.com/xlz/libfreenect2/tree/jetsontk1.

          Good luck!

          • kangalow,

            I haven’t yet found a supported USB 3.0 controller for the Windows machine.
            Therefore I cannot assert that I “initialized” my Kinect (but I tried).
            Nevertheless, the Kinect is working for me. And I’m on L4T 21.3.

            As for JPEG, I turned off ENABLE_TEGRA_JPEG in the CMakeLists.txt.
            This isn’t a solution; the CPU seems to be choking. But I saw that my Kinect is good.

          • That’s great to hear! The Tegra JPEG issue is a conflict between the different libJPEG libraries that are used. There’s the regular one, and then there’s the Tegra one. Make sure that you have the headers from the Tegra bundle in the ‘depends’ directory. I know that when I have a fresh install and run the install script I put up for libfreenect2 on Github, it works. However, that doesn’t automatically mean that it works under all conditions (or on other people’s machines). That’s the challenge. It is tricky to get set up.

  23. Hello,

    thank you for this wonderful demonstration,
    I’m trying to open an example from Kyle McDonald’s openFrameworks with Code::Blocks,
    but it failed because it can’t find many things… I tried to add the libs to the project, but it finally failed with something about OpenGL.

    Am I missing something about ofxCv?

    • I found that the installation of OpenFrameworks on the Jetson was a little bit of a black art.

      I have a branch of openFrameworks version 8.4 for the Jetson on Github: https://github.com/jetsonhacks/openFrameworks/tree/jetson, which fixes the OpenGL/ES issues, along with some of the other quirks. The biggest hurdle is that most of the other embedded ARM systems use OpenGL ES; the Jetson seems happier with OpenGL. Make sure that you run the Ubuntu install_dependencies script in openFrameworks/scripts/linux/ubuntu to set everything up. One of the outstanding issues is that the tessellator doesn’t work correctly, which can lead to all sorts of issues with UI components.

      Hopefully this will help you get openFrameworks up and running. I haven’t been coding with it recently, but it should at least get you up and running.

      • Thank you kangalow 🙂
        It seems that I still have a lot to work out…
        it’s totally a black art for me so far.

        After running the dependencies script, my OpenCV4Tegra was removed,
        and then -lopencv_* doesn’t seem to work anymore…
        I think maybe I should reflash my Jetson?

        Then I ran compileOF.sh, and it felt good to see something about gstreamer, but it finally shows a line after “Done!”:
        chown: cannot access ‘../lib/linux/*’: No such file or directory

        I think I didn’t set up the project correctly (I can’t even find ofMain.h) :'(

        In fact, I just want to grab a 1920×1080 frame from the C920, which might need H.264, and do the face detection with the GPU. That’s why I want to use openFrameworks…
        or is it possible to use gstreamer directly in a C++ file?

        Thanks again, I will keep trying ><

        • It took me more than a few tries to get openFrameworks running on a Jetson, it’s not surprising that you would run into issues. The openFrameworks install overwrites a lot of the Tegra files and replaces them with the more generic ones. The easiest way to get things back to normal is to either reflash the Jetson or install OpenCV4Tegra. If you just reinstall OpenCV4Tegra, then you may run into issues with library conflicts and header include files. Reflash to get back to the baseline.

          The good thing about openFrameworks is that all the source is included. You should be able to grep for gstreamer in the source code and you should be able to find the outline on how they implemented their calls to gstreamer, and how they go about decoding the pipeline. In general, you will need to write a pipeline in Gstreamer (something similar to https://gist.github.com/jetsonhacks/3e4bcbd8212cff1091dc ) and call this with Gstreamer, which will give you the buffer to pass to OpenCV for face detection. In the demo, I showed a Haar Cascade Classifier doing the face detection.

          All of this is not straightforward, but it can be done in regular C++ without openFrameworks, or in OpenCV itself if the GStreamer camera wrapper options are compiled in (see the sketch below). Unfortunately this is something that requires much more than a simple blog entry or comment can explain, but with a little perseverance you should be able to get it to run. Good luck!

          My experience was that this is a non trivial project.
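
          As a rough sketch of that approach (assuming OpenCV was built with its GStreamer wrapper; the pipeline string, cascade file path, and capture settings are placeholders you would need to adapt, and the exact caps the wrapper expects vary by OpenCV/GStreamer version):

          #include <opencv2/opencv.hpp>
          #include <vector>

          int main() {
              // Placeholder GStreamer pipeline for the C920, ending in an appsink.
              cv::VideoCapture cap("v4l2src device=/dev/video0 ! "
                                   "image/jpeg, width=1920, height=1080, framerate=30/1 ! "
                                   "jpegdec ! videoconvert ! appsink");
              if (!cap.isOpened()) return 1;

              cv::CascadeClassifier faces;
              // The cascade file path is an assumption; point it at your OpenCV data directory.
              if (!faces.load("haarcascade_frontalface_default.xml")) return 1;

              cv::Mat frame, gray;
              while (cap.read(frame)) {
                  cv::cvtColor(frame, gray, CV_BGR2GRAY);
                  std::vector<cv::Rect> found;
                  faces.detectMultiScale(gray, found, 1.2, 3, 0, cv::Size(60, 60));
                  for (size_t i = 0; i < found.size(); ++i)
                      cv::rectangle(frame, found[i], cv::Scalar(0, 255, 0), 2);
                  cv::imshow("c920", frame);
                  if (cv::waitKey(1) == 27) break;   // Esc quits
              }
              return 0;
          }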

          • Thanks again Kangalow 🙂
            It’s exciting to see gst-launch working and showing the 1080p image smoothly from the C920

            but programming… I’ve tried to get the frame from the gst buffer in my main.cpp and found many things on the internet; however, I feel really frustrated
            because I have no idea how to solve it… haha :'(
            I get a lot of messages like: undefined reference to `gst_object_get_type’

            I got to try it more times

            Thanks
            Best regards.

      • Dear kangalow,

        I am very interested in openFrameworks on the Jetson, and your demo was amazing, so I tried to do it, but I get an error when compiling.

        I downloaded your branch of openFrameworks version 8.4 for the Jetson (https://github.com/jetsonhacks/openFrameworks/tree/jetson), then I ran compileOF.sh, but I get an error:

        In file included from ../../../libs/openFrameworks/app/ofAppEGLWindow.cpp:26:0:
        ../../../libs/openFrameworks/app/ofAppEGLWindow.h:59:21: fatal error: EGL/egl.h: No such file or directory
        #include <EGL/egl.h>
        ^
        compilation terminated.
        make[1]: *** [../../../libs/openFrameworksCompiled/lib/linuxarmv7l/obj/Debug/libs/openFrameworks/app/ofAppEGLWindow.o] Error 1
        make: *** [Debug] Error 2
        there has been a problem compiling Debug OF library
        please report this problem in the forums
        chown: cannot access ‘../lib/linux/*’: No such file or directory

        Can you provide a basic guide to configuring openFrameworks for use on the Jetson?
        Thanks

  24. This article is superb! I was trying to install OpenNI on my TK1 as well as on my pcDuino for days without luck! Something always failed. But then your article came along and installation was a breeze. Can’t thank you enough. I got my Xtion working on the TK1 now (at least NiViewer is showing depth). Now I’ll get to building a depth-sensing robot based on the TK1.

    I’ll also try to use your instructions on the pcduino and see if it works.

    Thanks again

  25. Is any more detail available on how to build one of these?

    Specifically, how do you connect the various components and R/C controls to the Jetson TK1?

    • Hi David,
      My current understanding:
      The Jetson TK1’s pulse width modulation (PWM) output signals drive the motor electronic speed controller (ESC) and steering servomotor on the Traxxas, bypassing the RC Receiver.
      They used the “Grinch” kernel
      They used the Robot Operating System (ROS) framework
      Existing ROS drivers (urg_node, razor_imu_9dof, pointgrey_camera_driver, and px4flow_node) receive data from the sensors.
      The Lidar Scanner is Ethernet – should be able to plug into the Ethernet port
      The Point Grey camera is USB 3.0
      The IMU is I2C – This is an easy interface to the Jetson on the GPIO pins
      The PX4Flow – the optical flow/distance sensor is also I2C; I’m not sure how they implemented that. In one of the pictures on the website, it looks like there’s a little breakout board of some sort, but I can’t be sure.
      There appears to be a USB hub mounted in front of the Jetson on the “driver’s” side

      • I looked up the Sparkfun Razor IMU and noticed that it is actually a TX/RX serial output. So they may have done a FTDI USB connection. In the photos, there are wires coming out of the Jetson GPIO connectors, so it is interfacing with something, it could be a UART connection, or it could just be the PWM interface.

  26. Hi ,
    Thank you for instructions.

    I installed OpenNI2 on the Jetson TK1 and also on an ODROID-C1. I can grab frames from the Xtion Pro Live, but I cannot grab IR and depth frames simultaneously. My program can grab either the IR or the depth frame, not both at the same time. Why is that? I have written the application in C++ with OpenCV.

    • Hi Johan,

      Does NiViewer work on your setup? I believe that it grabs the depth and IR streams at the same time. Unfortunately I don’t have an Xtion Pro, so beyond guessing I’m not sure I’m much help to you.

  27. Thank you for the video
    My project is about emotion recognition, and I wonder if the Jetson kit can handle it, and which platform and language I should work with.

    • Hi,

      That sounds like quite an ambitious project.

      Unfortunately there are so many questions to answer, I’m not sure I’m able to help much.

      First, you’ll have to answer for yourself which programming language(s) you’re most comfortable with programming. I can say that the Jetson TK1 is a standard Ubuntu desktop, so it will run C, C++, Python, Java, etc. OpenCV, which a lot of people use for computer vision processing, has bindings for most of the major languages, including C++ and Python. I don’t think the challenge is the programming language in your case.

      The second question that you’ll have to answer for yourself is what type of performance you need for your project. The Jetson is fast for an embedded processor (think of any of the current tablets or smart phones), but compared to a desktop PC with a NVIDIA GPU card, still quite slow.

      The third question that you’ll have to answer for yourself is how you are going to do the emotion recognition. A lot of researchers are working on emotion recognition using Deep Learning which requires a large set of data to train the neural net. Basically you would have a data set of faces which are categorized into categories (angry, happy, sad, etc) which is used to train the neural net. In general, these data sets are pretty large (which requires a large amount of memory), and training takes a significant amount of time, even on desktop machines. Once the neural net is trained, it can be deployed on an embedded processor like the Tegra K1 on the Jetson.

      Some researchers do the training in the cloud (like Amazon AWS), and then download trained net to run locally. This page is typical of that approach:

      https://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge/data

      There are other ways to do the recognition, I’m sure you know more about the subject than I do.

      Good luck on your project!

  28. Hi,
    While installing Caffe using the installcaffe.sh script, an error occurred. The message is:
    fatal error: cublas_v2.h: No such file or directory
    #include <cublas_v2.h>
    ^
    Please help

    Thanks

    • Hi,
      Which version of L4T is installed on your Jetson? Which version of CUDA do you have installed? cublas_v2.h is part of CUDA; is it on your machine?

  29. Hi

    Thanks for the post. I am getting the following error. I am using the onboard USB:
    [Freenect2Impl] enumerating devices…
    [Freenect2Impl] 4 usb devices connected
    [Freenect2Impl] found valid Kinect v2 @2:37 with serial 501189441942
    [Freenect2Impl] found 1 devices
    [Freenect2DeviceImpl] opening…
    [UsbControl::claimInterfaces(IrInterfaceId)] failed! libusb error -6: LIBUSB_ERROR_BUSY
    [Freenect2DeviceImpl] closing…
    [Freenect2DeviceImpl] deallocating usb transfer pools…
    [Freenect2DeviceImpl] closing usb device…
    [Freenect2DeviceImpl] closed
    [Freenect2DeviceImpl] failed to open Kinect v2 @2:37!
    no device connected or failure opening the default one!

    Can you please advise what I am doing wrong? Thanks in advance.

    • Which version of L4T are you using? The LIBUSB_ERROR_BUSY probably means that the device was in use at one time, but did not shut down correctly before it was used again. For me, the only way I could get it to work after that was to reboot. I’ve had issues with the onboard USB: if the Kinect2 is plugged in when the system boots up, it might not work despite correct autosuspend settings. Reconnecting or plugging in the Kinect2 after the system boots seems to solve this problem. You should also run the autosuspend script after reconnecting it.

      • Hi Kangalow,

        Thank you very much for your prompt reply. Your advice was very helpful for getting it to work. How can I avoid having to re-connect the Kinect each time I power the Jetson off and on? In a real-world application that won’t be good. Further, I have permanently set autosuspend to -1 from boot by adding it to extlinux.conf.
        Do you think that if I use a mini-PCIe card with USB 3.0 I can overcome this problem?
        Thanking in advance.
        Best regards
        Jo

        • Glad to hear you got it working. The internal Jetson USB 3.0 re-connect is a known issue; the only workaround that I know of is to replug after startup. The Syba USB 3.0 mPCIe card mentioned above does not have that issue. If that’s an important issue for you, consider adding the extra card.

        • Hi Jo

      >How can I avoid having to re-connect the Kinect each time I power the Jetson off and on? In a real-world application that won’t be good.

          That question is also very interesting to me. I think we have to wait until a fresher kernel is released by NVIDIA (not from the 3.10 branch), or until we are able to run a vanilla kernel on the Jetson TK1.

          Meanwhile, perhaps a “software reconnect” will help. I mean “modprobe -r” and then “modprobe” for the USB modules.

          • Hi Laborer,

            Thank you. If I find a way to overcome that I will let you know about it.

            Best Regards
            Jo

  30. Hi Kangalow,

    I wanted to be a bit adventurous and tried the following, since I am not familiar with CMake. I copied the relevant header files to /usr/include/libfreenect2 and the lib files to /usr/lib, and then I copied Protonect.cpp to my home directory, renamed it p.cpp, and pointed it at /usr/include for the header files. I ran the following command:
    sudo g++ -o p `pkg-config opencv --cflags` p.cpp `pkg-config opencv --libs` -L /home/ubuntu/libfreenect2/examples/protonect/lib

    It wouldn’t compile, and the output is as follows:
    .cpp:(.text+0x2b4): undefined reference to `libfreenect2::Freenect2::Freenect2(void*)’
    p.cpp:(.text+0x2be): undefined reference to `libfreenect2::Freenect2::openDefaultDevice()’
    p.cpp:(.text+0x35c): undefined reference to `libfreenect2::SyncMultiFrameListener::SyncMultiFrameListener(unsigned int)’
    p.cpp:(.text+0x6e2): undefined reference to `libfreenect2::SyncMultiFrameListener::waitForNewFrame(std::map<libfreenect2::Frame::Type, libfreenect2::Frame*, std::less, std::allocator<std::pair > >&)’
    p.cpp:(.text+0xa04): undefined reference to `libfreenect2::SyncMultiFrameListener::release(std::map<libfreenect2::Frame::Type, libfreenect2::Frame*, std::less, std::allocator<std::pair > >&)’
    p.cpp:(.text+0xa76): undefined reference to `libfreenect2::SyncMultiFrameListener::~SyncMultiFrameListener()’
    p.cpp:(.text+0xa80): undefined reference to `libfreenect2::Freenect2::~Freenect2()’
    p.cpp:(.text+0xc9e): undefined reference to `libfreenect2::SyncMultiFrameListener::~SyncMultiFrameListener()’
    p.cpp:(.text+0xcaa): undefined reference to `libfreenect2::Freenect2::~Freenect2()’
    collect2: error: ld returned 1 exit status

    Can you please advise me how I can compile it from the command line instead of CMake? I want to make some changes and explore. For your info, I am new to Linux. Thanks in advance.

    • Hi jo,
      Good to hear that you’re ready to work on it.
      Some comments:
      1) On Linux, most projects are built using CMake, which generates Makefiles. The Makefiles are then executed. This is because any project that has more than a few files or library dependencies is almost impossible to build manually on the command line. You should use those tools.
      2) Community projects on Github allow you to use Git for source control. This means that you can create your own branch locally and not have to worry about making a mess.
      3) Not quite sure what you were trying to do with your command line. There are not any libraries defined to link against (that’s why it can’t find the symbols in libfreenect2).

      • Hi kangalow,

        Thank you for the prompt reply. Can you please point me to where the lib files are, and do I also have to point to the files in the src folder?

        Thank you once again.

        • Hi Jo,
          If you have a working copy of an executable file (like Protonect) you can use the command ‘ldd’ to list all of the libraries that the executable links against. If you are just looking for a given file, you can use the ‘find’ command. You can get help for most Linux commands in a Terminal by adding ‘--help’ after the command, i.e.
          $ ldd --help
          The location of the libraries is dependent on your machine and the way it is set up, which is beyond the scope of what can be discussed in a blog post.
          Good luck!

          • Hi Kangalow,

            Thank you for your prompt reply. The information you provided was very valuable. I learned a lot by playing with the ldd command. I managed to compile the code from the command line and run it without any problem.
            I compiled the code as shown below:
            sudo g++ -o p `pkg-config opencv --cflags` p.cpp `pkg-config opencv --libs` -L/home/ubuntu/libfreenect2/examples/protonect/../../depends/libusb/lib -rdynamic /usr/lib/libfreenect2.so -lusb-1.0

            Thank you once again for your help.
            Best regards

  31. Hi Kangalow,

    I am working with IR and depth frames. How can I get the Z value at a particular x and y from depth->data? (A rough sketch follows below.)

    Thanks in advance.
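
    (For reference, a minimal sketch of reading one value, assuming the libfreenect2 depth frame layout of that era: 512×424 pixels, one 4-byte float per pixel, holding the distance in millimeters. The header path may differ by version, so verify against your copy of the library.)

    #include <libfreenect2/frame_listener.hpp>   // defines libfreenect2::Frame (path is an assumption)

    // Returns the depth at pixel (x, y) in millimeters (0 means no valid reading).
    float depthAtPixel(const libfreenect2::Frame *depth, int x, int y) {
        const float *data = reinterpret_cast<const float *>(depth->data);
        return data[y * depth->width + x];
    }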

  32. Hi Kangalow,

    Thank you for the link; I will look into it. Anyway, I did manage to do my own calibration and converted the depth frame pixel values to depth values in meters. It works fairly well and is simple as well. Further, what is the reason that, when Protonect is running in the terminal, it prints the message “Depth frame skipped because processor not ready”?

    Thanks once again for your post, it greatly helped me with my project. I am planning to do it on an ODROID-XU3 and compare the performance.

    • It means that a depth frame was received, but could not be processed before more depth buffers became available. This is common when first starting up, and also when other tasks are taking too much time doing other things, such that the depth frame is not processed in time.

      • Hi kangalow,

        Thank you. I wish I could find a way to overcome that.

        So far, playing with it, I could measure a depth of 4 meters without any problem. Further, I am considering multiple Kinects on one Jetson board via a mini-PCIe to USB 3.0 card.
        I am not sure I can do that, but I will give it a go and let you know.

  33. hey,

    The information was really helpful. Are you working on a multiple-Kinect implementation?
    Can you help me out with that?

  34. When the Jetson is connected through the PC host USB port, how can we find the Jetson’s IP address? It is needed to push some data. Thanks

    • You can get the Jetson IP address in several different ways. First, in a Terminal on the Jetson you can execute ‘$ ifconfig’ and the IP address will be listed. Also on the Jetson, you can go to “Settings->Network->Wired” where the IP address is listed.

      If you are on your host PC, you can channel Trinity from The Matrix and use the program ‘nmap’. The command

      $ nmap tegra-ubuntu.local

      should list the IP address. The command ‘arp’ could also be used.

  35. Hi to all. I am having problems with the command $ tar jxvf jedroid_v1.31.tar.bz2 -C ~/jedroid_workspace
    The error is:
    ubuntu@tegra-ubuntu:~/Downloads$ tar jxvf jedroid_v1.31.tar.bz2 -C ~/jedroid_workspace
    bzip2: (stdin) is not a bzip2 file.
    tar: Child returned status 2
    tar: Error is not recoverable: exiting now

    Any tips? Thanks.
    darm

  36. A little note: it is really easy to damage a LiPo battery. The charge level of a single cell should not go under 3.0 V, otherwise the cell will be damaged and the battery life reduced.
    Furthermore, the discharge rate is not linear, and a cell takes very little time to drop from 3.3 V to 3.0 V.

    To be sure not to damage the battery, you can use a simple circuit like this:
    http://www.amazon.com/Integy-C23212-Voltage-Checker-Warning/dp/B003Y6E6IE/ref=pd_sim_21_3?ie=UTF8&refRID=1ZR75X7PAMV58XKSSEJJ
    The circuit must be connected to the “balance” connector and will warn you with a noisy beep when your battery is near a dangerous charge level.

    • That’s a good point, I’ll move this up in the article. I was planning to introduce this when I actually put it on the little robot I’m building, but it’s a good idea to have it in this article too.

  37. Hi JetsonHacks,

    When I ran sh installLibfreenect2.sh, it reached a fatal error at 47%. It shows:

    [ 47%] Building NVCC (Device) object CMakeFiles/cuda_compile.dir/src/./cuda_compile_generated_cuda_depth_packet_processor.cu.o
    /home/sebastian/gist1bd2830cc1a5790a6ac2-3d786bbb39cd936a5e823a13bea703817447c2fb/libfreenect2/examples/protonect/src/cuda_depth_packet_processor.cu:29:25: fatal error: helper_math.h: No such file or directory
    #include <helper_math.h>

    Do you know what the problem is?

  38. For CUDA, I tried the command and it said command not found. For L4T, I could not find the file in the place that you listed. Do I have to download it?

    • Thanks Corey, those are certainly good tips! The Grinch kernel has Kinect Xbox 360 support built in, so it makes things a little bit easier. I’ve updated the article with the permissions information that you shared. The Kinect post and video were among the first ones that I did, so revisiting them with what I’ve learned since makes me wince a little. Cheers!

      • When I run the glview example the screen appears black and the light on the Kinect turns red

        • I forgot to mention that when I run the glview example I can control the motor and the LED on the Kinect.

    • Hi Corey,
      I wrote up a short piece on the subject: http://wp.me/p51nTS-d9
      The script just blindly installs OpenCV with GPU optimizations. This is mostly because I want to use visual odometry in my applications and need the proprietary algorithms that opencv-nonfree provides. The script tries to install OpenCV, so if the package gets ‘fixed’ so that aptitude recognizes OpenCV4Tegra as the real OpenCV it will use that. If you use SIFT/SURF, that’s going to be a problem as OpenCV4Tegra doesn’t contain those items.

  39. Thank you so much for doing this! One additional dependency I found that I was missing was libudev which I got with apt-get install libudev-dev.

    • Hi edu,
      Thank you for reading the article!

      The dependency is apparently from a change made in the source since this article was published. I have added it to the above code snippet. Thanks for pointing this out!

  40. Hi,
    I tried to use GStreamer on my Jetson TK1 to display my See3CAM_80 camera. I ran your code:
    “gst-launch-1.0 -v v4l2src device=/dev/video0 \
    ! image/jpeg, width=1920, height=1080, framerate=30/1 \
    ! jpegparse ! jpegdec \
    ! videoconvert ! videoscale \
    ! xvimagesink sync=false”

    but it does not work; the following error message appears:

    “Setting pipeline to PAUSED …
    ERROR: Pipeline doesn’t want to pause.
    ERROR: from element /GstPipeline:pipeline0/GstXvImageSink:xvimagesink0: Could not initialise Xv output
    Additional debug info:
    xvimagesink.c(1765): gst_xvimagesink_open (): /GstPipeline:pipeline0/GstXvImageSink:xvimagesink0:
    Could not open display (null)
    Setting pipeline to NULL …
    Freeing pipeline …”

    I would appreciate it if you could help me; any suggestion will be helpful.
    thanks a lot

    • Hi Higuera,

      I am part of the camera team at e-con Systems. The camera you are using (See3CAM_80) supports only the YUV422 format, whereas you are trying to get JPEG data from the camera itself. You can modify the gstreamer command as follows to directly display the data from the camera.

      gst-launch-0.10 -v v4l2src device=/dev/video0 \
      ! "video/x-raw-yuv, format=(fourcc)YUY2, \
      width=(int)1920, height=(int)1080, pixel-aspect-ratio=(fraction)1/1" \
      ! xvimagesink sync=false

      Also, you need to run this command from a GUI-based terminal window and not through the serial port of the Jetson TK1.

      If you have any other queries regarding the camera, please feel free to contact sales@e-consystems.com or visit our website http://www.e-consystems.com to chat with our experts.

  41. Dear

    I followed your guide and I also succeeded in formatting my SD card to ext4.
    However, when I tried to copy data to the SD card, it didn’t work.
    Could you give me some advice?

    Thanks you very much
    Best regards

    Nguyen Huy Hung

  42. Dear kangalow,

    Thanks to the tutorial I got the Kinect v2 working on the Jetson via the Protonect example.

    However, I want to start a Kinect project in the Qt editor.
    I tried starting a Qt app and included the necessary libs, and the same for a plain C++ project.
    After getting everything compiling, I get the JPEG parameter struct mismatch at runtime.

    I tried to figure out what the problem was using ldd, to no avail.
    Can you point me in the right direction to get the Protonect example working in Qt? (It works fine with your script and setup.)

    • From the ldd output:
      The working Protonect:
      libopenjpeg.so.2 => /usr/lib/arm-linux-gnueabihf/libopenjpeg.so.2 (0xb308b000)
      libjpeg.so => /usr/lib/arm-linux-gnueabihf/libjpeg.so (0xb6616000)
      libturbojpeg.so.0 => /usr/lib/arm-linux-gnueabihf/libturbojpeg.so.0 (0xb6671000)

      The non-working Protonect:
      libopenjpeg.so.2 => /usr/lib/arm-linux-gnueabihf/libopenjpeg.so.2 (0xb2efb000)
      libjpeg.so.8 => /usr/lib/arm-linux-gnueabihf/libjpeg.so.8 (0xb6171000)
      libjpeg.so => /usr/lib/arm-linux-gnueabihf/tegra/libjpeg.so (0xb654d000)
      libturbojpeg.so.0 => /usr/lib/arm-linux-gnueabihf/libturbojpeg.so.0 (0xb65bc000)

      So it seems I need to unlink libjpeg.so.8?
      Or am I on the wrong track here?

      • The source code is Protonect.cpp copied into a main.cpp,
        and this is my current .pro file, which results in the runtime JPEG struct mismatch:

        TEMPLATE = app
        CONFIG += console
        CONFIG -= app_bundle
        CONFIG -= qt

        SOURCES += main.cpp

        LIBS += -L$$PWD/../../../../../usr/local/cuda-6.5/lib/ -lcudart
        LIBS += -lopencv_contrib
        LIBS += -lopencv_highgui
        LIBS += -lopencv_imgproc
        LIBS += -lopencv_core

        LIBS += -L$$PWD/../../../libfreenect2/examples/protonect/lib/ -lfreenect2

        LIBS += -L$$PWD/../../../libfreenect2/examples/protonect/lib/ -lglfw

        INCLUDEPATH += $$PWD/../../../libfreenect2/examples/protonect/include
        DEPENDPATH += $$PWD/../../../libfreenect2/examples/protonect/include

        INCLUDEPATH += $$PWD/../../../../../usr/local/cuda-6.5/include
        DEPENDPATH += $$PWD/../../../../../usr/local/cuda-6.5/include

    • Hi Hylke,
      I’ve been thinking about it, but could not figure out what the issue might be. I’m assuming that Qt links against its own libjpeg in the executable. Perhaps it might help to compile a very simple program to see which libjpeg Qt uses. My guess is that Qt and the linker are upset about using tegra/libjpeg.so. Another guess is that libjpeg.so is a symbolic link to whatever the latest version of libjpeg is (libjpeg.so.8 may be correct).
      Sorry I can’t be of more help.
      Sorry I can’t be of more help.

  43. Really enjoying the build story. Please keep them coming. A little worried that I wasn’t able to see either the fire extinguisher or Sharky (!) in this video. Please don’t start taking chances like that – I want to see how this ends.
    Corey

    • It’s a long story about Mr. Fire Extinguisher. His brother gave his all during a 24 hours at Le Mans working on one of the fire suppression crews. Since then, Mr. Extinguisher just about refuses to go out to any test track, more or less the one at the Absurdium. I told him that he’s going to have to work through the issue, but I just don’t have the heart to press him on the matter after seeing him cry late at night when he thinks no one is watching.

      Sharky, on the other hand… Well, he’s Australian. You know the term “drinks like a fish”? The term may not have been coined for Sharky, but he makes no bones about living up to it. Sharks have huge appetites, and let’s just say that when he gets to the bar with his buddies he indulges in *all* of his cravings if you catch my drift. I will say that if you ever get in a bar fight, you want Sharky by your side.

      About the only thing that can tempt him out of the bar is a laser (sharks have almost a Pavlovian response when they see one). Or pay his bail, but that adds up quick. Don’t even get me started on his dressing room demands for the show. Let’s just say that if I never have to sort through another bag of M&Ms I’ll die happy.

      Thanks for following along and watching!

    • I think if you spent a few hours on it, you wouldn’t have any trouble getting it to work. The recognizer.py script is straightforward, and is built for a differential drive robot. There were only three “tricks”: remapping cmd_vel to something the JetsonBot understands, using a virtual X11 display, and setting up a headset to use with the JetsonBot. Looking forward to seeing the next MyzharBot out and running about!

  44. Hello, when I get to the “make -j 4 all” step, I get the following errors:

    ubuntu@tegra-ubuntu:~/cudnn-6.5-linux-armv7-R1/caffe$ make -j 4 all
    CXX .build_release/src/caffe/proto/caffe.pb.cc
    CXX src/caffe/common.cpp
    CXX src/caffe/layers/deconv_layer.cpp
    CXX src/caffe/layers/dropout_layer.cpp
    In file included from ./include/caffe/util/device_alternate.hpp:40:0,
    from ./include/caffe/common.hpp:19,
    from src/caffe/common.cpp:5:
    ./include/caffe/util/cudnn.hpp:64:32: error: variable or field ‘createTensor4dDesc’ declared void
    inline void createTensor4dDesc(cudnnTensorDescriptor_t* desc) {
    ^
    ./include/caffe/util/cudnn.hpp:64:32: error: ‘cudnnTensorDescriptor_t’ was not declared in this scope
    ./include/caffe/util/cudnn.hpp:64:57: error: ‘desc’ was not declared in this scope
    inline void createTensor4dDesc(cudnnTensorDescriptor_t* desc) {
    ^
    ./include/caffe/util/cudnn.hpp:69:29: error: variable or field ‘setTensor4dDesc’ declared void
    inline void setTensor4dDesc(cudnnTensorDescriptor_t* desc,
    ^
    ./include/caffe/util/cudnn.hpp:69:29: error: ‘cudnnTensorDescriptor_t’ was not declared in this scope
    ./include/caffe/util/cudnn.hpp:69:54: error: ‘desc’ was not declared in this scope
    inline void setTensor4dDesc(cudnnTensorDescriptor_t* desc,
    ^
    ./include/caffe/util/cudnn.hpp:70:5: error: expected primary-expression before ‘int’
    int n, int c, int h, int w,
    ^
    ./include/caffe/util/cudnn.hpp:70:12: error: expected primary-expression before ‘int’
    int n, int c, int h, int w,
    ^
    ./include/caffe/util/cudnn.hpp:70:19: error: expected primary-expression before ‘int’
    int n, int c, int h, int w,
    ^
    ./include/caffe/util/cudnn.hpp:70:26: error: expected primary-expression before ‘int’
    int n, int c, int h, int w,
    ^
    ./include/caffe/util/cudnn.hpp:71:5: error: expected primary-expression before ‘int’
    int stride_n, int stride_c, int stride_h, int stride_w) {
    ^
    ./include/caffe/util/cudnn.hpp:71:19: error: expected primary-expression before ‘int’
    int stride_n, int stride_c, int stride_h, int stride_w) {
    ^
    ./include/caffe/util/cudnn.hpp:71:33: error: expected primary-expression before ‘int’
    int stride_n, int stride_c, int stride_h, int stride_w) {
    ^
    ./include/caffe/util/cudnn.hpp:71:47: error: expected primary-expression before ‘int’
    int stride_n, int stride_c, int stride_h, int stride_w) {
    ^
    ./include/caffe/util/cudnn.hpp:77:29: error: variable or field ‘setTensor4dDesc’ declared void
    inline void setTensor4dDesc(cudnnTensorDescriptor_t* desc,
    ^
    ./include/caffe/util/cudnn.hpp:77:29: error: ‘cudnnTensorDescriptor_t’ was not declared in this scope
    ./include/caffe/util/cudnn.hpp:77:54: error: ‘desc’ was not declared in this scope
    inline void setTensor4dDesc(cudnnTensorDescriptor_t* desc,
    ^
    ./include/caffe/util/cudnn.hpp:78:5: error: expected primary-expression before ‘int’
    int n, int c, int h, int w) {
    ^
    ./include/caffe/util/cudnn.hpp:78:12: error: expected primary-expression before ‘int’
    int n, int c, int h, int w) {
    ^
    ./include/caffe/util/cudnn.hpp:78:19: error: expected primary-expression before ‘int’
    int n, int c, int h, int w) {
    ^
    ./include/caffe/util/cudnn.hpp:78:26: error: expected primary-expression before ‘int’
    int n, int c, int h, int w) {
    ^
    ./include/caffe/util/cudnn.hpp:102:5: error: ‘cudnnTensorDescriptor_t’ has not been declared
    cudnnTensorDescriptor_t bottom, cudnnFilterDescriptor_t filter,
    ^
    ./include/caffe/util/cudnn.hpp: In function ‘void caffe::cudnn::setConvolutionDesc(cudnnConvolutionStruct**, int, cudnnFilterDescriptor_t, int, int, int, int)’:
    ./include/caffe/util/cudnn.hpp:105:70: error: there are no arguments to ‘cudnnSetConvolution2dDescriptor’ that depend on a template parameter, so a declaration of ‘cudnnSetConvolution2dDescriptor’ must be available [-fpermissive]
    pad_h, pad_w, stride_h, stride_w, 1, 1, CUDNN_CROSS_CORRELATION));
    ^
    ./include/caffe/util/cudnn.hpp:12:28: note: in definition of macro ‘CUDNN_CHECK’
    cudnnStatus_t status = condition; \
    ^
    ./include/caffe/util/cudnn.hpp:105:70: note: (if you use ‘-fpermissive’, G++ will accept your code, but allowing the use of an undeclared name is deprecated)
    pad_h, pad_w, stride_h, stride_w, 1, 1, CUDNN_CROSS_CORRELATION));
    ^
    ./include/caffe/util/cudnn.hpp:12:28: note: in definition of macro ‘CUDNN_CHECK’
    cudnnStatus_t status = condition; \
    ^
    ./include/caffe/util/cudnn.hpp: In function ‘void caffe::cudnn::createPoolingDesc(cudnnPoolingStruct**, caffe::PoolingParameter_PoolMethod, cudnnPoolingMode_t*, int, int, int, int, int, int)’:
    ./include/caffe/util/cudnn.hpp:117:13: error: ‘CUDNN_POOLING_AVERAGE_COUNT_INCLUDE_PADDING’ was not declared in this scope
    *mode = CUDNN_POOLING_AVERAGE_COUNT_INCLUDE_PADDING;
    ^
    ./include/caffe/util/cudnn.hpp:124:41: error: there are no arguments to ‘cudnnSetPooling2dDescriptor’ that depend on a template parameter, so a declaration of ‘cudnnSetPooling2dDescriptor’ must be available [-fpermissive]
    pad_h, pad_w, stride_h, stride_w));
    ^
    ./include/caffe/util/cudnn.hpp:12:28: note: in definition of macro ‘CUDNN_CHECK’
    cudnnStatus_t status = condition; \
    ^
    make: *** [.build_release/src/caffe/common.o] Error 1
    make: *** Waiting for unfinished jobs….
    In file included from ./include/caffe/util/device_alternate.hpp:40:0,
    from ./include/caffe/common.hpp:19,
    from src/caffe/layers/dropout_layer.cpp:5:
    ./include/caffe/util/cudnn.hpp:64:32: error: variable or field ‘createTensor4dDesc’ declared void
    inline void createTensor4dDesc(cudnnTensorDescriptor_t* desc) {
    ^
    ./include/caffe/util/cudnn.hpp:64:32: error: ‘cudnnTensorDescriptor_t’ was not declared in this scope
    ./include/caffe/util/cudnn.hpp:64:57: error: ‘desc’ was not declared in this scope
    inline void createTensor4dDesc(cudnnTensorDescriptor_t* desc) {
    ^
    ./include/caffe/util/cudnn.hpp:69:29: error: variable or field ‘setTensor4dDesc’ declared void
    inline void setTensor4dDesc(cudnnTensorDescriptor_t* desc,
    ^
    ./include/caffe/util/cudnn.hpp:69:29: error: ‘cudnnTensorDescriptor_t’ was not declared in this scope
    ./include/caffe/util/cudnn.hpp:69:54: error: ‘desc’ was not declared in this scope
    inline void setTensor4dDesc(cudnnTensorDescriptor_t* desc,
    ^
    ./include/caffe/util/cudnn.hpp:70:5: error: expected primary-expression before ‘int’
    int n, int c, int h, int w,
    ^
    ./include/caffe/util/cudnn.hpp:70:12: error: expected primary-expression before ‘int’
    int n, int c, int h, int w,
    ^
    ./include/caffe/util/cudnn.hpp:70:19: error: expected primary-expression before ‘int’
    int n, int c, int h, int w,
    ^
    ./include/caffe/util/cudnn.hpp:70:26: error: expected primary-expression before ‘int’
    int n, int c, int h, int w,
    ^
    ./include/caffe/util/cudnn.hpp:71:5: error: expected primary-expression before ‘int’
    int stride_n, int stride_c, int stride_h, int stride_w) {
    ^
    ./include/caffe/util/cudnn.hpp:71:19: error: expected primary-expression before ‘int’
    int stride_n, int stride_c, int stride_h, int stride_w) {
    ^
    ./include/caffe/util/cudnn.hpp:71:33: error: expected primary-expression before ‘int’
    int stride_n, int stride_c, int stride_h, int stride_w) {
    ^
    ./include/caffe/util/cudnn.hpp:71:47: error: expected primary-expression before ‘int’
    int stride_n, int stride_c, int stride_h, int stride_w) {
    ^
    ./include/caffe/util/cudnn.hpp:77:29: error: variable or field ‘setTensor4dDesc’ declared void
    inline void setTensor4dDesc(cudnnTensorDescriptor_t* desc,
    ^
    ./include/caffe/util/cudnn.hpp:77:29: error: ‘cudnnTensorDescriptor_t’ was not declared in this scope
    ./include/caffe/util/cudnn.hpp:77:54: error: ‘desc’ was not declared in this scope
    inline void setTensor4dDesc(cudnnTensorDescriptor_t* desc,
    ^
    ./include/caffe/util/cudnn.hpp:78:5: error: expected primary-expression before ‘int’
    int n, int c, int h, int w) {
    ^
    ./include/caffe/util/cudnn.hpp:78:12: error: expected primary-expression before ‘int’
    int n, int c, int h, int w) {
    ^
    ./include/caffe/util/cudnn.hpp:78:19: error: expected primary-expression before ‘int’
    int n, int c, int h, int w) {
    ^
    ./include/caffe/util/cudnn.hpp:78:26: error: expected primary-expression before ‘int’
    int n, int c, int h, int w) {
    ^
    ./include/caffe/util/cudnn.hpp:102:5: error: ‘cudnnTensorDescriptor_t’ has not been declared
    cudnnTensorDescriptor_t bottom, cudnnFilterDescriptor_t filter,
    ^
    ./include/caffe/util/cudnn.hpp: In function ‘void caffe::cudnn::setConvolutionDesc(cudnnConvolutionStruct**, int, cudnnFilterDescriptor_t, int, int, int, int)’:
    ./include/caffe/util/cudnn.hpp:105:70: error: there are no arguments to ‘cudnnSetConvolution2dDescriptor’ that depend on a template parameter, so a declaration of ‘cudnnSetConvolution2dDescriptor’ must be available [-fpermissive]
    pad_h, pad_w, stride_h, stride_w, 1, 1, CUDNN_CROSS_CORRELATION));
    ^
    ./include/caffe/util/cudnn.hpp:12:28: note: in definition of macro ‘CUDNN_CHECK’
    cudnnStatus_t status = condition; \
    ^
    ./include/caffe/util/cudnn.hpp:105:70: note: (if you use ‘-fpermissive’, G++ will accept your code, but allowing the use of an undeclared name is deprecated)
    pad_h, pad_w, stride_h, stride_w, 1, 1, CUDNN_CROSS_CORRELATION));
    ^
    ./include/caffe/util/cudnn.hpp:12:28: note: in definition of macro ‘CUDNN_CHECK’
    cudnnStatus_t status = condition; \
    ^
    ./include/caffe/util/cudnn.hpp: In function ‘void caffe::cudnn::createPoolingDesc(cudnnPoolingStruct**, caffe::PoolingParameter_PoolMethod, cudnnPoolingMode_t*, int, int, int, int, int, int)’:
    ./include/caffe/util/cudnn.hpp:117:13: error: ‘CUDNN_POOLING_AVERAGE_COUNT_INCLUDE_PADDING’ was not declared in this scope
    *mode = CUDNN_POOLING_AVERAGE_COUNT_INCLUDE_PADDING;
    ^
    ./include/caffe/util/cudnn.hpp:124:41: error: there are no arguments to ‘cudnnSetPooling2dDescriptor’ that depend on a template parameter, so a declaration of ‘cudnnSetPooling2dDescriptor’ must be available [-fpermissive]
    pad_h, pad_w, stride_h, stride_w));
    ^
    ./include/caffe/util/cudnn.hpp:12:28: note: in definition of macro ‘CUDNN_CHECK’
    cudnnStatus_t status = condition; \
    ^
    In file included from ./include/caffe/loss_layers.hpp:11:0,
    from ./include/caffe/common_layers.hpp:12,
    from ./include/caffe/vision_layers.hpp:10,
    from src/caffe/layers/dropout_layer.cpp:9:
    ./include/caffe/neuron_layers.hpp: At global scope:
    ./include/caffe/neuron_layers.hpp:501:3: error: ‘cudnnTensorDescriptor_t’ does not name a type
    cudnnTensorDescriptor_t bottom_desc_;
    ^
    ./include/caffe/neuron_layers.hpp:502:3: error: ‘cudnnTensorDescriptor_t’ does not name a type
    cudnnTensorDescriptor_t top_desc_;
    ^
    ./include/caffe/neuron_layers.hpp:584:3: error: ‘cudnnTensorDescriptor_t’ does not name a type
    cudnnTensorDescriptor_t bottom_desc_;
    ^
    ./include/caffe/neuron_layers.hpp:585:3: error: ‘cudnnTensorDescriptor_t’ does not name a type
    cudnnTensorDescriptor_t top_desc_;
    ^
    ./include/caffe/neuron_layers.hpp:669:3: error: ‘cudnnTensorDescriptor_t’ does not name a type
    cudnnTensorDescriptor_t bottom_desc_;
    ^
    ./include/caffe/neuron_layers.hpp:670:3: error: ‘cudnnTensorDescriptor_t’ does not name a type
    cudnnTensorDescriptor_t top_desc_;
    ^
    In file included from ./include/caffe/vision_layers.hpp:10:0,
    from src/caffe/layers/dropout_layer.cpp:9:
    ./include/caffe/common_layers.hpp:536:3: error: ‘cudnnTensorDescriptor_t’ does not name a type
    cudnnTensorDescriptor_t bottom_desc_;
    ^
    ./include/caffe/common_layers.hpp:537:3: error: ‘cudnnTensorDescriptor_t’ does not name a type
    cudnnTensorDescriptor_t top_desc_;
    ^
    In file included from src/caffe/layers/dropout_layer.cpp:9:0:
    ./include/caffe/vision_layers.hpp:249:10: error: ‘cudnnTensorDescriptor_t’ was not declared in this scope
    vector bottom_descs_, top_descs_;
    ^
    ./include/caffe/vision_layers.hpp:249:33: error: template argument 1 is invalid
    vector bottom_descs_, top_descs_;
    ^
    ./include/caffe/vision_layers.hpp:249:33: error: template argument 2 is invalid
    ./include/caffe/vision_layers.hpp:250:3: error: ‘cudnnTensorDescriptor_t’ does not name a type
    cudnnTensorDescriptor_t bias_desc_;
    ^
    ./include/caffe/vision_layers.hpp:450:3: error: ‘cudnnTensorDescriptor_t’ does not name a type
    cudnnTensorDescriptor_t bottom_desc_, top_desc_;
    ^
    In file included from ./include/caffe/util/device_alternate.hpp:40:0,
    from ./include/caffe/common.hpp:19,
    from ./include/caffe/blob.hpp:8,
    from ./include/caffe/filler.hpp:10,
    from src/caffe/layers/deconv_layer.cpp:3:
    ./include/caffe/util/cudnn.hpp:64:32: error: variable or field ‘createTensor4dDesc’ declared void
    inline void createTensor4dDesc(cudnnTensorDescriptor_t* desc) {
    ^
    ./include/caffe/util/cudnn.hpp:64:32: error: ‘cudnnTensorDescriptor_t’ was not declared in this scope
    ./include/caffe/util/cudnn.hpp:64:57: error: ‘desc’ was not declared in this scope
    inline void createTensor4dDesc(cudnnTensorDescriptor_t* desc) {
    ^
    ./include/caffe/util/cudnn.hpp:69:29: error: variable or field ‘setTensor4dDesc’ declared void
    inline void setTensor4dDesc(cudnnTensorDescriptor_t* desc,
    ^
    ./include/caffe/util/cudnn.hpp:69:29: error: ‘cudnnTensorDescriptor_t’ was not declared in this scope
    ./include/caffe/util/cudnn.hpp:69:54: error: ‘desc’ was not declared in this scope
    inline void setTensor4dDesc(cudnnTensorDescriptor_t* desc,
    ^
    ./include/caffe/util/cudnn.hpp:70:5: error: expected primary-expression before ‘int’
    int n, int c, int h, int w,
    ^
    ./include/caffe/util/cudnn.hpp:70:12: error: expected primary-expression before ‘int’
    int n, int c, int h, int w,
    ^
    ./include/caffe/util/cudnn.hpp:70:19: error: expected primary-expression before ‘int’
    int n, int c, int h, int w,
    ^
    ./include/caffe/util/cudnn.hpp:70:26: error: expected primary-expression before ‘int’
    int n, int c, int h, int w,
    ^
    ./include/caffe/util/cudnn.hpp:71:5: error: expected primary-expression before ‘int’
    int stride_n, int stride_c, int stride_h, int stride_w) {
    ^
    ./include/caffe/util/cudnn.hpp:71:19: error: expected primary-expression before ‘int’
    int stride_n, int stride_c, int stride_h, int stride_w) {
    ^
    ./include/caffe/util/cudnn.hpp:71:33: error: expected primary-expression before ‘int’
    int stride_n, int stride_c, int stride_h, int stride_w) {
    ^
    ./include/caffe/util/cudnn.hpp:71:47: error: expected primary-expression before ‘int’
    int stride_n, int stride_c, int stride_h, int stride_w) {
    ^
    ./include/caffe/util/cudnn.hpp:77:29: error: variable or field ‘setTensor4dDesc’ declared void
    inline void setTensor4dDesc(cudnnTensorDescriptor_t* desc,
    ^
    ./include/caffe/util/cudnn.hpp:77:29: error: ‘cudnnTensorDescriptor_t’ was not declared in this scope
    ./include/caffe/util/cudnn.hpp:77:54: error: ‘desc’ was not declared in this scope
    inline void setTensor4dDesc(cudnnTensorDescriptor_t* desc,
    ^
    ./include/caffe/util/cudnn.hpp:78:5: error: expected primary-expression before ‘int’
    int n, int c, int h, int w) {
    ^
    ./include/caffe/util/cudnn.hpp:78:12: error: expected primary-expression before ‘int’
    int n, int c, int h, int w) {
    ^
    ./include/caffe/util/cudnn.hpp:78:19: error: expected primary-expression before ‘int’
    int n, int c, int h, int w) {
    ^
    ./include/caffe/util/cudnn.hpp:78:26: error: expected primary-expression before ‘int’
    int n, int c, int h, int w) {
    ^
    ./include/caffe/util/cudnn.hpp:102:5: error: ‘cudnnTensorDescriptor_t’ has not been declared
    cudnnTensorDescriptor_t bottom, cudnnFilterDescriptor_t filter,
    ^
    ./include/caffe/util/cudnn.hpp: In function ‘void caffe::cudnn::setConvolutionDesc(cudnnConvolutionStruct**, int, cudnnFilterDescriptor_t, int, int, int, int)’:
    ./include/caffe/util/cudnn.hpp:105:70: error: there are no arguments to ‘cudnnSetConvolution2dDescriptor’ that depend on a template parameter, so a declaration of ‘cudnnSetConvolution2dDescriptor’ must be available [-fpermissive]
    pad_h, pad_w, stride_h, stride_w, 1, 1, CUDNN_CROSS_CORRELATION));
    ^
    ./include/caffe/util/cudnn.hpp:12:28: note: in definition of macro ‘CUDNN_CHECK’
    cudnnStatus_t status = condition; \
    ^
    ./include/caffe/util/cudnn.hpp:105:70: note: (if you use ‘-fpermissive’, G++ will accept your code, but allowing the use of an undeclared name is deprecated)
    pad_h, pad_w, stride_h, stride_w, 1, 1, CUDNN_CROSS_CORRELATION));
    ^
    ./include/caffe/util/cudnn.hpp:12:28: note: in definition of macro ‘CUDNN_CHECK’
    cudnnStatus_t status = condition; \
    ^
    ./include/caffe/util/cudnn.hpp: In function ‘void caffe::cudnn::createPoolingDesc(cudnnPoolingStruct**, caffe::PoolingParameter_PoolMethod, cudnnPoolingMode_t*, int, int, int, int, int, int)’:
    ./include/caffe/util/cudnn.hpp:117:13: error: ‘CUDNN_POOLING_AVERAGE_COUNT_INCLUDE_PADDING’ was not declared in this scope
    *mode = CUDNN_POOLING_AVERAGE_COUNT_INCLUDE_PADDING;
    ^
    ./include/caffe/util/cudnn.hpp:124:41: error: there are no arguments to ‘cudnnSetPooling2dDescriptor’ that depend on a template parameter, so a declaration of ‘cudnnSetPooling2dDescriptor’ must be available [-fpermissive]
    pad_h, pad_w, stride_h, stride_w));
    ^
    ./include/caffe/util/cudnn.hpp:12:28: note: in definition of macro ‘CUDNN_CHECK’
    cudnnStatus_t status = condition; \
    ^
    In file included from ./include/caffe/loss_layers.hpp:11:0,
    from ./include/caffe/common_layers.hpp:12,
    from ./include/caffe/vision_layers.hpp:10,
    from src/caffe/layers/deconv_layer.cpp:7:
    ./include/caffe/neuron_layers.hpp: At global scope:
    ./include/caffe/neuron_layers.hpp:501:3: error: ‘cudnnTensorDescriptor_t’ does not name a type
    cudnnTensorDescriptor_t bottom_desc_;
    ^
    ./include/caffe/neuron_layers.hpp:502:3: error: ‘cudnnTensorDescriptor_t’ does not name a type
    cudnnTensorDescriptor_t top_desc_;
    ^
    ./include/caffe/neuron_layers.hpp:584:3: error: ‘cudnnTensorDescriptor_t’ does not name a type
    cudnnTensorDescriptor_t bottom_desc_;
    ^
    ./include/caffe/neuron_layers.hpp:585:3: error: ‘cudnnTensorDescriptor_t’ does not name a type
    cudnnTensorDescriptor_t top_desc_;
    ^
    ./include/caffe/neuron_layers.hpp:669:3: error: ‘cudnnTensorDescriptor_t’ does not name a type
    cudnnTensorDescriptor_t bottom_desc_;
    ^
    ./include/caffe/neuron_layers.hpp:670:3: error: ‘cudnnTensorDescriptor_t’ does not name a type
    cudnnTensorDescriptor_t top_desc_;
    ^
    In file included from ./include/caffe/vision_layers.hpp:10:0,
    from src/caffe/layers/deconv_layer.cpp:7:
    ./include/caffe/common_layers.hpp:536:3: error: ‘cudnnTensorDescriptor_t’ does not name a type
    cudnnTensorDescriptor_t bottom_desc_;
    ^
    ./include/caffe/common_layers.hpp:537:3: error: ‘cudnnTensorDescriptor_t’ does not name a type
    cudnnTensorDescriptor_t top_desc_;
    ^
    In file included from src/caffe/layers/deconv_layer.cpp:7:0:
    ./include/caffe/vision_layers.hpp:249:10: error: ‘cudnnTensorDescriptor_t’ was not declared in this scope
    vector bottom_descs_, top_descs_;
    ^
    ./include/caffe/vision_layers.hpp:249:33: error: template argument 1 is invalid
    vector bottom_descs_, top_descs_;
    ^
    ./include/caffe/vision_layers.hpp:249:33: error: template argument 2 is invalid
    ./include/caffe/vision_layers.hpp:250:3: error: ‘cudnnTensorDescriptor_t’ does not name a type
    cudnnTensorDescriptor_t bias_desc_;
    ^
    ./include/caffe/vision_layers.hpp:450:3: error: ‘cudnnTensorDescriptor_t’ does not name a type
    cudnnTensorDescriptor_t bottom_desc_, top_desc_;
    ^
    make: *** [.build_release/src/caffe/layers/dropout_layer.o] Error 1
    make: *** [.build_release/src/caffe/layers/deconv_layer.o] Error 1

    Any advice? Thanks!

    • It looks like you git cloned Caffe into the cuDNN directory instead of ~/
      The first thing I would try is to open a Terminal (which will put you into the home directory, ~/) then git clone Caffe there. Then try compiling it.
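      Roughly, the steps look like this. This is only a sketch; the repository URL below is the upstream BVLC one as a placeholder, so use the exact repository and branch from the article/video if they differ:

      $ cd ~
      $ git clone https://github.com/BVLC/caffe.git
      $ cd caffe
      $ cp Makefile.config.example Makefile.config
      $ make -j 4 all

      Once that builds cleanly, you can go back and turn on the cuDNN support.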

  45. Hello all. When I get to the “make -j 4 all” step, I get:
    CXX .build_release/src/caffe/proto/caffe.pb.cc
    CXX src/caffe/common.cpp
    CXX src/caffe/layers/deconv_layer.cpp
    CXX src/caffe/layers/dropout_layer.cpp

    Then I get a bunch of compile errors, about 300 or so lines of them, that have a lot to do with the ./include/caffe/ directory. Any advice? Thanks!

        • My first guess is that there is an issue with the working directory. In your post, you tried to compile from: ~/cudnn-6.5-linux-armv7-R1/caffe

          In the video, Caffe was compiled from ~/caffe. In the instructions in this article, there appears to be a missing ‘cd ~/’ before downloading Caffe. In the video, Caffe was already installed, as demonstrated previously in another video. Try getting rid of Caffe in the cudnn directory, download Caffe into ~/, and build it there. Let me know if that works, so I can change the article. If you’re still having issues, try to get Caffe working before adding cuDNN support.

          • Okay, I downloaded it into the home directory and built, but it still didn’t work until I commented the # USE_CDNN := 1 line back out and changed LMDB_MAP_SIZE from 1099511627776 to 536870912; then it built just fine. However, when I uncommented the USE_CDNN := 1 line, it gives me far fewer compiler errors, but still some errors. Any advice on how to make it compile with cuDNN support? Thanks!

  46. Thanks!! I will try that tomorrow after 3 p.m. and let you know how it goes. Thank you for your advice!!

  47. Okay, I cloned it into the home directory and built, but I was still getting the errors. However, with the #USE_CDNN := 1 line left commented and the LMDB_MAP_SIZE changed, it compiled successfully. But once I uncommented the USE_CDNN := 1 line and built, I get far fewer compile errors, but still errors. Any advice on how to get it to work with CUDNN support? Thanks!

      • It had different errors, but right now I am getting “make: Nothing to be done for ‘all’.” when I run “make -j 4 all”.

        • $ make clean
          and then
          $ make -j 4 all
          Should force everything to recompile.

          If it doesn’t, you’ll have to find the object directory for Caffe and clean it out, and delete the executable.

          USE_CUDNN:=1

          Was the only thing that I did in the video to make it compile correctly. The cudnn install could be bad, you might want to delete the cudnn and reinstall it. After that, I’m running out of ideas.
          Sorry you’re having these issues.
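          For completeness, turning cuDNN support back on and forcing a clean rebuild is just an edit to Makefile.config plus a rebuild. A minimal sketch, assuming Caffe lives in ~/caffe (the sed pattern just uncomments the USE_CUDNN line):

          $ cd ~/caffe
          $ sed -i 's/^# *USE_CUDNN *:= *1/USE_CUDNN := 1/' Makefile.config
          $ make clean
          $ make -j 4 all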

  48. I will try that. In my Makefile.config, instead of #USE_CDNN := 1, I had USE_CUDNN :=1. Maybe that’s something. I’ll try it out. You’ve been very helpful, thank you!

  49. Hi,

    Thank you for the detailed steps in the video. I downloaded the latest version of Jetson TK1 Development pack from https://developer.nvidia.com/jetson-tk1-development-pack (L4T r21.4 )

    While I successfully flashed the Jetson, there was an error when it came to the section: Push and Install Components on Target.

    It downloaded some files for a while and then an error message popped up stating:

    “CUDA cannot be installed on device. Please use apt-get command in a terminal to make sure following packages are installed correctly on device before continuing:
    cuda-toolkit-6-5 libgomp1 libfreeimage-dev libopenmpi-dev openmpi-bin”

    I am not sure how and where to run the apt-get command.

    Also, before the flash began, I could detect the Jetson with the lsusb command. After the flash began and it reached the error stated above, I ran lsusb again and this time it did not detect the Jetson. Is that how it should be? Or was I supposed to push force recovery when the Jetson rebooted itself after the flash?

    Thanks.

    • Hi Parth,
      To run the apt-get command, on the Jetson open up a Terminal.
      I believe that it is asking you to:
      $ sudo apt-get install cuda-toolkit-6-5 libgomp1 libfreeimage-dev libopenmpi-dev openmpi-bin

      After flashing and the Jetson reboots, it goes into normal mode which means that it will not be detected by lsusb. The CUDA and OpenCV4Tegra libraries are transferred via ethernet to the Jetson from the host PC. Issues with the ethernet connection will cause the installation of CUDA and OpenCV to fail.

      Hope this helps
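      If you want to double-check whether those packages actually made it onto the Jetson, a quick query on the device will show them (just a sanity check, not part of the official install):

      $ dpkg -l | grep -E 'cuda-toolkit-6-5|libgomp1|libfreeimage-dev|libopenmpi-dev|openmpi-bin'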

      • Hi Kangalow,

        I figured it might have been due to an internet issue. I uninstalled the software from my host machine and re-flashed the Jetson with dedicated ethernet cables to both devices, and I have it working now 🙂

        However, I have another issue with the wireless and bluetooth connectivity.
        Before I flashed the Jetson, I could connect to the internet using a USB WiFi dongle.
        However, now the bluetooth settings are locked as well. I tried the basic steps suggested by people on various ubuntu forums to unlock the bluetooth, but nothing works.

        Would you happen to have any insight on this issue?
        Greatly appreciate your help.

        Thanks again.

  50. I am now trying to run libfreenect2 on the Jetson TK1,

    and there are some issues.

    If you have any advice, please let me know.

    ————————————————

    1. I tried to adjust the CPU and GPU clocks:

    # cat /sys/kernel/cluster/active
    G

    # cat /sys/kernel/debug/clock/gbus/rate
    852000000

    ————————————————

    2. I am using a mini PCI-Express USB 3.0 19-pin card (Renesas uPD720200).

    Power is from the 4-pin connector on the Jetson.

    After some trials I also used an external power supply, but the same issue still happens.

    ————————————————

    3. Result of lsusb:

    $ lsusb
    Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
    Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 005 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 002 Device 003: ID 045e:02c4 Microsoft Corp.
    Bus 002 Device 002: ID 045e:02d9 Microsoft Corp.
    Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
    Bus 001 Device 002: ID 045e:02d9 Microsoft Corp.
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    ————————————————

    4. I have installed them as described below.

    https://gist.github.com/jetsonhacks/1bd2830cc1a5790a6ac2#file-installlibfreenect2-sh

    That .sh file is as described below

    sudo apt-get install -y build-essential libturbojpeg libtool autoconf libudev-dev cmake mesa-common-dev freeglut3-dev libxrandr-dev doxygen libxi-dev libjpeg-turbo8-dev

    git clone https://github.com/jetsonhacks/libfreenect2.git

    wget http://developer.download.nvidia.com/mobile/tegra/l4t/r21.2.0/sources/gstjpeg_src.tbz2

    tar -xvf gstjpeg_src.tbz2 gstjpeg_src/nv_headers

    mv gstjpeg_src/nv_headers/ libfreenect2/depends/

    rmdir gstjpeg_src/

    cd libfreenect2/depends

    sh install_ubuntu.sh

    sudo ln -s /usr/lib/arm-linux-gnueabihf/libturbojpeg.so.0.0.0 /usr/lib/arm-linux-gnueabihf/libturbojpeg.so

    cd ../examples/protonect/

    cmake CMakeLists.txt

    make && sudo make install

    cd ../..

    sudo cp extras/90-kinect2.rules /etc/udev/rules.d/90-kinect2.rules

    /bin/echo -e “\e[1;32mFinished.\e[0m”

    ————————————————-

    The following messages are displayed when I start Protonect.

    ————————————————

    $ sudo ./bin/Protonect
    [sudo] password for ubuntu:
    modprobe: FATAL: Module nvidia not found.
    [CudaDepthPacketProcessorKernel::initDevice] device 0: GK20A @ 852MHz Memory 1892MB
    [CudaDepthPacketProcessorKernel::initDevice] selected device 0
    [Freenect2Impl] enumerating devices…
    [Freenect2Impl] 8 usb devices connected
    [Freenect2Impl] found valid Kinect v2 @2:3 with serial 017234451747
    [Freenect2Impl] found 1 devices
    [Freenect2DeviceImpl] opening…
    [Freenect2DeviceImpl] opened
    [Freenect2DeviceImpl] starting…
    [Freenect2DeviceImpl] ReadData0x14 response
    92 bytes of raw data
    0x0000: 00 00 15 00 00 00 00 00 01 00 00 00 43 c1 1f 41 …………C..A
    0x0010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 …………….
    0x0020: 0a 21 33 55 20 00 17 ba 00 08 00 00 10 00 00 00 .!3U ………..
    0x0030: 00 01 00 00 00 10 00 00 00 00 80 00 01 00 00 00 ……….�…..
    0x0040: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 …………….
    0x0050: 00 00 00 00 00 00 00 00 07 00 00 00 …………

    ==== CuDepthPakePro CommandResponse ==== 1302882
    [Freenect2DeviceImpl] ReadStatus0x090000 response
    4 bytes of raw data
    0x0000: 00 22 00 00 .”..

    [Freenect2DeviceImpl] ReadStatus0x090000 response
    4 bytes of raw data
    0x0000: 00 22 00 00 .”..

    [Freenect2DeviceImpl] enabling usb transfer submission…
    [Freenect2DeviceImpl] submitting usb transfers…
    [Freenect2DeviceImpl] started
    device serial: 017234451747
    device firmware: 2.3.3913.0.7
    [TegraJpegRgbPacketProcessor] avg. time: 15.8725ms -> ~63.0022Hz
    [RgbPacketStreamParser::handleNewData] skipping rgb packet!
    [RgbPacketStreamParser::handleNewData] skipping rgb packet!
    [TegraJpegRgbPacketProcessor] avg. time: 15.9096ms -> ~62.8553Hz
    [TegraJpegRgbPacketProcessor] avg. time: 15.1333ms -> ~66.0795Hz
    [TegraJpegRgbPacketProcessor] avg. time: 15.1105ms -> ~66.1793Hz
    [TegraJpegRgbPacketProcessor] avg. time: 15.1506ms -> ~66.0038Hz
    [TegraJpegRgbPacketProcessor] avg. time: 15.1426ms -> ~66.039Hz
    [TegraJpegRgbPacketProcessor] avg. time: 15.1493ms -> ~66.0098Hz
    [TegraJpegRgbPacketProcessor] avg. time: 14.9997ms -> ~66.6679Hz
    [TegraJpegRgbPacketProcessor] avg. time: 14.4716ms -> ~69.101Hz
    [TegraJpegRgbPacketProcessor] avg. time: 14.429ms -> ~69.3047Hz
    [RgbPacketStreamParser::handleNewData] skipping rgb packet!
    [TegraJpegRgbPacketProcessor] avg. time: 14.908ms -> ~67.0779Hz
    [TegraJpegRgbPacketProcessor] avg. time: 15.0251ms -> ~66.5553Hz
    [TegraJpegRgbPacketProcessor] avg. time: 15.1235ms -> ~66.1221Hz
    [TegraJpegRgbPacketProcessor] avg. time: 15.1282ms -> ~66.1015Hz
    [TegraJpegRgbPacketProcessor] avg. time: 15.1414ms -> ~66.0441Hz

  51. Hey- I’m noticing a lot of comments here referencing errors of the form “cudnn* not declared in this scope”. I also had this issue and was confounded by it for a while. I only get the error when using cuDNN R2; it seems that the R2 update isn’t in the branch of Caffe mentioned in this post.
    It’s still really unclear to me what was causing the problem specifically (the error makes it look like Caffe is just failing to import cuDNN, despite the environment variables), but it is definitely solved by using cuDNN R1.

  52. Hello, thanks for your articles. I want to ask how to use GStreamer to capture video from a USB camera and, at the same time, send the video to an internet server. How should I write the shell command? I am a student and have only been working with the TK1 for a week. Thank you.

  53. I tried to get this working last night with a v2 Lidar Lite. All the numbers coming out are the same, but not -1. I checked over the wiring, and I can see the device with i2c-tools. It works perfectly on the Arduino, so I’m wondering if there might be a difference in registers etc. with the new unit. Their site says they are the same. Any ideas? Also, in the video, what are you using to visualize the data at the end? Is that something you wrote, or public-domain software? I’m trying to build something similar to your JetsonBot, except using a lidar and camera instead of the Kinect, since I want to use it outside.

    • I’m assuming that you’re running the example with sudo, and that you’re moving your hand over the sensor.
      When you say that it works on the Arduino, is that using the old library, or the new one?
      The visualizer at the end is just some hack I wrote, I never published it.
      Looking forward to seeing your robot! What kind of camera are you going to use?
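      As a sanity check outside the example program, you can also poke the sensor directly with i2c-tools. This is only a sketch: it assumes the sensor is on I2C bus 1 (use whichever bus you actually wired it to) at the usual 0x62 address, and the register values are the commonly cited ones (write 0x04 to register 0x00 to trigger a measurement, then read the high/low distance bytes from 0x0f/0x10); check them against your unit’s documentation:

      $ sudo i2cdetect -y -r 1
      $ sudo i2cset -y 1 0x62 0x00 0x04
      $ sudo i2cget -y 1 0x62 0x0f
      $ sudo i2cget -y 1 0x62 0x10

      Give it a few tens of milliseconds between the trigger and the reads. The distance in cm is (high byte << 8) | low byte; if every read comes back identical, the sensor is probably resetting or not acknowledging.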

      • I thought the visualizer at the end was included in the example code. Can you please publish the code or send it to me privately? I need it desperately.

        • The visualizer is not part of the example code, it’s just a sketch I wrote for the demo.
          I’ve placed the source code in a repository: https://github.com/jetsonhacks/LidarPlotSketch

          The code is provided as is, and while you’re welcome to use it, I will not be supporting or answering questions about it. The code in the video was compiled using Qt Creator 5.5. You can check on the blog or jetsonhacks YouTube channel for installation instructions for Qt.

          Good luck!

  54. I like to take photos at motorsports events, but I need a mobility scooter to do it. I rigged up a gimbal on a monopod so I can shoot sitting down. It works great, but it’s a real pain to have to move the scooter, swivel the seat, and then look through the viewfinder. So I want to give my scooter self-driving capabilities so I can just pick up my feet and have it move where I want while I keep looking through the viewfinder. Maps and GPS to find stuff easier, and a little augmented reality display to help me navigate through the crowd. I managed to get the LL to start exhibiting the same behavior on the dino, so I think I know what’s up now. It needs the cap across the power and ground lines. I suspect there is a momentary dip in the power that puts it in a semi-reset state and causes the numbers to all be the same. I set it up to use the high speed I2C and it happened all the time. Slow it down and it behaves most of the time.

    • Yes, looking at the recommended schematic on the V2 Lidar Lite they recommend the capacitor, I think the answer you came to is correct.

      That sounds like a very interesting application. I’m glad to hear that you got it working!

    • D Pollock,

      Can you confirm that you got the Lidar Lite v2 working on the Jetson? So far, I’ve followed this blog’s set of instructions, but to no avail. I2C reads do not appear to work for me: I get values, but they never change. I’ve added a 470 nF cap, but still no joy. Any suggestions?

      • Yes, I did get it working with the Jetson, along with an IMU and GPS that are all on I2C. Are you sure you have the levels right? I used a level shifter. There was something else I found, but I can’t remember right now. As soon as it comes back to my tired old brain I’ll add it to this post. I just started working on the rotating base for mine today, so I’ll be getting it back out and hooking it back up soon. I’m working with a ZED camera instead of lidar now, but I want lidar too, especially for low light or after-dark roving.

  55. Hi,
    I am attempting to compile a Qt GUI application on an NVIDIA EVB. I tried to cross-compile my code using qmake with the arm-linux-gnueabihf g++ cross compiler, but it won’t compile.
    Do you have a guide for compiling a Qt application for an ARM target?
    Regards, Yakov S.

    • I am unfamiliar with the NVIDIA EVB, what is it?
      Unfortunately I do not use a cross compiler, I only compile on the Jetson device itself. Once Qt Creator is installed as described in the video, it works as a regular Qt system.
      There is not enough information in “it won’t compile” for me to offer any suggestions.

  56. Thanks. I have RViz working now and plotting the scans. It’s not totally working yet, since the driver is looking for 360-degree scans and I’m trying to do 180 degrees with a 60-degree up and down sweep with each click in X, using a Trossen Robotics turret. It has an Arduino clone to run it, so I’m reading the Lidar Lite with that. I’m waiting on the ZED to get here; that should finish up the sensor array. I’m thinking of using the lidar as a detection and tracking device that works mainly where the ZED is not pointing: do a quick 360 scan, and if it picks up a mover, track it until it comes into the view of the ZED camera. I managed to get some point clouds with a pair of LifeCams, but the ZED is so much better and cuts out so much development time that I had to have one. There’s a full SDK for the Jetson too, although there is a current problem with the USB3 implementation that Stereolabs and NVIDIA are working on, so right now not all of the resolutions function properly. I figure it will be fixed by the time they start shipping in 2-3 weeks. If not, it’s probably just the 2K resolution; the rest should work fine, or worst case maybe just the ones that work over USB 2.

    • I’ve been talking with several people using the ZED on the Jetson, and everyone says that the results have been very positive, even with the hiccup over USB3. Hopefully you can post some pictures or videos of your setup once you get everything up and running.

      • The ZED is awesome, and the new ROS driver in the latest SDK is easy to install and get working. I put the finishing touches on my chassis tonight. I screwed up on sizing the motors, but found a page and worked through it to get the correct ones this time 🙂 No worries though; I’m using one of the ones with an encoder to spin the rotating base for the lidar unit now. I’m going to use the other one with an encoder to level the base using the IMU for input eventually. But I’m freezing any more mods until I get the thing operational; I’ll hack on it forever if I don’t 🙂 I’m finishing wiring up my motors tonight so I can test them tomorrow.

  57. Hello,

    Thank you for your posts & videos! It’s really great that you’ve been working with Jetson over a year and still post your findings regularly!

    Did you try to use those level-shifters to sense higher voltages, like 24V DC? Also, how would you go about integrating optocouplers/relays to safely sense/drive external signals?

    Andrew

    • Hi Andrew,
      First thanks for reading and watching!
      The level shifter mentioned above goes down to 1.8V on the low side, and about 10V on the high side. These devices are generally used to mix 3.3V/5V devices as it is a common voltage with a lot of sensors, flash cards, and displays with devices such as Arduinos and Raspberry Pi.

      Personally I would shy away from bringing 24V around the Jetson. Most of the time people build external boards to handle higher voltages/currents for things like driving motors.

      I’m not an electrical engineer, so you’ll have to take my answers with that in mind. Hopefully there’s enough of an answer here to get you started looking for a more complete answer.

      Typically for relay control, you’ll want to bring in extra power to drive the relays. To control a relay, a common application is a resistor and a transistor (for example, control voltage -> 1K ohm resistor -> 2N2222 transistor with a 1N4001 diode). That’s for simple on/off. To fully control a motor, you’ll need two relays, then there’s half-bridge relays and a whole little kingdom of relay combinations. For controlling several relays, some applications use a Darlington array (such as a ULN2003) which combines the necessary logic and support into an IC chip. A lot of smaller relays work on a 5V control signal, you may have to level shift the 1.8V from the Jetson to get them to play nice.

      An optocoupler can be used as a level shifter. This device decouples the input from the output entirely by putting a optical circuit in the middle. It depends on the device selected, but my initial guess would be that 1.8V to control the optocoupler might be on the low side. You may have to level shift 1.8V control signal to 5V to get the optocoupler to respond.

      Hope this helps

  58. Thank you for an extensive and prompt answer!

    Using a transistor to drive a relay sounds like a great idea! As I understand it, the Jetson provides +5V via the expansion pins? Thankfully, the relay is for a simple connection without any motors, so that should do. I hadn’t heard about Darlington arrays; thank you for the tip, I’ll research them.

    I will also try a 4N35 optocoupler soon and I think it’s more about the current I can supply it rather than voltage, so I think 10mA should make it respond quite well.

    Andrew

  59. 1. Is there a way to know whether OpenCV4Tegra is being used in an application,
    or whether it is just using standard OpenCV instead of OpenCV4Tegra?

    2. When using OpenCV4Tegra, is the code any different, for example different functions or libraries?

    Thank you.

    • Hi SeanYao,

      1. Is there a way to know whether OpenCV4Tegra is being used in an application,
      or whether it is just using standard OpenCV instead of OpenCV4Tegra?

      Most applications that are built do not use OpenCV4Tegra. The OpenCV4Tegra library has to be built into your executable, which usually means that it is purpose built. If you are writing your own programs, for example, you can use OpenCV4Tegra.

      2. When using OpenCV4Tegra, is the code any different, for example different functions or libraries?

      Except for the “non-free” libraries, the function calls are the same. The libraries, of course, are different. One is the OpenCV library that you would compile (or install using apt-get), the other is the OpenCV4Tegra.
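      If you want to check which OpenCV a particular executable is actually linked against, ldd will show you which shared libraries it resolves (my_app below is just a placeholder name), and dpkg will show whether the OpenCV4Tegra packages are installed:

      $ ldd ./my_app | grep opencv
      $ dpkg -l | grep -i opencv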

  60. Thanks for the link! A bit off-topic, but what is the use of YUV in digital technology? I went on Wikipedia for a refresher and found that “Y’UV was invented when engineers wanted color television in a black-and-white infrastructure.”

    Antmicro wrote that they used them for “our and our customers’ R&D purposes”. But I’m just curious what direction of R&D could that be?

    Kind regards,
    Andrew

    • Hi Andrew,

      YUV in this context means the format of the digital stream being delivered by the camera. There are a wide variety of formats that cameras/imagers deliver, YUV (sometimes called YCrCb) simply defines the structure of the digital data being delivered. You’ll probably hear about Bayer and RGB in a similar context, and conversion between the different formats in terms like “Bayer conversion to YUV 4:2:0”. The main idea is that you have an image, and the image is in a given format whether it is YUV, RGB, Bayer and so on. It’s similar to photo images that are in a compressed format like JPEG or PNG. It’s the same image basically, but just stored in a particular way. The format gives you the map for dealing with the image.

      In most cases, the type of imager inside the camera makes data packing simpler in a given format, so that’s what manufacturers tend to deliver ‘natively’.

      I believe that the intent by antmicro is to deliver a hardware platform for building vision enabled embedded systems and applications (which is the Development part of R&D). The ‘Research’ is for implementing CUDA code for image/video processing and possible solutions to vision processing, exploration or novel tasks.
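      If you are curious what formats a particular camera actually delivers, v4l2-ctl from the v4l-utils package will list them; for example, assuming the camera shows up as /dev/video0:

      $ sudo apt-get install v4l-utils
      $ v4l2-ctl -d /dev/video0 --list-formats-ext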

  61. Complete Jetson newbie here. I followed the steps in this guide, and upon reboot I see lines of text scrolling and then an NVIDIA splash screen, at which point the screen goes blank. I cannot seem to reach the console from here.

    Any help or guidance would be appreciated.

      • Similar problem here, I had the old 19.2 Kernel. I’m guessing you are going to say to flash the L4T 21.3 Kernel first, then the Grinch install. Is there a way to avoid having to flash, similar to the Grinch install?

        Cheers,

        Jon

        • Hi Jon,

          Am I that predictable? That’s exactly what I was going to say.
          The Grinch install described here is for 21.X version of L4T. For the older L4T 19.X kernels, the Grinch Kernel 19.3.8 described in the Jetson forum at: http://bit.ly/1jcKBug is the correct one.
          Unfortunately the only way to upgrade the OS is to flash the board. Version 21.X has a different boot loader than the 19.X series of L4T.
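          For reference, flashing from the host PC boils down to putting the Jetson into force recovery mode (USB connected to the host) and running the flash script from the Linux_for_Tegra directory of the L4T release, roughly:

          $ cd Linux_for_Tegra
          $ sudo ./flash.sh jetson-tk1 mmcblk0p1

          JetPack automates these steps, so you normally do not need to run them by hand; this is just to show what is happening underneath.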

            • Hi, I have the same problem. What should I do now? After installing the Grinch kernel I can’t see anything on the screen.

            PS please help

  62. Thank you Jim (I hope I’m not mistaken),

    I thought that RGB was the dominant format in camera sensors (I just looked up the datasheet for the first camera in antmicro’s list, the OV5640, and its raw format is JPEG). But then you mentioned “Bayer conversion to YUV 4:2:0” and I started to think about subsampling, and that seems to be the reason non-RGB color spaces exist: “4:2:2 Y’CbCr scheme requires two-thirds the bandwidth of (4:4:4) R’G’B’.” (https://en.wikipedia.org/wiki/Chroma_subsampling). And the same article explains that the visual artifacts are smallest when the reduction is in color quality as opposed to the luminance channel.

    Thanks for the link and the answer, TIL why RGB is not ruling the world (yet)!

    Andrew

    • Hi Andrew,
      Yes, there are a lot of factors in determining the best color space, a lot of them are related to the physical constraints of the imager and bandwidth restrictions. A JPEG format may be used because it’s cheaper to put an encoder on board than to have a fatter pipe to transfer the bits. Or there’s a limitation on the actual bandwidth itself, like a USB 2.0 interface. It’s a whole discipline in and of itself.

      I’m sure you’re familiar with the idea of a small imager vs large imager, the physical size of the light sensing device itself. As the marketing race to more megapixels started to heat up, the more pixel elements there were per die. But the size of the actual light sensing elements decreased. You’ll see that in DSLR types of cameras all the time where devices with fewer actual pixels have significantly better picture quality because larger sensor elements can gather more light.

      Another thing to take into account is how fast the images have to be acquired. When you have a stream of 4K video running 60fps, the requirements are obviously a lot different than 1080@30fps. So there are all sorts of tradeoffs the engineers make (especially on inexpensive cameras) to get the best image/performance possible at a given price point. That’s where you can imagine subsampling and luminance ‘cheats’ for better overall image quality or low light performance. Typically imaging is easy when there is a bunch of light about, a lot of tradeoffs get made as things get darker to minimize the noise in the image or deal with moire effects on small, high resolution imagers.

      Lots of things to take into account. In the broad sense, the actual encoding of the image is just moving some bits around in the end, so the hardware guys don’t get real excited about that, they’re just happy to get the bits out in the first place.

  63. I used your instructions, and when I run ./Protonect I get the images, but very slowly.
    And I have this output:

    ubuntu@tegra-ubuntu:~/libfreenect2/examples/protonect/bin$ ./Protonect
    [Freenect2Impl] enumerating devices…
    [Freenect2Impl] 11 usb devices connected
    [Freenect2Impl] found valid Kinect v2 @2:3 with serial 034327244547
    [Freenect2Impl] found 1 devices
    [Freenect2DeviceImpl] opening…
    [Freenect2DeviceImpl] opened
    [Freenect2DeviceImpl] starting…
    [Freenect2DeviceImpl] ReadData0x14 response
    92 bytes of raw data
    0x0000: 00 00 12 00 00 00 00 00 01 00 00 00 43 c1 1f 41 2e2e2e2e2e2e2e2e2e2e2e2e432e2e41
    0x0010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 2e2e2e2e2e2e2e2e2e2e2e2e2e2e2e2e
    0x0020: 0a 21 33 55 c2 00 17 20 00 08 00 00 10 00 00 00 2e2133552e2e2e202e2e2e2e2e2e2e2e
    0x0030: 00 01 00 00 00 10 00 00 00 00 80 00 01 00 00 00 2e2e2e2e2e2e2e2e2e2e802e2e2e2e2e
    0x0040: 31 33 00 00 00 01 0e 01 47 4b 53 36 35 30 2e 31 31332e2e2e2e2e2e474b533635302e31
    0x0050: 58 00 00 00 00 00 00 00 07 00 00 00 582e2e2e2e2e2e2e2e2e2e2e

    [Freenect2DeviceImpl] ReadStatus0x090000 response
    4 bytes of raw data
    0x0000: 01 26 00 00 2e262e2e

    [Freenect2DeviceImpl] ReadStatus0x090000 response
    4 bytes of raw data
    0x0000: 03 26 00 00 2e262e2e

    [Freenect2DeviceImpl] enabling usb transfer submission…
    [Freenect2DeviceImpl] submitting usb transfers…
    [TransferPool::submit] failed to submit transfer: LIBUSB_ERROR_IO
    [TransferPool::submit] failed to submit transfer: LIBUSB_ERROR_IO
    [TransferPool::submit] failed to submit transfer: LIBUSB_ERROR_IO
    [TransferPool::submit] failed to submit transfer: LIBUSB_ERROR_IO
    [TransferPool::submit] failed to submit transfer: LIBUSB_ERROR_IO
    [TransferPool::onTransferComplete] failed to submit transfer: LIBUSB_ERROR_IO
    [TransferPool::submit] failed to submit transfer: LIBUSB_ERROR_IO
    [DepthPacketStreamParser::onDataReceived] not all subsequences received 0
    [TransferPool::submit] failed to submit transfer: LIBUSB_ERROR_IO
    [TransferPool::onTransferComplete] failed to submit transfer: LIBUSB_ERROR_IO
    [TransferPool::submit] failed to submit transfer: LIBUSB_ERROR_IO
    [TransferPool::onTransferComplete] failed to submit transfer: LIBUSB_ERROR_IO
    [TransferPool::submit] failed to submit transfer: LIBUSB_ERROR_IO
    [TransferPool::submit] failed to submit transfer: LIBUSB_ERROR_IO

    ………………………………………

    [DepthPacketStreamParser::onDataReceived] not all subsequences received 512
    [TransferPool::submit] failed to submit transfer: LIBUSB_ERROR_IO
    [TransferPool::submit] failed to submit transfer: LIBUSB_ERROR_IO
    [TransferPool::submit] failed to submit transfer: LIBUSB_ERROR_IO
    [Freenect2DeviceImpl] started
    device serial: 034327244547
    device firmware: 2.3.3913.0.7
    [TransferPool::onTransferComplete] failed to submit transfer: LIBUSB_ERROR_IO
    [DepthPacketStreamParser::onDataReceived] not all subsequences received 31
    [DepthPacketStreamParser::onDataReceived] not all subsequences received 830
    [DepthPacketStreamParser::onDataReceived] not all subsequences received 63
    [DepthPacketStreamParser::onDataReceived] not all subsequences received 447
    [DepthPacketStreamParser::onDataReceived] skipping depth packet
    [DepthPacketStreamParser::onDataReceived] skipping depth packet
    [RgbPacketStreamParser::onDataReceived] skipping rgb packet!
    [RgbPacketStreamParser::onDataReceived] skipping rgb packet!

    What did I forget?
    Thanks.

    • How is your Kinect V2 connected to the Jetson? Is it through a PCIe card, straight to the USB 3.0 connector, or through a hub? Do you have USB autosuspend turned off. I am assuming that USB 3.0 is enabled, or it probably would not work at all.
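      For reference, this is how USB autosuspend is usually checked and disabled globally (the change does not persist across reboots unless you put it in a startup script or on the kernel command line):

      $ cat /sys/module/usbcore/parameters/autosuspend
      $ sudo bash -c 'echo -1 > /sys/module/usbcore/parameters/autosuspend'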

      • I am using a hub.
        Yes, I have USB autosuspend turned off.
        ubuntu@tegra-ubuntu:~$ grep . /sys/bus/usb/devices/*/power/autosuspend
        /sys/bus/usb/devices/1-2/power/autosuspend:-1
        /sys/bus/usb/devices/1-3.1.1/power/autosuspend:-1
        /sys/bus/usb/devices/1-3.1.2.1/power/autosuspend:-1
        /sys/bus/usb/devices/1-3.1.2/power/autosuspend:-1
        /sys/bus/usb/devices/1-3.1/power/autosuspend:-1
        /sys/bus/usb/devices/1-3/power/autosuspend:-1
        /sys/bus/usb/devices/2-1/power/autosuspend:-1
        /sys/bus/usb/devices/usb1/power/autosuspend:-1
        /sys/bus/usb/devices/usb2/power/autosuspend:-1
        /sys/bus/usb/devices/usb3/power/autosuspend:-1

        My lsusb:
        Bus 002 Device 026: ID 045e:02c4 Microsoft Corp.
        Bus 002 Device 002: ID 2109:0812
        Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 001 Device 007: ID 09da:c10a A4 Tech Co., Ltd
        Bus 001 Device 006: ID 1a40:0101 Terminus Technology Inc. 4-Port HUB
        Bus 001 Device 005: ID 0b38:0003 Gear Head Keyboard
        Bus 001 Device 004: ID 1a40:0101 Terminus Technology Inc. 4-Port HUB
        Bus 001 Device 003: ID 2109:2812
        Bus 001 Device 002: ID 8087:07dc Intel Corp.
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

        My udev rules:
        ubuntu@tegra-ubuntu:/etc/udev/rules.d$ cat 90-kinect2.rules
        # ATTR{product}==”Kinect2″
        SUBSYSTEM==”usb”, ATTR{idVendor}==”045e”, ATTR{idProduct}==”02c4″, MODE=”0666″
        SUBSYSTEM==”usb”, ATTR{idVendor}==”045e”, ATTR{idProduct}==”02d8″, MODE=”0666″
        SUBSYSTEM==”usb”, ATTR{idVendor}==”045e”, ATTR{idProduct}==”02d9″, MODE=”0666″

        And one more thing. When I run ./Protonect cl, I get this output:
        [Freenect2Impl] enumerating devices…
        [Freenect2Impl] 11 usb devices connected
        [Freenect2Impl] found valid Kinect v2 @2:26 with serial 034327244547
        [Freenect2Impl] found 1 devices
        OpenCL pipeline is not supported!
        [Freenect2DeviceImpl] opening…
        [Freenect2DeviceImpl] opened
        [Freenect2DeviceImpl] starting…

        Maybe the problem is “OpenCL pipeline is not supported!”?

        • In the example, I did not use the ‘cl’ switch with ./Protonect, it might be an issue.
          I also noticed that your hub uses a ‘Terminus Technology Inc. 4-Port HUB’, this might also be an issue. From a quick look on the Internet, this appears to be a USB 2.0 hub, a USB 3.0 hub is needed. Worth checking.
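          A quick way to confirm what speed the hub and the Kinect actually enumerated at is the tree view of lsusb; SuperSpeed (USB 3.0) devices show up on a 5000M link:

          $ lsusb -t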

          • I tried connecting the Kinect without a hub and got this output:

            ubuntu@tegra-ubuntu:~$ lsusb
            Bus 002 Device 005: ID 045e:02c4 Microsoft Corp.
            Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
            Bus 001 Device 004: ID 8087:07dc Intel Corp.
            Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
            Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
            ubuntu@tegra-ubuntu:~$

            Problem still exists.

  64. Without the error messages, my output looks like this:

    [OpenGLDepthPacketProcessor] avg. time: 25.4012ms -> ~39.3683Hz
    [OpenGLDepthPacketProcessor] avg. time: 25.3268ms -> ~39.4839Hz
    [TurboJpegRgbPacketProcessor] avg. time: 58.6912ms -> ~17.0383Hz
    [OpenGLDepthPacketProcessor] avg. time: 25.312ms -> ~39.5069Hz
    [OpenGLDepthPacketProcessor] avg. time: 24.9397ms -> ~40.0967Hz
    [OpenGLDepthPacketProcessor] avg. time: 24.9134ms -> ~40.1391Hz
    [OpenGLDepthPacketProcessor] avg. time: 25.2323ms -> ~39.6317Hz
    [TurboJpegRgbPacketProcessor] avg. time: 60.9061ms -> ~16.4187Hz
    [OpenGLDepthPacketProcessor] avg. time: 25.2316ms -> ~39.6329Hz
    [OpenGLDepthPacketProcessor] avg. time: 25.707ms -> ~38.8999Hz

    and

    $ cat /etc/nv_tegra_release
    # R21 (release), REVISION: 4.0, GCID: 5650832, BOARD: ardbeg, EABI: hard, DATE: Thu Jun 25 22:38:59 UTC 2015

    If I understand correctly, my L4T version is 21.4.

    • The only difference that I can think of is that the demo was done on a L4T 21.2 setup, while you’re running 21.4. There could have been a change in the TurboJPEG libraries since then that are holding things back. I did notice that there are requests in the libfreenect2 upstream to pull in CUDA changes to speed things back up, but I don’t know when they will be incorporated. Sorry I couldn’t be of more help.

  65. Hello
    I completed install.
    I got usb to run well with 3.0 USB hub.
    When I start Protonect, one screen opens with no error messages. I see the firmware version and frame rate messages (a steady 60, with an occasional RGB packet dropped).
    What I don’t see are the three additional windows; I never see them start or even flash.
    All I see is the one window with the frame rate messages.
    The sensor has three red lights.
    What can I be doing wrong that the other three windows do not at least pop up?

    Thanks
    Troy

  66. Hello
    I have found out a great deal more.
    Even though I see TegraJpegRgbPacketProcessor messages, I am not getting frames sent back to the listener.
    One time I started the application and all the windows came up and I saw everything.
    I rebooted the machine, started the application again, and got no cv windows, only the processor messages.
    If I set the application to start at login, I then see three cv windows, but no frames are displayed.
    It is almost as if the application tries to read the usb stream too soon or gets out of sync.
    I have tried multiple usb hubs, all 3.0, and they all do the same thing.

  67. I can see frames coming into SyncMultiFrameListener, but the lock taken in WaitForNewFrame is never released. It looks to me like a deadlock situation occurs and there is no way out.

  68. I have drilled further down.

    In the code, it expects Color, IR and Depth to be returned.
    The device gets itself into a state where only color is returned.
    If I modify the code to only look for color frames, it works like a charm.

    If you get this in time, where is it that the device is told to return all three streams?
    What determines that all three streams will come back in a specific sequence?

    Otherwise I will continue to dig deeper.

    Thanks for the support.

    In my application, I have a platform hexagonal in shape. Every 60 degrees I have a Kinect device, so the room is completely covered. I am monitoring for specifics as multiple people enter the room.

  69. I am assuming that it is OK for two devices to have the same ID

    Bus 002 Device 008: ID 2109:0812
    Bus 002 Device 009: ID 045e:02c4 Microsoft Corp.
    Bus 002 Device 007: ID 045e:02d9 Microsoft Corp.
    Bus 002 Device 006: ID 2109:0812
    Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
    Bus 001 Device 010: ID 093a:2521 Pixart Imaging, Inc.
    Bus 001 Device 009: ID 2109:2812
    Bus 001 Device 008: ID 0c45:7603 Microdia
    Bus 001 Device 007: ID 045e:02d9 Microsoft Corp.
    Bus 001 Device 006: ID 2109:2812
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    • Hi Walter,
      Good catch! The top row of the J3A2 header is mostly CSI lanes for a camera. As you probably already noticed, there are 40 output pins that have been brought over from the Jetson, here’s a schematic: http://neurorobotictech.com/Portals/0/Documents/JetduinoConn%20-%20Project.pdf
      Since there are 125 signals total (the Jetson TK1 Connector deals with 100 of these), it’s a matter of which signals should be brought over to the 40 pin. I think that the main signals that you want (UARTs, I2C, GPIOs, SPI, power) for most prototyping applications have been brought across. Obviously you don’t get all 125 signals, but you do get a good selection. Also having the design files for the board available makes it much easier for someone to create their own custom board if they need other signals from the Jetson.

      Looking forward to seeing the next Myzharbot, MakerFaire in Rome is only a couple of weeks away!

  70. Ok

    I got it to work. First, an explanation: I initially thought I was having issues with OpenCV4Tegra. I rebuilt OpenCV for four processors and had the same issues. It was then that I realized that I was only getting the color frames, and nothing would be drawn until I was receiving all the frames.

    I started looking more at the locks — I still don’t entirely understand the model as implemented.

    What I ended up doing was to make sure that I had inbound frames of all three types before I started looking for frames to draw. Success.

    I will end up rewriting some of the code so that I better understand the lock model. I also need to be able to use the depth data to draw skeletons.

    Bottom line, I am off and running and want to thank you for your support.

    Thank you!

    • Hi Troy,

      I believe I may be running into the same issue you had, and was wondering if you could share or explain how you changed the example code to get it working.

      Any help appreciated,

      Thanks!

  71. Hi Troy,
    It’s great to hear that you got it working to your satisfaction! I’ve been traveling, sorry I didn’t get a chance to help you with this. Your project sounds like fun, I hope you can share some of it with us.

  72. Hello, I followed everything you said, but every time I try to connect from the host PC to the Jetson it keeps asking me for the password, even though I enter the correct password. When I try to connect from my phone, it keeps saying “obtaining IP address”. Do you have any idea how to solve this issue?

  73. There is a lot of material covered in this video for the setup. I’m not sure what issue you are encountering, but it probably has to do with the configuration and the different parameters covered in the setup.
    Check your JetsonBot-Wifi file and make sure that the mode has been set to ‘ap’.
    What is the IP address of the Jetson once you’ve connected to the JetsonBot wifi network?

  74. Hello

    The issue that I had was a locking problem. Simply add a sleep between the start and the listening loop in Protonect.cpp. This allows all three streams to start collecting. This is a hack and not an enterprise solution, but I have been running for more than 72 hours and there is no difference between the data collected by the Jetson boards and the data collected by the Microsoft Surface laptops.
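
    A minimal sketch of that workaround, assuming the stock Protonect.cpp structure from libfreenect2 (the listener, frames, dev and protonect_shutdown names below come from that example, not from my exact code):

    #include <unistd.h>   // for sleep()

    dev->start();         // rgb, ir and depth streams begin here

    sleep(3);             // crude delay: let all three streams start flowing
                          // before we begin waiting on frames

    while (!protonect_shutdown)
    {
        listener.waitForNewFrame(frames);
        libfreenect2::Frame *rgb   = frames[libfreenect2::Frame::Color];
        libfreenect2::Frame *ir    = frames[libfreenect2::Frame::Ir];
        libfreenect2::Frame *depth = frames[libfreenect2::Frame::Depth];
        // ... draw / process the frames ...
        listener.release(frames);
    }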

  75. Hello

    I want to be able to stream the depth and RGB data as MJPEG. Is GStreamer the way to go? If so, are there any code samples? If not, suggestions are appreciated.
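
    For the RGB side, the kind of pipeline I have been considering (a sketch only: jpegenc, multipartmux and tcpserversink are standard GStreamer elements, the host/port are placeholders, and it assumes the color stream is exposed as a V4L2 device, which may not be the case for a Kinect):

    gst-launch-1.0 v4l2src device=/dev/video0 \
    ! video/x-raw, width=640, height=480, framerate=30/1 \
    ! videoconvert ! jpegenc \
    ! multipartmux boundary=frame \
    ! tcpserversink host=0.0.0.0 port=8080
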
    Thanks

  76. I’m struggling to obtain a depth point cloud from a Kinect v2 plugged into a Jetson board. Looks like you have nailed it. Can you please give me some advice as to where I should begin? Thanks to your other posts I’ve got libfreenect2 up and running.

    Thank you in advance.

    • Hi Yoni,
      The video above is from a version 1 Kinect. I haven’t tried to render a point cloud out of a V2 yet, but I believe that people have been doing it with a ROS bridge into rviz. libfreenect2 should give you the depth bits and the rgb pixels, and it’s possible to actually get them registered with the new versions of libfreenect2, but I haven’t tied the two together personally.
      You can try to look through the Point Cloud Library (PCL) lists to see if anyone has worked with libfreenect2.
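
      If you want to experiment, newer libfreenect2 versions have a Registration class that undistorts the depth frame and maps the color onto it, which is most of the way to a point cloud. A rough fragment of the calls involved (taken from the libfreenect2 headers, so check them against your version; dev, rgb, depth, r and c are assumed to already exist from the usual Protonect-style setup):

      libfreenect2::Registration registration(dev->getIrCameraParams(), dev->getColorCameraParams());
      libfreenect2::Frame undistorted(512, 424, 4), registered(512, 424, 4);

      // map the color frame onto the depth frame
      registration.apply(rgb, depth, &undistorted, &registered);

      // back-project one depth pixel (row r, column c) to a 3D point in meters
      float x, y, z;
      registration.getPointXYZ(&undistorted, r, c, x, y, z);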

  77. Hi,
    I tried to use GStreamer on my Jetson to view my webcam. I am using a cheap webcam I bought on eBay. I ran your code and changed the resolution to 640*420:
    gst-launch-1.0 -v v4l2src device=/dev/video0 \
    ! image/jpeg, width=640, height=420, framerate=30/1 \
    ! jpegparse ! jpegdec \
    ! videoconvert ! videoscale \
    ! xvimagesink sync=false
    and I get this error

    Setting pipeline to PAUSED …
    ERROR: Pipeline doesn’t want to pause.
    ERROR: from element /GstPipeline:pipeline0/GstXvImageSink:xvimagesink0: Could not initialise Xv output
    Additional debug info:
    xvimagesink.c(1765): gst_xvimagesink_open (): /GstPipeline:pipeline0/GstXvImageSink:xvimagesink0:
    XVideo extension is not available
    Setting pipeline to NULL …
    Freeing pipeline …

    Any help would be greatly appreciated.

  78. Thanks for getting back to me! Interesting that you needed to do level translation…it looks like both the Jetson and LidarLite use 3.3 V logic levels. But, it’s hard to find good docs on the differences between v1 and v2. If you think of the other thing you forgot, please let me know. Thanks!

    • My mistake. I looked back through my notebook and I didn’t get it working on the Jetson. I did contact them and they said there is no change in the code. What I ended up doing is using the state machine code example on a Trinket Pro. But I’ll be getting back to figuring that out soon. Just waiting on my breakout boards to get here to hook everything up. Since the v2 has two speeds it can run at, I’m wondering if it’s coming up at a speed that is mismatched with what the Jetson expects. I’m going to check that out first. Sorry about that, the old memory isn’t what it used to be.

  79. Cool, what was the problem with the V1 stuff? I just figured out the motor board and was going to start on this. Good thing I looked on Twitter first 🙂

    • Two problems:
      1) There weren’t enough sharks in the LIDAR-Lite v1 video.
      2) The original routine ‘readLidarLite’ used ‘i2c_smbus_read_byte_data’.
      The write/read needed a ‘STOP’ after the write on the v2, so it was changed to:
      i2c_smbus_write_byte
      i2c_smbus_read_byte
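
      A rough sketch of what that read looks like with the separate calls (the register constant and function structure here are my own illustration rather than the exact JetsonHacks source; it assumes the i2c_smbus_* inline helpers that libi2c-dev provides via <linux/i2c-dev.h>):

      #include <fcntl.h>
      #include <unistd.h>
      #include <sys/ioctl.h>
      #include <linux/i2c-dev.h>

      #define LIDARLITE_ADDRESS 0x62   // default 7-bit I2C address of the LIDAR-Lite

      // Read one LIDAR-Lite v2 register. The write and the read are separate
      // transactions, so a STOP is issued after the register address is written,
      // which is what the v2 silicon wants.
      int readLidarLiteRegister(int fd, unsigned char reg)
      {
          if (i2c_smbus_write_byte(fd, reg) < 0)
              return -1;
          return i2c_smbus_read_byte(fd);
      }

      // usage: int fd = open("/dev/i2c-1", O_RDWR); ioctl(fd, I2C_SLAVE, LIDARLITE_ADDRESS);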

      • I don’t know. Anything over 10 hours and I would be happy to call it a replaceable wear item. Is that a mako maker shark?

        • I think the official part number I use in inventory is:

          Part # MSP001
          Description
          Mako Shark – Plush

          But I think it’s just because I like the word ‘plush’. Oh, and you do have to be a little careful, they do drink like a fish …

  80. This rotating LIDAR-lite looks good to me.
    https://www.youtube.com/watch?v=nIvOWfxFefc

    LIDAR-lite v2 should be able to output one reading per degree at 120 rpm.
    The step motor drive is fine for this task I think.
    Common step motors are:
    1.8 degrees per step at 200 steps/revolution
    0.9 degrees per step at 400 steps/revolution
    0.45 degrees per step at 800 steps/revolution

    A once-per-revolution index pulse from a photo interrupter would be enough for orienting the scan in relation to the robot base.

    Good range and much less expensive than off the shelf scanning lidar units.

    • Looks like a fun and interesting project.

      One thing to remember is that the scanning rate on most commercial units is quite a bit higher over the range of the sweep. In other words, a commercial unit may have readings at 40 Hz over 270 or 360 degrees. At 120 rpm, using two LIDAR-Lites (each covering half the sweep), that’s about 4 Hz: 120 rpm is 2 revolutions per second, times two sensors. This is adequate for some applications, but for a fast moving vehicle it might not be.

      An inexpensive entry in the 2D LIDAR space is the RPLidar, which can run up to a 10 Hz scan rate with a sample frequency of 2000 Hz. It’s in the $400 USD ballpark.

      Most of the more expensive 2D LIDARs rotate a mirror above the electronics, rather than rotate the electronics themselves. That way they don’t have to worry about slip-rings at 2400 rpm. Also, as the speed goes up you have to worry about balancing, shock resistance, and environmental issues and such (dirt and debris getting in bearings is not much fun). Still, they are expensive and with the push in automotive markets to get the price down, one would think that eventually that filters out into the robotics market.

  81. We have Atheros AR9462 Wireless+BT card and have installed Grinch using the instruction described here. The wireless portion is working well. However, we are not able to find the bluetooth device using “hcitool dev” and “dmesg | grep -i blue” does not show the bluetooth driver is being loaded. Does anybody have any success in using the Bluetooth with this card ? Thank you very much.

  82. Yes, the 360 degree sweep may be asking a little too much out of this unit.
    A first gen Kinect worked reasonably well as a ROS ‘fake’ indoor laser scanner for me.
    The strength of the LIDAR-Lite is its outdoor capability in full sunlight.

    Kinect has a ~46 degree field of view.
    Kinect2 and PrimeSense have ~60 degree field of view.
    The ZED stereo camera has 110 degree diagonal field of view. Daytime use only of course.

    The LIDAR-lite v2 spec sheet mentions 750 readings a second.
    The ROS environment, in general, passes data around internally from 10 to 50Hz.
    Let’s take 10 to 30 Hz as a nice hobby-level -outdoor- lidar full-sweep return signal frequency.

    The ROS navigation packages would work well with a 75 degree sweep at 10Hz with 1 degree sensing resolution. (I have no ideas on the mechanicals yet!)
    Verifying a 40 meter return in direct sunlight with your LIDAR-lite would be a test I would be very interested in seeing.

  83. I have tried “sudo apt-get install linux-firmware” but no luck there.

    I looked at the directory /lib/modules/3.10.40-grinch-21.3.4/kernel/drivers/bluetooth/ and only see btusb.ko there. I would expect something like ath9kbt.ko there as well. So it’s probably a missing firmware/driver issue, as you have suggested.

    I’ll ask this question on the Jetson forum. Thank you very much,

    Norman

  84. With a drawer full of -other- 6 and 9 DOF IMUs, I hesitated to buy this.

    I am glad I did. The MEMS gyros and accelerometers in the other IMUs all work well. I could never get the magnetometer component of the 9DOF units to determine ‘magnetic north’ well enough to use. A lot of robot odometry routines fall back to using just the rotation gyro for this reason, I suspect.

    This Bosch IMU does repeatably return heading values in relation to ‘north’ while mounted on my robot.
    To give the automatic calibration software the best chance, I did mount it as far away from the motors as possible.
    Recommended!

  85. Hi, I’m having trouble installing OpenNI2 on my Jetson.

    When I run the command

    cp -L lib/OpenNI2-FreenectDriver/libFreenectDriver* ${Repository}

    I get this error

    cp: target ‘”../../Bin/Arm-Release/OpenNI2/Drivers”’ is not a directory

    Any help would be great.

    • Hi Kalvik,
      You’ll need to check if the directory is actually there. This step is at here in the video: https://youtu.be/Bn9WqbYtNBw?t=6m4s
      Of course the full path will be different on your machine. You should be in the ‘OpenNI2/libfreenect2/build’ directory. The full repository path looks to be ‘OpenNI2/Bin/Arm-Release/OpenNI2/Drivers’
      You should check to make sure that it is there.

  86. I went to the
    lib/OpenNI2-FreenectDriver/
    and the folder contains 3 files with the name libFreenectDriver with different extensions.

    So it is the right directory right?
    But I still get the error

    • Hmm, you’re trying to copy the binary files into the directory ‘OpenNI2/Bin/Arm-Release/OpenNI2/Drivers’
      Your error message says that the directory does not exist. Is that the case?

  87. I went to the ‘OpenNI2/Bin/Arm-Release/OpenNI2/Drivers’ directory and it exists, but I keep getting the error. So I went ahead and copied the files from the ‘lib/OpenNI2-FreenectDriver/’ directory to ‘OpenNI2/Bin/Arm-Release/OpenNI2/Drivers’, and then when I run
    $ pkg-config --modversion libopenni2
    from the OpenNI2 directory the terminal doesn’t show me anything.

    • Hi Kalvik,
      It looks like there’s some formatting issues with the code. Sorry, horrible teething pains, but thank you for catching it.

      Try:
      Repository=../../Bin/Arm-Release/OpenNI2/Drivers
      $ cp -L lib/OpenNI2-FreenectDriver/libFreenectDriver* ${Repository}

      and:

      $ pkg-config --modversion libopenni2

      There were no quotes in the Repository line. There are two hyphens instead of one in the pkg-config command.

  88. Mine is on order. It will be a much better platform for the ZED. Yeah, a little pricey, but a lot of bang for the buck. And I’m sure compact robot-oriented daughterboards aren’t far off. That should bring down the cost a bit. Although it looks like you have to buy 10K of them to get just the module. Hopefully that will change. Or the big electronics houses will go for it and we can get them piecewise from them. There are quite a few things on the board you don’t really need on a bot or drone: I2C, UARTs, SATA, and in a lot of applications you wouldn’t even need SATA. So it could be very compact. I’ve been super happy with my TK1s, so I expect to be blown away by the TX1. 8 cores, what’s not to love 🙂

  89. You state that deep learning frameworks, such as Caffe, use SIFT/SURF features. I do not have direct experience with this framework, but I am familiar with the theory behind deep learning.

    I am planning to use a Jetson TK1 for deep learning, and use Caffe. But I would also like the speed from OpenCV4Tegra. Can you shed some light on the subject of how Caffe uses SIFT/SURF?

  90. For sure use an alarm. I switched my bot off and the switch was faulty and didn’t turn off the circuit. 2 cells at 0 volts and a puffy battery were the result. The switch I bought has to be pressed down then moved. I figured it would keep a branch etc from turning off the bot while roaming. After this happened I tested the switch. Worked about 80% of the time. Never thought about testing switches but I think I’ll start.

  91. The really good news is Jetpack 2.0 brings the VisionWorks SDK to the Jetson TK1.
    My TK1 runs the nvx_demo_feature_tracker sample program at ~25fps with a Logitech Quickcam Pro 9000 as the video source at a reported 1280×720 frame capture size.

  92. I succeeded in installing Qt Creator on my L4T r19 with this command, but after I flashed it to L4T r21.1 I cannot install it this way. Do you know how to solve this problem?

  93. I have followed the above steps but I’m unable to install the Torch extensions for cuDNN.

    This is the error I get upon typing in,

    sudo luarocks install cutorch

    Installing https://raw.githubusercontent.com/torch/rocks/master/cutorch-scm-1.rockspec
    Using https://raw.githubusercontent.com/torch/rocks/master/cutorch-scm-1.rockspec… switching to ‘build’ mode
    Cloning into ‘cutorch’…
    remote: Counting objects: 82, done.
    remote: Compressing objects: 100% (79/79), done.
    remote: Total 82 (delta 7), reused 31 (delta 0), pack-reused 0
    Receiving objects: 100% (82/82), 134.67 KiB | 134.00 KiB/s, done.
    Resolving deltas: 100% (7/7), done.
    Checking connectivity… done.
    cmake -E make_directory build && cd build && cmake .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_PREFIX_PATH="/usr/local/bin/.." -DCMAKE_INSTALL_PREFIX="/usr/local/lib/luarocks/rocks/cutorch/scm-1" && make -j$(getconf _NPROCESSORS_ONLN) install

    -- The C compiler identification is GNU 4.8.4
    -- The CXX compiler identification is GNU 4.8.4
    -- Check for working C compiler: /usr/bin/cc
    -- Check for working C compiler: /usr/bin/cc -- works
    -- Detecting C compiler ABI info
    -- Detecting C compiler ABI info - done
    -- Check for working CXX compiler: /usr/bin/c++
    -- Check for working CXX compiler: /usr/bin/c++ -- works
    -- Detecting CXX compiler ABI info
    -- Detecting CXX compiler ABI info - done
    -- Found Torch7 in /usr/local
    CMake Error at /usr/share/cmake-2.8/Modules/FindCUDA.cmake:548 (message):
    Specify CUDA_TOOLKIT_ROOT_DIR
    Call Stack (most recent call first):
    CMakeLists.txt:7 (FIND_PACKAGE)

    -- Configuring incomplete, errors occurred!
    See also "/tmp/luarocks_cutorch-scm-1-2557/cutorch/build/CMakeFiles/CMakeOutput.log".

    Error: Build error: Failed building.

  94. I tried not only this method but also the method from elinux.org. It is almost the same, but I cannot successfully install the FTDI module, and I don’t know why. I can only detect the FTDI attached to the USB port, but I cannot see any ttyUSB device.

  95. Hello, I installed Grinch 21.3, but I would like to go back to L4T 21.4 for driver development. What would be the easiest way to do so? I should be using u-boot, since I installed Grinch over L4T 21.4 without flashing. Thanks!

    • Unless you saved a copy of the kernel before Grinch replaced it, I would probably flash L4T 21.4 on the board. I’m sure that there are other ways to go about accomplishing this, but replacing the kernel on a running system can be tricky and do things that take a long time to find/debug. For me, it’s always been better just to bite the bullet and reflash, especially if I’m doing driver development, which requires a base system anyway.

  96. The .ko file is in that directory, but lsmod shows the module hasn’t been installed. I use lsmod but cannot find the module. I tried to use insmod to load the driver, but it says it cannot load it. I compiled the kernel module on L4T.

    • The ‘depmod -a’ command should have registered the driver so that modprobe can load it. I believe that the driver belongs in
      /lib/modules/$(uname -r)/kernel/drivers/usb/serial
      but it should work in just the kernel directory.
      When you perform the insmod, are there any error messages?
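
      For reference, the manual sequence I would expect to work once the module is built (assuming the module is named ftdi_sio.ko, and substituting your own kernel version via uname -r):

      $ sudo cp ftdi_sio.ko /lib/modules/$(uname -r)/kernel/drivers/usb/serial/
      $ sudo depmod -a
      $ sudo modprobe ftdi_sio
      $ dmesg | tail

      Plugging the FTDI device in afterwards should then create a /dev/ttyUSB* entry.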

  97. I put the driver in the /lib/modules/$(uname -r)/kernel directory.
    When I perform insmod it shows ‘invalid module format’.
    I tried the Grinch kernel this afternoon and it can connect to the FTDI chip now.
    I want to know why the driver works with the Grinch kernel. Is there any difference between the default kernel and Grinch, and how can I switch between the two kernels?

    • This doesn’t describe a fix to your issue, but I tried to replicate the issue here.
      I tried the FTDI install script with a new installation of L4T 21.4 several times and did not experience any problems. Each test was:
      1. Flash L4T 21.4 from host (Both JetPack 1.2 and JetPack 2.0)
      2. Install Git
      3. git clone the FTDI installer from Github
      4. run the install script
      I did notice that a couple of times I needed to reboot the machine for the changes to take effect.
      The name of the created driver should have been ftdi_sio.ko
      Unfortunately I do not know what is wrong at this point and I am afraid that I cannot be of much more help. I’m sorry that this installation did not work for you.

      As for the Grinch, it is a modified version of the regular kernel that contains drivers for many additional devices. Unfortunately it is not maintained, but here’s a link to the changes:
      https://web.archive.org/web/20150908014143/https://devtalk.nvidia.com/default/topic/823132/embedded-systems/-customkernel-the-grinch-21-3-4-for-jetson-tk1-developed/
      I believe that the only way to go back to the regular kernel is to reflash the board from the PC.

  98. Another question: I used to use a mini PCI-E 3G network module on L4T 19, but after I changed to 21.x it cannot be used in the mini PCI-E slot. I can only use the 3G module when I use a USB adapter to connect my mini PCI-E 3G network card to the board. I don’t know what the problem is.

  99. insmod: ERROR: could not insert module cp210x.ko: Invalid module format

    Some others have met the same problem.
    Did you install L4T directly, rather than with JetPack?

    • I installed L4T using JetPack, both version 1.2 and version 2.0. You can install L4T manually, of course.
      I am not sure that insmod works correctly without having the symver information, which means that you have to make all of the modules, i.e.
      $ make modules
      (‘Invalid module format’ usually means the module was built against a different kernel version or configuration than the one that is running.)
      It sounds like there are issues with your L4T install or development system setup. I would try to perform each step of the install script manually to see if there are any issues that might have been missed.
      Good luck!

  100. Diving a little deeper into the Bosch BNO055 and the ROS navigation stack…

    I am still a big fan of this device. Repeatable absolute magnetic bearing readings are nothing to be sneezed at for mobile ground robot applications.

    For those using the ROS navigation stack, this may be of interest.
    No solution yet, just interesting data points…

    From the ROS REP 103 Standard Units of Measure and Coordinate Conventions:
    “By the right hand rule, the yaw component of orientation increases as the child frame rotates counter-clockwise, and for geographic poses, yaw is zero when pointing east.

    This requires special mention only because it differs from a traditional compass bearing, which is zero when pointing north and increments clockwise. Hardware drivers should make the appropriate transformations before publishing standard ROS messages.”

    The navigation stack of ROS expects some fairly bizarre sensor outputs vs intuitive compass behavior. When using just gyro readings for yaw values not referenced to magnetic north, this is easily overlooked.

    The Bosch IMU has two data output modes, both increment yaw turning clockwise.

    The BNO055 allows inverting the sign of the Z axis in register settings. We will try that first before doing conversions further downstream.

    As usual, rviz will be the easiest debug tool for verification.

    • That’s interesting information. Coincidentally I’ve been playing with the BNO055 and found in the RTIMULib library BNO055 driver that the Euler angles had their axes remapped before being stored as a fusionPose in IMURead, and were then converted to a separate quaternion (fusionQPose). rtimulib_ros then publishes the quaternion (fusionQPose) angles to the “imu/data” topic. I’m certainly interested in hearing about what you learn.

  101. After resorting to painters tape on the floor to create a compass rose with both NED and ENU coordinate systems, the solution was actually fairly simple.

    1) I physically rotated the IMU to report zero Euler yaw with the robot facing east.
    2) Subtract the reported fused Euler yaw from 360 before sending it off to ROS.

    The Euler yaw to quaternion conversion is done in a ROS node on the TK1.
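
    A minimal sketch of that conversion (tf::createQuaternionMsgFromYaw is the standard ROS C++ helper; the odom variable here is an assumed nav_msgs/Odometry message, and yaw_enu arrives in degrees from the microcontroller):

    #include <tf/transform_datatypes.h>
    #include <cmath>

    double yaw_rad = yaw_enu * M_PI / 180.0;   // ROS wants radians
    odom.pose.pose.orientation = tf::createQuaternionMsgFromYaw(yaw_rad);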

    My robot is doing all the odometry calculations on a mbed enabled nucleo STM32 development board. The Bosch IMU is being read by the mbed board in my case.

    X_odometry increases as the robot travels east.
    Y_odometry increases as the robot travels north.
    Euler yaw value increases as the robot turns counter-clockwise.

    The ROS navigation stack seems OK with this setup.
    It is strange to have the robot moving due east on a zero degree heading.

  102. How about the entire odometry code? x_odom, y_odom and yaw_enu are all that ROS needs to publish the robot pose. Things do get easier with faster microcontrollers. Most of the current odometry examples seem to be geared for legacy (slow) 8-bit microcontroller limitations.

    I will probably circle back around to this one day to see if fiddling with the IMU registers can accomplish the same thing. I have probably hosed the pitch and roll reporting, but yaw is really the only thing I am after from the IMU.
    ————————————————————————-

    imu.get_Euler_Angles(&euler_angles); // From Bosch BNO055 IMU
    yaw_enu = 360 - euler_angles.h;

    left_pulses_odom = left_enc.getPulses();
    right_pulses_odom = right_enc.getPulses();

    left_delta_odom = left_pulses_odom - prev_left_pulses_odom;
    right_delta_odom = right_pulses_odom - prev_right_pulses_odom;

    distance_delta_odom = (0.5 * (double)(left_delta_odom + right_delta_odom)) * MetersPerCount;
    distance_total_odom += distance_delta_odom;
    x_delta_odom = distance_delta_odom * (double)cos(yaw_enu*0.017453292);
    y_delta_odom = distance_delta_odom * (double)sin(yaw_enu*0.017453292);

    x_odom += x_delta_odom;
    y_odom += y_delta_odom;
    prev_left_pulses_odom = left_pulses_odom;
    prev_right_pulses_odom = right_pulses_odom;

  103. A followup, by using volume 1 of “ROS By Example” by R. Patrick Goebel as a step by step guide for standards checking of my scratch built ‘turtlebot cousin’, this second iteration of the build works well.

    Xbox Kinects have a narrow field of view to generate ‘fake’ laser scan data. My previous build with gyro only yaw was not accurate enough for ROS move_base and amcl to work as robustly as I knew they could.

    The Bosch IMU allows such tight odometry calculations, the ROS navigation stack now works like it should. Even with a less than optimum ‘laser’, the robot keeps localization within a map during ‘patrols’ very well.

    Where the Jetson will shine is adding some vision applications to the mix.
    Running ROS does not tax the JetsonTK1 capabilities at all.

    Only one minor gripe with the Bosch IMU. I do not think there is a way to store calibration data on the device between power cycles. This means on every power-up of the IMU, I have to pick up the robot and do the ‘figure eight’ maneuver to calibrate the IMU. This is not a problem on a small robot. Larger robots may need a way to unmount the IMU to wave it around and remount in the same position.

    • My understanding is that once you have the IMU calibrated, you can read the calibration information from the chip, save it somewhere, and then on startup set the calibration for the IMU from the saved data. The IMU will still calibrate itself, but it should be pretty close.

      The registers that hold the calibration data have names like:

      ACCEL_RADIUS ….
      MAG_RADIUS …
      ACCEL_OFFSET…
      MAG_OFFSET…
      GYRO_OFFSET…
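
      A minimal sketch of the idea (the register addresses are from the BNO055 datasheet: the 22 offset/radius bytes live at 0x55-0x6A, and the device should be in CONFIG mode while they are read or written; error checking and the actual saving to a file are left out):

      #include <fcntl.h>
      #include <unistd.h>
      #include <sys/ioctl.h>
      #include <linux/i2c-dev.h>

      #define BNO055_ADDRESS  0x28
      #define BNO055_OPR_MODE 0x3D
      #define CALIB_START     0x55   // ACC_OFFSET_X_LSB
      #define CALIB_LENGTH    22     // through MAG_RADIUS_MSB (0x6A)

      int main(void)
      {
          int fd = open("/dev/i2c-1", O_RDWR);
          ioctl(fd, I2C_SLAVE, BNO055_ADDRESS);

          // switch to CONFIG mode before touching the offset registers
          i2c_smbus_write_byte_data(fd, BNO055_OPR_MODE, 0x00);
          usleep(30000);

          unsigned char calib[CALIB_LENGTH];
          for (int i = 0; i < CALIB_LENGTH; i++)
              calib[i] = i2c_smbus_read_byte_data(fd, CALIB_START + i);

          // ... save calib[] somewhere, and write it back the same way at startup ...
          close(fd);
          return 0;
      }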

  104. I meet a problem about the 3g mobile broadband . I used to use R19.3 and it works well both when I put the card to mini pci-e slot or put the card to usb through a usb adapter.
    Now I use 21.3.4 and now the mobile broadband card now just works with the adapter through usb but not work when I put the card into mini pci-e slot.I don’t know why.Who can tell me ?

    My card is Huawei 909s-821.

    • Hi,
      Unfortunately I do not have any experience with the Huawei, or what the issue in your particular case is. In general, to get these devices to work you will need the Linux device driver for the device (this article calls that a module), and the actual firmware for the device, which you can most likely get with the command:
      $ sudo apt-get install linux-firmware
      Sorry I can’t be of more help.

  105. Hello Jim,

    I’m a master student with almost no experience using linux/ubuntu, for my master’s thesis I have the task of setting up a Jetson TK1 so that I can cross-compile from my laptop. The Jetson is connected to a router through an ethernet cable and so is my PC (Wi-Fi). I’ve been handed the Jetson and professors have no way to help me with this task since all previous works were written, compiled and executed on the Jetson.

    I have tried to follow two blogs that NVIDIA has for setting up this remote cross-compilation, but I have failed. The main reason, I think, is that the blogs target a community with experience using Ubuntu.

    Since I have not found more information on the net I tried opening a topic in Nvidia’s forum, so far no replies. Maybe it is not directly related to the purpose of this website, but I would really appreciate an article regarding this topic or a link to information that can walk me through.

    Kind regards.

  106. Thank you for the haste with the reply. I’ll keep working on it until I get the desired results. Again, I appreciate the reply.

  107. Tried this on the TX1; everything works OK up to the visualization part. That fails with a segmentation fault. Any ideas?

  108. It’s the visualization stuff. Going to rviz or rqt, it’s good to go. Tried recompiling those three Python packages and discovered one of them, wxtools, is what is keeping me from building ros-desktop. It gets a seg fault when trying to build ROS as well, but the separate package compiles no problem. So finally I have everything hooked up to the TX1 that I had on the TK1s.

  109. have you meet this error?

    [ERROR] [1450191971.626716789]: Plugin cam_imu_sync load exception: Failed to load library /opt/ros/jade/lib//libmavros_extras.so. Make sure that you are calling the PLUGINLIB_EXPORT_CLASS macro in the library code, and that names are consistent between this macro and your XML. Error string: Could not load library (Poco exception = libcudart.so.6.0: cannot open shared object file: No such file or directory)
    [ERROR] [1450191971.642961757]: Failed to load nodelet [/viewer] of type [image_view/image] even after refreshing the cache: Failed to load library /opt/ros/jade/lib//libimage_view.so. Make sure that you are calling the PLUGINLIB_EXPORT_CLASS macro in the library code, and that names are consistent between this macro and your XML. Error string: Could not load library (Poco exception = libcudart.so.6.0: cannot open shared object file: No such file or directory)
    [ERROR] [1450191971.643283086]: The error before refreshing the cache was: Failed to load library /opt/ros/jade/lib//libimage_view.so. Make sure that you are calling the PLUGINLIB_EXPORT_CLASS macro in the library code, and that names are consistent between this macro and your XML. Error string: Could not load library (Poco exception = libcudart.so.6.0: cannot open shared object file: No such file or directory)

    I am trying to use the camera but don’t know how to do it.

    • Which version of L4T are you using? The error appears to be having an issue with the CUDA installation (libcudart is part of CUDA). Is CUDA loaded on your Jetson?

      It looks like you are trying to use ROS Jade, the demonstration above uses Indigo. I haven’t tried to use Jade yet.

  110. I am using L4T 21.4. I am trying to use Jade to connect a Pixhawk as well as a Logitech C270 webcam. At first I used Indigo, but it told me it cannot open image_view. I installed image_view using apt-get install ros-indigo-image-view, but that did not work. Then I tried Jade and met this problem.

        • The error message is that you are missing libcudart.so.6 so you need to find out if that is installed on your machine.
          If you are using L4T 21.4, the matching CUDA version is 6.5.
          I am not sure if having both versions on your machine is an issue.
          Does the webcam work in the Cheese application?
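
          A couple of quick ways to check whether the CUDA runtime library is visible (standard commands; the path assumes CUDA 6.5 installed in the usual location):

          $ ldconfig -p | grep libcudart
          $ ls /usr/local/cuda-6.5/lib/libcudart*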

          • The webcam didn’t work in Cheese; Cheese crashes. I removed CUDA 6.5 and used 6.0 with L4T 19.4 and it works. I will check later whether it works with 6.5 and post.

  111. My robot has 24 volt motors. A 6s LiPo battery fits that requirement.
    Always on robot operation requires automatic docking and recharging.
    There are at least three techniques for finding the recharging station and docking for battery recharging that I have seen.
    1) IR led beacons as used in the Roombas and Turtlebots.
    2) Fiducial markers using a camera as the primary sensor.
    3) Line following of boundary markers to the docking station, as used in lawn mowing robots.

    As kangalow mentioned, LiPo cells need to be individually monitored during charging or overvoltage/overheating/fire can result during the charge cycle.

    I have used Turnigy brand LiPo chargers from HobbyKing for years with good results.
    The drawback was having to select the charging profile and initiate charging with pushbuttons each time a battery is charged.

    Even with a good charger, it would be prudent to place the docking station in an area that would not spread a fire if the batteries were to burst into flames during charging.

    I have recently tested the HobbyKing Turnigy B6 LiPo charger and can verify that the charger starts the charge cycle with the battery and balance cable connected first, and a single power connection made second, as in a docking scenario. The charger automatically starts the charge cycle when power is applied with the battery and balance cable already connected. At ~USD $15 this charger takes care of an important element of the robot design.

    The Turnigy B6 Compact 50W 5A Automatic Balance Charger will remain on the robot and connected to the battery at all times.

    http://www.hobbyking.com/hobbyking/store/__73941__Turnigy_B6_Compact_50W_5A_Automatic_Balance_Charger_2_6S_Lipoly.html

    Recommended!

  112. Hi,
    I’m interested in the Jetson TK1 projects.
    Could you explain the C1 680 uF capacitor in more detail?
    – Why do we need this part for the LIDAR-Lite to work?
    – Where can I buy a C1 680 uF capacitor?
    – Could you explain the specification of the C1 680 uF capacitor?

    • Hi Jony,
      The manufacturer recommends the 680 uF capacitor. From the part placement, it appears to be acting as a decoupling capacitor. A decoupling capacitor is used to decouple one part of an electrical network from another. One way to look at it is that the capacitor acts as local energy storage for the device so that there is a little reserve available if the current drops momentarily. The capacitor effectively maintains power-supply voltage in the nanosecond to millisecond range during an interruption.

      You can buy capacitors almost everywhere. The one in the video is from here: http://amzn.to/1mtpZ2p
      I happened to have some on hand already, you can resize them for your own project accordingly. Hope this helps.

  113. Hi Jim,
    So I’ve flashed the TK1 from 3 different machines and I get the same result. No matter what I select as the device, it always asks for the username and IP of the TX1, so I have to cancel it, rerun the install, and select the TK1 again, and then it will go through; it only does this on the first try. It never completes the process after the TK1 reboots. Everything appears to go correctly: the flash itself seems to work, then it reboots and starts copying files over, everything seems to go well, and then it gets to Mandelbrot_gold.h 100%.
    The next line is readme.txt and then I get
    rm: cannot remove ‘cpdAdvancedQuicksort’: No such file or directory
    It does the same for 4 more directories and then says
    Now running matrixMul Sample…
    bash: NVIDIA_CUDA-6.5_Samples/bin/arm7l/Linux/release/matrixMul: no such file or directory
    Please press enter key to continue
    Everything else seems to work, but I don’t know what else I’m missing.
    I’m not sure what would have come next, because when I hit enter it just exits out and closes the window, and I’m sitting there with the network window up and the install process still waiting, as if I should select next again and start over.
    Any ideas on how I can troubleshoot this? I’m fairly new at Linux. I did take screenshots of the errors, but it won’t let me post them here in the comments…

  114. Well I reinstalled everything from scratch and was very careful to do everything in the right order, after installing part 6-1 I noticed that it did not update the bashrc file like you showed in the video so I manually updated with this:

    export ROS_MASTER_URI=http://192.168.0.58:11311
    export ROS_HOSTNAME=192.168.0.58
    export TURTLEBOT_BASE=create2
    export TURTLEBOT_STACKS=circles
    export TURTLEBOT_3D_SENSOR=asus_xtion_pro
    export TURTLEBOT_SERIAL_PORT=/dev/ttyUSB0

    and continued on to finish the installs, which completed without any errors that I saw. Then I tried to bring up the bot and got this error…

    File “/opt/ros/indigo/lib/python2.7/dist-packages/xacro/__init__.py”, line 673, in main
    f = open(args[0])
    IOError: [Errno 2] No such file or directory: ‘/opt/ros/indigo/share/turtlebot_description/robots/create2_circles_asus_xtion_pro.urdf.xacro’
    while processing /opt/ros/indigo/share/turtlebot_bringup/launch/includes/robot.launch.xml:
    while processing /opt/ros/indigo/share/turtlebot_bringup/launch/includes/description.launch.xml:
    Invalid tag: Cannot load command parameter [robot_description]: command [/opt/ros/indigo/share/xacro/xacro.py ‘/opt/ros/indigo/share/turtlebot_description/robots/create2_circles_asus_xtion_pro.urdf.xacro’] returned with code [1].

    Param xml is
    The traceback for the exception was written to the log file
    ubuntu@tegra-ubuntu:~$ sudo nano ~/.bashrc
    ubuntu@tegra-ubuntu:~$ source ~/.bashrc
    ubuntu@tegra-ubuntu:~$ roslaunch turtlebot_bringup minimal.launch --screen
    … logging to /home/ubuntu/.ros/log/b11d4b94-a6f0-11e5-8310-00044b490816/roslaunch-tegra-ubuntu-2793.log

    I got this error on the last install as well, that’s why I did the re-install, I thought maybe I missed a step…

    • I had a similar issue. When I changed the export TURTLEBOT_BASE=create2 to TURTLEBOT_BASE=roomba in the bashrc, it worked.

      I followed the file path in the -->
      Invalid tag: Cannot load command parameter [robot_description]: command [/opt/ros/indigo/share/xacro/xacro.py ‘/opt/ros/indigo/share/turtlebot_description/robot/create2_circles_asus_xtion_pro.urdf.xacro’] returned with code [1].
      to find the folder holding all the possible combinations of hardware, roomba was one of the options.

      • Hi Quartinium,
        I had one line missing in the blog article. After the installJetsonBot:

        $ ./setupBot.sh

        sets up all the paths and directories for the bash file. I updated the article JetsonBot Software Install to reflect the change.
        My apologies.

        Certainly if you have pictures or videos of your bot send them to me, I’d love to see them. We can even post them on the site with your permission.

        • I’ll happily send along the few photos I’ve taken. I’m not sure they’d be worth posting, but I’ll try to get a decent shot in the bunch.

  115. Machine 1 – both the TK1 and the host are hard wired to the same 4 port switch on a standard Cable Modem/Router.

    Machine 2 was a standard acer laptop – Wireless connection, TK1 was still connected to the router.

    Machine 3 was a VMWare Virtual Ubuntu 14.04

    All 3 failed in the same way and in the same place. I have tried re-downloading JetPack 2.0; it didn’t help…

    • I’m just curious, did you try giving it the IP address and credentials of the TK1 when asked for the TX1? It could be that there is a mislabeled dialog box, being a new program and all.

      In any case, it sounds like there is a bug there. You should report it to NVIDIA. Also, you should ask the question in the NVIDIA dev forum https://devtalk.nvidia.com/default/board/162/jetson-tk1/ as someone else might have encountered that issue. Unfortunately I haven’t seen it, so I’m not of much help. Also, I know that NVIDIA employees lurk there to gather the issues like these.

  116. I will post something to the NVIDIA forums. I also wanted to thank you for all your hard work and efforts. I’m going to try a new install completely from scratch in the morning (formatted and reinstalled the laptop) and I’ll do the BOT and ROC installs and see how it works out. Again, thank you, I’m learning a lot and I appreciate everything you’ve done with this project!

  117. My printed book collection has been pared-down dramatically over the years.

    In addition to the technical content, I bought this book as a robotics moment-in-time marker. The authors are some of the originators of the ROS environment. Computer science history is worth saving.

    It will sit next to my original copy of HOW TO BUILD YOUR OWN WORKING ROBOT PET
    BY FRANK DaCOSTA on my bookshelf.
    http://cyberneticzoo.com/cyberneticanimals/1979-robot-pet-frank-dacosta-american/

  118. I would be very interested in developing for CUDA in code::blocks. I’m not entirely new to linux, but I am new to developing on an embedded platform and for CUDA. I’ve been running into hiccups that wouldn’t be issues for me on a x86 build.

    Do you also have any suggestions for Python IDEs on the TK1?

    • I’ll try to cover CUDA soon, and how to use it within code::blocks. It’s not too bad, but there are some gotchas (and I’m sure I haven’t discovered them all yet). For python, I like spyder…since you asked, I went ahead and installed in on my Jetson, and it looks like it at least works. You can just run ‘sudo apt-get install spyder’ and it will install all the dependencies (like scipy and matplotlib). It’s a pretty nice IDE that resembles Matlab. It did install a lot of packages, so be careful if you’re low on disk space.

  119. Hi all
    I completely installed Caffe and cuDNN (following the above video, after fixing some issues…). Now I want to test with my own data. How do I do that? Which data format must I prepare, and how do I import my data into Caffe? Thanks a lot.

  120. Hello!

    You mentioned we can use the Structure Sensor attached to a DSLR. Can we do this without a computer in cases I want to capture something outdoors?

    • Hi Uma,
      Unfortunately the Structure Sensor does not have local storage, so it needs to be attached to a computer of some sort. The original intent of the Structure Sensor is to connect it to an iPad (which has the computer built in of course). People who use it with a DSLR usually have a laptop or equivalent in a backpack that they carry when recording. Thanks for reading!

    • Cool! Hopefully you can share some of the things you learn with us as you build, and certainly add in any ideas or directions that you’d like to see the project take.

  121. Hi, could the Jetson TK1 be programmed while connected to the PC, and then be separated from the PC and connected to the ZED for other purposes and projects?

    • Yes, you can cross compile on an Ubuntu host and download the resulting program to the Jetson. Most people just develop on the Jetson as it is a capable development environment on its own, but there are a group of users who develop on a PC.

  122. Hi, Thanks for great tutorials, did you have any success for extracting audio from Kinect V2 using Jetson Tk1?
    Best

    • I believe that the current version, libfreenect2 0.2, does *not* support audio. From what I understand, people have been working on it, but most people are interested just in the video/depth so there hasn’t been a lot of progress in this area. Thanks for reading!

  123. Thank you very much for fantastic tutorial!
    I have followed these steps, and everything work fine, despite one thing: I am unable to connect to jupyter remotely. I added certificate, and everything seems to work fine, but when i try to connect using web browser from another machine I obtain:
    “[W 13:03:57.249 NotebookApp] SSL Error on 7 (‘192.168.1.145’, 56748): [Errno 1] _ssl.c:510: error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number”
    This is happening only on jetson. I have the same configuration on virtual machine and I am able to connect to https://[ip]:9999.
    I’ll be grateful for any suggestions.

    • Me too! There’s a little lull here while I’m waiting for more parts, but they’re starting to trickle in and there should be some good progress on the car soon.

    • I would think that it’s pretty straight forward, the only real issue is getting the software setup. For the robot part, the TurtleBot package should work without much effort.

  124. Hey all,
    I am getting the following error on make -j 4 runtest when compiling with cuDNN, while it works perfectly without compiling with cuDNN.

    F0206 03:23:00.169116 12663 cudnn_softmax_layer.cpp:15] Check failed: status == CUDNN_STATUS_SUCCESS (1 vs. 0) CUDNN_STATUS_NOT_INITIALIZED
    *** Check failure stack trace: ***
    @ 0x43255060 (unknown)
    @ 0x43254f5c (unknown)
    @ 0x43254b78 (unknown)
    @ 0x43256f98 (unknown)
    @ 0x43c2b80c caffe::CuDNNSoftmaxLayer::LayerSetUp()
    @ 0x43c5c9ee caffe::SoftmaxWithLossLayer::LayerSetUp()
    @ 0x43beedfc caffe::Net::Init()
    @ 0x43befec0 caffe::Net::Net()
    @ 0x43bdccc2 caffe::Solver::InitTrainNet()
    @ 0x43bdd792 caffe::Solver::Init()
    @ 0x43bdd978 caffe::Solver::Solver()
    @ 0x2c5634 caffe::SolverTest::InitSolverFromProtoString()
    @ 0x2bf9f0 caffe::SolverTest_TestInitTrainTestNets_Test::TestBody()
    @ 0x3b2d28 testing::internal::HandleExceptionsInMethodIfSupported()
    @ 0x3ad2ba testing::Test::Run()
    @ 0x3ad34a testing::TestInfo::Run()
    @ 0x3ad422 testing::TestCase::Run()
    @ 0x3aec7a testing::internal::UnitTestImpl::RunAllTests()
    @ 0x3aee6c testing::UnitTest::Run()
    @ 0xd7a8e main
    @ 0x4417d632 (unknown)
    make: *** [runtest] Aborted

    Please help!!
    Thanks a lot

  125. Hi, do you know if this can be ported to Android 5.1 specifically? I have an NVIDIA Shield TV and a Kinect v2; I connected them and the microphone works.

    • A port seems complicated, the software being used is a C library (libfreenect2). Unfortunately I don’t have much experience with Android ports, but developing on the Shield seems challenging.

  126. As nice as the MIT competition is, the Europeans have been holding a field robot competition for a number of years now. Slower than speed trials in the tunnels, but still interesting.
    https://www.youtube.com/playlist?list=PLrbOjyH_iiHW_tasX8sv8SuQmqxj5uPBQ

    One thing that struck me about the entrants this year. The best performers all use LIDAR.

    Perhaps near the end of 2016 or the beginning of 2017, the automotive MEMS LIDAR units will drive LIDAR costs down for price sensitive applications to gain some traction.

    • Sure, there are several competitions out there that have been doing interesting work over the last 5 years at least. SparkFun has their annual race, and there are several different international competitions. The interesting thing for this website in particular is the inclusion of the Jetson in the MIT RACECAR, that is, the replacement of a small PC with a capable embedded processor. Also, the University of Pennsylvania has a very similar Jetson-based TRAXXAS Rally robot that they use in their classes. There was even talk this year of replacing the human drivers in Formula E with autonomous behavior, which certainly would have been technically interesting (though I don’t know why people would watch it after the novelty wore off).

      LIDAR is a well-proven performer in the field; that’s what the DARPA Challenge showed. In that camp is the Google self driving car. Unfortunately the LIDAR on the Google car costs about the same as the rest of the car combined. There is a competing camp: companies like Audi, Tesla and NVIDIA think that using multiple cameras along with radar is a viable approach. I believe the thinking is that the advancement in computational power and algorithms will outpace the current advantage that LIDAR enjoys.
      People have been talking about getting the MEMS LIDAR chips out at an inexpensive price point (for less than $1K in quantity 100,000) for a couple of years now; I think a lot of manufacturers are taking a wait-and-see approach at this point before placing their bets. This also does not rule out using multiple sensor types in the same vehicle, though the accountants aren’t going to be happy with more hardware.

  127. Hello,
    Thanks for sharing all this stuff !

    Do you plan to power the TK1/TX1 directly from a 3S LiPo?
    The spec says 5.5V – 19.6V, so it should probably work. But 12V is needed for PCIe. Do you know if the board steps up to 12V even if the LiPo provides less voltage? (A 3S LiPo provides 12.6V fully charged and only 10V discharged.)
    I’m thinking of using a 4S LiPo with a step-down circuit like this http://www.dx.com/p/dc-dc-adjustable-step-down-heatsink-power-module-blue-5a-319570 , what do you think of it?

    Don’t forget to use a LiPo alarm (http://www.dx.com/p/2-in-1-1-8s-lipo-battery-low-voltage-buzzer-alarm-for-rc-helicopter-white-black-180468) to avoid damaging your battery or even burning your car…

    • It should be interesting to watch. There’s been so much hype about this area for several years now it will be interesting to watch the adoption rate when affordable sensors and computing power is available.

  128. In one of the videos, you mentioned that the original ESC can be replaced by an open source controller. Can you please provide the name of such an open source controller or its parameters? Thank you in advance!

  129. Incorrect testing speeds
    On the TK1 and the TX1 I agree that each image takes about 23ms and 18ms respectively during batch testing. However, if I use a JPEG image and test it through a Python script/Caffe command line interface, I’m clocking close to 0.7~1 s.

    for example take this test script below for a single image:

    ./build/examples/cpp_classification/classification.bin \ models/bvlc_reference_caffenet/deploy.prototxt \ models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel \ data/ilsvrc12/imagenet_mean.binaryproto \ data/ilsvrc12/synset_words.txt \ examples/images/cat.jpg

    Testing takes 1s if you time it (excluding all the labels and printing). Do you know why this happens?

    • I’m getting around 3 sec on running the classification.bin exe on my image. Did you just run the time command on the execution line?

  130. Hi kangalow,

    Thanks for bringing the Jetson to life :). However, when I pull your repo, I cannot see the makefiles in each example, and the annoying mesa lib dependency problem persists. The Code::Blocks example builds fine (I prefer to write in vim), but “jetson” is not identified as a proper type in the makefile build. I am not sure that changing the “raspberry pi” definitions into “jetson” will solve the problem.

    • I don’t believe that in 0.8.4 there were make files for the examples per se; it’s all part of the openFrameworks build system. Typical cross-platform support was through Code::Blocks and the openFrameworks Project Generator.

      In the more current releases, 0.9.2, Qt support is added and the make situation is somewhat different. I haven’t spent much time using openFrameworks, but ArturoC is working on a more modern branch: https://github.com/arturoc/openFrameworks/tree/feature-jetsontk1
      One of the keys is to set the environment variable PLATFORM_VARIANT=jetson so the proper make file is used.

      There are several issues with openFrameworks on the Jetson, first of which is openFrameworks for Armv7 normally uses OpenGL ES, whereas the Jetson prefers to use OpenGL proper. Another issue is the tesselator, one of the type casts in the library is incorrect for the Jetson, so that needs to be recompiled. As I recall, there is also an issue with the GLFW library, something with the version used by openFrameworks versus the current distribution. My suggestion would be to go to the 0.9.2 version and start figuring out what the issues are, and go from there as the version that’s in the JetsonHacks repository is rather old at this point. You’ll then be able to ask questions in the openFrameworks forum, where people seem happy to help others work through issues.
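
      For example, with the makefile-based build in that branch, something like this should select the Jetson variant (a sketch only, assuming the branch wires PLATFORM_VARIANT through the usual openFrameworks makefiles):

      $ export PLATFORM_VARIANT=jetson
      $ make -j4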

    • Hi Cactus,
      Thanks for your kind words. A couple of my friends and I get together and jam every couple of months, and I throw an audio recorder on. Since I’ve been doing this for more than ten years, I have a good backlog of background music to select from. Most of the songs are several minutes long, so they tend to match the length of the videos I do for this channel. Thanks for reading and watching (and listening!)

  131. Jetson RaceCar Series, when is part 7 coming, doing some programming?
    This is a great series that is truely some of the latest technology out there, for autonomous rovers.
    Do you like the TX-1, and the Zed combination. Are you going to combine this with some of the Deep Learning CV Neural Networks and or Robotic Operating System (ROS)?

    • Hi Gary,
      The Jetson will be running ROS, along with an Ackerman steering node. I still have to write the ESC controller code, so it will be a while yet before the next installment in the series comes out. Currently I’m working on the teleoperation node, when the ZED comes into play we’ll be looking at SLAM. Thank you for reading!

  132. Hello kangalow
    I just got a Jetson TX1. I want to use it to run the Kinect V2, but I have some errors. (I have installed JetPack 2.0.) I think the problem is the version of JetPack, but the Jetson TX1 cannot install JetPack 1.0. I have no idea what to do. Did you make a video tutorial about the Jetson TX1 with the Kinect V2? If not, can you tell me how to do it? Thank you very much! These are the errors I get when I run the Kinect V2 on the Jetson TX1.

    using tinythread as threading library
    -- Could NOT find OpenCL (missing: OPENCL_LIBRARIES OPENCL_INCLUDE_DIRS)
    CUDA_TOOLKIT_ROOT_DIR not found or specified
    -- Could NOT find CUDA (missing: CUDA_TOOLKIT_ROOT_DIR CUDA_NVCC_EXECUTABLE CUDA_INCLUDE_DIRS CUDA_CUDART_LIBRARY)
    CMake Error at /usr/share/cmake-2.8/Modules/FindCUDA.cmake:548 (message):
    Specify CUDA_TOOLKIT_ROOT_DIR
    Call Stack (most recent call first):
    /usr/share/OpenCV/OpenCVConfig.cmake:45 (find_package)
    /usr/share/OpenCV/OpenCVConfig.cmake:242 (find_host_package)
    CMakeLists.txt:47 (FIND_PACKAGE)

    -- Configuring incomplete, errors occurred!
    See also "/home/ubuntu/libfreenect2/examples/protonect/CMakeFiles/CMakeOutput.log".
    See also "/home/ubuntu/libfreenect2/examples/protonect/CMakeFiles/CMakeError.log".
    ubuntu@tegra-ubuntu:~/libfreenect2/examples/protonect$ cmake CMakeLists.txt
    -- using tinythread as threading library
    -- Could NOT find OpenCL (missing: OPENCL_LIBRARIES OPENCL_INCLUDE_DIRS)
    CUDA_TOOLKIT_ROOT_DIR not found or specified
    -- Could NOT find CUDA (missing: CUDA_TOOLKIT_ROOT_DIR CUDA_NVCC_EXECUTABLE CUDA_INCLUDE_DIRS CUDA_CUDART_LIBRARY)
    CMake Error at /usr/share/cmake-2.8/Modules/FindCUDA.cmake:548 (message):
    Specify CUDA_TOOLKIT_ROOT_DIR
    Call Stack (most recent call first):
    /usr/share/OpenCV/OpenCVConfig.cmake:45 (find_package)
    /usr/share/OpenCV/OpenCVConfig.cmake:242 (find_host_package)
    CMakeLists.txt:63 (FIND_PACKAGE)

    -- Configuring incomplete, errors occurred!
    See also "/home/ubuntu/libfreenect2/examples/protonect/CMakeFiles/CMakeOutput.log".
    See also "/home/ubuntu/libfreenect2/examples/protonect/CMakeFiles/CMakeError.log".
    ubuntu@tegra-ubuntu:~/libfreenect2/examples/protonect$

  133. showing this error:
    Failed to open I2C port – Failed to read BNO055 data
    Any ideas on this ?

    • Were you able to see the BNO055 on the I2C bus? There’s not enough information in your question to give you any help.

      Once the board is wired up, turn the Jetson TK1 on.
      In order to be able inspect the i2c bus, you will find it useful to install the i2c tools:

      $ sudo apt-get install libi2c-dev i2c-tools
      After installation, in a Terminal execute:

      $ sudo i2cdetect -y -r 1
      You should see an entry of 0x28, which is the default address of the IMU.

      • Hi kangalow,
        I have same problem,

        run -> roslaunch rtimulib_ros rtimulib_ros.launch
        showing this error:
        Failed to open I2C port – Failed to read BNO055 data
        Failed to open I2C bus1

        I check i2c entry of 0x28 is OK.
        Any ideas on this ? Thanks~

          • My first guess would be that the udev permissions are messed up. What is the content of the file /etc/udev/rules.d/90-i2c.rules?

            There should be a line

            KERNEL=="i2c-[0-7]",MODE="0666"
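
            If it is missing, one way to create it and reload udev (standard commands, nothing Jetson specific):

            $ echo 'KERNEL=="i2c-[0-7]",MODE="0666"' | sudo tee /etc/udev/rules.d/90-i2c.rules
            $ sudo udevadm control --reload-rules && sudo udevadm trigger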

          • [Possible Fix] I had this issue using the tx1 and the BNO055 IMU. I edited the RTIMULib.ini file under catkin_ws/src/rtimulib_ros/config. For IMUType the default was set to 10. I changed it to 0 and it started working for me. I now look at the .ini file after running once and it says 10 again but its working repeatably now.

  134. Hello kangalow,
    My writing in the above post was a bit ambiguous. To make it clear, I will just rewrite my question as follows:
    You provided the video tutorial for connecting the Jetson TK1 with the Kinect v2 using JetPack 1.0.
    But I want to connect the Kinect v2 to a Jetson TX1. To this end, I installed JetPack 2.0, but I find it hard to get through the task. Lots of errors, as listed in the above thread. Is there a smooth way for us to run the Kinect v2 on the Jetson TX1? Thanks a lot!

  135. Wow, just catching up on JetsonHacks articles. These are much better than I thought they would be, and I have 4 Jetsons 🙂 A swarm! Going to post up to the local R/C board and see if I can pick up a few chassis cheap. Drone racing is getting large and this is sure to follow.

  136. When I flash anything to my jetson I am ALWAYS wired to the jetson with the USB cable and I never have a problem. I understand that you might be able to flash it other ways, but if you are NOT using the USB I suggest you try it that way and let us know if you are still having problems.

  137. If you have used Jetpack to install Opencv4Tegra and such, how can you back it all out and just go on with OpenCV ?

    • I think it responds to the usual persuasion, depending on how brave you are. If you use ‘Synaptic Package Manager’, run the quick filter opencv4tegra and you should see libopencv4tegra-repo, libopencv4tegra, and libopencv4tegra-dev selected and installed. You can use Synaptic to remove them. You can also do it through the command line if you’re a little more adventurous:

      $ sudo apt-get purge libopencv4tegra libopencv4tegra-dev libopencv4tegra-repo

      I’ve never tried it this way, I have always just done a fresh install without opencv4tegra when I wanted to use OpenCV by itself.

      Hope this helps.

  138. I was looking for a cheaper car to use for this project. I’m only concerned about the kind of speed control used on the Traxxas. I’m planning to use one of these [1], for which I found these [2] instructions for the speed control calibration (page 6) … do you think I’ll be fine?
    Thanks a lot, I found every one of your videos .. just amazing!

    [1] http://www.hobbyking.com/hobbyking/store/__55816__Basher_BSR_BZ_888_1_8_4WD_Racing_Buggy_RTR_.html to
    [2] http://www.hobbyking.com/hobbyking/store/uploads/773373990X365809X38.pdf

    • I’m guessing the answer is yes, any speed controller will work just fine .. but looking more carefully at the video, it seems to me that the original speed control will not allow for delicate maneuvering in a “difficult” environment (due to its aggressive start). An online search for the VESC by B. Vedder shows an upcoming production run of the controller (not cheap, ~90 US$) .. but I guess it is worth the $. (I wish I wasn’t so bad at soldering microelectronic components.)

  139. Hi

    I bought a Jetson TK1, but I get some errors when I install JetPack 1.2 on it, and I don’t know the reason. Thanks a lot!

    Error running /home/ubuntu/JetPackTK1-1.2/_installer/DownloadHelper http://developer.download.nvidia.com/devzone/devcenter/mobile/jetpack_tk1/007/common/docs.zip /home/ubuntu/JetPackTK1-1.2/jetpack_download/docs.zip -t “Downloading Documents” -r 10 -c a4ba028423b4920a18e6954b7de7d8e6 –use_md5 1:
    (DownloadHelper:4857): GLib-WARNING **: unknown option bit(s) set

    (DownloadHelper:4857): GLib-WARNING **: unknown option bit(s) set

    (DownloadHelper:4857): GLib-WARNING **: unknown option bit(s) set

    (DownloadHelper:4857): GLib-WARNING **: unknown option bit(s) set

  140. Hi

    Although I haven’t found the real reason, I found a solution: manually download the packages that could not be downloaded, and install them. It works!

    Thanks a lot!

  141. Hi

    I get some errors when I compile the Protonect example (shown below).
    I have installed JetPack 1.0 (manually downloading the packages that could not be downloaded) and the smoke example runs well, but according to the errors below it cannot find CUDA, OpenCV, or OpenCL (not installed). What’s more, I can’t find the OpenCVConfig.cmake or opencv-config.cmake files:
    ***********************************************************************************
    ubuntu@tegra-ubuntu:~/libfreenect2/examples/protonect$ cmake CMakeLists.txt
    — using tinythread as threading library
    — Could NOT find OpenCL (missing: OPENCL_LIBRARIES OPENCL_INCLUDE_DIRS)
    CUDA_TOOLKIT_ROOT_DIR not found or specified
    — Could NOT find CUDA (missing: CUDA_TOOLKIT_ROOT_DIR CUDA_NVCC_EXECUTABLE CUDA_INCLUDE_DIRS CUDA_CUDART_LIBRARY)
    CMake Error at CMakeLists.txt:47 (FIND_PACKAGE):
    By not providing “FindOpenCV.cmake” in CMAKE_MODULE_PATH this project has
    asked CMake to find a package configuration file provided by “OpenCV”, but
    CMake did not find one.

    Could not find a package configuration file provided by “OpenCV” with any
    of the following names:

    OpenCVConfig.cmake
    opencv-config.cmake

    Add the installation prefix of “OpenCV” to CMAKE_PREFIX_PATH or set
    “OpenCV_DIR” to a directory containing one of the above files. If “OpenCV”
    provides a separate development package or SDK, be sure it has been
    installed.

    — Configuring incomplete, errors occurred!
    See also “/home/ubuntu/libfreenect2/examples/protonect/CMakeFiles/CMakeOutput.log”.
    See also “/home/ubuntu/libfreenect2/examples/protonect/CMakeFiles/CMakeError.log”.
    ***********************************************************************************

    ****************************************************************
    ubuntu@tegra-ubuntu:~$ find OpenCVConfig.cmake
    find: `OpenCVConfig.cmake’: No such file or directory
    ubuntu@tegra-ubuntu:~$ find opencv-config.cmake
    find: `opencv-config.cmake’: No such file or directory
    ******************************************************************

    Maybe there is something wrong, but I don’t know where I went wrong. Thanks a lot!
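
    For reference, find only searches the current directory as written above; something like the sketch below searches the whole filesystem and then points CMake at the result (the /usr/share/OpenCV path is only an example and may differ, or not exist at all, on your install):

    $ find / -name OpenCVConfig.cmake 2>/dev/null
    $ cmake -DOpenCV_DIR=/usr/share/OpenCV CMakeLists.txt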

  142. I have had problems with warping and shrinkage of the bottom and top pieces. Any ideas? The end pieces look great. Would you recommend a commercial printing service?
    Regards

  143. Thank you for your help. I got it to work. I did not use a Mini PCI-Express card; I just used a USB 3.0 hub to connect the Kinect, and it works!

    Thanks again.

  144. Hello, I want to work with the Kinect v2 but without the NVIDIA Jetson TK1. Can it be done?
    I have an ASUS U56E laptop with an Intel i5.
    Regards.

  145. Hello.
    First, I’m Japanese, so if there are some grammatical errors, I’m sorry about that.
    I use a Jetson TK1 and a Kinect v2, and I can get depth, color, and IR images, but it is too slow.
    There are many messages like:
    [DepthPacketStreamParser::handleNewData] skipping depth packet because processor is not ready
    [RgbPacketStreamParser::handleNewData] skipping rgb packet!
    I think the processor isn’t ready, but I don’t know how to resolve this problem.
    I have maximized CPU performance and set the GPU clock (852000 kHz) following http://elinux.org/Jetson/Performance.
    But the images still update more slowly than the animations on this page.
    Is there any solution to this problem?

      • Thank you for replying. In my case, when I wave my hand about 100 mm in front of the Kinect v2, I can see a striped pattern on my hand in the color image.
        Is there any reason this occurs?

          • Sorry for replying late, kangalow. The issue occurs when I wave my hand fast, so I don’t think it is caused by a cable issue.
            I tried to take a picture of the issue. I could save it in PNG format, but not in JPG format, because of a segmentation fault.
            I don’t know why that happens. Here is the URL of a picture of the stripe issue: http://www.fastpic.jp/images.php?file=4550342358.png
            This picture will be deleted in a year.

          • Hi Hirokichi,
            My guess would be that it’s a vertical sync issue with the display, and probably requires something like graphics double buffering to fix. However, I have not experienced the issue, so I’m not a good resource for fixing this.

  146. Well made tutorial. I’m not sure, though, what I2C Gen1 and Gen2 are. Would you mind explaining what these ‘generations’ refer to?

    Thank you.

    • I2C originally came out in the 1980s, so there have been changes over the years. I’m sure a hardware guy would gleefully tell you all the differences, but in practical terms on the Jetson, the Gen1 I2C uses 1.8V logic levels (hence the need for a level shifter), whereas the Gen2 I2C uses the more common 3.3V logic levels.
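
      If you want to see what is attached to each port, i2cdetect from i2c-tools works; which /dev/i2c-N corresponds to Gen1 and which to Gen2 depends on the board and L4T release, so treat the bus numbers below as placeholders:

      $ sudo i2cdetect -y -r 0
      $ sudo i2cdetect -y -r 1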

  147. I noticed in your time runs that the forward and total pass times are actually worse when using cuDNN as opposed to CUDA only. I have observed the same results using cuDNN v2 on the TK1. Do you have any thoughts on why there appears to be no acceleration with cuDNN on the TK1 ?

    • I think there was a disconnect about the changes between cuDNN V1 and V2 between the Caffe development team and the cuDNN team. People were probably working on adding features to cuDNN and lost a little bit of performance, while at the same time Caffe wasn’t working towards V2 integration until well after it was out. With that said, it could just be a Tegra K1 thing; it may have performed much better on an actual GPU card.

      On the Jetson TX1 using v4 of cuDNN, there is much better performance using cuDNN than CUDA alone.

  148. Hi, thank you for the great works on the JetsonHack series, saving me a lot of time.

    I’m new to Gstreamer, and I want to stream camera video from TK1 to another PC. I have a question here:
    If I want to stream only video , how can I do so?
    My guess is to comment the last line `$ASOURCE ! queue ! $AUDIO_ENC ! queue ! mux.audio_0` of the command in the bash file.
    However, I got errors when I executed it (output is shown below).
    What’s the problem with these error messages? I’ve done some search but no answer.
    Command in the previous post (webcam preview) worked for me.
    Note that my webcam is Logitech c310.
    Thanks

    My output:
    gst-launch-1.0 -vvv -e mp4mux name=mux ! filesink location=gtest1.mp4 v4l2src device=/dev/video0 ! video/x-h264, width=1280, height=720, framerate=30/1 ! tee name=tsplit ! queue ! h264parse ! omxh264dec ! videoconvert ! videoscale ! video/x-raw, width=1280, height=720 ! xvimagesink sync=false tsplit. ! queue ! h264parse ! mux.video_0 tsplit. ! queue ! h264parse ! mpegtsmux ! udpsink host=127.0.0.1 port=5000
    Setting pipeline to PAUSED …
    Inside NvxLiteH264DecoderLowLatencyInitNvxLiteH264DecoderLowLatencyInit set DPB and MjstreamingPipeline is live and does not need PREROLL …
    Setting pipeline to PLAYING …
    New clock: GstSystemClock
    ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data flow error.
    Additional debug info:
    gstbasesrc.c(2865): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
    streaming task paused, reason not-negotiated (-4)
    EOS on shutdown enabled — waiting for EOS after Error
    Waiting for EOS…
    ERROR: from element /GstPipeline:pipeline0/GstH264Parse:h264parse0: No valid frames found before end of stream
    Additional debug info:
    gstbaseparse.c(1153): gst_base_parse_sink_event_default (): /GstPipeline:pipeline0/GstH264Parse:h264parse0
    /GstPipeline:pipeline0/GstMP4Mux:mux.GstPad:src: caps = video/quicktime, variant=(string)iso
    /GstPipeline:pipeline0/GstFileSink:filesink0.GstPad:sink: caps = video/quicktime, variant=(string)iso
    ERROR: from element /GstPipeline:pipeline0/GstH264Parse:h264parse1: No valid frames found before end of stream
    Additional debug info:
    gstbaseparse.c(1153): gst_base_parse_sink_event_default (): /GstPipeline:pipeline0/GstH264Parse:h264parse1
    ERROR: from element /GstPipeline:pipeline0/GstH264Parse:h264parse2: No valid frames found before end of stream
    Additional debug info:
    gstbaseparse.c(1153): gst_base_parse_sink_event_default (): /GstPipeline:pipeline0/GstH264Parse:h264parse2
    ERROR: from element /GstPipeline:pipeline0/MpegTsMux:mpegtsmux0: Could not create handler for stream
    Additional debug info:
    mpegtsmux.c(767): mpegtsmux_create_streams (): /GstPipeline:pipeline0/MpegTsMux:mpegtsmux0

  149. It’s been a long time since I have looked at this, but I believe that the split is being used to grab both the video and audio, so you should not need that. Unfortunately I’m way behind on a project due next week, so I can’t be of much more help.
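
    As a rough sketch of a video-only pipeline (assuming the camera delivers raw frames which are then encoded on the Jetson with omxh264enc; the resolution, host, and port are placeholders):

    $ gst-launch-1.0 -e v4l2src device=/dev/video0 ! 'video/x-raw, width=640, height=480, framerate=30/1' ! videoconvert ! omxh264enc ! h264parse ! mpegtsmux ! udpsink host=127.0.0.1 port=5000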

    • Thanks for your reply. I’ve managed to make it work, but quality is not as good as yours. I’ll keep searching.

      • Did you ever figure it out? If you still have your code can you post it please, I’m trying to figure out the same thing!

  150. Hello,
    I’m trying to install JetPack 2.1 on my Jetson TK1. I first had a problem with dependencies:

    package libstdc++6-armhf-cross 4.8.2-16ubuntu4cross0.11 failed to install/upgrade: trying to overwrite ‘/usr/share/gcc-4.8/python/libstdcxx/__init__.py’, which is also in package libstdc++6:i386 4.8.4-2ubuntu1~14.04.1

    and was able to fix it with

    $ sudo apt-get -o Dpkg::Options::="--force-overwrite" install -f

    My problem now is not being able to install the CUDA toolkit; the error log is below. Can you please help? I’ve been trying for 2 days straight now.

    Err http://archive.ubuntu.com trusty-updates/main armhf Packages
    404 Not Found [IP: 91.189.91.23 80]
    Err http://archive.ubuntu.com trusty-updates/restricted armhf Packages
    404 Not Found [IP: 91.189.91.23 80]
    Err http://archive.ubuntu.com trusty-updates/universe armhf Packages
    404 Not Found [IP: 91.189.91.23 80]
    Err http://archive.ubuntu.com trusty-updates/multiverse armhf Packages
    404 Not Found [IP: 91.189.91.23 80]
    Err http://archive.ubuntu.com trusty/main armhf Packages
    404 Not Found [IP: 91.189.91.23 80]
    Err http://archive.ubuntu.com trusty/restricted armhf Packages
    404 Not Found [IP: 91.189.91.23 80]
    Err http://archive.ubuntu.com trusty/universe armhf Packages
    404 Not Found [IP: 91.189.91.23 80]
    Err http://archive.ubuntu.com trusty/multiverse armhf Packages
    404 Not Found [IP: 91.189.91.23 80]
    Ign http://archive.ubuntu.com trusty/main Translation-en_US
    Ign http://archive.ubuntu.com trusty/multiverse Translation-en_US
    Ign http://archive.ubuntu.com trusty/restricted Translation-en_US
    Ign http://archive.ubuntu.com trusty/universe Translation-en_US
    Fetched 2.430 kB in 14s (167 kB/s)
    W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/trusty-security/main/binary-armhf/Packages 404 Not Found [IP: 91.189.91.14 80]

    W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/trusty-security/restricted/binary-armhf/Packages 404 Not Found [IP: 91.189.91.14 80]

    W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/trusty-security/universe/binary-armhf/Packages 404 Not Found [IP: 91.189.91.14 80]

    W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/trusty-security/multiverse/binary-armhf/Packages 404 Not Found [IP: 91.189.91.14 80]

    W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/trusty-updates/main/binary-armhf/Packages 404 Not Found [IP: 91.189.91.23 80]

    W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/trusty-updates/restricted/binary-armhf/Packages 404 Not Found [IP: 91.189.91.23 80]

    W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/trusty-updates/universe/binary-armhf/Packages 404 Not Found [IP: 91.189.91.23 80]

    W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/trusty-updates/multiverse/binary-armhf/Packages 404 Not Found [IP: 91.189.91.23 80]

    W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/trusty/main/binary-armhf/Packages 404 Not Found [IP: 91.189.91.23 80]

    W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/trusty/restricted/binary-armhf/Packages 404 Not Found [IP: 91.189.91.23 80]

    W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/trusty/universe/binary-armhf/Packages 404 Not Found [IP: 91.189.91.23 80]

    W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/trusty/multiverse/binary-armhf/Packages 404 Not Found [IP: 91.189.91.23 80]

    E: Some index files failed to download. They have been ignored, or old ones used instead.
    Reading package lists…
    Building dependency tree…
    Reading state information…
    Package cuda-toolkit-6-5 is not available, but is referred to by another package.
    This may mean that the package is missing, has been obsoleted, or
    is only available from another source

    E: Package ‘cuda-toolkit-6-5’ has no installation candidate
    E: Unable to locate package cuda-cross-armhf-6-5

  151. Hi, kangalow.
    I couldn’t solve the issue, but it’s OK, because I don’t look at fast-moving things.
    Many thanks.

  152. Hi,
    I would like to know what you used to have the keyboard, the mouse, and the ZED device connected at the same time while working directly on the TK1.
    As I have just a USB 2.0 hub, I’m looking for a way to use the keyboard and the mouse remotely without super-user rights.

  153. Reading DIFFs gives me a headache, not to mention I still might do it wrong.

    Any way you can take:

    $ gedit ThirdParty/PSCommon/BuildSystem/CommonCppMakefile

    Here’s the diff:

    — OpenNI2-2.2.0.30/ThirdParty/PSCommon/BuildSystem/CommonCppMakefile.old 2014-03-28 19:09:11.572263107 -0700
    +++ OpenNI2-2.2.0.30/ThirdParty/PSCommon/BuildSystem/CommonCppMakefile 2014-03-28 19:09:55.600261937 -0700
    @@ -95,6 +95,9 @@
    OUTPUT_NAME = $(EXE_NAME)
    # We want the executables to look for the .so’s locally first:
    LDFLAGS += -Wl,-rpath ./
    + ifneq (“$(OSTYPE)”,”Darwin”)
    + LDFLAGS += -lpthread
    + endif
    OUTPUT_COMMAND = $(CXX) -o $(OUTPUT_FILE) $(OBJ_FILES) $(LDFLAGS)
    endif
    ifneq “$(SLIB_NAME)” “”

    And just post what needs to be there ?

    Thanks,
    JT
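
    For reference, after applying that diff the section of the Makefile reads as follows (straight quotes; keep the indentation style of your original file):

    OUTPUT_NAME = $(EXE_NAME)
    # We want the executables to look for the .so's locally first:
    LDFLAGS += -Wl,-rpath ./
    ifneq ("$(OSTYPE)","Darwin")
        LDFLAGS += -lpthread
    endif
    OUTPUT_COMMAND = $(CXX) -o $(OUTPUT_FILE) $(OBJ_FILES) $(LDFLAGS)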

  154. Hi,

    I would like to confirm that your “lsusb -t” output was similar to this after the installation:
    /: Bus 03.Port 1: Dev 1, Class=root_hub, Driver=tegra-ehci/1p, 480M
    /: Bus 02.Port 1: Dev 1, Class=root_hub, Driver=tegra-xhci/2p, 5000M
    |__ Port X: Dev X, If 0, Class=Video, Driver=uvcvideo, 5000M
    |__ Port X: Dev X, If 1, Class=Video, Driver=uvcvideo, 5000M
    /: Bus 01.Port 1: Dev 1, Class=root_hub, Driver=tegra-xhci/6p, 480M

    Further to the reply of my problem posted here :
    https://devtalk.nvidia.com/default/topic/928350/jetson-tk1/zed-camera-not-recognized-on-usb-3-0-port-after-the-basics-manipulations/

    It could explain why I have an error with the ZED tools and the 720 resolution.

  155. Thanks for the great article. I needed
    export LD_LIBRARY_PATH=/usr/local/cuda-7.0/targets/armv7-linux-gnueabihf/lib
    before the tests, but other than that things worked perfectly.

  156. Hi,
    First, many thanks for providing useful information about the Jetson TK1.
    I have a question about activating the Intel 7460 wireless module.
    I installed JetPack 2.1 for L4T on my Jetson TK1 board:
    – L4T R21.4 (10 July 2015)
    – The R21.4 JetPack includes an Intel 7460 driver.

    I installed the Intel 7460 wireless module in the mini PCI-E slot and then turned on the Jetson TK1, but the wireless module did not work after boot.
    The strange thing is that it becomes active after I insert a mini USB wireless module (ipTime N100 mini) into a USB port on the Jetson TK1.

    As a result, the Intel 7460 wireless module does not work stand-alone, so I always have to connect the mini wireless module over USB.

    Could you explain why this is? Am I doing something wrong?

  157. Hi,

    When I try to install Torch on my Jetson TK1 (L4T 21.1, CUDA 6.5) I get the following problem:

    “Package libopenblas-dev is not available, but is referred to by another package.
    This may mean that the package is missing, has been obsolete, or is only available from another source

    E: Package ‘libopenblas-dev’ has no installation candidate”

    The same error appears when I follow the installation instructions from the Torch website.

    I’ve tried to download and install the package ‘libopenblas-dev’ from ‘https://launchpad.net/ubuntu/+source/openbl’. However, during the installation process I get an error message saying that the Jetson CPU architecture is not suitable for the libopenblas library.

    Any ideas on how to solve this problem?

    Thanks a lot

    PS: The tutorials are great, crystal clear

  158. Sorry, I forgot to mention. I’ve done

    sudo apt-get update

    and the problem still persists. Also, I have libblas-dev installed in the Jetson.

    Cheers

  159. I am following this tutorial and so far I am getting really good results. I am using an Edimax N150 USB adapter and I must say it’s quite unstable. I am not able to get rviz to work. As soon as I run
    roslaunch turtlebot_rviz_launchers view_robot_remote.launch --screen

    the Edimax USB WiFi adapter suddenly acts weird and my Jetson goes out of reach. Any suggestions to get it to work?

  160. Hi,

    I have successfully installed Caffe on my Jetson TK1 (L4T 21.4, CUDA 6.5, cuDNN v2). The tests for Caffe are working fine (make -j 4 all; make -j 4 test; make -j 4 runtest), and I was able to use Caffe to classify images from the ImageNet database following this example:

    http://nbviewer.jupyter.org/github/BVLC/caffe/blob/master/examples/00-classification.ipynb

    Then, I tried to use the deep residual network model ResNet-50 (from here: https://github.com/KaimingHe/deep-residual-networks) to classify images with the Jetson. So I adapted the previous python script and loaded the ResNet-50 model instead. Everything seemed to work fine until I ran the network for the classification in line 68

    output = net.forward()

    At this point ipython shuts down with the error message “Killed”. This is the last part of the output:

    I0414 13:15:48.842650 6304 net.cpp:219] conv1_relu does not need backward computation.
    I0414 13:15:48.842701 6304 net.cpp:219] scale_conv1 does not need backward computation.
    I0414 13:15:48.842742 6304 net.cpp:219] bn_conv1 does not need backward computation.
    I0414 13:15:48.842803 6304 net.cpp:219] conv1 does not need backward computation.
    I0414 13:15:48.842855 6304 net.cpp:219] input does not need backward computation.
    I0414 13:15:48.842905 6304 net.cpp:261] This network produces output prob I0414 13:15:48.843305 6304 net.cpp:274] Network initialization done. I0414 13:15:50.019062 6304 upgrade_proto.cpp:66] Attempting to upgrade input file specified using deprecated input fields: ../models/ResNet/ResNet-50-model.caffemodel
    I0414 13:15:50.019403 6304 upgrade_proto.cpp:69] Successfully upgraded file specified using deprecated input fields.
    W0414 13:15:50.019623 6304 upgrade_proto.cpp:71] Note that future Caffe releases will only support input layers and not input fields.

    mean-subtracted values: [(‘B’, 104.0069879317889), (‘G’, 116.66876761696767), (‘R’, 122.6789143406786)]

    In [7]: output = net.forward()
    Killed

    ubuntu@tegra-ubuntu:~/caffe/examples$

    I am not sure how to debug this issue (I googled it and some people suggest that this might be a problem with the compiler).

    Any ideas what could be causing this?

    The full python script can be downloaded from here: https://github.com/Lisandro79/JetsonCaffe.git

    Thanks a lot
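
    A guess worth checking: a bare “Killed” often means the kernel’s out-of-memory killer ended the process, which a model the size of ResNet-50 could plausibly trigger on the TK1’s 2 GB of RAM. Something like this shows whether that happened:

    $ dmesg | grep -i -E 'killed process|out of memory'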

  161. Hi there,

    I’m wondering how you were able to install libi2c-dev and i2c-tools through the terminal? Do you know where these packages are located?

    When I’m trying to do this, I keep getting the error: “unable to locate package libi2c-dev” and the same for i2c-tools.

    Thanks!

  162. Hi Pooja,

    After your comment, I ran the command:

    $ sudo apt-get install libi2c-dev i2c-tools

    they were found, then installed. I did not encounter any issues, the packages were installed as directed.

    • Thanks for the quick response! Do you remember ever changing your /etc/apt/sources.list to add anything? Or would you mind posting the contents of your sources.list here so I can add whatever I may be missing?

  163. Yay! Got a response from Rover with the Sabertooth motor controller, with the Jetson TX1. Now to refine his response for driving, and to get it to work with the joystick.

  164. I am trying to get my IMU talking to ROS and I am running into a permissions issue. I have tried adding the rules file you describe on two different Jetson TK1s. The first is a vanilla 21.4 install, and the other is a grinch kernel. On both of them I added the rules file and rebooted, but the permissions for the i2c lines are still set to only be for root user. I used the single line command you provide, and I verified the file was created in the correct spot. Any ideas on what I could be doing wrong here?

    • I figured it out. When I went back and looked at the entry in the rules file I noticed that it had quote marks around the whole statement. When I took those out and rebooted it worked fine. I believe your command should look like this instead: sudo bash -c 'echo KERNEL==\"i2c-[0-7]\",MODE=\"0666\" > /etc/udev/rules.d/90-i2c.rules'

      • I apologize, I had placed the command in an HTML blockquote block which messed up the intent. I placed the command into a HTML code block which should clear things up. I’m sorry for the inconvenience.

  165. Thank you, as always, for providing useful information for everyone.
    I’m very interested in this race car project, and I would like to know how to use both the Lidar-Lite v2 and an IMU module in it. These sensors need to connect to I2C on the Jetson TK1 at 3.3V, but the Jetson TK1 has only one 3.3V I2C port. Could you please explain how to handle it if I want to use two sensors in this project?

  166. Thank you, as always, for providing useful information for everyone.
    I’m very interested in this race car project, and I would like to know how to use both the Lidar-Lite v2 and an IMU module in it. These sensors need to connect to I2C on the Jetson TK1 at 3.3V, and the 3.3V I2C port is also needed for servo and ESC control. But the Jetson TK1 has only one 3.3V I2C port, so could you please explain how to handle it if I want to use all of these modules in this project?

      • Hi, kangalow. Thank you very much. I understood your comment; I had forgotten how I2C works. 🙂 In my case, I will try connecting the slave devices to the Jetson TK1’s 3.3V I2C bus with the TK1 as the master.

  167. Thanks kangalow! Once I uncommented all the lines in my sources.list file, I was able to install the libraries!

    Although now I do have another question: is there any guide from Jetson or from you on using I2C with a peripheral through the GPIO pins? I have a device for which I will have to write some code to get data on the TX1, and was curious if you had some starting advice.

    This is the device guide: https://learn.adafruit.com/adafruit-bno055-absolute-orientation-sensor/overview

    And this is the device library for Raspberry Pi: https://github.com/adafruit/Adafruit_Python_BNO055

    Thank you!

  168. Hey kangalow!

    I’ve been trying to run this and also RTIMULibDemoGL to test my BNO055. The BNO055 is connected on I2C bus 0 (pins 3 and 5 for SDA and SCL, respectively). I’m able to detect the device at address 0x28. However, when I try to run RTIMULibDemoGL, I get the error “Failed to open I2C bus 0”. Have you encountered this error before and do you have ideas for how to address it?

    Thanks!

  169. Hi!
    thanks for the useful article.
    I was wondering though, where did you get the manifold. I thought it was discontinued by DJI?

    • My understanding is that it is still available (this one came through B&H Photo), but that the purchase has to be approved by DJI, as they only want research and educational users to buy it. For example, it is used for the DJI Developers Challenge.

  170. I have been having problems getting the OpenCV VideoCapture class to correctly open both the onboard camera and a Logitech USB camera I plugged into the USB port. My first question: do you have a working example that uses OpenCV to capture camera frames? If not, do you have a VisionWorks example that captures camera images from a USB camera?
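
    For what it’s worth, a minimal OpenCV capture loop for a USB camera looks like the sketch below. The device index is an assumption, and on the Jetson the onboard CSI camera generally needs a GStreamer pipeline string passed to VideoCapture instead of a plain index, which in turn requires OpenCV built with GStreamer support:

    #include <cstdio>
    #include <opencv2/opencv.hpp>

    int main() {
        // Index 1 is a guess for the USB camera; 0 may be the onboard camera or another device.
        cv::VideoCapture cap(1);
        if (!cap.isOpened()) {
            std::fprintf(stderr, "Could not open the camera\n");
            return 1;
        }
        cv::Mat frame;
        while (cap.read(frame)) {
            cv::imshow("camera", frame);
            if (cv::waitKey(1) == 27) break;   // press Esc to quit
        }
        return 0;
    }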

  171. Really cool. I wanna do some template matching. Do you have any tutorials? Generally for OpenCV on Jetson platform will be good too!

    • The only OpenCV tutorials are those listed in the OpenCV category. Most OpenCV tutorials you find on the web will work with the Jetson. Thanks for reading!

  172. Some nice projects with the Jetson! Say, what is the procedure to download software on the SD card instead of the MMC on the TX1? Soon the MMC fills up otherwise, no? Is there a tutorial about this aspect? Thanks!

    • There are a couple of ways to go about doing that. Usually people keep the installed programs (those kept in /usr) on the internal flash and then keep any data or other folders on the SD card. There are also ways to make the SD card (or an attached thumb drive/SATA drive) the main drive, which can be set up in the U-Boot parameters. I don’t have any tutorials on it, but it should be available from either the forum or the Jetson wiki.

  173. The Jetson forum is not helpful. There is no organization of topics and the search function isn’t smart. There are too many topics to wade through looking for a certain fact. The TX1 wiki is advanced and addresses only a few issues. The TK1 wiki looks more appealing.

  174. I want to interface 3 lidars on the I2C bus, so what is the pseudocode for three lidars?
    How do I assign a device address for each of the three lidars?
    Do I need to open and close the device in a loop, or just open the device one time?

    • I don’t know what the term pseudocode means in this question.
      One way to attach multiple Lidar-Lites to a Jetson is with a PCA9544A I2C multiplexer, which lets I2C devices that share the same I2C address coexist on one bus. The Lidar-Lite v2 can also have its address reassigned.
      I’m not quite sure what you mean by opening and closing the device in a loop. See the example code for usage.
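
      As a sketch of what that might look like in practice (the mux address 0x70, the Lidar-Lite address 0x62, the register values, and the bus number are assumptions from the datasheets, not tested code):

      // Select one channel of a PCA9544A, then read a distance from the Lidar-Lite v2
      // on that channel over /dev/i2c-1.
      #include <stdio.h>
      #include <stdint.h>
      #include <unistd.h>
      #include <fcntl.h>
      #include <sys/ioctl.h>
      #include <linux/i2c-dev.h>

      static int selectMuxChannel(int fd, int channel) {
          uint8_t control = 0x04 | (channel & 0x03);       // enable bit plus channel select
          if (ioctl(fd, I2C_SLAVE, 0x70) < 0) return -1;   // address the multiplexer
          return (write(fd, &control, 1) == 1) ? 0 : -1;
      }

      static int readLidarLite(int fd) {
          uint8_t acquire[2] = { 0x00, 0x04 };             // register 0x00: acquire with bias correction
          uint8_t distanceReg = 0x8f;                      // distance high/low bytes, auto-increment
          uint8_t data[2];
          if (ioctl(fd, I2C_SLAVE, 0x62) < 0) return -1;   // address the Lidar-Lite
          if (write(fd, acquire, 2) != 2) return -1;
          usleep(20000);                                   // give the acquisition time to finish
          if (write(fd, &distanceReg, 1) != 1) return -1;
          if (read(fd, data, 2) != 2) return -1;
          return (data[0] << 8) | data[1];                 // centimeters
      }

      int main(void) {
          int fd = open("/dev/i2c-1", O_RDWR);             // bus number depends on which header is wired
          if (fd < 0) { perror("open i2c bus"); return 1; }
          for (int channel = 0; channel < 3; channel++) {  // one Lidar-Lite per mux channel
              if (selectMuxChannel(fd, channel) == 0)
                  printf("Lidar %d: %d cm\n", channel, readLidarLite(fd));
          }
          close(fd);
          return 0;
      }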

  175. Hello, I did not have this problem with my TX1. I have only set it to auto-login with remote desktop activated. I don’t know if that is the reason it worked for me.
    The resolution is small in the remote desktop, but it can be changed with “xrandr -s 1024x768”.

    • Interesting! Maybe the remote desktop activation does something at the lower levels to tell the desktop to load. Thank you for reading, and thank you for sharing.

  176. Where do you get the dongle? I’ve run into the same situation. I made it work, but this way is a lot easier. For some reason, on the Mac it will fire up if you use the alternate X server for the Mac and ssh -X. But if I try on Linux it’s no go, although lightdm is running. I’ve made it work in the past on other machines; it’s usually a major pain. The auth and display are the sticking points. Some of the ROS stuff like rqt and the messages for the ZED won’t compile on the Mac, so the dongle will be a much easier way.

  177. Jim, there must be a story behind Mr. Fire Extinguisher, no? I don’t find it on your site.
    My son has a shark like yours but with his mouth closed. We got it from the museum at Morro Bay. He first picked up one with teeth, then changed his mind and replaced it with the closed mouth model. Both boys love that shark!

    • With the shark, what’s not to like? Everyone loves sharks, they even devote weeks to them on TV.

      The fire extinguisher story is pretty simple. In high school when I was taking chemistry there always seemed to be a fire breaking out around my little station. After a while, the teacher handed me a fire extinguisher and said that I might want to consider making it my friend.

      When I was taking chemistry in college, I learned that was probably pretty sage advice as a lot of my “experiments” went down in flames. In college, of course, the chemicals are much more serious so I have to say I was running a bit scared part of the time. That and the open flames on the bunsen burners …

      When filming, Mr. Fire Extinguisher has a stunt double for some of the more risky stunts …

  178. Hi,

    Above tutorial was very informative. Thanks for the tutorial.
    I am new to Jetson. But I am trying to learn the Embedded Interface with Jetson for my master’s project. Can I have a step by step tutorial of how to interface it. Any help will be appreciated.

    Thank you.

    • Hi Bhushan,
      Other than the article and video on this sensor, I don’t have any more information to share about this sensor. I’m not sure what you mean “step-by-step” instructions.

  179. Hey Kangalow!

    I’m reaching out because I did this with the Bosch BNO055, which, once calibrated, totally worked. Do you have any experience with the MPU9250? When I tried to calibrate the magnetometer of the MPU9250, the calibration values didn’t seem to fall in line with what RTIMULib’s calibration document said to expect. I’d appreciate any insights you can offer here!

    Oh, and it might be helpful to know that I’m working with a Jetson TX1!

    Thanks!

    • Unfortunately I haven’t worked with the MPU9250 at all, but I can’t think of an obvious problem that you would encounter. I hope you get it to work.

  180. Long-time viewer, first-time poster. Thanks for the most excellent tutorials! My question: any caveats for doing this with a JTX1 instead of a JTK1?

  181. The Lidar-Lite I2C runs at 3.3V and I want to interface it with a Jetson TK1 I2C pin at 1.8V. How do I interface it?
    Note: the 3.3V I2C pins on the TK1 are already in use, so I can’t use them.

  182. Hi kangalow:
    I am trying to integrate the ADV7280M with the Jetson TK1.
    I use the driver from https://github.com/antmicro/linux-tk1.
    But I always hit a kernel panic when I start streaming.
    Can you help me?
    Below is the log:

    [ 1018.375127] kernel BUG at /home/heaven/Work/TK1/Linux_for_Tegra_tk1/sources/yuv_kernel/linux-tk1/drivers/platform/tegra/hier_ictlr/hier_ictlr.c:54!

    • Hi heavenward,
      Unfortunately I do not have a camera. You should contact Antmicro for help, plus file an issue report on their Github repository.

  183. Hi Kangalow,

    I’m trying to interface the new MM7150 IMU by Microchip to the Jetson TX1 and I need some info: how can I configure the pull-up/pull-down resistors on the GPIO pins?

    Thank you
    Walter

  184. Hi Kangalow. I would like to know what changes need to be made for OpenNI2 to work with the Jetson TX1.

      • I tried what you told me and the result I get is that the host platform cannot be determined. I also tried changing the cortex number with no success. Please help me with this issue.

        • I’m also trying to compile. As far as I know, the TX1 uses the Cortex-A57, so at least you need to change that line.

          • I am also sitting with a TX1 here and have the same problem. When I go to save the Makefile I get this:

            (gedit:10873): Gtk-CRITICAL **: _gtk_widget_captured_event: assertion ‘WIDGET_REALIZED_FOR_EVENT (widget, event)’ failed

            (gedit:10873): Gtk-CRITICAL **: _gtk_widget_captured_event: assertion ‘WIDGET_REALIZED_FOR_EVENT (widget, event)’ failed
            ubuntu@tegra-ubuntu:~/Downloads/OpenNI2$ gedit Makefile
            ubuntu@tegra-ubuntu:~/Downloads/OpenNI2$ make
            ThirdParty/PSCommon/BuildSystem/CommonDefs.mak:22: *** Can’t determine host platform. Stop.

  185. It most likely didn’t boot. There is a blurb in the documentation that you might need to ground pin 8 during boot up or it can cause the boot to fail. I managed to free up UART1 for another use, but if anything is connected, booting is iffy. I think for it to be reliable you would have to use a relay to ground it until the machine is up. So I gave up on using that one and have a 4-port PCIe card now. Actually, I like the 4-port card better. It’s full-power RS-232, so I don’t need to bother with level shifters. I found out about this when I was testing to see if I’d configured the kernel correctly. All of a sudden my TX1 wouldn’t boot. It wasn’t until I’d taken it out of the enclosure and removed all the wiring that it started booting again, no problem.

  186. Quick question. Is OpenNI2 the only platform that can run the Structure sensor with the Jetson TX1?

    • I believe that it just uses libfreenect
      You may have more success asking questions about the Structure sensor in their forums, I have limited experience with the device.

    • Hi Steve,
      I’m glad you liked the video! The music is just a couple of friends and I jamming on a Saturday afternoon. Thanks for reading.

  187. Hi,

    I have tried this tutorial so many times with the same parts … but failed.
    There are some problems executing this tutorial.
    First, I use the same level shifter (TXB0108) but it’s not working even though I wired it as shown on this page.
    Second, when I execute sudo ./simpleHCSR04, the error “gpioExport: Device or resource busy” occurs. I referred to the ‘http://elinux.org/Jetson/Tutorials/GPIO’ page for using the GPIO pins, but I still get the same error.

    Can u help me?

    Thank you.

    • Hi Tim,
      Sorry that you’ve had issues. One thing to check: In the video I was using an Adafruit BSS138 I2C-safe logic level converter. I2C has some timing issues that the level shifter needs to account for. The TXB0108 does not have the required circuitry. I apologize for not including the level shifter model number in the article, I had used it in so many of the other articles that I must have forgotten to add it here. I’ll add it in so that others may benefit. Try a BSS138 (Product ID 1438) and see if you get better results. Thanks for reading!

  188. Hi, I’m Roberta.
    I have some questions about these platforms… With the DJI Guidance on board a Matrice 100 and the Guidance SDK, it is possible to implement an automatic collision avoidance system.
    Now, if I want to make my flight completely automatic by programming a mission on a Manifold and using the DJI Guidance too, is it possible (and/or necessary?) to connect the Manifold and the Guidance together on the DJI platform?

    Thanks in advance

    • Hi Roberta,
      On several of the current drones, such as the DJI Phantom 4 and 3DR Solo, you can have mission programming without a Guidance or Manifold. The mission programming is done on a base station (usually a phone or tablet connected to the remote controller of the aircraft).
      The Guidance is designed to work with the DJI Matrice. The Manifold is designed to work with the DJI Matrice. If you are planning on using an onboard camera for vision processing on the Manifold, the DJI Matrice has all the proper connectors needed built in. It is simply a matter of wiring, the Manifold and Matrice take care of passing through the video back to the ground station.
      While it is conceivable that you could use Manifold and Guidance on a different platform, all of the code written for the Manifold and Guidance assume that it is working with a DJI drone.
      Thanks for reading!

  189. Hello, thanks for your great install guide! Caffe runs smoothly on my Jetson now. However, I am trying to create an LMDB database larger than the value used (536870912 B, about 537 MB), but changing the mapsize setting in LMDB is not letting me do this. Do you know of any possible workaround?

    Thank you

  190. This is really good news. I was a bit concerned that once I bought my Traxxas racecar there were no updates on the MIT course page (I guess it isn’t working anymore) or from you. Your words above really made me happy that I can now look forward to building my autonomous race car soon.

  191. Just re-checked their site; it seems they have uploaded all the materials to GitHub: https://github.com/mit-racecar . But it is very difficult to follow as an outsider. I hope your lectures, videos, and demos make it easy to follow. I would also request that you please dedicate one lecture, if possible, to how to create the map of the environment which the car uses to localize.

  192. Apparently new versions of JetPack have a script called ondemand that runs 60 seconds after boot. It will override any GPU speed settings you make in the rc.local file (i.e. if you run MaxPerformance.sh as part of rc.local). Run

    sudo update-rc.d -f ondemand remove

    to disable it.

  193. This is great help! Now I need to distill just the part for the IMU to work with ROS. The info needed is in this code somewhere!

  194. I am having issues with cloning the image on my Jetson. I installed the latest version of JetPack on a virtual machine running a 64-bit version of Ubuntu 14.04. I went ahead and ran the command that you used in the tutorial and it started Nvflash, but it doesn’t appear to be doing anything. It doesn’t display the bits copied over bits to be copied, just “Nvflash 4.13.0000 started”, and that’s it. I looked at the process list and found nothing that refers to this.

  195. This is great to have so much info available; that helps all builders! By the way, a “shout out” to NVIDIA for replacing my Jetson TX1 that I damaged with a power plug. They sent a new one, no charge! Now that’s customer service!

  196. Thanks Jim, really glad you are back in action and I’m looking forward to watch the Jetson RACECAR series and learn something along the way..

  197. It is good to see you back posting again.
    This device has a laser in it… where is the plush shark!?

    A day/night outdoor test would be great for a future video content.
    The spec sheet mentions 13 feet as the outside range of measurement.
    10 feet would seem reasonable for the day/night outdoor comparison.

    This looks like Intel did not try to rush it out the door. Very polished for a first gen product. It may be time to retire my trusty Kinect 360.

    Does the module connector allow multiple cameras to trigger from one clock source?

    • You are absolutely correct, I forgot the shark! This is a sad day, indeed.
      From what I understand, 10 meters is the most that can be expected outdoors. When outdoors the laser is off, it’s just acting as a stereo camera, though in the infrared light range. I don’t know what to expect outdoors at night, I would guess it’s the same as the indoor if you turn the laser projector on, but not sure.

      Once I mount the camera on a robot platform with an external power supply, I should be able to make some comparisons. I’m hoping to substitute the RealSense camera(s) for the Stereolabs ZED + Occipital Structure cameras on the RACECAR. Intel makes a short range camera, the F200 which is a little bit more money ($129) that covers the 0.2 to 1 meter range.

      I think this is the second generation of the RealSense products. Intel is spending a lot of money on this product line as they think it represents a direction of value to their company. Intel has invested large ($50M+) amounts in several different drone manufacturers, including Yuneec which has a RealSense cameras option for their Typhoon drones. Robotics is another area Intel would like to target. There has also been a push in the tablet/phone markets and webcam markets for this technology. Google Tango tablets will have this technology built in, the first products have just been announced.

      In answer to your last question, I believe the answer is no. The camera plugs into USB.

  198. In your article, it says “Make sure that the Jetson is attached to the PC using the supplied USB cable that you used to flash the Jetson with originally.” Do I actually have to have the “original” host PC or just any PC with JetPack installed?

    The reason I ask is that I am considering buying an http://www.autonomous.ai Deep Learning Robot that comes fully loaded with many ROS packages to a Jetson TK1. I need to be able to clone the TK1 without having the PC that autonomous.ai used to flash the TK1 originally.

    Thanks!

    • You can use any JetPack compatible computer (PC, Ubuntu 14.04). What you’re doing is copying a disk image from the Jetson to the PC, which gives you a backup of the Jetson. You can then restore the image, which means flash it with the backup. JetPack is basically a wrapper around the lower level commands to flash and copy files to/from the Jetson. Good luck!

  199. this saved me a ton of time, thanks.

    I’m not sure how to make it work with grinch, but the stock 21.4 kernel worked fine.

  200. Hi,

    I tried to follow your post but I am getting an error which is as follows:

    gpioGetValue unable to open gpio166: No such file or directory

    Do you have any idea why this is happening?

    • What command did you run when you got that message? It is possible that the device is wired incorrectly, or that gpio166 is not exported into user space.
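
      To check the export by hand with the sysfs GPIO interface, something along these lines works (the pin number comes from the error message above):

      $ sudo sh -c 'echo 166 > /sys/class/gpio/export'
      $ ls /sys/class/gpio/gpio166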

        • Sorry. There are two apps in the example section. It’s difficult for me to answer the question without knowing what command you executed. The question, “What command did you run when you get that message?” means what did you type on the command line of the Terminal before the message appears.

          • I checked the connection and now I am not getting that GPIO issue. Thanks

            But now I have another issue: I am always getting a distance value of around 35-40 inches, even if I move the obstacle close to the sensor. I tried with both of my ultrasonic sensors but I get the same behavior.

  201. Hi,

    It was interesting to see what you have done with the R200.

    One thing I don’t understand is that you tested two R200s together? I thought that when the IR patterns overlap, the depth sensing quality decreases significantly when using multiple RGB-D sensors together.

    In your article, you said
    “The laser texture from multiple R200 devices produces constructive interference, resulting in the feature that R200s can be colocated in the same environment. ”

    Is it possible? I’m doubtful, since two laser projectors cannot be placed at the exact same position (there is at least a one-inch difference between two R200s, even if you place one R200 directly on top of the other).

    • I would point out that in the video two cameras are being used on the same subject.

      The interference issue depends on the sensing method used by the camera. Earlier RealSense cameras, and the PrimeSense-based cameras such as the Kinect V1, use a structured light technique. In that case, when the patterns overlap you may get destructive interference, meaning that the cameras have issues sensing depth.

      On the other hand, the R200 is an active stereo camera, so it uses the patterns in a different manner: the projected texture simply gives the stereo matcher more features to work with, rather than encoding the depth directly. Intel states that you get “constructive interference” with multiple R200s in the same space. You’ll need to find some technical papers if you want a more in-depth explanation.

  202. Hi Jim,

    If you don’t mind sharing, where did you purchase your Traxxas Rally 74076-1?

    I am looking at one on Amazon.com, but I am not 100% sure if it’s the same model.

    I am looking to follow along with your project, as I need a test platform for testing outdoor terrain negotiation algorithms, and there are other models (e.g. the E-Maxx) that are probably better suited to my needs, since speed isn’t necessarily my main concern. Would you happen to have an idea of how much of your approach would still be applicable (or would be different) if I were to apply it to a different Traxxas model (say the E-Maxx)?

    Thanks

    Galto

    • HI Galto,
      I bought this one from Amain performance hobbies:
      https://www.amainhobbies.com/traxxas-rally-rtr-1-10-4wd-rally-racer-w-tqi-2.4ghz-radio-system-battery-dc-c-tra74076-1/p403885
      It was $414 USD, $450 delivered with taxes and so on. I would have liked to give an Amazon link, but I haven’t seen any on there within 100 dollars in price. Not quite sure why that is, but no reason to overspend by that much.

      It’s hard for me to say how close the E-Maxx is to the Rally, I don’t have much experience with RC cars. In overview, there are only a couple of issues. Controlling the steering servo and motor, and mounting the platform for the electronics.

      In a future version I’ll be discussing how to control the motors and servo. For moving robots, odometry is a concern, and these cars don’t have any odometry hardware built into them. I recall that there is an option to add some odometry telemetry on the Rally. An issue with the Rally cars in particular is that the minimum speed using the stock ESC is still pretty fast; it’s difficult to have it go slower than 6 or 7 miles an hour. In other words, if you drive the ESC at a constant minimum PWM signal, off it goes at faster than a walking pace.

      The UPenn race car ( http://f1tenth.org ) accepts that fact and controls the stock ESC with a microcontroller with PWM output. On the other hand, the MIT RACECAR uses an open source ESC, called a VESC, which has much finer control over the motor and can drive at slower speeds. The VESC also has good enough feedback that you can fake some odometry.

      The real trick with these cars is using the sensor data and building the algorithms to control the car. A LIDAR is not going to be used the same way outdoors as it is indoors. MIT and UPENN race in corridors, and use the LIDAR to try to place the car on the ‘track’. When you’re outside with no walls, that trick isn’t as useful. The good news is that the LIDAR that they use is $1800 USD, an outside race car can probably forego that cost and spend the time, energy and money integrating vision processing libraries and maybe something like a GPS. That approach is more economical, and to me at least, more general purpose. An impressive example is Team ISF Lowen with their Simba car:

      http://www.jetsonhacks.com/2016/02/18/team-isf-lowen-simba-2-jetson-based-autonomous-car-in-carolo-cup-competition/

      It’s a similar sized car with a Jetson TK1 and a web cam on top, and seems like it’s a lot of fun. The organizers of the event delineate the course in black and white so relatively simple edge detection works nicely here. To me, that project feels more in reach for learning and to getting some really fun results. I’m guessing you can get ‘er done for under $1K USD (versus $4K for the MIT/UPenn setups).

      Hope this helps, and I hope you will share your build with us.

  203. Is it possible to enable SPI on the standard Jetson kernel, rather than the Grinch? If so, could you please give us some tips?

  204. I have one on the way. Unfortunately the retrofit kit for my drone won’t be available till later in the year. Don’t want to wait that long to play with this new technology. Thinking about getting another gimbal and mounting the cam on that instead of the regular camera. It would be much better suited to mapping for the rover that way.

  205. When it comes to an affordable and functional solution, it is logical to use the most ubiquitous and accordingly least expensive option for generating PWM: an Arduino Uno R3, which costs US $3.45. I have installed this combination on my machine, and it works perfectly with ROS.

  206. That’s great! I read through it and will give it a try.

    BTW, does the TX1 USB 3.0 patch from NVIDIA work for the TK1 USB 3.0 issue, eliminating the need to unplug/replug the USB hub to recognise the Kinect2?

    • Hi Frank,
      Short answer is my usual waffle, it depends.

      It depends on what you measure. In the video above, the example app Protonect runs 4 panes. Pane 1 is the IR stream, Pane 2 is a color RGB stream, Pane 3 is a depth frame, and Pane 4 is a registered color/depth frame.

      When data is received from the Kinect, it is decompressed with a JPEG decoder. On the TK1 and TX1 I believe that this is done in CUDA code. Both decompress frames faster than “real time”. The actual Kinect camera provides all streams at up to 30 fps, so both the JTK1 and JTX1 consume and decompress the streams with a minimum of lag.

      If all you were to do is collect the data both the JTK1 and JTX1 are equivalent speed wise as they read and decode the data in parallel.

      If you want to manipulate the data, or display a complex representation, then in practice the JTX1 has about a 50%-100% performance gain. As an example, the registration of the color and depth maps is done in software in the Protonect example; you can see the difference in speed that cranking the CPU and GPU clocks on the JTX1 makes. While I haven’t run the JTK1 through the same test, I would guess the frame rate on a cranked JTK1 would be a little less than the normally clocked JTX1.

      One important point is that the JTX1 has more computational reserve left running the Kinect, that’s the advantage of having more CUDA cores and a different processor architecture. A lot of use cases are to gather data from the Kinect and process it on board. The Jetson then passes it to another control module. That would be things like identifying people, objects, hands, and so on. So while the display of something like a point cloud is great eye candy, in practice a lot of applications will build the point cloud, display the point cloud, but not do both at the same time. Or do something all together different. Like I wrote, it depends 😉

  207. Hello. When I run the command “cp -L lib/OpenNI2-FreenectDriver/libFreenectDriver* ${Repository}”, my full command line is “haha@ubuntu:~/libfreenect/build$ cp -L lib/OpenNI2-FreenectDriver/libFreenectDriver* ${Repository}”. Unfortunately, the error says “cp: target ‘”../../Bin/Arm-Release/OpenNI2/Drivers”’ is not a directory”. I don’t know if my path is wrong. BTW, I have read the similar question above, but I still can’t resolve this. Thanks.
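
    Judging from the literal quote characters inside the reported target path, the ${Repository} variable was probably defined with embedded quotes. A sketch of setting it without them (the path itself is taken from the error message):

    $ Repository=../../Bin/Arm-Release/OpenNI2/Drivers/
    $ cp -L lib/OpenNI2-FreenectDriver/libFreenectDriver* ${Repository}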

  208. The 64-bit version seems to have a serious memory leak somewhere in the GUI infrastructure. It slows down and eventually locks up. If you are running a GUI on the TX1, avoid the 64-bit version until this is fixed.

  209. Hi kangalow

    Once we got Protonect working with the Jetson TK1 and Kinect V2 the next logical step was to find a driver and the tools needed to receive data from the Kinect V2 sensor, in a way useful for robotics. Specifically, a data bridge between libfreenect2 and ROS. As a start, we just want the Kinect V2 to send data to a mobile base (i.e., Kobuki) to do SLAM. The great work done by Thiemo Wiedemeyer on the “IAI Kinect2” package (https://github.com/code-iai/iai_kinect2) seemed like a good next step (and perhaps the only next step). After several attempts to get the “kinect2_bridge” tool to work, I reached a stalemate, as follows:

    1. The current release of JetPack for the Jetson TK1 and TX1 does not support OpenCL, primarily because of issues related to installing it on the ARMv7 architecture, and OpenCL seems to be a prerequisite for the “kinect2_bridge” tool (on a Jetson TK1 anyway). NVIDIA is understandably pushing CUDA for both the TK1 (https://devtalk.nvidia.com/default/topic/761627/opencl-support-for-jestson-tk1) and the TX1 https://devtalk.nvidia.com/default/topic/899013/jetson-tx1/tegra-x1-opencl-support/ .

    2. The current release of the “IAI Kinect2” package does not support CUDA (as implemented on the Jetson TK1 anyway). Several forks of the “IAI Kinect2” package have attempted to get the Jetson working, but ultimately there were issues in the root code, and Thiemo said: “Since I don’t own a Jetson, I can’t provide a release, therefore someone else has to do and maintain it. For now the answer would be: no/maybe, if someone volunteers.” https://github.com/code-iai/iai_kinect2/issues/149

    So I am hoping you might have some insight into the following. Are you aware of any:

    1. new efforts to support OpenCL on the Jetson TK1?
    2. alternative drivers and the tools that would allow a bridge between libfreenect2 and ROS, using a Jetson TK1 or TX1?
    3. new efforts to get Thiemo’s “kinect2_bridge” tool working with the Jetson TK1 version of CUDA?

    The Jetson TK1 and TX1 show so much promise for robotics and the Kinect V2 is arguably the best depth sensor on the market, with respect to availability, cost and resolution. But if we don’t have a bridge between libfreenect2 and ROS, we can never realize that potential.