Anthony Virtuoso from New York, New York just finished building a hardware stack for a rover robot based on the Jetson TX1. Now he’s getting ready to write the software that allows the robot to run autonomously. Best of all, Anthony has graciously agreed to share the robot build with us here on JetsonHacks!
By day, Anthony Virtuoso is a Senior Software Development Engineer at Amazon in NYC, where he is working on building a massive data analytics platform for several thousand users. The platform is built on Amazon Web Services and the associated ecosystem. Big metal software. Anthony feels that the rapidly maturing Machine Learning and Deep Learning fields can deliver innovative features for his group's customers.
Anthony is well qualified to make such assessments, as he is just finishing up a Master's Degree in Computer Science (Machine Learning) from Columbia University. Currently, Machine Learning is all about utilizing the GPU. Money quote from Anthony:
I personally learn by doing, so I needed a project where I could use this technology to solve a real-world problem. I needed a way to see, first hand, what Nvidia's CUDA or OpenCV could really do when pitted against a top-of-the-line CPU in an intensive task. So, I did what any bored engineer would do: I fabricated a complex problem to answer a simple question: "How difficult is it to use a GPU to speed up a largely compute-bound operation?"
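To make that question concrete, here's a minimal, hypothetical sketch (not code from Anthony's project) of the kind of compute-bound, per-pixel workload that question is about. Each output pixel depends only on its own inputs, so thousands of pixels can in principle be processed in parallel, which is exactly the shape of work a GPU accelerates:

```python
import time

def to_grayscale(pixels):
    # Standard luminance conversion: each output value depends only on
    # its own RGB triple, so the loop is embarrassingly parallel -- the
    # kind of operation CUDA or OpenCV's GPU module can offload.
    return [0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in pixels]

# A synthetic 640x480 "image" as a flat list of RGB triples.
image = [(i % 256, (i * 7) % 256, (i * 13) % 256) for i in range(640 * 480)]

start = time.perf_counter()
gray = to_grayscale(image)
elapsed = time.perf_counter() - start
print(f"Converted {len(gray)} pixels in {elapsed:.3f}s on the CPU")
```

On a CPU this runs serially, pixel by pixel; measuring how much a GPU port actually helps, and how much effort that port takes, is precisely the experiment Anthony set up for himself.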
But why build a robot?
I'm a software engineer by trade, but I've never really had the opportunity to work on/with hardware that enables my software to interact with the physical world in a meaningful way. For that reason, and because the Jetson seemed like such an amazing platform, I set out to build an autonomous rover… but to do so a bit differently. I had read up on ROS and their navigation stack, but before handing control over to these seasoned frameworks I wanted to understand how far a naive implementation could go… basically, "Why is SLAM and navigation such a hard problem to solve?"
Here's what the completed hardware looks like. The project is in the ros_hercules repository on GitHub.
The robot uses a couple of the usual suspects for sensors: a Stereolabs ZED stereo camera and an RP-LIDAR unit, a cost-effective 2D LIDAR for robotic applications.
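For a rough idea of what 2D LIDAR data looks like in software (an illustrative sketch, not code from ros_hercules), each scan is essentially a list of angle/range pairs, which is typically converted to Cartesian points in the sensor frame before it can be used for mapping or obstacle avoidance:

```python
import math

def scan_to_points(angles_deg, ranges_m):
    """Convert a 2D LIDAR scan (angles in degrees, ranges in meters)
    to (x, y) points in the sensor frame. Non-positive ranges are
    skipped, one common way dropped returns are handled."""
    points = []
    for angle, rng in zip(angles_deg, ranges_m):
        if rng <= 0:          # no return at this bearing
            continue
        theta = math.radians(angle)
        points.append((rng * math.cos(theta), rng * math.sin(theta)))
    return points

# A toy four-beam scan: obstacles ahead, to the left, and behind,
# with a dropped return at 270 degrees.
points = scan_to_points([0.0, 90.0, 180.0, 270.0], [1.0, 2.0, 0.5, 0.0])
```

In a ROS setup like this one, the driver node publishes scans in essentially this polar form, and downstream nodes perform this kind of conversion before feeding the points into costmaps or SLAM.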
With the hardware base well underway, Anthony is starting to turn his attention towards the more interesting part of the project, the robot software. Included in the ros_hercules README.md are several great tips and tricks for interfacing with the rover's sensor hardware and microcontrollers.
It promises to be very interesting (and fun!) to watch an experienced machine learning expert apply and explore his craft here.