The Logitech C920 webcam is one of the candidates for providing video streams for the Vision and Video project I am working on. The C920 is particularly interesting because it can stream H.264-encoded video straight from the camera itself.
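As a quick sanity check, here is roughly how I would pull the camera's own H.264 stream on Linux with V4L2 and GStreamer. Treat this as a sketch rather than a verified recipe: the device path `/dev/video0`, the resolution, and the frame rate are assumptions, and it needs `v4l2-utils` and GStreamer 1.0 installed.

```shell
# Confirm the camera exposes an H.264 pixel format
# (assumes the C920 enumerates as /dev/video0)
v4l2-ctl --device=/dev/video0 --list-formats

# Grab the camera's already-encoded H.264 stream, with no re-encoding
# on the Jetson: 900 buffers at 30 fps is about 30 seconds of video,
# muxed into an MP4 file.
gst-launch-1.0 -e v4l2src device=/dev/video0 num-buffers=900 \
  ! video/x-h264,width=1920,height=1080,framerate=30/1 \
  ! h264parse ! mp4mux ! filesink location=c920-test.mp4
```

The key point is the `video/x-h264` caps filter after `v4l2src`: it asks the camera for its compressed output directly, so the Jetson's CPU and hardware encoder stay idle.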
The Jetson TK1 board has a hardware H.264 encoder, so why is encoding on the camera itself of interest? Keep in mind that at this point I'm just doing some basic experimenting. I thought it would be interesting to see how many video streams I can bring into the Jetson, with the idea that the streams would be stored locally on the Jetson and then uploaded to a server.
I don't know how many video streams the Jetson's on-board encoder can handle simultaneously (I'm assuming it's one, but I don't know), but it would be nice to have a single Jetson node handle several video streams. If several cameras are recording a scene simultaneously, it would be useful to know how many Jetsons are needed per set of cameras. Some sources, such as depth cameras (like the MS Kinect), produce raw video that the Jetson would have to encode itself. Having at least one of the streams arrive already encoded could raise throughput considerably.
The architecture I currently have in mind is a couple of ‘video capture’ Jetson nodes, each perhaps with a small display attached for monitoring, networked with a video server. The capture nodes would upload their video streams to the video server.
Several ‘Video Display’ Jetson nodes could then connect to the video server to view the results of the node processing. But I'm getting ahead of myself here; that's still some time away …