OpenCV, Python, Onboard Camera – NVIDIA Jetson TX2

In this article, we build a simple demonstration of a Canny Edge Detector using OpenCV, Python, and the onboard camera of the NVIDIA Jetson TX2 Development Kit. Looky here:

Background

Back in 1986, John F. Canny developed the Canny Edge Detector. It is one of the milestones of image processing and is still in use today.
You can read some more about the Canny Edge Detector and the technical details here: OpenCV.org Canny Edge Detector and here: Wikipedia – Canny edge detector

In this article we will use a simple Python script which uses the OpenCV library's implementation of the Canny Edge Detector to read frames from the onboard camera and run them through the filter. Earlier we went over how to build the OpenCV library for the Jetson. There is a repository on the JetsonHacks Github account which contains a build script to help with the process. You will need to enable GStreamer support. As of this writing, the current script (OpenCV 3.3) enables GStreamer support, while earlier versions did not. GStreamer must be enabled to support the onboard camera. Note: The standard OpenCV4Tegra installed by JetPack does not have GStreamer support enabled, so the onboard camera will not work with it.

The Codez

The file cannyDetection.py is available in the Examples folder of the buildOpenCVTX2 repository on the JetsonHacks Github account. The script is also available as a Github Gist. The Gist is seen in its entirety further down below.

GStreamer Camera Pipeline

The first task is to open the onboard camera. Camera access is through a GStreamer pipeline:

cap = cv2.VideoCapture("nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)I420, framerate=(fraction)30/1 ! nvvidconv flip-method=0 ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink")
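The same pipeline can be built from a small helper so the resolution, frame rate, and flip method are easy to change. This is a sketch under my own naming — `onboard_camera_pipeline` is a hypothetical helper, not part of the original script:

```python
def onboard_camera_pipeline(width=1280, height=720, fps=30, flip_method=0):
    # Hypothetical helper (not in the original script): builds the
    # GStreamer pipeline string for the TX2 onboard camera.
    # nvcamerasrc captures NVMM frames, nvvidconv converts and flips them,
    # and videoconvert hands BGR frames to OpenCV's appsink.
    return (
        "nvcamerasrc ! video/x-raw(memory:NVMM), "
        "width=(int){w}, height=(int){h}, format=(string)I420, "
        "framerate=(fraction){f}/1 ! "
        "nvvidconv flip-method={flip} ! "
        "video/x-raw, format=(string)BGRx ! videoconvert ! "
        "video/x-raw, format=(string)BGR ! appsink"
    ).format(w=width, h=height, f=fps, flip=flip_method)


if __name__ == "__main__":
    import cv2  # requires OpenCV built with GStreamer support
    cap = cv2.VideoCapture(onboard_camera_pipeline())
    if not cap.isOpened():
        raise RuntimeError("Unable to open onboard camera; is GStreamer enabled?")
```

Checking `cap.isOpened()` gives a clearer failure than a silent black window when GStreamer support is missing.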

Canny Edge Detector

The main part of the filter processing reads a frame from the camera, converts it to gray scale, runs a gaussian blur on the gray scale image, and then runs the Canny Edge Detector on that result:

ret_val, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (7, 7), 1.5)
edges = cv2.Canny(blur, 0, edgeThreshold)

Not surprisingly, the code is more concise than its description. Both the GaussianBlur and Canny functions take parameters that can be adjusted to fine tune the results.

At this point, we could simply display the result in a window on the screen:

cv2.imshow('Canny Edge Detector', edges)

In the script, we add a little user interface sugar which allows us to optionally display each step. The only interesting part is that the image for each step is composited into one larger frame. This requires that the images be converted to the same color space before compositing. In the video, the Jetson TX2 is set to run at maximum performance (See note below). Here’s the full script:

Conclusion

Many people use OpenCV for everyday vision processing tasks, and the Canny Edge Detector is a valuable tool. The purpose of this article is to show how to access the onboard camera using GStreamer in a Python script with OpenCV. More importantly, I played guitar in the video.

Notes

  • In the video, the Jetson TX2 is running L4T 28.1, OpenCV 3.3 with GStreamer support enabled
  • In the video, the Jetson TX2 is running ‘$ sudo nvpmodel -m 0’

3 Comments

  1. Hi Jim, thank you for these great tutorials. I would like to save a video with fourcc = cv2.VideoWriter_fourcc(*'XVID') and out = cv2.VideoWriter('output.avi', fourcc, 20.0, (640,480)). However, I am getting a "Could not demultiplex stream" error when trying to play the recorded video.

    Thank you again

      • Thank you Jim. I'm getting the same error with all codecs installed. I'm going crazy :(.

        Could you update this great post with your Python code to record/save the camera video stream to a file? It would be a great help.

        Thanks a lot again!
