Why is real-time video a pitfall? Read this article and you will see. After a few days of tinkering the desktop finally came up, but there is a trap worth noting: this board only exposes a DP display interface and has no HDMI display core, so if you want to go from DP to HDMI you must use an "active" converter cable. A passive cable only adapts the electrical levels; it cannot convert the video timing. That mistake cost me the price of a cable. This time I connected over an Ethernet cable and opened a serial terminal at the same time. Here is another pitfall to watch for: you must manually configure the DNS server address every time you go online, otherwise name resolution fails and you cannot reach the Internet.
echo "nameserver 8.8.8.8" > /etc/resolv.conf
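Since the setting is lost and has to be redone every time, a tiny script can reapply it. A minimal sketch of the same fix done programmatically; the helper name is mine, and the demo writes to a temporary file instead of the real `/etc/resolv.conf` so it runs without root:

```python
# Sketch: regenerate a resolv.conf-style file from a list of nameservers.
# The real target on the board is /etc/resolv.conf (note: no trailing "e");
# here we write to a temp directory so the demo needs no root privileges.
import os
import tempfile

def write_resolv_conf(path, nameservers):
    """Overwrite a resolv.conf-style file with the given nameserver list."""
    with open(path, "w") as f:
        for ns in nameservers:
            f.write(f"nameserver {ns}\n")

demo = os.path.join(tempfile.mkdtemp(), "resolv.conf")
write_resolv_conf(demo, ["8.8.8.8", "8.8.4.4"])
print(open(demo).read(), end="")
# prints:
# nameserver 8.8.8.8
# nameserver 8.8.4.4
```

Pointing the script at the real path (as root) would make re-applying the DNS fix a one-liner after each boot.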
With the desktop working, I could use a USB camera to test video_classify, but unfortunately, it did not work! The specific reason is still being investigated.
Here is the test equipment used this time: the "experimental materials" I purchased based on the pictures. Below is a brief record of the testing process.
First, following the guidance of the EdgeBoard Supported Camera Selection Table: my camera is a Logitech C270 with built-in H.264 hardware encoding. It has performed well in various previous tests and has good compatibility. On Linux, USB cameras are generally driven through V4L2, and the driver is already present in the system image. Running v4l2-ctl to view the camera's parameters confirmed that the USB camera is usable.
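The pixel formats that v4l2-ctl reports (e.g. `MJPG`, `YUYV`, `H264`) are V4L2 FourCC codes: four ASCII characters packed into a 32-bit little-endian integer. A quick sketch of that packing, handy when a tool prints the raw `pixelformat` number instead of the name:

```python
# V4L2 packs a pixel format as four ASCII chars in a little-endian u32;
# the v4l2_fourcc() macro in <linux/videodev2.h> does the same thing.
def fourcc(code: str) -> int:
    """Pack a 4-character format code into its V4L2 integer value."""
    a, b, c, d = (ord(ch) for ch in code)
    return a | (b << 8) | (c << 16) | (d << 24)

def fourcc_name(value: int) -> str:
    """Unpack a V4L2 pixelformat integer back into its 4-char code."""
    return "".join(chr((value >> shift) & 0xFF) for shift in (0, 8, 16, 24))

print(hex(fourcc("MJPG")))      # 0x47504a4d
print(fourcc_name(0x34363248))  # H264
```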
Linux has two commonly used video frameworks, GStreamer and FFmpeg, and both come built in. I first tried GStreamer, but the gst-launch-1.0 tool could not display anything, so I had to give up on it and fall back to FFmpeg. The system's built-in FFmpeg is version 4.0.2. First, bring up the desktop:
startx
Then preview the camera:
ffplay -i /dev/video0
The camera feed appears on the desktop. Adjust the angle and start the test. One complaint here: the display latency is terrible, about one second. I had already sat back down in my seat before the video of me walking past the camera finished playing. I really can't stand this delay. According to the documentation, the chip has an H.264/H.265 codec, so there should be no obvious delay; it feels even longer than the latency of network streaming.
And there is another pitfall here, too. Start the test:
cd /home/root/workspace/PaddleLiteSample/classification/build
./video_classify ../configs/resnet50/drink.json
There is nothing on the screen! Nothing on the screen! Nothing on the screen!
The program seemed to run, and the activity light on the C270 turned on. I first tried a bottle of Coke: no reaction. Then I switched to a bottle of Yibao water: still nothing. I adjusted the distance: nothing. I fiddled like this for a long time without a successful result. Just as I was about to give up, something magical happened. This is the "pitfall" I mentioned.
When I wanted to switch the monitor back to the desktop PC, I turned the monitor off. Then it suddenly occurred to me to check whether there were any screenshots I had missed, so I ran startx to bring up the desktop again; at that point the monitor was still off. I went through the procedure and ran the test as usual, and this time test data appeared on the screen. The strange thing was that I had already taken the test beverage away. I then noticed the camera light was on as well, so where was this test data coming from? And the data was updating very quickly.
It’s really strange. Where did this data come from?