Hello, I’m working on streaming camera data from a Pi to an Ubuntu machine once motion is detected. Speed is of the essence in getting image data to the Ubuntu machine for processing. I’ve got everything working for the most part, and it’s pretty fast since I bypass read/write I/O while streaming, but I’m running into a challenge with activating motion recognition, continuously detecting motion, and feeding an MJPEG stream via PiCamera (the built-in Raspberry Pi camera), all on the Raspberry Pi’s 4 cores with Python.
According to earlier forum messages, it looks like the Cam M7 can detect motion in approximately 16 ms. However, the stream isn’t quite there yet: 640x480 grayscale at 15 FPS at best, which is understandable for what it is.
What I’m wondering is whether I could use the Cam M7 for motion detection and simply have it send a signal when motion appears or disappears, and how fast that signal would be transmitted through the pins, or over USB?
The Cam M7 could then run on its own without really using the Pi’s GPU/CPU as well, correct?
The M7 with the current firmware should be able to run motion detection using frame differencing in grayscale at 30 FPS at 320x240. I don’t know what the speed is for 640x480. Unless you have a specific requirement to detect differences of 4 pixels or so, doing anything at 640x480 offers no benefit and takes more time, since real-world noise will require you to toss detection areas that are too small.
In general, I’d run the detection algorithm on 160x120 grayscale. With our current firmware you’ll be able to detect motion at 30 FPS and then determine all areas of difference using the find_blobs() function.
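To make the frame-differencing idea concrete, here’s a minimal desktop sketch in plain Python, with flat lists standing in for grayscale frames. The `threshold` and `min_pixels` values are illustrative guesses, not firmware defaults; the "toss areas that are too small" advice above is what the `min_pixels` check approximates:

```python
def frame_diff(prev, curr, threshold=20):
    """Per-pixel absolute difference, binarized into a motion mask."""
    return [1 if abs(a - b) > threshold else 0 for a, b in zip(prev, curr)]

def motion_detected(prev, curr, threshold=20, min_pixels=10):
    """Declare motion only when enough pixels changed, to reject noise."""
    mask = frame_diff(prev, curr, threshold)
    return sum(mask) >= min_pixels

# Tiny example: 4x4 "frames" flattened to lists.
prev = [10] * 16
curr = [10] * 16
curr[0:12] = [200] * 12          # a bright object enters the scene
print(motion_detected(prev, curr, min_pixels=8))  # True
print(motion_detected(prev, prev, min_pixels=8))  # False (identical frames)
```

On the camera itself the differencing and blob grouping are done for you by the image methods; this is just the underlying idea at desktop scale.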
So, we’re about to release a new firmware image with a higher clock frequency to the camera sensor, which boosts it from 60 FPS (where grabbing every other image gives you 30 FPS) to 120 FPS (where grabbing every other image gives you 60 FPS). This should put you in 60 FPS territory. A smaller resolution might put you even higher, up to 90 FPS or so.
As for transmitting the detection to the PC… the camera appears as a VCP port when plugged in and could just send some serial bytes. However, most PC OSes, like Ubuntu, are not real-time, so the OS might impose a ~20 ms delay before it gets around to reading the serial data. There’s not much you can do about this without tossing standard desktop software. Alternatively, you could hack the kernel to get it to service the serial port and push the data into user space in real time.
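A minimal sketch of what the PC side of those "serial bytes" could look like. The one-byte protocol here (`b'M'` = motion started, `b'm'` = motion stopped) is my own invention for illustration; on a real system the stream would be the VCP device opened with something like pyserial, simulated here with an in-memory buffer:

```python
import io

def parse_events(stream):
    """Read one byte at a time and translate known bytes into events."""
    events = []
    while True:
        b = stream.read(1)
        if not b:                      # end of stream / no more data
            break
        if b == b'M':
            events.append("motion_start")
        elif b == b'm':
            events.append("motion_stop")
        # unknown bytes are ignored
    return events

# On real hardware you'd pass in something like
#   serial.Serial("/dev/ttyACM0", 115200)   (pyserial, assumed)
print(parse_events(io.BytesIO(b'M\x00m')))  # ['motion_start', 'motion_stop']
```

Keeping the protocol down to single bytes keeps the camera-side write trivial and minimizes what the non-real-time OS has to buffer.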
Anyway, the OpenMV Cam can run by itself. I see getting the desktop OS to start capturing images on demand as the hard part.
Not sure what you mean by OOP. You program in Python. You can make classes if you like, but there’s not really a huge need unless you plan to do more with data structures.
Generally (or specifically), how would I go about setting up a frame-differencing system, and then, upon motion, writing the camera stream to the UART methods for passing through to the VCP port?
There’s an example script called advanced frame differencing; please see the IDE’s examples folder. There are 60+ examples in there, which currently serve as our de facto documentation on how to do things. See the pixy emulation scripts for examples of long programs that do computer vision and then write formatted binary strings to a UART, and see the advanced frame differencing script for how to diff frames. I believe the frame differencing script is under the filters example folder.
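As a sketch of the "formatted binary strings" part: the packet layout below is made up for illustration (it is not the actual pixy protocol), but the stdlib `struct` packing pattern is the same idea. On the camera you’d hand the packed bytes to `uart.write()`; here we just pack and unpack in one process:

```python
import struct

# Hypothetical packet layout: a 16-bit sync word, then x, y, w, h of the
# largest motion blob as little-endian unsigned 16-bit ints.
SYNC = 0xAA55

def pack_detection(x, y, w, h):
    """Build one fixed-size binary packet for a detection."""
    return struct.pack("<HHHHH", SYNC, x, y, w, h)

def unpack_detection(data):
    """Inverse of pack_detection; validates the sync word."""
    sync, x, y, w, h = struct.unpack("<HHHHH", data)
    assert sync == SYNC, "bad sync word"
    return (x, y, w, h)

pkt = pack_detection(10, 20, 32, 24)
print(len(pkt))               # 10 bytes on the wire
print(unpack_detection(pkt))  # (10, 20, 32, 24)
```

A fixed-size packet with a sync word makes the PC-side reader simple: scan for the sync word, then read a known number of bytes.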
And just ask questions; response time is pretty fast.
I’ve been playing around with the advanced script a bit for motion detection, or in my case, no motion, if a bot gets stuck. Anyway, I used the advanced script but changed the image to grayscale instead of color, and right after the img.difference() statement I added these lines:
if img.get_statistics().stdev() < 5:
    print("no motion detected")
I’m planning on adding appropriate logic for bot action when no motion is detected, and I’m in the process of putting a test bot together now.
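As a desktop sanity check of why that stdev threshold works (plain Python standing in for `img.get_statistics().stdev()` on the camera): after `img.difference()`, a static scene leaves a near-uniform dark image with low standard deviation, while motion leaves bright patches that raise it. The threshold of 5 and the sample pixel values here are illustrative:

```python
import math

def stdev(pixels):
    """Population standard deviation of a flat list of pixel values."""
    mean = sum(pixels) / len(pixels)
    return math.sqrt(sum((p - mean) ** 2 for p in pixels) / len(pixels))

# Simulated difference-image pixels:
static_diff = [0, 1, 0, 2, 1, 0, 1, 0]        # sensor noise only
moving_diff = [0, 0, 120, 200, 180, 0, 0, 0]  # bright patch from motion

print(stdev(static_diff) < 5)  # True  -> "no motion detected"
print(stdev(moving_diff) < 5)  # False -> motion present
```

If the bot’s scene has flickering lights or other periodic noise, the threshold may need tuning upward; logging the stdev values over a quiet period is an easy way to pick it.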