Line follower using OpenMV

My robosport team builds line-following robots using video recognition on a Raspberry Pi 3, and we are looking for something smaller and with lower latency. Currently we run the v1 Raspberry Pi camera at 90 fps at 64x64 or 128x128. As far as I understand, there is a small additional delay caused by the camera FIFO buffer (three frames or so). Is it a true assumption that OpenMV can run 64x64 @ 60 fps and has no camera FIFO buffer? As for algorithms, our current solution only uses simple array calculations via NumPy.

As for our robots, please see what they currently look like - Ясон-3 на тонкой линии ("Yason-3 on a thin line") - YouTube
The point is that we need to double the robot's speed, and therefore reduce video-processing latency, to match the speed of other competitors - Королёв, 2017, турбогонщик PSLR ("Korolyov, 2017, turbo racer PSLR") - YouTube
Can OpenMV help us with that?

We work on the image directly from the camera; there is no FIFO. However, our system is optimized for ease of use, so we don't buffer up frames coming from the camera. We capture one frame at a time out of the stream and work on it. Thus, if you spend longer than one frame period processing a frame (about 16.7 ms at 60 FPS), you can miss the next one.

If you set the camera resolution to 64x64 then you should be able to hit 60 FPS with an algorithm you hand-code in C. You'll then want to reduce the amount of Python code running to the absolute minimum, so that it just glues sensor.snapshot() to your custom function.
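I.e., strip the loop down to roughly this (a sketch; `my_line_follow()` stands in for your hypothetical hand-coded C function, and `sensor.B64X64` is assumed available on your firmware):

```python
import sensor, time

sensor.reset()                          # initialize the camera sensor
sensor.set_pixformat(sensor.GRAYSCALE)  # 8-bit grayscale keeps per-frame work small
sensor.set_framesize(sensor.B64X64)     # 64x64 frames (assumed framesize constant)
sensor.skip_frames(time=2000)           # let the sensor settle after configuration

clock = time.clock()                    # FPS tracking

while True:
    clock.tick()
    img = sensor.snapshot()             # grab exactly one frame, no FIFO queue
    # my_line_follow(img)               # hypothetical hand-coded C function
    print(clock.fps())                  # watch whether you hold ~60 FPS
```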

What you’re doing is kinda pushing the limits on things. You’d need to get comfortable with the C programming environment for our cams to reach the FPS you want…

A camera with no FIFO is what we dreamed of :slight_smile: , so that's good. Losing frames is also fine.
Currently the RPi's speed is not a blocker; we're just finding the center of black in the image to decide where to go, because all we need is the position of a black line on a white field.
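For context, our current RPi approach is essentially this kind of NumPy calculation (a sketch, not our exact code; `frame` is assumed to be a grayscale array from the Pi camera, and the threshold of 64 is illustrative):

```python
import numpy as np

def line_center(frame, threshold=64):
    """Return the (x, y) centroid of dark pixels, or None if no line is seen."""
    ys, xs = np.nonzero(frame < threshold)  # coordinates of "black" pixels
    if xs.size == 0:                        # line lost: nothing dark in view
        return None
    return xs.mean(), ys.mean()             # centroid = mean of coordinates
```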

In case of any trouble, C is also fine, even though we don't usually use it.
Thanks for your advice. I was also considering the JeVois, but we need wide optics to see where the line goes, and writing C for the OpenMV looks easier than making a custom lens mount for the JeVois.

Oh, if you just need to find the center of a black line in the image then you can just use find_blobs. I thought you were doing something more complex.

http://docs.openmv.io/library/omv.image.html#image.image.find_blobs

http://docs.openmv.io/library/omv.image.html#class-blob-blob-object

Just call it on a grayscale 64x64 image and it will run ultra fast. That said, the rest of the Python control code will slow you down. You'll be able to hit 30 FPS easily without any issues. Hitting 60 FPS, however, is harder, as the whole loop needs to complete in under 10 ms or so to be ready to capture the next image. In C, with total control of the MCU, this is possible. But while the system is hooked up to the computer it has to service interrupts and do other things, which can limit you to just 30 FPS in Python. That said, the camera's image data stream runs at 60 FPS, so you'll likely hit a frame rate between 30 and 60 FPS when you deploy the system.
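For example, something along these lines (a sketch: the `(0, 64)` grayscale threshold for "black", the size cutoffs, and the `sensor.B64X64` framesize are assumptions to tune for your track):

```python
import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.B64X64)     # 64x64 grayscale (assumed framesize constant)
sensor.skip_frames(time=2000)

clock = time.clock()

while True:
    clock.tick()
    img = sensor.snapshot()
    # (0, 64) selects dark pixels in a grayscale image; tune for your lighting.
    blobs = img.find_blobs([(0, 64)], pixels_threshold=10,
                           area_threshold=10, merge=True)
    if blobs:
        line = max(blobs, key=lambda b: b.pixels())  # largest dark blob = the line
        error = line.cx() - img.width() // 2         # signed offset from image center
        # feed `error` into your steering controller here
    print(clock.fps())
```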

I’m working on something new; you may be able to do close to 60 FPS @ QQVGA. Here’s a snapshot from a test:

Nice! Is that an OpenMV Cam M7 in the screenshot?
I ordered two OpenMV cameras yesterday. If we manage to get them working they will be competing in August in Beijing and then in autumn in Russia. So fingers crossed :slight_smile:

Yeah, that’s the M7 - 52 FPS at 160x120 RGB. Note the frame buffer is disabled to get the FPS up that high. Otherwise the system has to JPEG compress images and stream them to the IDE.

Here’s a GIF. Note the FB is enabled; when disabled it goes way higher than that :wink:
ezgif-3-197bb5fc61.gif

Is it true that with the FB enabled we get additional latency while new frames wait in the buffer?
Is there a known way to estimate the time between something happening and when the picture of it can be processed in code? My idea was to use an LED for this, but the LED itself might have some delay between when it’s powered up and when its light can be seen by the camera.

That’s true: when the FB is enabled, the camera locks the FB for the IDE and compresses it before capturing the next frame. Note that if the IDE is not polling fast enough, the camera overwrites the FB and the IDE misses frames; this is by design, to avoid slowing the FPS down too much by waiting on the IDE.
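If you want to take the IDE out of the loop from the script side, there is a switch for this (whether `omv.disable_fb()` is present on your firmware version is worth checking; otherwise use the Disable FB button in the IDE):

```python
import omv

# Stop streaming the frame buffer to the IDE so snapshot() never waits
# on JPEG compression (assumes omv.disable_fb() exists in your firmware).
omv.disable_fb(True)
```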

Re latency: an LED should work. If you toggle an I/O pin at the moment you power the LED, you can measure the latency from that toggle to when the LED shows up in a captured frame.
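For example (a sketch; the pin name "P0", the brightness threshold of 100, and reading the whole-frame mean are all placeholder choices to adjust):

```python
import sensor, time
from pyb import Pin

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.B64X64)     # assumed framesize constant, as above
sensor.skip_frames(time=2000)

led_pin = Pin("P0", Pin.OUT_PP)         # drives the test LED (pin name is a placeholder)
led_pin.low()

start = time.ticks_us()
led_pin.high()                          # power the LED and timestamp that moment

while True:
    img = sensor.snapshot()
    # Mean brightness over the whole frame; use an ROI around the LED if needed.
    if img.get_statistics().mean() > 100:   # threshold is a guess, tune it
        latency_us = time.ticks_diff(time.ticks_us(), start)
        print("LED-on to processed-frame latency: %d us" % latency_us)
        break
```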