OpenMV point tracking and image overlay on our own board

A project I’m starting requires a camera + IMU/gyroscope to track the camera’s orientation/position and overlay sprites onto live video in 3D space, all while fitting into a mobile, low-power package.
This points me to microcontroller computer vision, and it seems OpenMV is the only way to go short of manually porting OpenCV algorithms, which is out of my depth.

We can use the same STM32F427 microcontroller as your old M4 hardware, but the camera sensor might have to vary.

My questions about OpenMV:

  1. Can OpenMV running on the Cortex-M4 support tracking a sprite onto live video in 2D/3D?
  2. Is OpenMV suitable for our own custom board, as long as we use the same M4 chip but maybe a different camera sensor?
  3. Are there any examples of bypassing the MicroPython layer and interfacing with your library directly in C/C++?

Very sorry if our requirements are too vague; I have zero experience in computer vision but some experience in embedded development.

Seeing as you no longer sell the M4 version, we would have to build our own board even for prototyping, right? Has anyone undertaken a similar task on these forums?

Hi, thanks for the questions

  1. What are your requirements? I recommend you buy an OpenMV Cam M7, get the vision part working, and then project a 2X speed loss with the M4 before committing to that chip. The memory limits on the M4 are very real compared to the M7; e.g. all of our cooler algorithms require two frame buffers, which the M4 cannot hold (there’s a small sketch of what I mean at the end of this reply). If you’re just doing one frame buffer with drawing, I guess you’re okay. That said, how do you plan to get video off of the system? Our JPEG frame streamer on the M4/M7 is done in software and throws away a lot of information to get super small JPEGs… but they look terrible. Our next-gen H7 has a hardware JPEG compressor, so its image quality is a lot better.

  2. A different sensor will require C-library driver work. But, otherwise, you can use our M4 designs to start with.

  3. In main.c here https://github.com/openmv/openmv/blob/master/src/omv/main.c just add the stuff you want (https://github.com/openmv/openmv/blob/master/src/omv/main.c#L567). Then all of the library methods can be called directly in C. As for how to call things, look at the Python wrapper library: https://github.com/openmv/openmv/blob/master/src/omv/py/py_image.c.

This is all very much something where you’re going to be hacking the firmware. That said, as long as you don’t switch the hardware out or change anything like clock frequencies, it’s all just C code that runs like it would on a PC. It’s quite straightforward to change. If you get into driver stuff, then life gets harder.
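To make the two-frame-buffer point concrete, here’s a minimal MicroPython sketch. It assumes the sensor.alloc_extra_fb() API from current firmware, so treat it as an illustration rather than something that will run unmodified on an old M4 build:

```python
import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)   # 160x120 keeps a single frame small
sensor.skip_frames(time=2000)

# This second full frame buffer is the allocation that the two-frame-buffer
# algorithms (frame differencing, phase correlation, etc.) need. On the M4's
# limited RAM this call is exactly what you can't afford.
extra_fb = sensor.alloc_extra_fb(sensor.width(), sensor.height(), sensor.GRAYSCALE)
extra_fb.replace(sensor.snapshot())  # keep the previous frame for comparison
```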

Wow, thanks for your speedy reply; the support you guys provide is really impressive.

The M4 requirement is loose; we can definitely prototype on the M7 or even a Raspberry Pi to start.
The vision requirement is to stabilize a 2D sprite onto live video output to a 128 px SPI display; some sort of frame displacement could crudely give us 2D translation, I guess?
We will have a gyro to help with accuracy.

I don’t know about the OpenMV pipeline, but could we maybe output the pre-processed camera input to the display’s internal framebuffer and then pump the sprite pixels in after working out the translation? Maybe this is totally naive, though; thoughts?

The vision requirement is to stabilize a 2D sprite onto live video output to a 128 px SPI display; some sort of frame displacement could crudely give us 2D translation, I guess?

Still don’t know what you mean. So, I think you want to draw something on a display… and you want that thing to stay in the center of the display by using a gyro to control exactly where it is drawn? More context would help…

I don’t know about the OpenMV pipeline, but could we maybe output the pre-processed camera input to the display’s internal framebuffer and then pump the sprite pixels in after working out the translation? Maybe this is totally naive, though; thoughts?

You just draw the sprite on the screen and then send the screen to the buffer. No need to do a second SPI transfer.
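Roughly like this; a minimal sketch where I’m assuming our LCD shield’s lcd module and the QQVGA2 (128x160) framesize as a stand-in for your own SPI display driver, and the drawing call arguments may differ on older firmware:

```python
import sensor, lcd

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QQVGA2)   # 128x160, sized for a small SPI LCD
sensor.skip_frames(time=2000)
lcd.init()                            # SPI LCD shield driver

while True:
    img = sensor.snapshot()
    # Composite the sprite into the frame buffer first...
    img.draw_rectangle(54, 70, 20, 20, color=(255, 0, 0))
    # ...then push the whole frame out over SPI in one transfer.
    lcd.display(img)
```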

Sorry, what I mean is that the sprite would stay static relative to the background in 2D while the camera translates and rotates.
So if the sprite starts in the center of the screen and the camera is then moved to the right, the sprite would move to the left to stay with the background.
If the camera is rotated up or translated up, the sprite would move down on the screen; the gyro can help approximate the rotation, though, so OpenMV would only have to do the work for translation.

So the data we would get from OpenMV would hopefully approximate 2D camera movement, and we could then work out where to draw the sprite on the screen every frame.
The output would be something like ‘the camera has moved up 10 pixels and across 5 pixels since the last frame’; we would then draw the sprite 10 pixels lower and 5 pixels to the left.

I see, you’re doing something AR-like. Sure, this is not a huge issue. Using our phase correlation code you can detect sub-pixel shifts with the OpenMV Cam. The phase correlation code doesn’t handle much rotation per frame yet, but we plan to fix that soon.

As for drawing the sprite, you just have to have a general idea of where it should go and then draw it there.

Note that I’ve been making a lot of feature updates to our vision library for our next release which fix a lot of basics. You may wish to use our pre-release firmware, which has better drawing primitives.
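If it helps, here’s a minimal sketch of the whole loop. It assumes the find_displacement()/alloc_extra_fb() API in recent firmware, the power-of-two framesize the phase correlation examples use (sensor.B64X64), and the LCD shield’s lcd module standing in for your own SPI display; the sign of the translation may need flipping depending on your orientation.

```python
import sensor, lcd

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.B64X64)     # phase correlation wants a power-of-two frame
sensor.skip_frames(time=2000)
lcd.init()

prev = sensor.alloc_extra_fb(sensor.width(), sensor.height(), sensor.GRAYSCALE)
prev.replace(sensor.snapshot())

sprite_x = sensor.width() // 2          # start the sprite in the center
sprite_y = sensor.height() // 2

while True:
    img = sensor.snapshot()

    # Phase correlation between the previous and current frame gives the
    # per-frame scene shift in pixels (sub-pixel accurate).
    d = img.find_displacement(prev)
    prev.replace(img)

    if d.response() > 0.1:              # ignore low-confidence matches
        # The scene moves opposite to the camera, so adding the measured
        # shift keeps the sprite pinned to the background. Flip the signs
        # if it drifts the wrong way on your setup.
        sprite_x += int(d.x_translation())
        sprite_y += int(d.y_translation())

    img.draw_cross(sprite_x, sprite_y)  # stand-in for the real sprite
    lcd.display(img)
```

You’d fuse the gyro reading into the sprite position just before the draw call to cover the rotation part.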