Questions on modularity

Hey guys, great project, a lot of potential for both nerdy and non-nerdy applications :slight_smile: I’d like to understand a bit more how hackable the whole thing is:

  • I see that there are options for 1.2 and 2 MP sensors, which is already pretty good. I see that OmniVision has sensors with likely the same interface that go up to commercial grade resolutions. Do you foresee any potential pitfalls for people trying to go 5, 10 MP?

  • I found in a blog post discussion that the CV engine under the hood is SimpleCV. Are we able to “simply” import it and use its standard methods?
    Welcome to SimpleCV’s documentation! — SimpleCV 1.3 documentation

  • I understand that you guys wrote some nice wrappers for that so we don’t need to think too much, but it’d also be nice to go more low-level, if need be.

  • Looks like the IDE is not really a needed tool, just there for the sake of simplicity? Apparently the MicroPython guys get away with copying the main.py file to the right place and we’re done?
    2. Running your first script — MicroPython 1.19.1 documentation

  • I’m really interested in audio+video for surveillance applications. I read this, but it doesn’t say much about the path to properly recording AVIs:
    https://openmv.io/docs/openmv/tutorial/video.html

I’m willing to spend the time on that, BUT is there any guidance on how to get it done? I see that in the video.py script there’s an AVI class already defined, with some dark magic in there. No audio is mentioned in the code, AFAICT. For now I’m just thinking about feasibility. Thoughts?

Thanks!

Hi stonehengesc,

  • I see that there are options for 1.2 and 2 MP sensors, which is already pretty good. I see that OmniVision has sensors with likely the same interface that go up to commercial grade resolutions. Do you foresee any potential pitfalls for people trying to go 5, 10 MP?

The OV7725 is the only sensor we plan to support moving forward. We can’t support high-resolution cameras right now and don’t expect to be able to in the near future. All high-res cameras need a lot of RAM to store the images coming off the sensor. Right now, we store the image entirely in the processor’s SRAM. For higher resolutions we’d need to add SDRAM to the board, which would increase complexity and cost. We’d… like to do this in the future with an STM32F7-based camera, but part of our goal with this product is to keep the board cost low, so we’re not sure if we can do that economically. (To get SDRAM on board we’d have to use a 6-layer PCB, a BGA processor, BGA SDRAM, etc., all of which cost more and have higher manufacturing failure rates.)

The intended purpose of the OpenMV Cam is to do computer vision on a small embedded microcontroller: something that turns on quickly, is low power, and is completely controllable by the user. SBCs like the Raspberry Pi completely outclass our product in terms of speed and supported image sizes, but they lack those three qualities, which you need if you want something that is highly mobile.

The above all said, for the next OpenMV Cam (assuming we can get there) we’ll try to increase the power as much as possible while keeping the price low and keeping the current flexibility.

We don’t use SimpleCV at all. The computer vision libraries available for the desktop don’t make sense on a microcontroller: they all assume you have the ability to malloc effectively unlimited space, etc. On a microcontroller it’s a very different world. So, we’ve written all the computer vision algorithms ourselves.

The way we’ve implemented the system makes it very easy to add more features. If you want another algorithm added to the firmware, you can set up the build system here: Home · openmv/openmv Wiki · GitHub. It’s… SUPER EASY to change the firmware, build it, and then upload a new version. And… if you fork the repo and want your changes merged upstream, you can send us a PR too.

  • Looks like the IDE is not really a needed tool, just there for the sake of simplicity? Apparently the MicroPython guys get away with copying the main.py file to the right place and we’re done?
    http://docs.micropython.org/en/latest/p … cript.html

Haha, yes, you don’t need the IDE. It’s kind of hard to do the vision stuff without it, though, since you won’t be able to see what the camera sees… Anyway, yes, the IDE is extremely useful. It makes deploying scripts really easy: you just write some code and hit run, fix syntax errors, hit run, repeat 50 times, and then your code finally runs. No copying scripts back and forth. The frame buffer view updates in real time to show what the camera sees, and as you apply filters, etc. you see the results in real time.
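If you do go the copy-a-main.py route, the script you drop on the board looks like any other OpenMV example. Here’s a minimal sketch (modeled on the standard hello-world example; the exact sensor calls may differ slightly between firmware versions):

```python
# main.py -- minimal sketch; copy to the board's USB mass-storage drive and reset.
import sensor, time

sensor.reset()                       # initialize the camera sensor
sensor.set_pixformat(sensor.RGB565)  # color
sensor.set_framesize(sensor.QVGA)    # 320x240
sensor.skip_frames(10)               # let the sensor settle

clock = time.clock()
while True:
    clock.tick()
    img = sensor.snapshot()          # grab a frame
    print(clock.fps())               # visible over the USB serial REPL
```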

Yes, so, we just got video recording integrated into the core API. It can record MJPEG video at about 15 KB per frame at 320x240 in color (for the OV7725 model, which is what we will be selling online). The older OV2640 model can do a much higher resolution, but we’re not going to be making any more of those cameras after the whole manufacturing disaster. So, if you want to use the camera to record video, it can get about 15 FPS at that resolution, and with a standard 4 GB SD card you can record 5+ hours of video. If you go with grayscale images, you can double that.

Here’s the MJPEG script: https://github.com/openmv/openmv/blob/master/usr/examples/mjpeg.py
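The recording loop in that example boils down to something like this (a sketch; the mjpeg.Mjpeg class and its add_frame()/close() methods are taken from that example, so check the linked script for the authoritative version):

```python
# Sketch of an MJPEG recording loop writing to the SD card.
import sensor, time, mjpeg

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)     # 320x240, as quoted above
sensor.skip_frames(10)

clock = time.clock()
m = mjpeg.Mjpeg("example.mjpeg")      # output file on the SD card

for i in range(150):                  # ~10 seconds at ~15 FPS
    clock.tick()
    m.add_frame(sensor.snapshot())    # JPEG-compress and append the frame

m.close(clock.fps())                  # finalize the AVI with the measured FPS
print("done")
```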

And… we just added GIF support too. With GIF support you can record GIFs which loop. There’s no limit on GIF size, but GIFs are stored uncompressed on the SD card, so you’ll want to keep them short:

https://github.com/openmv/openmv/blob/master/usr/examples/gif.py
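Recording a GIF follows the same pattern (again a sketch; the gif.Gif class, the loop flag, and the delay argument in 10 ms units are assumed from that example):

```python
# Sketch of recording a short, looping GIF to the SD card.
import sensor, time, gif

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QQVGA)    # keep the resolution small: frames are uncompressed
sensor.skip_frames(10)

clock = time.clock()
g = gif.Gif("example.gif", loop=True)

for i in range(100):
    clock.tick()
    g.add_frame(sensor.snapshot(), delay=10)  # frame delay in units of 10 ms

g.close()
print("done")
```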

Here’s a post about GIF support:

https://www.kickstarter.com/projects/botthoughts/openmv-cam-embedded-machine-vision/posts/1507881

About audio: we don’t have a mic on the system. However, we do have an ADC. But I don’t know if you’d be able to sample the ADC fast enough for audio without really diving into the firmware. We’re using standard HAL library functions to read from the ADC, and these are slow. If you want high-speed results from it, you need to use DMA to transfer samples. I don’t know the details behind getting that to work; Ibrahim would have to look into it.
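For reference, from MicroPython the ADC is reachable through the standard pyb module. Here’s a rough sketch (assumptions: pyb.ADC is exposed by our firmware, “P6” is the ADC-capable pin, and read_timed() is available; whether it can sustain audio rates without the DMA work described above is exactly the open question):

```python
# Sketch of polling the ADC through MicroPython's pyb module.
import array, pyb

adc = pyb.ADC(pyb.Pin("P6"))      # "P6" assumed to be the ADC pin

val = adc.read()                  # single blocking 12-bit read (0..4095)

buf = array.array("H", [0] * 1000)
adc.read_timed(buf, 8000)         # try to fill the buffer at ~8 kHz (blocking)
```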

As for the file format supporting audio - yes, it can. MJPEG videos are just standard Microsoft AVI containers: https://msdn.microsoft.com/en-us/library/windows/desktop/dd318189(v=vs.85).aspx

They support audio interleaved with the video. The audio doesn’t have to be compressed either, so you could save uncompressed audio with the video. However, you’d likely not want to do that; you’d probably want to MP3-compress the audio. As for doing MP3 compression: the STM32 has enough processing power to do it, but there are no “free, open-source, and microcontroller-friendly” MP3 libraries, so you’d have to write one from scratch.

Here’s the MJPEG code:

https://github.com/openmv/openmv/blob/master/src/omv/img/mjpeg.c

As you can see, it’s only about 150 lines. Not very complex. I foresee us adding audio support for the next OpenMV Cam. I want to add a mic on board for this very purpose.

The above all said, the lack of audio support isn’t a deal breaker for most folks. Most surveillance systems don’t have audio support, so we’re not missing a standard feature. Additionally, our frame rate and resolution more or less match low-end NTSC video recording systems.

Note that in that Kickstarter post I mention that we have frame differencing working, so you don’t have to record everything, only movement. We also have code that allows you to blend the background image into the current frame to deal with stuff moving now and then, to “age out” artifacts.
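Motion-triggered capture then looks roughly like this (a sketch; the difference() method and saving the background to the SD card are assumed from the frame-differencing examples, and the background-blending step mentioned above is left out since I’d have to check its exact signature):

```python
# Sketch of frame differencing against a stored background image.
import sensor, os

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(10)

if "temp" not in os.listdir():
    os.mkdir("temp")
sensor.snapshot().save("temp/bg.bmp")   # capture the background frame once

while True:
    img = sensor.snapshot()
    img.difference("temp/bg.bmp")       # per-pixel absolute difference vs. background
    # threshold / blob-detect on the difference image here and only start an
    # MJPEG recording when something actually moved (not shown)
```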

Anyway,

Please forgive the poor state of the API and docs. The product was in a very bad state at the beginning of the year. I’ve… been working non-stop to make it not suck, so there’s been a lot of focus on the firmware to get it ready for shipping Kickstarter rewards. If you have any questions before the documentation is updated, please just ask me.

Have you already thought about how to add SDRAM to increase the heap and the size of main.py, or for CNNs?

Yes, we’re going to make a camera with SDRAM; the prototype already works. I can’t give you any estimate of when it’s going to be released, but soon I hope.

Do you think it’s possible to make an external RAM shield?

An SPI SRAM maybe, but it’s too slow.

Maybe it could be done as a camera sensor module with an appropriate number of pins on the connector, but it feels a bit pricey.
I’m not sure if those pins could be used as additional GPIOs when the user doesn’t need the SRAM.

There’s no need for all that; an SDRAM version is coming really soon. We’re just testing it and working on software support.