I have been playing with this. The pygame tool is better in that you can save frames, but that's not straightforward with the mjpeg_streamer.py tool; you can only view the stream.
If you run the OpenMV like a standard UVC webcam, then you can use mjpg-streamer (MJPG-streamer download | SourceForge.net), which allows both streaming and snapping frames for download. I came across a video (UVC Webcam Support for your OpenMV Cam - YouTube) showing how to set up UVC on the OpenMV, but I couldn’t get it to work, and I couldn’t find any more documentation.
UVC comes standard with JeVois and with a Raspberry Pi camera, so it would be great to figure this out with OpenMV. But when running it this way you are not harnessing its image processing tools; it just becomes an expensive webcam. More ideally, you just want a convenient way to download pictures and image processing data, but that is what I find perplexing about this tool, since it is pretty cryptic to use. You can also do image processing offline with a standard webcam using tools like Python/OpenCV.
When using UVC, we assume you have a powerful host to do the image processing (e.g. an RPi), so it doesn’t make sense to do any processing on the OpenMV.
This feature is implemented specifically to allow folks to use their expensive FLIRs with other boards; otherwise it’s kinda pointless, since you could just use any webcam.
EDIT: To revert to the default firmware, just upload firmware.bin; however, you need to connect the cam after clicking Run Bootloader->Run.
The mjpeg_streamer.py script works with the OpenMV IDE on Windows, but when I tried to run it on Linux I got an error:
“ImportError: no module named 'usocket'”
I do not know how to install modules, so please advise.
Hi, our scripts don’t run on CPython on Linux. MicroPython, which runs on our board, parses and compiles Python code; however, this doesn’t mean the modules/libraries etc. are the same.
Anyway, is your goal to use the OpenMV Cam as a webcam? This is not its design purpose. We built it to process images onboard, not really to send them anywhere. While we have some examples showing this off, and while it is possible, the performance for this type of application isn’t really great. Anyway, if you really need to send image data, you won’t want to use WiFi. Pretty much all the microcontroller WiFi solutions don’t have the onboard buffers to handle image data, and they offer poor results. If you want to send image data, using the OpenMV Cam’s VCP USB interface will offer the best results.
Hi, you can open the file like you would any file in Python and then use the WiFi shield to open a socket to transfer the image over TCP. This is more or less straightforward Python code.
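To illustrate the idea, here is a minimal sketch of a length-prefixed TCP transfer, written in plain CPython so both ends run on a PC. On the actual camera you would read the JPEG with `open("image.jpg", "rb")` (or compress a frame) and use MicroPython's `usocket` module instead; the length-prefix protocol and all names here are just assumptions for the example, not the OpenMV API.

```python
import socket
import threading

def receive_image(server_sock):
    # Receiver side (e.g. your PC): accept one connection, read a
    # 4-byte big-endian length header, then read that many bytes.
    conn, _ = server_sock.accept()
    with conn:
        header = b""
        while len(header) < 4:
            header += conn.recv(4 - len(header))
        size = int.from_bytes(header, "big")
        data = b""
        while len(data) < size:
            chunk = conn.recv(4096)
            if not chunk:
                break
            data += chunk
    return data

def send_image(host, port, jpeg_bytes):
    # Sender side (would be the camera script): length prefix, then data.
    with socket.create_connection((host, port)) as s:
        s.sendall(len(jpeg_bytes).to_bytes(4, "big"))
        s.sendall(jpeg_bytes)

# Demo over localhost with stand-in bytes instead of a real JPEG.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

fake_jpeg = b"\xff\xd8" + b"x" * 1000 + b"\xff\xd9"
result = {}
t = threading.Thread(target=lambda: result.update(img=receive_image(server)))
t.start()
send_image("127.0.0.1", port, fake_jpeg)
t.join()
server.close()
```

The length prefix matters because TCP is a byte stream: without it the receiver has no way to know where one image ends and the next begins.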
Um, I can’t actually write the code to do all of this. I don’t really have the time anymore… but, also, there are several steps. One big issue you’ll find is that the latency on doing this type of thing will be very high… the OpenMV Cam is not a WiFi camera. Is your goal to live stream images? If so, we have an MJPEG example, but that’s about the best you’re going to get.
Hmm, so, do you want video quality as high as that camera in the video? Our product really isn’t designed around streaming frames. We can JPEG compress stills and send them to the WiFi shield over SPI… but the WiFi shield’s internal MCU can’t really buffer large data packets, which makes it slow at streaming live video. It’s really just meant for MQTT-like data transfer.
Okay, um, so, WiFi is definitely the fastest way to move data. However, you’re going to want to send UDP packets, because TCP causes a lot of issues.
I don’t really have a template for how to do this data transfer… however, luckily, we have some infrastructure set up for you. So, the first thing to do would be to get UDP packets sent from your camera to an application. You can use our WiFi shield with Python UDP sockets… or you can use an ESP32. Whatever the case, we’ve got all the code in place for JPEG compressing images fast and giving you that byte stream over serial or SPI.
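Since UDP datagrams are size-limited and can be dropped, a frame usually has to be split into numbered chunks that the receiver reassembles. Here's a rough sketch of that idea in plain CPython (the 1024-byte chunk size and the seq/total header are arbitrary choices for illustration, not an OpenMV protocol):

```python
import socket
import struct

CHUNK = 1024  # payload bytes per datagram (arbitrary choice)

def send_jpeg_udp(sock, addr, jpeg_bytes):
    # Split the frame into datagrams: !HH header = (seq, total), then payload.
    chunks = [jpeg_bytes[i:i + CHUNK] for i in range(0, len(jpeg_bytes), CHUNK)]
    for seq, payload in enumerate(chunks):
        sock.sendto(struct.pack("!HH", seq, len(chunks)) + payload, addr)

def recv_jpeg_udp(sock):
    # Collect datagrams until every chunk of one frame has arrived.
    # A real receiver would also time out and drop incomplete frames,
    # since UDP gives no delivery guarantee.
    parts = {}
    total = None
    while total is None or len(parts) < total:
        data, _ = sock.recvfrom(CHUNK + 4)
        seq, total = struct.unpack("!HH", data[:4])
        parts[seq] = data[4:]
    return b"".join(parts[i] for i in range(total))

# Demo over loopback (no loss there, so reassembly always completes).
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
frame = bytes(range(256)) * 20  # stand-in for JPEG data
send_jpeg_udp(tx, rx.getsockname(), frame)
received = recv_jpeg_udp(rx)
rx.close()
tx.close()
```

On the camera side the sender loop would use `usocket` the same way; the point of the per-chunk header is that a lost or reordered datagram corrupts only one frame rather than desynchronizing the whole stream.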
As for a protocol: we have this method called compressed_for_ide(), which JPEG-compresses an image, reformats the binary data so you can deal with byte loss, and adds a leading and trailing byte flag to the image so you know when the data is fully received. This method allows you to just transfer the image with no sync information on the data channel, and if all the bytes get through you can display the image. Our IDE technically has support for viewing this through our Open Terminal feature too. However, I haven’t tested whether any of this stuff works.
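The general idea behind that kind of framing can be sketched like this. To be clear, the flag values, the escape byte, and the XOR transform below are all invented for the example; the actual byte format compressed_for_ide() emits is different, and this is only meant to show why delimiter flags let a receiver resynchronize on a raw byte stream.

```python
# Hypothetical flag bytes -- NOT the real compressed_for_ide() format.
SOF = b"\xfe"  # start-of-frame flag
EOF = b"\xff"  # end-of-frame flag
ESC = b"\xfd"  # escape byte, so flag values never appear in the payload

def frame(payload):
    # Wrap a payload in flags, escaping any byte that collides with one.
    out = bytearray(SOF)
    for b in payload:
        if bytes([b]) in (SOF, EOF, ESC):
            out += ESC + bytes([b ^ 0x20])  # escape, then transform
        else:
            out.append(b)
    out += EOF
    return bytes(out)

def deframe(stream):
    # Scan a byte stream and yield each complete payload found between
    # flags. Bytes lost mid-frame just ruin that one frame; the next
    # SOF resynchronizes the parser.
    payload, in_frame, esc = bytearray(), False, False
    for b in stream:
        byte = bytes([b])
        if esc:
            payload.append(b ^ 0x20)  # undo the escape transform
            esc = False
        elif byte == ESC:
            esc = True
        elif byte == SOF:
            payload, in_frame = bytearray(), True
        elif byte == EOF:
            if in_frame:
                yield bytes(payload)
            in_frame = False
        elif in_frame:
            payload.append(b)

# Demo: two framed payloads survive a round trip through the parser.
data = b"\xff\xd8 jpeg bytes \xfe\xfd \xff\xd9"
frames = list(deframe(frame(data) + frame(data)))
```

With framing like this, the receiver needs no out-of-band sync at all, which matches the described behavior: send the bytes, and if they all arrive, the image displays.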