openmv webserver

Hi

I am making a Raspberry Pi robot that wants to access data (images and image processed data) from OpenMV cameras. This computer is already connected to a few devices using USB so I want to used the WIFI interface. The MJPEG streamer is cool but I do not need a stream (besides its tricky to save an individual frame from a MJEPG stream) rather a way to access frames from a webserver running on the OpenMV and ideally some text data from some machine vision algorithms that you can run using the IDE.

If I put in a SD card into the camera, can those files be accessible through the webserver? Then I could use wget or curl to download the files using my computer. Maybe I can use MQTT to trigger the camera to save an image and do the image processing then download it subsequently.

Or … instead of using a webserver another way would be to have the camera save the image / data using on the SD card then send the data to another webserver using scp or something similar.

thanks

Hi,

It should be relatively easy to write a web server that serves images on request. See this tutorial for an example. Let me know if you get stuck somewhere.

Thanks for the info its good to know. So before I get too deep in figuring out how to code this. Let me more clear what I want to do after studying the topic all day and tinkering with some of the examples.

If its possible to implement a webserver to access the files stored on the camera’s SD card, what would be the best way to have a computer send a message to the camera to get it to save an image and do some statistics on that image then save this data on the webserver? For example would it be reasonable to use standard GET/POST type html form or is something like MQTT more appropriate. (If MQTT is better, from the documentation I can see an MQTT publisher example but its not clear how to implement a subscriber.)

Another thought would be to set up an arduino or something similar to send a serial command UART that gets the camera to take a photo. So then you access the saved files on the camera’s microSD card using WIFI assuming its possible to access files stored on microSD card using webserver

Hi, I think it’s possible to use either approach. I honestly don’t have much experience in this domain, but I’d be more than happy to support your efforts from the firmware side (if you find any bugs, or missing features report them back here).

Well I am stuck kind of … so you do not think you can make a simple webserver that just posts the files that you can save. If we can get this far … then maybe we can figure out a way to call some functions using UART, analog trigger on some pins whatever is best.

As long as you can serve the saved files via a webserver then it would be awesome but if not then … well the wifi card not so useful since then you have to be tethered to a computer.

Like I said yes it’s possible, the same Python code you’ll find if you google (python+tcp+server+file) should work on OpenMV, with little or no modification. Have you actually tried writing any code that didn’t work ?


EDIT: There’s no need to save files to SD and then send them, you could just capture a snapshot on demand.

I haven’t tried it yet. I am still stuck on how to even send a command to the camera to save an image or a series of images on the SD card. So you have all of these cool examples of various image processing tools (like QR code recognition) can’t you save the file and say … append an text file of the corresponding filename and the QR code translation?

Do you think you can send me over a basic template that allows me to save an image and maybe some text data upon receiving a command (like most ideally a serial cmd - USB_VCP, or uart)? If I can get this far then I can keep going. Right now I just can’t figure out how to make any progress with demonstrating how this camera works better then a standard webcam.

For example the snapshot examples just save one image then you have to reset the camera. Isn’t there a way we can send a serial command to get the camera to save the image. There seems to be no basic example of how to use the USB_VCP example for this in the IDE. If this isn’t appropriate what could be another solution.

Sure if you have a camera tethered to a computer then whats the point of the wifi shield. I get that but maybe if we can start from this point. Along the way we can figure out how to create a python-wgsi get/post page that responds to html calls. Right now I can’t go there until I have some semblance of a working prototype.

I already spent alot of time creating a case and optics for this camera so I am on the fence whether to keep going or just bail out and get a raspi-zero and a raspicam. I want to work with OpenMV but its quagmire for me to get a handle of how to use it.

Hi, please checkout the Pixy Emulation UART Examples. They show off how to parse a serial data byte stream. In particular, all the methods you need for command and control are right here in this module. class USB_VCP – USB virtual comm port — MicroPython 1.15 documentation

Note that since we are a MicroPython board any knowledge about the pyboard applies. So, you can just google “MicroPython ”.

One note about the above VCP port. Make sure to set the DTR line high on the application talking to the camera.

I understand you are having issues and are frustrated… however, given we are a smaller project than the Pi we don’t have everyone in the world providing free documentation content on how to do everything. While our goal was to make the OpenMV Cam programmable it has kinda turned into a can of worms with massive feature creep and every possible request. Given we’re a two man team with day jobs it’s quite difficult to support providing script help. In particular, I really only have the bandwidth and time to answer questions but not really to provide custom example scripts to people on the forums anymore (back when OpenMV was first started we were able to do this but not so much anymore).

If you find using the Pi cam easier for your problem that is fine. From what you’ve been posting about you’re using the OpenMV Cam in a way that we haven’t really focused on providing a lot of support for. In particular, the product was designed to provide an easy way to get object tracking working with an embedded device where all the processing and images stay on the camera. It’s not really mean to transfer images to the PC so much. While this is possible USB bandwidth doesn’t really allow for high res pictures.

Regarding the UVC support on another thread. That isn’t really yet rolled out. But, again, we’ve provided it for the OpenMV Cam H7 so that you can use the OpenMV Cam as a thermal camera with flir lepton devices.

Anyway, if you’d like to continue with the OpenMV Cam. Let’s do this, first, can you design a serial protocol you think would be good for what you need? Are you writing the program in python? What are the inputs and outputs to the program?

Sure I am happy to keep working on this if there is light at the end of the tunnel. Here’s what I am thinking about this product. Its has some really nice features but you are not providing sufficient documentation at least for me to get started. Like this USB library you send the library rather then sending a basic template script that we can modify. Even the basic control examples, sure you can establish a connection but there isn’t enough info about how to actually use the camera with these controls.

Why not have some basic example that shows how you can save images on the microsd card and on a host computer when you send a command through a serial connection (USB) and also save data that got generated with your cool image processing functions? Really how hard is that for you to do? Rather then having us do all of this guessing. If you can establish this then you can work towards the webserver but we are not even there.

I bought the wifi shield thinking you had a webserver solution were we can download the images/data. Why even bother selling it if you dont’ have that feature? So for me its kind of useless in its current state. Think about it the OpenMV isn’t much bigger then a Raspi zero with a raspi camera. I bought OpenMV thinking it could stream image processing data but its not at all straightforward to do. Normally when you have a situation where the vendor expects the customer to do it … this means the vendor hasn’t done it. So how are you so sure it really works?

Hi Rister,

It has never been our design goal to use the camera to take pictures and send to the PC. In particular, the point of the OpenMV Cam was to enable you to build robots like this: https://diyrobocars.com/2017/10/01/a-minimum-viable-racer-for-openmv. I understand you want to use the camera in this way and want scripts and examples for it. However, I’m one person doing this in my free time (which I had more of when I started the business). While I understand the script I provided is not dead simple to use it’s not nothing…

In regards to the webserver, we’ve never advertised that being a feature really. We have a basic MJPG streamer. However, the WiFi shield is really just for folks wanting to do MQTT like things. Please keep in mind we have a rather large and unreasonable number of features do to feature creep and that we’ve tried to meet everyone’s needs. However, it has increasingly become obvious that it’s impossible to support so many different things at once. WiFi support and PC control of the camera isn’t a strong point for us. I’m sorry if that makes the product useless for you.

Anyway, while I continue to work and develop new features for the product I think you may get to where you want to be with a PiCamera faster. It’s a much better solution for what you are trying to do.

That said, I agree we should provide better PC side control scripts. I will put an effort in for the next IDE release to provide a cross platform control library written in python (and threads) that will ship with the IDE. This will allow anyone to talk to the camera using our debug protocol.