OpenMV M7 with a Raspberry Pi

5.1.2018

Since the documentation is rather sparse, I need to ask a few elementary questions before I start new projects.

For Project #1: I would like to connect the M7 to a Raspberry Pi 3 to look for, say, a blue object and a yellow rectangular object,
count them, and send the coordinates of the found objects to the Raspberry Pi 3 so a robot arm may pick them up.
I would, of course, have to somehow convert the M7 coordinates to the robot arm coordinates - this may be interesting to achieve.
Perhaps by detecting the robot arm on the M7 and sending the coordinates of the arm to the RPi for coordinate transformation.

Project #2: using the M7 camera to find a path through obstacles, perhaps reading the M7 camera input into the RPi and running OpenCV
for that pathway detection.

Project #3 is face detection: the detected face must be sent to the RPi (jpg) for processing - also unknown faces sent to the RPi3.
I think this will not be an easy task.

Can one reprogramme the camera via the RPi3? This is to create multifunctionality when events demand it.

Thank you for any assistance

Project 1 is very easy; the find_blobs() method does all this for you and tells you the centroids of objects. You can then send these centroids to the Pi via a UART. See the Pixy emulation example for how to send packet data over the UART using the struct module.
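Here is a minimal sketch of that idea, assuming the Pi is wired to UART 3 (P4/P5 on the M7) at 115200 baud; the LAB threshold below is a placeholder you would replace with values from the Threshold Editor:

```python
# Find blue blobs and stream their centroids to the Pi as packed structs.
import sensor, ustruct
from pyb import UART

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

uart = UART(3, 115200)                     # P4 = TX, P5 = RX on the M7
BLUE = (0, 40, -10, 30, -60, -20)          # placeholder LAB threshold

while True:
    img = sensor.snapshot()
    blobs = img.find_blobs([BLUE], pixels_threshold=100)
    uart.write(ustruct.pack("<H", len(blobs)))           # blob count first
    for b in blobs:
        uart.write(ustruct.pack("<HH", b.cx(), b.cy()))  # centroid x, y
```

On the Pi side you would unpack the same little-endian layout with Python's struct module.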

Project 2 involves path planning. We don't offer any higher-level AI-ish support like that. If you know how to code these types of algorithms from scratch then you can hand-roll them to run in Python on the M7. Otherwise, sure, find a path planning lib on the Pi and use that.
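To illustrate what "hand rolling" could look like, here is a generic breadth-first search over a small occupancy grid (1 = obstacle). This is a textbook algorithm, not an OpenMV API; the grid itself would have to come from your own obstacle-detection step:

```python
# Breadth-first search on a 2D occupancy grid; returns a list of (x, y)
# cells from start to goal, or None if obstacles block every route.
def bfs_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    frontier = [start]                    # FIFO queue of cells to expand
    came_from = {start: None}             # also doubles as the visited set
    while frontier:
        node = frontier.pop(0)
        if node == goal:
            break
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= nx < cols and 0 <= ny < rows \
                    and grid[ny][nx] == 0 and nxt not in came_from:
                came_from[nxt] = node
                frontier.append(nxt)
    if goal not in came_from:
        return None
    path, node = [], goal                 # walk back from goal to start
    while node is not None:
        path.append(node)
        node = came_from[node]
    return path[::-1]
```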

Project 3 is easy: just use our face finder, and on detection of a face, JPEG-compress the image and send it to the Pi.
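A rough sketch of that flow, again assuming UART 3 at 115200 baud and a simple length-prefixed framing of my own invention (4-byte little-endian size, then the JPEG bytes):

```python
# Detect faces with the built-in Haar cascade, then JPEG-compress the
# frame and send it to the Pi with a 4-byte size header.
import sensor, image, ustruct
from pyb import UART

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)    # Haar cascades run on grayscale
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

uart = UART(3, 115200)
face_cascade = image.HaarCascade("frontalface", stages=25)

while True:
    img = sensor.snapshot()
    faces = img.find_features(face_cascade, threshold=0.75, scale_factor=1.25)
    if faces:
        img.compress(quality=50)                    # JPEG-compress in place
        uart.write(ustruct.pack("<L", img.size()))  # size header
        uart.write(img)                             # JPEG payload
```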

None of these projects actually needs the OpenMV Cam, though. What's your reason for using it with the Pi rather than the Pi Cam? Yes, I understand we make the CV part a lot easier. If you're good with serial comms then this trade-off makes sense.

Thank you for your reply.

I do not think it is all as easy as you state.

I could use an RPi camera, but what then is the point of the M7 camera?

Can one send programmes to the M7 from an RPi3?

I’ll need to finalise the high-level designs and reconsider the projects.

Yeah, we have OpenMV IDE for the Raspberry Pi.

Since we’re a MicroPython board, you can also use the MicroPython control Python script to tell the board what to do by uploading a new MicroPython script to it.
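For example, the OpenMV repository ships a pyopenmv.py helper (under tools/) that can do this from the Pi. The function names below are taken from that script at the time of writing, so check your copy before relying on them; /dev/ttyACM0 is the usual VCP device name on the Pi:

```python
# Stop whatever the camera is running and upload a new script over USB.
import pyopenmv  # tools/pyopenmv.py from the OpenMV repository

SCRIPT = """
import sensor
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
while True:
    sensor.snapshot()
"""

pyopenmv.init("/dev/ttyACM0")   # open the camera's serial port
pyopenmv.stop_script()          # halt the currently running script
pyopenmv.exec_script(SCRIPT)    # upload and run the new one
```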

Is there any sample code that demonstrates how to take a jpg image on the M7 and send it over to the Pi?

What method do you plan to use for reception?

uart.write(img.compressed_for_ide()) is enough if you are sending over a UART.

We planned on using the fastest possible way to send the image since the camera will be moving at around 25mph and we need it to detect, capture, send, and then again continuously detect blobs. Looking over the forums, it seemed like SPI was the way to go but I could be wrong?

No, SPI is definitely the way to go. The code is effectively the same. On the Pi you need it to be a SPI slave to receive the data.

You’re saying the camera needs to be a slave and the Pi the master, right? That’s what I see in the example you have for the Arduino.

I’m having a lot of difficulty connecting the two together. I am using the Arduino SPI slave example code as a reference to try to implement the same thing for the Pi, but I am not receiving anything on the Pi - I get a bunch of garbage data and am not sure what could be wrong. I also do not really understand how to send the data from the M7 using ustruct.pack and then put the data back together into a jpg image. Can you possibly show how to set this up? That would be really helpful for this project and for future projects we plan to integrate with a Pi.

Hi, the camera has to be the SPI master; it works really badly as a slave device.

That said, you can get video data through USB with the camera. In fact, OpenMV IDE runs on the Raspberry Pi: Download | OpenMV

The camera will appear as a VCP port on the Pi and you can push data through that at 12 Mb/s. Note that you can also just push the blob detections instead of full images.

Note that the same code that prints to the terminal in OpenMV IDE will print to the VCP port when it’s opened by another program.
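On the Pi end, a minimal pyserial sketch for receiving a size-prefixed JPEG (matching the framing assumed in the Project 3 sketch above; a robust reader would loop until all the expected bytes have arrived):

```python
# Read one length-prefixed JPEG from the camera's VCP port and save it.
import struct
import serial  # pyserial

with serial.Serial("/dev/ttyACM0", timeout=5) as port:
    size = struct.unpack("<L", port.read(4))[0]  # 4-byte little-endian size
    jpg = port.read(size)                        # then the JPEG bytes
    with open("frame.jpg", "wb") as f:
        f.write(jpg)
```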

Hi, as far as I know, the blue filter will be installed on the Earth Observing Astro Pi unit for the duration of the mission. It basically blocks visible red and green, but lets visible blue and infrared pass through into the camera. This permits a type of EO analysis of vegetation on the ground. Infrared then goes into the red RGB channel of the image data.

Hi,
How can you say that Project 1 is easy? Suppose I want to measure the size of a white-coloured object on open ground and send its coordinates (x, y, z) to the Raspberry Pi via UART. Then, according to those values, a tool has to pick up the white-coloured object from the ground. How is that possible?

Hi SARAVANABAVAN,

You’re getting low-quality responses to your questions because you aren’t putting much effort into asking a question that I can reasonably answer. I’ve told you many times now to use the find_blobs() method to find color blobs. See the Color Tracking Examples folder, then look at the single color tracking example. To pick the color thresholds, use Tools->Machine Vision->Threshold Editor. Finally, to control the color channel gains, you’ll want to skip frames for only something like 100 ms instead of the 2000 ms that the scripts default to.
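Putting those pieces together, a single-color-tracking sketch along the lines of the bundled example might look like this; the threshold tuple is a placeholder for whatever the Threshold Editor gives you, and the short skip_frames() plus locked gain/white balance follow the advice above:

```python
# Track one color: short settle time, then lock gains so the LAB
# thresholds picked in the Threshold Editor stay valid.
import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=100)            # ~100 ms settle, per the advice above
sensor.set_auto_gain(False)             # lock gain for stable colors
sensor.set_auto_whitebal(False)         # lock white balance too

THRESHOLD = (30, 100, 15, 127, 15, 127) # placeholder from the Threshold Editor

while True:
    img = sensor.snapshot()
    for blob in img.find_blobs([THRESHOLD], pixels_threshold=100, merge=True):
        img.draw_rectangle(blob.rect())          # box the blob
        img.draw_cross(blob.cx(), blob.cy())     # mark its centroid
```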

Just a quick question - when will the documentation be completed for the M7?
This note: “The tutorial is not complete at all right now. This note will be removed once the tutorial has enough content describing more or less everything about the OpenMV Cam.”
is still in the documentation.

I want to really read the docs before proceeding with my colour/form tracking programme.

All the best

N

I’m pretty much doing the initial Arduino model of things right now: supplying examples and answering forum questions. As for the documentation, the main things to add would be more details on how to use the hardware and some basic concepts on how cameras work.

Generally, reading the library documentation and looking at examples is what you need to do. I’d like to work more on the tutorial, but I never have time for it given forum support and feature expansion.

Um, for color tracking, what exactly do you need to know? The only thing you need to do is use the Tools → Machine Vision → Threshold Editor to pick the color bounds and then supply those to one of the example scripts. Other details to worry about include how to deal with the camera’s auto-gain and exposure settings. Which part in particular are you having trouble with?