Rover One UGV - Assistance Needed with Face Detection

Hi, everyone. (Sorry in advance for the long post.)

I’m new to the OpenMV platform and recently purchased an OpenMV M7 after seeing a few videos online where the creator talks about its facial recognition and thermal imaging capabilities. I’m a fluent Arduino programmer, but I’m at a loss when it comes to Python or face detection software.

Anyway, about the project: my reason for this purchase was to give my UGV a way to identify whether a warm entity has a human face and vice versa (essentially a way to tell whether there’s really a human in the room or just a poster of a person on the wall). The objective of Rover One is to create a system for UGV rescue robots to autonomously find survivors in disaster areas and geotag them without the need for human interaction. I’ve connected my AMG8833 thermal sensor to the M7 via a perfboard and successfully run the face detection script with a thermal overlay, both in full RGB color. However, I still have a few small issues to work out, mostly due to my limited knowledge of the system.

Issues in order of importance:

1. Horizontal Face Detection - to detect faces that are rotated more than 45 degrees from upright (ideally 90 degrees or more), since most victims will likely be lying on the ground.
2. Hot Point Detection - to send data over I2C when it detects a face and a heat signature in the same vicinity, triggering other Arduino-based actions.
3. Wide Angle Lens Face Detection - to detect faces across a 185-degree FOV (I saw OpenMV sells these lenses, but I’m unsure how they work with face detection).
4. Interpolating the Thermal Sensor - the AMG8833 is natively 8x8, and there’s a way in Arduino to interpolate that data to 32x32 (via Adafruit), but I have no idea how to do it in Python or on this platform.


That’s everything. Sorry it’s so much, but any help would be greatly appreciated.


Disclaimer: I’m not asking anyone to do the project for me. I only ask questions on forums when I have exhausted all other resources. In this case I have been researching for weeks on several of these topics but haven’t found enough to get these particular issues resolved. I’m only looking for someone to point me in the right direction. Thanks.

Hi, thanks for buying the system.

  1. You have to rotate the image if you want to detect faces at different angles. The Haar face detector is pretty rigid about the face being upright. You can do this in software with img.rotation_correction(z_rotation=45) and img.rotation_correction(z_rotation=-45) (see the first sketch after this list).
  2. The OpenMV Cam makes a poor I2C/SPI slave, and the Arduino can’t be an I2C/SPI slave either. So, use async serial to communicate instead. You can use SoftwareSerial on the Arduino to avoid tying up the default hardware serial port that’s used for debugging (see the UART sketch after this list).
  3. We support a lens_correction() method that dewarps the image from a wide-angle lens. So, if you use this, the lens doesn’t matter too much. It does cost CPU time, however (the first sketch below shows where the call would go).
  4. What’s your goal? There’s a way to draw the thermal image on the main image. Anyway, the next firmware release adds scaling to any size. However, note that RAM is limited on the M7, so it’s best to avoid image scaling if possible. I can provide a firmware image with a scale() method right now if you need it.
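
Here’s a minimal sketch of what items 1 and 3 could look like together on the M7 (MicroPython). It assumes the stock “frontalface” Haar cascade; the angle list, threshold, and scale_factor are illustrative values, not tuned ones:

# Rotated face detection sketch for the OpenMV Cam M7.
# The angles below are placeholders; add or remove angles depending on
# how much frame rate you can afford to spend per pass.
import sensor
import image

sensor.reset()
sensor.set_contrast(3)
sensor.set_gainceiling(16)
sensor.set_framesize(sensor.HQVGA)
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.skip_frames(time=2000)

# Load the built-in frontal face Haar cascade.
face_cascade = image.HaarCascade("frontalface", stages=25)

while True:
    # Grab a fresh frame for each rotation so the rotations don't stack up.
    for angle in (0, 45, -45, 90, -90):
        img = sensor.snapshot()
        # img.lens_correction(strength=1.8)  # dewarp first if a wide-angle lens is fitted (item 3)
        if angle:
            img.rotation_correction(z_rotation=angle)  # spin the frame so a sideways face ends up upright
        for face in img.find_features(face_cascade, threshold=0.75, scale_factor=1.25):
            img.draw_rectangle(face)
            print("face found at z_rotation", angle, face)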
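
And for item 2, here’s a rough sketch of the OpenMV side of the serial link. UART 3 at 19200 baud and the one-line message format are placeholder choices; match them to your wiring and to whatever your Arduino-side SoftwareSerial parser expects:

# Async-serial sketch: report a "warm face" hit to the Arduino over UART.
from pyb import UART

uart = UART(3, 19200)  # OpenMV Cam UART 3 is on P4 (TX) / P5 (RX)

def report_detection(face_rect, max_temp):
    # One comma-separated line per detection; the message format here is made up.
    # Parse it on the Arduino with something like readStringUntil('\n').
    x, y, w, h = face_rect
    uart.write("FACE,%d,%d,%d,%d,%.2f\n" % (x, y, w, h, max_temp))

# Example call, e.g. from the detection loop above:
# report_detection((40, 30, 20, 20), 36.5)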

This information was extremely helpful. I’ll get started on it immediately.
To answer your question about number 4: interpolating the image would just give it more resolution, without changing what the image covers, by “guessing” the values in between to produce a smoother look instead of the pixelated look we’re familiar with.

Given this:

Block A ------------------------ Block B
[24.00] ------------------------ [27.00]

[23.00] ------------------------ [22.00]
Block C ------------------------ Block D

It becomes this or something similar:

Block A ------------------------ Block B
[24.00] [25.00] [26.00] [27.00]
[23.66] [ //  ] [ //  ] [25.34]
[23.33] [ //  ] [ //  ] [23.66]
[23.00] [22.66] [22.33] [22.00]
Block C ------------------------ Block D

Basically, you take the difference between Blocks A and B (for the top row the difference is 3). Then divide it by the number of spaces in between plus one, so 3 / 3 equals 1. Then add 1 to the value next to Block A in the matrix, then 1 more for the next space, and so on until you reach Block B.

Since the number of values between the blocks is constant, you basically divide the difference by the number of steps and add that step amount back as you progress. That’s how it works in theory, anyway.
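
Here’s a plain-Python sketch of that math, just to pin the idea down. interpolate_block() and its arguments are made-up names for illustration, not anything from the OpenMV API:

def lerp(a, b, t):
    # Linear interpolation between a and b for t in [0, 1].
    return a + (b - a) * t

def interpolate_block(a, b, c, d, steps=3):
    # a/b are the top-left/top-right corners, c/d the bottom-left/bottom-right.
    # steps=3 gives 4 values per edge, matching the 4x4 example above.
    grid = []
    for row in range(steps + 1):
        left = lerp(a, c, row / steps)   # walk down the left edge (A -> C)
        right = lerp(b, d, row / steps)  # walk down the right edge (B -> D)
        grid.append([round(lerp(left, right, col / steps), 2) for col in range(steps + 1)])
    return grid

# Using the numbers from the example (A=24, B=27, C=23, D=22):
for row in interpolate_block(24.0, 27.0, 23.0, 22.0):
    print(row)
# Top row comes out [24.0, 25.0, 26.0, 27.0] and the bottom row
# [23.0, 22.67, 22.33, 22.0], matching the diagram above give or take rounding.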
Here’s an example of this in practice.


Here is a quick video of the original person who did it in Arduino (the person Adafruit based their code on). He takes you through the entire process.

I hope this answers your question. I’m not the best at math, so sorry for any mistakes.

Okay, so I’ve recently added nearest-neighbor, a.k.a. blocky, scaling. So I can upscale the 8x8 image to whatever resolution, though it still looks like an 8x8 image. I’d be okay adding an interpolate method that works on that upscaled image to clean it up using interpolation loops.

Can you provide C code for it? Assume pixels are grayscale (0-255) and you have methods called get_pixel(x, y) and set_pixel(x, y, value). Also assume the method has an input buffer that is separate from the output buffer.

If you give me some code that does that, I can optimize it, then quickly implement it for all image types and have it done for you before the weekend is over. Otherwise, I’ll have to spend time testing and researching, which will take a while.

Choosing nearest neighbor over bi/tri-linear scaling was done on purpose, since those methods require more compute power for scaling and kind of lie a bit about the data. Your iterative method of generating values is different.

Wow! Thanks for the quick reply. I completely understand. Here’s some code to help you out. Adafruit_AMG88xx/examples/thermal_cam_interpolate at master · adafruit/Adafruit_AMG88xx · GitHub

It runs using an RGB color scale (same as the AMG8833 example), so it’s not grayscale. Hope that’s not a problem. The link also includes the .cpp file, which should come in handy.

That example won’t produce the result above. It will just make the image look muddy.

Interpolation typically destroys edges. So, I’m not sure how that image for the thermal camera was made.

To be clear, I can add interpolation methods for bilinear and bicubic. There’s plenty of code online to copy for this. However, I’m not sure how that fits with what you want.

Understood. Sorry I can’t be of much help. I wish I knew Python as well as I know Arduino. Math, however, is not my strong suit.

Essentially, as I understand it anyway, it’s taking each of the large blocks in the 8x8 grid and dividing it into four smaller pixels with varying values.

I’m not sure if the link came through last time, but this video explains how it was done and how he got the results: Build a Teensy-based thermal imaging camera - YouTube

I’ll add bicubic interpolation. I need to think about how to add it in a nice way however.

Hi, bicubic interpolation will need to be a feature of the next firmware release. We’re just going to tidy up and document what’s currently there, and then release all the new stuff.

Hello. Is there any progress? I see no interpolation functions.

Hi, this has been added to the latest firmware, which has not yet been released. I’m in the process of building the feature into all methods now.

https://github.com/openmv/openmv/blob/master/src/omv/img/draw.c#L2402

It’s not going to be public until December. However, once it hits, it will be built into the LCD display output, TV shield output, FIR input (for MLX, AMG, and Lepton) and more.

Hi, it’s finally here!

Once this PR is merged, all the stuff you want and more will be done. Sorry it took two years.

The AMG8833 snapshot example is now ultra cool.
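
For anyone landing here later, here’s a minimal sketch of what the overlay side looks like, assuming the fir module’s FIR_AMG8833 support and its read_ir()/draw_ir() API; the alpha value is arbitrary:

# Thermal overlay sketch: blend the AMG8833 grid over the camera image.
import sensor
import fir

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time=2000)

fir.init(type=fir.FIR_AMG8833)  # 8x8 thermal sensor over I2C

while True:
    img = sensor.snapshot()
    ta, ir, to_min, to_max = fir.read_ir()  # ambient temp, 8x8 grid, min/max object temps
    fir.draw_ir(img, ir, alpha=128)         # draw the (scaled/interpolated) grid over the frame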

This looks useful, nice work!