Edge Impulse result wrong on OpenMV H7 Plus

Hi, I used Edge Impulse to train on my data and deployed the model to my OpenMV board. Then I read images from my test data set and use the model to detect the feature I want. But the result is not the same as what is shown on Edge Impulse’s website: there is a fixed deviation (about 60 pixels) in the position.


It looks like the sizes of the images used for training and classification are different. This could cause problems. Can you make sure your images are square? This should fix the issue. The fixed deviation is likely from our code paths diverging on how we scale/crop the image for detection.
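If a stopgap is needed in the meantime, one way to guarantee a square input is to center-crop before the model ever sees the image. A minimal sketch, assuming the standard OpenMV sensor/image APIs; the window size and crop ROI here are illustrative, not an official fix:

```python
# Stopgap sketch: feed the model a square frame so the non-square
# scale/crop path is never exercised.
import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)   # 320x240 capture
sensor.set_windowing((240, 240))    # centered 240x240 window (drops 40 columns per side)
sensor.skip_frames(time=2000)

img = sensor.snapshot()             # already square; run classification/detection on this

# For a test image loaded from the SD card instead of the sensor, an
# equivalent center crop would be: img.crop(roi=(40, 0, 240, 240))
```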

If you give me the model and the images to detect with, I can fix the issue… or tell you whether it’s us or Edge Impulse doing something wrong here.

Actually, the images I used for training and classification are both 320x240. I’m not sure if Edge Impulse processes them into 240x240. So do you mean I need to manually crop the images to 240x240 for classification? Or does this also need to be done for training?

No, it shouldn’t need to be cropped. But, it’s clear that not having them cropped is causing a problem. This was fine before. So, something has changed.

I can tell if it’s our issue if you provide me the model and a test image.

That said, I verified with our onboard face detection that scaling/cropping is working as expected… and you are the first one to post about this in years. So, it’s most likely an issue on your side of things. But, I’m willing to double check.

As for what Edge Impulse does, it should be cropping the center 240x240 out of the image and then scaling that down to the resolution of the CNN, which is typically 96x96.

Our code runs detection using the same logic: we grab the center of the 320x240 image, which is 240x240, and scale that down to 96x96 (or whatever the CNN resolution is). The output is then scaled back up to 240x240 and centered on the 320x240 image. There’s an 80-pixel column difference between the two widths… so, I would expect an offset of 40 if there were an error. 60 doesn’t make sense.
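To make those numbers concrete, here is a minimal sketch of that round trip in plain Python (no OpenMV or Edge Impulse API calls; the function names and the 96x96 input size are illustrative assumptions):

```python
# Illustrative coordinate math only; names are made up for this sketch.
FRAME_W, FRAME_H = 320, 240   # camera frame
CROP_W, CROP_H = 240, 240     # square center crop
NET_W, NET_H = 96, 96         # assumed CNN input resolution

CROP_X = (FRAME_W - CROP_W) // 2   # = 40 columns dropped on each side
CROP_Y = (FRAME_H - CROP_H) // 2   # = 0

def frame_to_net(x, y):
    """Map a 320x240 frame coordinate into the 96x96 network input."""
    return ((x - CROP_X) * NET_W // CROP_W,
            (y - CROP_Y) * NET_H // CROP_H)

def net_to_frame(nx, ny):
    """Map a network-space detection back onto the 320x240 frame."""
    return (nx * CROP_W // NET_W + CROP_X,
            ny * CROP_H // NET_H + CROP_Y)

# If either side forgets to account for CROP_X, detections shift
# horizontally by exactly 40 px, which is why a consistent 60 px
# offset is hard to explain with this path alone.
print(net_to_frame(*frame_to_net(200, 120)))   # -> (200, 120) when both sides agree
```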

ei-cataract-openmv-v1.zip (31.7 KB)
[test image]

Here are the model and a test image. I use the model to identify the blue line.

I can look at this on Saturday and Sunday.

Hi, thanks for finding this issue. It was indeed broken. Folks have just been using square images for so long that I did not know this was bugged. It is now fixed: modules/py_tf: Fix non-square image support. by kwagyeman · Pull Request #2031 · openmv/openmv · GitHub

Attached is firmware for the H7 Plus with the fix.
firmware.zip (1.1 MB)