Using OpenMV for text

Hey all,

I’ve checked online and I think I know the answer, but I wanted to double-check. Has anyone been able to use the OpenMV H7 camera to read text? I’ve got a project where a robot will navigate to a small screen and will be required to read instructions from a phone. I’ve seen YouTube videos of this camera reading relatively large characters, but I’m curious whether I could extract text that is around 14 pt font, black on a white background.

There’s a CNN that we trained a while ago that can do this for a single letter. However, general text reading is not possible right now.

The main push this year is to build out an automatic CNN training pipeline. However, we can only do classification and image segmentation; anything else is likely not doable.

Would it be possible to read a letter and store it in a list or something? I imagine it would take some maths to detect the first letter, then the next, and so on.

Yes, train a CNN on single letters, then slide the CNN window over the image and store the string of letters produced. It won’t be state of the art, but it will work.

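As a rough illustration of that sliding-window idea, here is a minimal sketch assuming the older OpenMV `tf` module API and a single-character classifier you trained yourself; the file names `chars.tflite` and `labels.txt`, the overlap values, and the confidence threshold are all placeholders you would need to adjust for your setup.

```python
# Minimal sketch: slide a single-character classifier over the frame and
# rebuild the text by sorting detections left-to-right.
# "chars.tflite" and "labels.txt" are placeholder names for your own model
# and label file; thresholds and overlaps will need tuning.
import sensor, tf

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)   # black text on white, grayscale is enough
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

net = tf.load("chars.tflite")                          # placeholder model
labels = [l.rstrip('\n') for l in open("labels.txt")]  # placeholder labels

while True:
    img = sensor.snapshot()
    hits = []
    # tf.classify() slides the network window across the image;
    # the overlap arguments control how fine the sliding steps are.
    for obj in tf.classify(net, img, min_scale=0.5, scale_mul=0.8,
                           x_overlap=0.5, y_overlap=0.5):
        scores = obj.output()
        best = max(range(len(scores)), key=lambda i: scores[i])
        if scores[best] > 0.8:                         # confidence threshold (tune)
            x, y, w, h = obj.rect()
            hits.append((x, labels[best]))
    hits.sort(key=lambda t: t[0])                      # left-to-right reading order
    print("".join(c for _, c in hits))
```

For multi-line text you would also group detections by their y coordinate before sorting each row by x, and overlapping detections of the same letter may need to be merged (e.g. by suppressing nearby duplicates).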