Reading numbers (training info)

I’ve tried the library version of reading numbers and it’s not accurate enough for what I need to do. Same for the image template function.

I’ve trained a model in Tensorflow, and would love to use my model since it is tuned to the font I’m trying to read. But TF is a big beast to drag into this tiny space, when all I need to do is read, not train.

Alternately, I’d be happy to use my training data to train the model used by the OpenMV library, since it does fit on the board. Can I train the OpenMV number recognition?

Any pointers to making this work?


Hi, we just integrated ARM's new CMSIS neural network (CMSIS-NN) library. It works with Caffe, I think, so you'll have to import your model into Caffe and then export it to binary header files.
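To give a rough idea of the "export to binary header files" step: the CMSIS-NN examples use weights quantized to signed 8-bit fixed point (q7) and baked into C arrays. This is just a minimal sketch of that idea — the function names, the fixed-point scale, and the header layout are my own illustrations, not the actual conversion script:

```python
# Sketch: quantize a float weight matrix to q7 fixed point and emit a
# C header in the general style of the CMSIS-NN examples.
# All names here (to_q7, emit_header, conv1_wt) are illustrative.
import numpy as np

def to_q7(weights, frac_bits=7):
    """Quantize float weights to signed 8-bit fixed point (q7)."""
    scaled = np.round(weights * (1 << frac_bits))
    return np.clip(scaled, -128, 127).astype(np.int8)

def emit_header(name, weights, frac_bits=7):
    """Return a C snippet declaring the quantized weights as a q7_t array."""
    q = to_q7(weights, frac_bits).flatten()
    body = ", ".join(str(int(v)) for v in q)
    return (f"#define {name.upper()}_FRAC_BITS {frac_bits}\n"
            f"static const q7_t {name}[{q.size}] = {{{body}}};\n")

# Example: a tiny 2x2 weight matrix.
w = np.array([[0.5, -0.25], [1.0, -1.0]])
print(emit_header("conv1_wt", w))
```

Picking the right number of fractional bits per layer (so the dynamic range of that layer's weights isn't clipped) is the fiddly part in practice.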

It's quite a lot of work right now… but it's technically cutting edge, since no one else has done this yet.

I'll be focused on building out the code infrastructure in the future to make it easy to export a model from TF/Caffe/PyTorch and load it on the camera. However, there are a lot of other to-do items to focus on in the meantime.

Here’s where we pulled the NN code:

Here’s where we put it:

To make this flexible enough to run any model, we'll have to build up infrastructure that lets you port a small NN to the camera. I think this will involve being able to parse the network topology from a binary file along with the weights.

If you want to contribute, figuring out a good way to generate a binary file with the network topology, weights, etc. from any library would be the first step. Writing a parser to load that file would be the next.
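As a starting point for discussion, here's one possible shape for such a file: a magic number, a layer count, then per layer a type tag, a weight count, and the q7 weights, with a matching reader to prove the round trip. The magic value, field layout, and function names are all assumptions on my part, just to make the idea concrete — the real on-camera parser would be C, but the structure would be the same:

```python
# Sketch of a hypothetical binary model format and its parser.
# Layout (little-endian): magic u32, layer_count u32, then per layer:
#   type_tag u32, weight_count u32, weight_count bytes of int8 weights.
import struct
import numpy as np

MAGIC = 0x4F4D564E  # arbitrary placeholder magic number

def write_model(path, layers):
    """layers: list of (type_tag, int8 weight array)."""
    with open(path, "wb") as f:
        f.write(struct.pack("<II", MAGIC, len(layers)))
        for tag, w in layers:
            w = np.asarray(w, dtype=np.int8)
            f.write(struct.pack("<II", tag, w.size))
            f.write(w.tobytes())

def read_model(path):
    """Parse the file back into a list of (type_tag, int8 weight array)."""
    layers = []
    with open(path, "rb") as f:
        magic, n = struct.unpack("<II", f.read(8))
        assert magic == MAGIC, "not a model file"
        for _ in range(n):
            tag, size = struct.unpack("<II", f.read(8))
            w = np.frombuffer(f.read(size), dtype=np.int8)
            layers.append((tag, w))
    return layers
```

A real format would also need per-layer shape and fixed-point scaling info, but a fixed header plus length-prefixed records like this keeps the on-camera parser trivial.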

Thanks for the reply and code pointers.

I'd be very happy to contribute to that project, if my job allows. It seems like there are a number of projects here that would find this useful.

I have a background in computer graphics programming, so all of this feels very familiar. “Back in mah days we had to write our own linear algebra libraries for our ray tracers!! Uphill both ways!” :slight_smile: It seems pretty straightforward to me…

I got busy with other things, but I’m ready to approach this again. I will look at this tomorrow and keep you posted on my progress.