Can't use TensorFlow Lite on OpenMV

I've tried to build a lot of models based on tflite for OpenMV, but they all show the error "OSError: Didn't find op for builtin opcode 'XXX' version '2'" [XXX being any of the operations in https://github.com/openmv/tensorflow/blob/openmv/tensorflow/lite/micro/kernels/all_ops_resolver.cc].

What should I do? And if I can't use it with TensorFlow version 2,
can I use TensorFlow version 1 (downgrade)? And how?
If not, I'd like to know how long this issue will take to fix, because I have a senior project that depends on this on OpenMV.
Thank you very much, and sorry for asking so many questions.

This just means that your model uses an operator that's not supported by the TF Lite for Microcontrollers port. You need to replace that operator; see the list of supported ops here:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/kernels/all_ops_resolver.cc#L21

Please keep in mind that your cam (H7) doesn't have external memory, so even if you get the model working you may not be able to load it into memory. The built-in person-detection model works on this cam because the model is stored in flash.
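If you want to check what your converted model actually contains before copying it to the camera, you can parse the .tflite flatbuffer on your PC and compare the opcodes (and versions) against that resolver list. A rough sketch, assuming the third-party "tflite" flatbuffer-parser package from PyPI (pip install tflite) and a model file named model.tflite:

import tflite  # third-party flatbuffer bindings: pip install tflite

# Reverse lookup table from opcode number to operator name.
OP_NAMES = {v: k for k, v in vars(tflite.BuiltinOperator).items()
            if not k.startswith("_")}

with open("model.tflite", "rb") as f:
    model = tflite.Model.GetRootAsModel(f.read(), 0)

# Print every builtin operator (and its version) used by the model so it
# can be compared against all_ops_resolver.cc on the firmware side.
for i in range(model.OperatorCodesLength()):
    op = model.OperatorCodes(i)
    print(OP_NAMES.get(op.BuiltinCode(), op.BuiltinCode()), "version", op.Version())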

Hi @iabdalkader , the GitHub link you provided mentions that Softmax version 2 is supported; however, I get the following error (as shown in the attached screenshot). Please help.
I trained a small model on MNIST, then converted it to ".tflite" using the post-training integer quantization method; the quantized file is about 80 kB.
Screenshot_2020-03-30 Softmax_op_err.png
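For reference, the conversion followed the standard post-training integer quantization recipe, roughly along these lines (simplified; "model" is the trained Keras network and the sample count is arbitrary):

import numpy as np
import tensorflow as tf

# Representative samples let the converter calibrate activation ranges.
(x_train, _), _ = tf.keras.datasets.mnist.load_data()
x_train = (x_train / 255.0).astype(np.float32)

def representative_data_gen():
    for img in x_train[:200]:
        yield [img.reshape(1, 28, 28, 1)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)  # 'model' = trained MNIST net
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]

with open("mnistconv_int8.tflite", "wb") as f:
    f.write(converter.convert())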

This literally means that TensorFlow Lite from Google doesn’t support that opcode yet.

That said, Google might have added support for it. https://github.com/openmv/tensorflow/blob/openmv/tensorflow/lite/micro/kernels/all_ops_resolver.cc

It looks like the latest unreleased code has support for that op. We are about to cut a new firmware release, so it should be possible to run your network if you update the firmware to the latest tip of GitHub. You can either wait for the release or build the firmware and flash the camera yourself.

Hi @kwagyeman , thank you so much for such a quick reply and for clarifying that.

I'm ready to build the firmware and flash the camera myself; however, I'm fairly new to all this and don't have any idea how the building and flashing are done. Can you please provide instructions, or a link to instructions, on how to do it? Any help would be highly appreciated.

Also, is there a tentative date for when the next firmware release will be out? Thanks in advance.

Probably next week for a new release.


Hey, thanks a lot for your reply.

I followed the instructions provided in the given link and was building the code (using the Hammer button in Qt Creator). However, after a while I ran into the error shown in the screenshot.
I have tried installing mpy-cross using pip and building it from the GitHub repo as mentioned here: GitHub - micropython/micropython: MicroPython - a lean and efficient Python implementation for microcontrollers and constrained systems, but the problem still persists. Any ideas as to what else I should try?

Thanks

Ah, yeah, my documentation is out of date. cd into the micropython directory, then into the mpy-cross directory, and run make.

Hi Nyamekye,

Thanks for the reply. I was able to build the firmware myself and flash it to the OpenMV H7. After performing full integer quantization with TF Lite on a simple MNIST model (following the directions in the official TensorFlow docs), I started getting an error as shown in the screenshot.

I followed the solution provided here: Quantization problem while reproducing person detection example · Issue #37347 · tensorflow/tensorflow · GitHub, and that removed the said error but gave another one saying "Input model data type should be 8-bit quantized!", despite the fact that the model is already integer quantized.
I also verified my model with Netron (as mentioned in the linked comment above), and it showed that the inputs/outputs are of integer type (screenshot attached).

I have also tried the latest firmware release, and the same situation occurs there too.
What should I do now? Please help. Thanks in advance.

IM
new_rel_err1.png

That error is talking about an internal layer.

The line of code the error occurs on is mentioned. I would look at that line, since this is not our code:

Honestly, I'm not sure what's going on here… it looks like that quantize step is one you probably shouldn't use.

In other news, because of the lockdown I have a lot more time to work on OpenMV, and I'm getting a lot of work done on integrating easy TensorFlow model training into OpenMV. We will have an external web service that does transfer learning for image classification models later this year, so no one has to deal with trying to train models in TensorFlow themselves.

Hi Nyamekye,

Thanks for the reply.

As I mentioned, the error I'm facing now is "Input model data type should be 8-bit quantized!" (screenshot attached).
But the model is already integer quantized, and Netron verifies that the inputs/outputs are of integer type (screenshot attached).
So what should I do now to resolve this?

I'm building a simple model for training on MNIST data, as should be clear from the model design as well.
Thanks

IM
new_rel_err2.png

Hi, sure, but it's not:

When our code checks your model it reports that the input vector is not uint8.

Mmm, ah, maybe it's int8, in which case I need to update our TensorFlow code. I can do that. Will do tomorrow. I have a lot going on today.
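For reference, whether the converter emits uint8 or int8 input/output tensors is decided at conversion time. A rough sketch, assuming a recent TF 2.x converter ("model" and "representative_data_gen" are placeholders for your own objects):

import tensorflow as tf

# Standard full-integer quantization setup; 'model' and
# 'representative_data_gen' are placeholders for your own objects.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]

# Full-integer quantization defaults to int8 input/output tensors.
# Firmware that only accepts uint8 input would need these overridden:
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

tflite_model = converter.convert()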

Hi,

Thanks for the reply. Yes, it is int8; and sure, I'll wait for the updated code.
Also, please take care of yourself during such times and don't overload yourself with work. : )

Regards,
IM

Hi,

Hope everyone is healthy.

Circling back for an update on the issue and the TF code?

Stay Safe.
Regards,
IM

Let me do this now. I’ve been deep into getting the interface library complete. I’ll switch gears and just give you a binary and do the updates today. Let me get back to you in a few hours.

Hi, here’s a new firmware for the H7. It should work for you. Let me know and then this will be in the next release.
firmware.zip (1.16 MB)

Hi,
Thanks for the new firmware. It did work and the model finally got loaded.

Currently, just replacing the model in the person detection example with the custom-trained model and changing the labels results in the constant display of one particular answer in the video feed (I'm working with MNIST).

I will be digging deeper to adapt the example code for image classification. I'm also thinking of trying custom object detection.

Thanks & Regards,
IM

Hi imlearning,

I have the same problem as you.
My model is quantized (int8 input and int8 output).
But I get:

OSError: tensorflow/lite/micro/kernels/quantize.cc:47 input->type == kTfLiteFloat32 || input->type == kTfLiteInt16 was not true.

This problem is not solved with the new firmware uploaded above, either.

My TFLite model is here.

Thanks.


Hi, does that mean you got it working, or was it generating a wrong classification?

MNIST requires black-and-white data, so you should binarize the input.

Always feed the network images that look exactly like what it was trained on. Networks do not really generalize. They just recall.
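Something along these lines on the camera, just as a sketch (the threshold range is illustrative and depends on your lighting and exposure):

import sensor, tf

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)
sensor.set_windowing((128, 128))
sensor.skip_frames(time=2000)

net = tf.load('/mnistconv_int8.tflite')

img = sensor.snapshot()
# MNIST digits are white strokes on a black background, so map dark ink
# to white and everything else to black; (0, 64) is only a starting point.
img.binary([(0, 64)])
out = net.classify(img)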

Hello,

Ahh yes! Thanks for the thoughtful suggestion.

However, I already tried binarizing the input feed, and the output still produces the same result (one particular number, e.g. 'five') no matter what the feed is.
Here’s the code that I used:

import sensor, image, time, os, tf

sensor.reset()                         # Reset and initialize the sensor.
sensor.set_contrast(3)
sensor.set_pixformat(sensor.GRAYSCALE) # Set pixel format to GRAYSCALE.
sensor.set_framesize(sensor.QVGA)      # Set frame size to QVGA (320x240).
sensor.set_windowing((128, 128))       # Set 128x128 window.
sensor.skip_frames(time=100)
sensor.set_auto_gain(False)
sensor.set_auto_exposure(False)

net = tf.load('/mnistconv_int8.tflite')
labels = ["zero","one","two","three","four","five","six","seven","eight","nine"]

clock = time.clock()

while(True):
    clock.tick()
    img = sensor.snapshot()

    # Binarize a copy of the frame before classifying it.
    out = net.classify(img.copy().binary([(0, 1)], invert=True))
    score = max(out[0][4]) * 100
    max_idx = out[0][4].index(max(out[0][4]))
    if (score < 50):
        score_str = "??:??%"
    else:
        score_str = "%s:%d%% "%(labels[max_idx], score)
    img.draw_string(0, 0, score_str)

    print(clock.fps())

I also tried using other model files without changing the training code, but the same behaviour continues at the output, with another number becoming the new fixed answer.

So basically, the output changes when the model file changes, but is constant for a particular model, with slight variations in the scores here and there.

Let me know if the code needs some modifications, or if you have any other suggestions.

Thanks & Regards,
IM