Can't use Tensorflow-Lite for OpenMV

Discussion related to "under the hood" OpenMV topics.
uraibeef
Posts: 18
Joined: Sun Apr 28, 2019 5:56 am

Can't use Tensorflow-Lite for OpenMV

Postby uraibeef » Tue Mar 17, 2020 8:46 am

I've tried to build a number of models based on TFLite for OpenMV, but it shows the error "OSError: Didn't find op for builtin opcode 'XXX' version '2'" [XXX being any operation in https://github.com/openmv/tensorflow/bl ... esolver.cc]

What should I do? And if I can't use it with TensorFlow version 2,
can I use TensorFlow version 1 (i.e. downgrade)? And how?
If not, I'd like to know how long this issue will take to fix, because I have a senior project using this on OpenMV.
Thank you very much, and sorry for asking so many questions.
iabdalkader
Posts: 1168
Joined: Sun May 24, 2015 3:53 pm

Re: Can't use Tensorflow-Lite for OpenMV

Postby iabdalkader » Tue Mar 17, 2020 11:27 am

This just means that your model uses an operator that's not supported by the TF Lite for Microcontrollers port. You need to replace that operator; see the list of supported ops here:
https://github.com/tensorflow/tensorflo ... ver.cc#L21

Please keep in mind that your cam (H7) doesn't have external RAM, so even if you get the model working you may not be able to load it into memory. The built-in person-detector model works on this cam because it is stored in flash.
imlearning
Posts: 16
Joined: Mon Mar 30, 2020 1:58 am

Re: Can't use Tensorflow-Lite for OpenMV

Postby imlearning » Mon Mar 30, 2020 2:20 am

Hi @iabdalkader, the GitHub link you provided mentions that Softmax version 2 is supported; however, I get the following error (as shown in the attached screenshot). Please help.
I trained a small model on MNIST, then converted it to ".tflite" using post-training integer quantization; the quantized file is about 80 kB.
Attachments
Screenshot_2020-03-30 Softmax_op_err png (PNG Image, 1062 × 465 pixels).png
kwagyeman
Posts: 4003
Joined: Sun May 24, 2015 2:10 pm

Re: Can't use Tensorflow-Lite for OpenMV

Postby kwagyeman » Mon Mar 30, 2020 2:31 am

This literally means that TensorFlow Lite from Google doesn't support that opcode yet.

That said, Google might have added support for it. https://github.com/openmv/tensorflow/bl ... esolver.cc

It looks like the latest unreleased code has support for that op. We are about to cut a new firmware version, so it should be possible to run your network if you update the firmware to the latest tip of GitHub. You either have to wait for the release or build the firmware and flash the camera yourself.
Nyamekye,

Re: Can't use Tensorflow-Lite for OpenMV

Postby imlearning » Mon Mar 30, 2020 2:52 am

Hi @kwagyeman, thank you so much for such a quick reply and for clarifying.

I'm ready to build the firmware and flash the camera myself; however, I'm fairly new to all this and have no idea how building and flashing are done. Can you please provide instructions, or a link to instructions, on how to do it? Any help would be highly appreciated.

Also, is there a tentative date for when the next firmware release will be out? Thanks in advance.

Re: Can't use Tensorflow-Lite for OpenMV

Postby kwagyeman » Mon Mar 30, 2020 3:02 am

https://github.com/openmv/openmv/wiki

Probably next week for a new release.
Nyamekye,

Re: Can't use Tensorflow-Lite for OpenMV

Postby imlearning » Thu Apr 02, 2020 4:18 am

Hey, thanks a lot for your reply.

I followed the instructions in the given link and was building the code (using the Hammer button in Qt Creator). However, after a while I ran into the error shown in the screenshot.
I have tried installing mpy-cross with pip and building it from the GitHub repo as mentioned here https://github.com/micropython/micropyt ... -mpy-cross , but the problem persists. Any ideas as to what else I should try?

Thanks
Attachments
mpy_cross_err.png

Re: Can't use Tensorflow-Lite for OpenMV

Postby kwagyeman » Thu Apr 02, 2020 12:59 pm

Ah, yeah, my documentation is out of date. cd into the micropython directory, then into the mpy-cross directory, and run make.
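Assuming a standard checkout where micropython is a submodule of the openmv repository (the paths below are assumptions; adjust to your tree), that step looks like:

```shell
# From the root of an openmv repository checkout (paths assumed from
# the submodule layout; adjust to match your tree):
cd micropython/mpy-cross
make            # builds the mpy-cross bytecode compiler
cd ../..        # go back and re-run the firmware build
```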
Nyamekye,

Re: Can't use Tensorflow-Lite for OpenMV

Postby imlearning » Tue Apr 07, 2020 2:24 am

Hi Nyamekye,

Thanks for the reply. I was able to build the firmware myself and flash it to the OpenMV H7. After performing full integer quantization with TF Lite on a simple MNIST model (per the directions in the official TensorFlow docs), I started getting the error shown in the screenshot.

I followed the solution provided here https://github.com/tensorflow/tensorflo ... -602966745 , and that removed the said error but gave another one: "Input model data type should be 8-bit quantized!", despite the fact that the model is already integer quantized.
I also verified my model using Netron (as mentioned in the linked comment above), and it showed that the inputs/outputs are of integer type (screenshot attached).

I've also tried the latest firmware release, and the same thing happens there too.
What should I do now? Please help. Thanks in advance.

IM
Attachments
new_rel_err1.png
proof_of_quantization.png

Re: Can't use Tensorflow-Lite for OpenMV

Postby kwagyeman » Tue Apr 07, 2020 11:01 am

That error is talking about an internal layer.

The line of code the error occurs on is mentioned. I would look at that line, since this is not our code:

https://github.com/openmv/tensorflow/bl ... ize.cc#L51

Honestly, I'm not sure what's going on here... it looks like that quantize op is one you probably shouldn't use.

...

In other news, because of the lockdown I have a lot more time to work on OpenMV, and I'm getting a lot of work done on integrating easy TensorFlow model training into OpenMV. We will have an external web service that does transfer learning for image-classification models later this year, so no one has to deal with trying to train models using TensorFlow.
Nyamekye,

Re: Can't use Tensorflow-Lite for OpenMV

Postby imlearning » Tue Apr 07, 2020 2:33 pm

Hi Nyamekye,

Thanks for the reply.

As I mentioned, the error I'm facing now is "Input model data type should be 8-bit quantized!" (screenshot attached).
But the model is already integer quantized, and Netron verifies that the inputs/outputs are of integer type (screenshot attached).
So what should I do now to resolve this?

I'm building a simple model trained on MNIST data, as should be clear from the model design as well.
Thanks

IM
Attachments
new_rel_err2.png
proof_of_quantization.png

Re: Can't use Tensorflow-Lite for OpenMV

Postby kwagyeman » Tue Apr 07, 2020 4:04 pm

Hi, sure, but it's not:

https://github.com/openmv/tensorflow-li ... btf.cc#L39

When our code checks your model, it reports that the input vector is not uint8.

Hmm, ah, maybe it's int8, in which case I need to update our TensorFlow code. I can do that; I will do it tomorrow. I have a lot going on today.
Nyamekye,

Re: Can't use Tensorflow-Lite for OpenMV

Postby imlearning » Wed Apr 08, 2020 4:25 am

Hi,

Thanks for the reply. Yes, it is int8; and sure, I'll wait for the updated code.
Also, please take care of yourself during these times and don't overload yourself with work. :)

Regards,
IM

Re: Can't use Tensorflow-Lite for OpenMV

Postby imlearning » Tue Apr 14, 2020 8:12 am

Hi,

Hope everyone is healthy.

Circling back to get an update on the issue and the TF code?

Stay Safe.
Regards,
IM

Re: Can't use Tensorflow-Lite for OpenMV

Postby kwagyeman » Tue Apr 14, 2020 10:34 am

Let me do this now. I've been deep in getting the interface library complete. I'll switch gears, just give you a binary, and do the updates today. Let me get back to you in a few hours.
Nyamekye,

Re: Can't use Tensorflow-Lite for OpenMV

Postby kwagyeman » Tue Apr 14, 2020 11:48 pm

Hi, here's a new firmware for the H7. It should work for you. Let me know and then this will be in the next release.
Attachments
firmware.zip
(1.16 MiB) Downloaded 38 times
Nyamekye,

Re: Can't use Tensorflow-Lite for OpenMV

Postby imlearning » Thu Apr 16, 2020 1:31 am

Hi,
Thanks for the new firmware. It did work, and the model finally got loaded.

Currently, just replacing the model in the person-detection example with the custom-trained model and changing the labels results in one particular answer being displayed constantly in the video feed (I'm working with MNIST).

I will dig deeper to adapt the example code for image classification. I'm also thinking of trying out custom object detection as well.

Thanks & Regards,
IM

Re: Can't use Tensorflow-Lite for OpenMV

Postby kwagyeman » Thu Apr 16, 2020 10:42 am

Hi, does that mean you got it working, or was it generating a wrong classification?

MNIST requires black-and-white data, so you should binarize the input.

Always feed the network images that look exactly like what it was trained on. Networks don't really generalize; they just recall.
Nyamekye,

Re: Can't use Tensorflow-Lite for OpenMV

Postby imlearning » Fri Apr 17, 2020 1:38 am

Hello,

Ahh yes! thanks for the thoughtful suggestion.

However, I already tried binarizing the input feed, and the output generates the same result (say, one particular number, e.g. 'five') no matter what the feed is.
Here's the code that I used:

Code: Select all

import sensor, image, time, os, tf

sensor.reset()                         # Reset and initialize the sensor.
sensor.set_contrast(3)
sensor.set_pixformat(sensor.GRAYSCALE) # Set pixel format to GRAYSCALE.
sensor.set_framesize(sensor.QVGA)      # Set frame size to QVGA (320x240).
sensor.set_windowing((128, 128))       # Set 128x128 window.
sensor.skip_frames(time=100)
sensor.set_auto_gain(False)
sensor.set_auto_exposure(False)

net = tf.load('/mnistconv_int8.tflite')
labels = ["zero","one","two","three","four","five","six","seven","eight","nine"]

clock = time.clock()

while(True):
    clock.tick()
    img = sensor.snapshot()

    out = net.classify(img.copy().binary([(0, 1)], invert=True))
    score = max(out[0][4]) * 100
    max_idx = out[0][4].index(max(out[0][4]))
    if (score < 50):
        score_str = "??:??%"
    else:
        score_str = "%s:%d%% "%(labels[max_idx], score)
    img.draw_string(0, 0, score_str)

    print(clock.fps())


I also tried other model files without changing the training code, but the same behavior continues, with another number becoming the fixed output.

So basically, the output changes with the model file, but is constant for a particular model, with slight variation in the scores here and there.

Let me know if the code needs some modification, or if you have any other suggestions.

Thanks & Regards,
IM

Re: Can't use Tensorflow-Lite for OpenMV

Postby kwagyeman » Fri Apr 17, 2020 11:15 am

Your code is completely wrong... Please read what the network outputs. I will post an update with changes to your code in a bit.
Nyamekye,

Re: Can't use Tensorflow-Lite for OpenMV

Postby imlearning » Mon Apr 20, 2020 3:10 am

Hi,
Yes, I developed that code based on the network output I was getting.

For example, printing just the "out" variable in the above code gives output like:
[{"x":0, "y":0, "w":128, "h":128, "output":[0.5294118, 0.8745098, 0.627451, 0.5803922, 0.5254902, 0.682353, 0.5372549, 0.6352942, 0.5058824, 0.5294118]}]
The code with its outputs is attached below.
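For what it's worth, the per-class scores in that vector reduce to a label with plain list operations; a minimal sketch in desktop Python, using a hardcoded copy of the printed vector rather than a live net.classify() call:

```python
# Sketch: reduce a classification output vector to (label, score).
# The vector is copied from the printed output above; on the camera it
# would come from the detection object returned by net.classify().
labels = ["zero", "one", "two", "three", "four",
          "five", "six", "seven", "eight", "nine"]

output = [0.5294118, 0.8745098, 0.627451, 0.5803922, 0.5254902,
          0.682353, 0.5372549, 0.6352942, 0.5058824, 0.5294118]

max_idx = output.index(max(output))  # index of the highest score
score = output[max_idx] * 100        # as a percentage

print(labels[max_idx], "%.1f%%" % score)  # prints: one 87.5%
```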

But sure, I'll be happy to learn from your updated code.
Thanks in advance.

Regards,
IM
Attachments
Screenshot from 2020-04-20 12-39-32.png

Re: Can't use Tensorflow-Lite for OpenMV

Postby kwagyeman » Mon Apr 20, 2020 3:25 am

Hi, another user brought up an issue, and the problem is likely related to the fact that I don't do uint8_t -> int8_t input/output conversion. Right now the image is fed to the net as a uint8_t image.

Can you tell me what type of image data you trained on and what you expect the output to be? I.e., is the image input -128 to 127, or 0 to 127? And is the output -128 to 127, or 0 to 127? I have to fix this in the C code for things to work.

As for your code being wrong:

Code: Select all

out = net.classify(img.copy().binary([(0, 1)], invert=True))
score = max(out[0][4]) * 100
max_idx = out[0][4].index(max(out[0][4]))
if (score < 50):
    score_str = "??:??%"
else:
    score_str = "%s:%d%% "%(labels[max_idx], score)
img.draw_string(0, 0, score_str)

I was just thrown off by this. I think this is from some other, older code. The TensorFlow example doesn't show how to print class labels this way.
Nyamekye,

Re: Can't use Tensorflow-Lite for OpenMV

Postby imlearning » Mon Apr 20, 2020 8:03 am

Hi,
I trained on the MNIST images (uint8), then quantized with full integer quantization, making the inputs and outputs int8 (screenshot attached).
Attachments
Screenshot from 2020-04-20 17-13-03.png

Re: Can't use Tensorflow-Lite for OpenMV

Postby kwagyeman » Mon Apr 20, 2020 11:46 am

Okay, I'll make a version of the firmware that fixes the offset and post it tonight. This should fix the problem.

The firmware will check the network input type and then fix the data signedness automatically.

...

That said, it would be helpful to know the range if possible. I'm going to assume I just need to subtract 128 on the input and add 128 on the output. I hope that's correct.
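A plain-Python sketch of that offset conversion (the zero point of 128 is an assumption from this discussion; the real fix lives in the C firmware):

```python
# Sketch of uint8 <-> int8 offset conversion for quantized model I/O.
# Assumption (from the discussion): the int8 tensors use an offset of
# 128, so uint8 pixel - 128 gives the int8 input, and int8 output
# + 128 recovers a 0-255 value.

def uint8_to_int8(pixels):
    """Convert 0..255 image data to -128..127 model input."""
    return [p - 128 for p in pixels]

def int8_to_uint8(values):
    """Convert -128..127 model output back to 0..255."""
    return [v + 128 for v in values]

print(uint8_to_int8([0, 128, 255]))   # [-128, 0, 127]
print(int8_to_uint8([-128, 0, 127]))  # [0, 128, 255]
```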
Nyamekye,

Re: Can't use Tensorflow-Lite for OpenMV

Postby imlearning » Tue Apr 21, 2020 2:27 am

Hi,
The scikit-image website says the range of int8 is -128 to 127.
However, I think your approach should work. A similar thing is suggested in this comment too: https://github.com/tensorflow/tensorflo ... -602962764
Attachments
Screenshot from 2020-04-21 09-33-29.png

Re: Can't use Tensorflow-Lite for OpenMV

Postby kwagyeman » Wed Apr 22, 2020 2:30 pm

Hi, try this binary out. It subtracts 128 from the input unsigned image data if the model input is signed, and adds 128 to the output if the model output is signed, to make it unsigned again. The input and output are handled independently, so you are free to mix and match.

I've verified that our unsigned person detector network works.

...

Please let me know if this works and then I will send in the branch to be merged.
Attachments
firmware.zip
(1.16 MiB) Downloaded 41 times
Nyamekye,

Re: Can't use Tensorflow-Lite for OpenMV

Postby imlearning » Thu Apr 23, 2020 3:05 pm

Hi,
Thanks for providing the binary. I gave it a try, but the same situation continues. The outputs still follow the same pattern I mentioned earlier:
the output generates the same result (say, one particular number, e.g. 'five') no matter what the feed is...
So basically, the output changes with the model file, but is constant for a particular model, with slight variation in the scores here and there.
I am attaching the full integer quantized model with this.
Attachments
mnistconv_int8.zip
(62.12 KiB) Downloaded 35 times

Re: Can't use Tensorflow-Lite for OpenMV

Postby kwagyeman » Thu Apr 23, 2020 3:12 pm

Okay, I need this from you: can you provide the final model output and the training dataset you used for it? If I can load these images onto the camera via the SD card, then I can actually debug this and get it working.

I won't be able to debug your model unless I know what type of input it needs.
Nyamekye,

Re: Can't use Tensorflow-Lite for OpenMV

Postby imlearning » Fri Apr 24, 2020 3:47 am

Okay, here's a Jupyter notebook that trains the model and performs full integer quantization.

Open it with Google Colaboratory and run it in "Playground" mode.

Running the entire notebook (by clicking "Runtime" -> "Run all" in Colab) will get you the final model (".tflite") output file, which you can easily download from the "Files" section on the left side of Colab.

The dataset is the built-in MNIST dataset that TensorFlow/Keras provides.

Link: https://drive.google.com/file/d/1eiZlf4 ... sp=sharing

Re: Can't use Tensorflow-Lite for OpenMV

Postby imlearning » Wed Apr 29, 2020 2:11 am

Hey,

Just circling back to see if there are any updates on this?

Regards,
IM

Re: Can't use Tensorflow-Lite for OpenMV

Postby kwagyeman » Wed Apr 29, 2020 11:35 am

I'm focused on the interface library right now. Since the simple fix didn't work, this has to become a project for me now.

Question: did you try binarizing the image (using actual thresholds, not just calling to_bitmap()) and feeding that to the net? Previously, on our old CNN system, you had to binarize the image and then invert it to get things to work.

I.e., the network needs the background white and the characters black, so this requires an inverted binarize operation.
Nyamekye,

Re: Can't use Tensorflow-Lite for OpenMV

Postby imlearning » Thu May 07, 2020 9:29 am

Ahh, alright; then it might take a while, I assume?

And yes, I did try the image with a white background and black characters (and vice versa).

I feel it may not just be about the input images or the training dataset, because as soon as the video begins, the same result [a particular number (class)] starts getting displayed, and it never changes no matter what the input is.

I hope you all are safe
Wishing you good health,
IM

Re: Can't use Tensorflow-Lite for OpenMV

Postby kwagyeman » Thu May 07, 2020 10:19 am

Hi, I'll be able to start working on this next week. I have to finish the documentation for the latest firmware release and get that out to everyone.

We have a goal of integrating with Edge Impulse soon for automatic deep-learning training, so everything with TensorFlow needs to be working for the release coming next month, after the one we are about to do.
Nyamekye,

Re: Can't use Tensorflow-Lite for OpenMV

Postby kwagyeman » Thu May 14, 2020 9:00 pm

Hi, I've updated the firmware to support int8/uint8/float32.

The model still needs to be int8/uint8 quantized internally, but float32 is supported as the input/output layer.

It was tested on MobileNet (uint8), the person detector (uint8), another person detector (int8), and a flower model (float32 with int8 internally).

This file is for the H7 Plus. Let me make one for the H7.
Attachments
firmware.zip
(1.18 MiB) Downloaded 11 times
Nyamekye,

Re: Can't use Tensorflow-Lite for OpenMV

Postby kwagyeman » Thu May 14, 2020 9:12 pm

H7 Firmware
Attachments
firmware.zip
(1.17 MiB) Downloaded 17 times
Nyamekye,

Re: Can't use Tensorflow-Lite for OpenMV

Postby kwagyeman » Thu May 14, 2020 9:13 pm

Regarding your model: please try a dataset where the input data is a series of color images, and verify that your model works okay before trying MNIST.

We are getting things ready for Edge Impulse support, so we expect TensorFlow support to work smoothly moving forwards.
Nyamekye,
shawn.lee
Posts: 4
Joined: Tue May 19, 2020 6:24 am

Re: Can't use Tensorflow-Lite for OpenMV

Postby shawn.lee » Tue May 19, 2020 7:27 am

Hi,

I updated to the H7 firmware that kwagyeman released on Fri May 15, 2020.

I followed the steps to prepare the TensorFlow Lite model for MNIST:
https://www.tensorflow.org/lite/perform ... eger_quant
Part of the training code is below.
I modified some of the code for the channel dimension.

Code: Select all

import numpy as np
import tensorflow as tf
from tensorflow import keras

mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

# Normalize the input images so that each pixel value is between 0 and 1.
train_images = train_images / 255.0
test_images = test_images / 255.0

# Add the channel dimension: (N, 28, 28) -> (N, 28, 28, 1).
train_images = np.expand_dims(train_images, axis=3)
test_images = np.expand_dims(test_images, axis=3)

# Define the model architecture.
model = tf.keras.Sequential([
  tf.keras.layers.InputLayer(input_shape=(28, 28, 1)),
  tf.keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation=tf.nn.relu),
  tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
  tf.keras.layers.Flatten(),
  tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])

.......................................
Everything is fine on my PC.
I tried these models on the OpenMV H7.
What should I set the image range to? 0-255 or 0-1?

Code: Select all

import sensor, image, time, os, tf

sensor.reset()  # Reset and initialize the sensor.
sensor.set_contrast(3)
sensor.set_pixformat(sensor.GRAYSCALE)  # Set pixel format to GRAYSCALE.
sensor.set_framesize(sensor.QVGA)  # Set frame size to QVGA (320x240).
sensor.set_windowing((28, 28))  # Set 28x28 window.
sensor.skip_frames(time=100)
sensor.set_auto_gain(False)
sensor.set_auto_exposure(False)

#net = tf.load('/mnist_model_quant.tflite')
net = tf.load('/mnist_model_quant_io.tflite')
clock = time.clock()

while (True):
    clock.tick()
    img = sensor.snapshot()
    out = net.classify(img.copy().binary([(0, 1)], invert=True))  # is this the right preprocessing?
    print(out)
Thanks.

Re: Can't use Tensorflow-Lite for OpenMV

Postby kwagyeman » Wed May 20, 2020 1:40 am

It should be 0-255. However, you need to match the color of the binary data with MNIST. So, if it's black on white, then 0 and 255; if it's white on black, then 255 and 0.
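A pure-Python illustration of that matching step (the function name and threshold value are made up for the example; on the camera you would use img.binary() instead):

```python
# Sketch: threshold grayscale pixels to 0/255, optionally inverting so
# the foreground/background colors match what the network trained on.

def binarize(pixels, threshold=128, invert=False):
    out = []
    for p in pixels:
        bit = p >= threshold  # True for bright pixels
        if invert:
            bit = not bit
        out.append(255 if bit else 0)
    return out

# Dark digit on a bright background...
print(binarize([10, 200, 30, 250]))               # [0, 255, 0, 255]
# ...inverted to white-on-black for a net trained that way.
print(binarize([10, 200, 30, 250], invert=True))  # [255, 0, 255, 0]
```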
Nyamekye,

Re: Can't use Tensorflow-Lite for OpenMV

Postby shawn.lee » Wed May 20, 2020 7:36 am

Thanks for your reply.

Here is the test code on my PC:

Code: Select all

import numpy as np
import tensorflow as tf

img = np.ones((28, 28, 1), dtype=np.float32)
white_img = img * 255
test_image = np.expand_dims(white_img, axis=0).astype(np.float32)

tflite_model_quant_file = tflite_models_dir/"mnist_model_quant_io.tflite"
interpreter_quant = tf.lite.Interpreter(model_path=str(tflite_model_quant_file))
interpreter_quant.allocate_tensors()
input_index_quant = interpreter_quant.get_input_details()[0]["index"]
output_index_quant = interpreter_quant.get_output_details()[0]["index"]
interpreter_quant.set_tensor(input_index_quant, test_image)
interpreter_quant.invoke()
predictions = interpreter_quant.get_tensor(output_index_quant)
print('output:', predictions[0])

output: [0. 0. 0.0625 0.8515625 0. 0.078125 0. 0. 0. 0.]

Test code on the H7:

Code: Select all

import sensor, image, time, os, tf

sensor.reset()  # Reset and initialize the sensor.
sensor.set_contrast(3)
sensor.set_pixformat(sensor.GRAYSCALE)  # Set pixel format to GRAYSCALE.
sensor.set_framesize(sensor.QVGA)  # Set frame size to QVGA (320x240).
sensor.set_windowing((28, 28))  # Set 28x28 window.
sensor.skip_frames(time=100)
sensor.set_auto_gain(False)
sensor.set_auto_exposure(False)

net = tf.load('/mnist_model_quant_io.tflite')
clock = time.clock()

while (True):
    clock.tick()
    myImage = image.Image("white.ppm", copy_to_fb = True)
    graytmp = myImage.to_grayscale(True, rgb_channel=0)
    out = net.classify(graytmp)
    print(out)

The output is as below:
[{"x":0, "y":0, "w":28, "h":28, "output":[0.5019608, 0.5019608, 0.5369792, 0.9202359, 0.5019608, 0.544761, 0.5019608, 0.5019608, 0.5019608, 0.5019608]}]
[{"x":0, "y":0, "w":28, "h":28, "output":[0.5019608, 0.5019608, 0.5369792, 0.9202359, 0.5019608, 0.544761, 0.5019608, 0.5019608, 0.5019608, 0.5019608]}]
[{"x":0, "y":0, "w":28, "h":28, "output":[0.5019608, 0.5019608, 0.5369792, 0.9202359, 0.5019608, 0.544761, 0.5019608, 0.5019608, 0.5019608, 0.5019608]}]
[{"x":0, "y":0, "w":28, "h":28, "output":[0.5019608, 0.5019608, 0.5369792, 0.9202359, 0.5019608, 0.544761, 0.5019608, 0.5019608, 0.5019608, 0.5019608]}]
[{"x":0, "y":0, "w":28, "h":28, "output":[0.5019608, 0.5019608, 0.5369792, 0.9202359, 0.5019608, 0.544761, 0.5019608, 0.5019608, 0.5019608, 0.5019608]}]
[{"x":0, "y":0, "w":28, "h":28, "output":[0.5019608, 0.5019608, 0.5369792, 0.9202359, 0.5019608, 0.544761, 0.5019608, 0.5019608, 0.5019608, 0.5019608]}]
[{"x":0, "y":0, "w":28, "h":28, "output":[0.5019608, 0.5019608, 0.5369792, 0.9202359, 0.5019608, 0.544761, 0.5019608, 0.5019608, 0.5019608, 0.5019608]}]
[{"x":0, "y":0, "w":28, "h":28, "output":[0.5019608, 0.5019608, 0.5369792, 0.9202359, 0.5019608, 0.544761, 0.5019608, 0.5019608, 0.5019608, 0.5019608]}]

Why are they so different? What does 0.5019608 mean?
Shouldn't the sum be 1?
Attachments
white.7z
white.ppm
(207 Bytes) Downloaded 1 time

Re: Can't use Tensorflow-Lite for OpenMV

Postby kwagyeman » Wed May 20, 2020 11:18 pm

That's only if you softmax the output; our code doesn't do that for you. We just get 0-255 out from the library, divide by 255, and that's the float result.
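For illustration, here is the standard softmax applied to the raw vector printed on the H7, computed in plain desktop Python:

```python
import math

# Raw (already divided-by-255) scores printed by the H7 above.
raw = [0.5019608, 0.5019608, 0.5369792, 0.9202359, 0.5019608,
       0.544761, 0.5019608, 0.5019608, 0.5019608, 0.5019608]

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(raw)
print(probs)
print(sum(probs))               # sums to 1.0 (up to float rounding)
print(probs.index(max(probs)))  # 3, the same winning class as on the PC
```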

Can you attach the trained model for me to run, along with one or two images? Thanks,
Nyamekye,
shawn.lee
Posts: 4
Joined: Tue May 19, 2020 6:24 am

Re: Can't use Tensorflow-Lite for OpenMV

Postby shawn.lee » Thu May 21, 2020 4:24 am

Yes, thank you.
I don't know how to create a 28x28x1 white image in code (like np.ones()*255); if any code could make it, please let me know. So I read white.ppm from flash.
I already checked that every pixel is exactly 255 in value.

I attached the models and a white image. The code is below:

Code: Select all

import sensor, image, time, tf

sensor.reset()
sensor.set_contrast(3)
sensor.set_pixformat(sensor.GRAYSCALE)  # Set pixel format to GRAYSCALE.
sensor.set_framesize(sensor.QVGA)  # Set frame size to QVGA (320x240).
sensor.set_windowing((28, 28))  # Set 28x28 window.
sensor.skip_frames(time=100)
sensor.set_auto_gain(False)
sensor.set_auto_exposure(False)

net = tf.load('/mnist_model_quant_io.tflite')
clock = time.clock()

while (True):
    clock.tick()
    myImage = image.Image("white.ppm", copy_to_fb = True)

    graytmp = myImage.to_grayscale(True, rgb_channel=0)
    #print('graytmp', graytmp)
    #for y in range(0, 28):
        #str_value = ''
        #for x in range(0, 28):
            #str_value += str(graytmp.get_pixel(x, y)) + ' '
        #print(str_value)
    #print('-------------')
    out = net.classify(graytmp)
    print(out)
Attachments
model and white ppm.7z
(17.31 KiB) Downloaded 5 times

Re: Can't use Tensorflow-Lite for OpenMV

Postby kwagyeman » Sun May 24, 2020 12:44 am

Hi, I was able to run your model.

The model gets the correct result. But, the TensorFlow vector output is not the same.

RAW Output = [0.5019608, 0.5019608, 0.5369792, 0.9202359, 0.5019608, 0.544761, 0.5019608, 0.5019608, 0.5019608, 0.5019608]

After SoftMax = [0.09435112, 0.09435112, 0.09771368, 0.1433513, 0.09435112, 0.09847704, 0.09435112, 0.09435112, 0.09435112, 0.09435112] -> Sums to 1.0

If we look at your result: [0. 0. 0.0625 0.8515625 0. 0.078125 0. 0. 0. 0.]

Then compare to the original output:

0.5019608 * 255 == 128.0 -> which is 0 in int8 math; / 128 -> 0
0.5369792 * 255 == 136.9 -> which is 8.9 in int8 math; / 128 -> 0.070
0.9202359 * 255 == 234.7 -> which is 106.7 in int8 math; / 128 -> 0.833
0.544761 * 255 == 138.9 -> which is 10.9 in int8 math; / 128 -> 0.085

We can then see that the result closely matches. However, there is an issue with the input and output scaling logic in our code.

It's never been clear with TensorFlow what the input and output scales are... so thanks for testing the code. I should be able to fix my code to make this right. Everyone I ask at Google gives a different answer for what the network inputs and outputs should be when using float/int8/uint8. It's quite confusing.
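That chain of arithmetic can be checked directly in plain Python (the 128 offset and the /128 output scale are assumptions taken from this post, not verified against the model):

```python
# Reproduce the dequantization check from the post: the 0-1 float that
# the firmware prints corresponds to a raw uint8 value, which maps to
# int8 (offset of 128 assumed) and back to a float via /128.

def firmware_float_to_int8_float(f):
    raw_uint8 = f * 255.0   # undo the firmware's /255
    as_int8 = raw_uint8 - 128.0  # reinterpret with the assumed offset
    return as_int8 / 128.0       # output scale assumed from the post

for f in (0.5019608, 0.5369792, 0.9202359, 0.544761):
    print(f, "->", firmware_float_to_int8_float(f))
```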
Nyamekye,

Re: Can't use Tensorflow-Lite for OpenMV

Postby kwagyeman » Sun May 24, 2020 1:52 am

Nyamekye,

Re: Can't use Tensorflow-Lite for OpenMV

Postby kwagyeman » Sun May 24, 2020 2:03 am

New H7 Firmware.
Attachments
firmware.zip
(1.18 MiB) Downloaded 6 times
Nyamekye,

Re: Can't use Tensorflow-Lite for OpenMV

Postby shawn.lee » Sun May 24, 2020 11:17 pm

Thanks. The new firmware outputs the right vector values.
