[Ask for help] Issues with the tf module on OpenMV H7 Plus

Hi folks,

I recently got an OpenMV H7 Plus board and built a MobileNetV2 model for detecting birds, but I've run into some issues. Could you kindly give me some hints?

  1. How are the official MobileNet models produced?

I’ve noticed that the IDE offers some pretrained, quantized MobileNet models in the CNN network library, but when I compare them to the original paper I see some differences: the ReLU activations and batch normalization layers have been removed entirely.
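For reference, here is one way to inspect what actually survives conversion (a rough sketch; the filename is just a placeholder, and my guess is that the converter folds BatchNorm into the adjacent convolutions and fuses the activations):

import tensorflow as tf

# List the tensors in a converted .tflite file; layers that were folded or
# fused away (e.g. BatchNorm merged into the preceding conv) won't appear.
interpreter = tf.lite.Interpreter(model_path="mobilenet_v2.tflite")
interpreter.allocate_tensors()
for t in interpreter.get_tensor_details():
    print(t["index"], t["name"], t["shape"], t["dtype"])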

  2. Is there any example script for converting a TF model into a tflite file suitable for the OpenMV H7 Plus?

Also, I’ve implemented the model myself and followed the official TFLite docs (Post-training integer quantization | TensorFlow Lite) to convert it, but failed. I’ve tried tf2.2 (the runtime version used by MicroPython), tf2.3 (the first version with integer-only quantization support), tf2.4 (the latest release), and tf-nightly (with tensorflow_model_optimization support).

In tf2.2, the key script lines are as follows:


model = tf.keras.models.load_model(PATH)  # load the Keras model
converter = tf.lite.TFLiteConverter.from_keras_model(model)  # create the converter
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # debugging shows this is the line that causes the crash
tflite_model = converter.convert()  # run the conversion
...

It can export the model, but the IDE then reports the error “Hybrid models are not supported on TFLite Micro.” (With Optimize.DEFAULT and no representative dataset, the converter apparently produces a dynamic-range “hybrid” model: int8 weights with float32 activations, which TFLite Micro rejects.)

If converter.optimizations is removed, the exported tflite model does run on the board, though slowly (about 0.5 fps).

To overcome this, I applied full int8 quantization with tf2.3:


model = tf.keras.models.load_model(PATH)  # load the Keras model
converter = tf.lite.TFLiteConverter.from_keras_model(model)  # create the converter
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # again the line that causes the crash
converter.representative_dataset = representative_dataset  # calibration dataset (see sketch below)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]  # restrict to int8 ops
converter.inference_input_type = tf.int8  # int8 quantized input
converter.inference_output_type = tf.int8  # int8 quantized output
tflite_model = converter.convert()  # run the conversion
...
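For completeness, representative_dataset above is just a calibration-data generator. A rough sketch of one (the directory, the 96x96 input size, and the [0, 1] scaling are assumptions here, so match them to your own training pipeline):

import numpy as np
import tensorflow as tf

def representative_dataset():
    # Yield a few hundred preprocessed samples that match the training input.
    for path in tf.io.gfile.glob("calib_images/*.jpg")[:200]:
        img = tf.io.decode_jpeg(tf.io.read_file(path), channels=3)
        img = tf.image.resize(img, (96, 96)) / 255.0
        yield [np.expand_dims(img.numpy().astype(np.float32), 0)]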

This raises a new exception, “tensorflow/lite/micro/kernels/reduce.cc Currently, only float32 input type is supported.”, where the GlobalAveragePooling2D layer is what introduces the conflicting (MEAN) op.
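One workaround I’ve seen suggested (untested on this board, so treat it as a sketch) is to replace GlobalAveragePooling2D with an explicit AveragePooling2D over the full feature map, which should convert to the AVERAGE_POOL_2D op instead of MEAN:

import tensorflow as tf
from tensorflow.keras import layers

# Sketch: build the classifier head with AveragePooling2D + Flatten instead
# of GlobalAveragePooling2D; the 96x96 input and 2 classes are assumptions.
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights=None)
x = layers.AveragePooling2D(pool_size=base.output_shape[1:3])(base.output)
x = layers.Flatten()(x)
outputs = layers.Dense(2, activation="softmax")(x)
model = tf.keras.Model(base.input, outputs)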

  3. The only workable model gives a fixed, wrong output

With the workable model from tf2.2 (no optimizations), the model on the board always outputs 1 (I define 0 for bird and 1 for non-bird), yet the same model tested on the PC works perfectly. I’m wondering whether the input format (RGB565) causes this, since the training input is RGB888. I’ve also tested using sensor.JPEG as the input, but it’s incompatible with tf.classify.
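A quick way to test that hypothesis on the PC (a rough sketch; the board’s actual capture pipeline may differ) is to round-trip the test images through RGB565 precision before feeding them to the model:

import numpy as np

def rgb565_roundtrip(img_rgb888):
    # Simulate RGB565 capture: truncate to 5/6/5 bits per channel,
    # then expand back to 8 bits so the model input shape is unchanged.
    img = img_rgb888.astype(np.uint8)
    r = (img[..., 0] >> 3) << 3
    g = (img[..., 1] >> 2) << 2
    b = (img[..., 2] >> 3) << 3
    return np.stack([r, g, b], axis=-1)

If accuracy on the PC collapses on rgb565_roundtrip(x) but not on x, the color format is the likely culprit.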

Attached are the workable tflite file, the original h5 file, and the corresponding MicroPython script: OpenMV_share - Google Drive

Thanks in advance for your help. Looking forward to hearing from you.

Leo


Hi, we basically just work with models created by Edge Impulse.

Dealing with TensorFlow model conversion is not something I have tooling set up to help with. TensorFlow Lite has a lot of layer limitations.

Is it possible for you to use Edge Impulse to train your model? https://www.edgeimpulse.com/

Thanks for the quick reply!

I’m planning to extend the model with custom testing, so I need control over the ops inside the model. That’s why I use TensorFlow instead of Edge Impulse; I don’t want to make it a black box.

So is there any other possible solution? I think figuring out the pipeline from building a TF model to converting it into tflite could also help other OpenMV developers.

Thanks in advance!

I have to second Kurileo’s concerns.
TF Lite is the first feature advertised on the product page. Although Edge Impulse is powerful, it is not an open-source tool (unlike OpenMV), so better support for native TF Lite workflows would be highly appreciated.

I’m not really an expert on what you need to do. TF is… complex.

Just to be clear: TensorFlow Lite for Microcontrollers is quite buggy out of the box. CMSIS-NN did not work for the first year, until the Edge Impulse folks found and fixed several very bad bugs in the code where array bounds were violated.

So… if you are under the assumption that you are doing something wrong, that is not necessarily the case. The TensorFlow library is probably just buggy. Given this, are you willing to dive into the firmware and try fixing the TensorFlow library?

Also, I haven’t updated that code in a while, since things started working. Maybe the code base has been updated to support what you need.

Build and convert models | TensorFlow Lite ?

Indeed, I’ve found this too :rofl:… (I’m also a contributor to the TensorFlow repo and a GDE in ML.)

I’ve taken a look at the source code of the TensorFlow fork OpenMV’s MicroPython uses (GitHub - openmv/tensorflow: An Open Source Machine Learning Framework for Everyone) and found it’s based on tf2.2; some of the issues I’ve hit were fixed in the tf2.3 and tf2.4 releases (RuntimeError: Inputs and outputs not all float|uint8|int16 types. Node number 2 (ADD) failed to invoke. · Issue #37099 · tensorflow/tensorflow · GitHub and Unsupported Full-Integer TensorFlow Lite models in TF 2 · Issue #38285 · tensorflow/tensorflow · GitHub).

I think an update to the latest stable release would be a great help, and I can cc any issues (and complaints, too) directly to Google and the TensorFlow team.

Also, many thanks for your assistance!

Also, I’ve tried it, and it raises exceptions.

When converter.optimizations = [tf.lite.Optimize.DEFAULT] is set, it raises “Hybrid models are not supported”.

When converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8] and the following lines are also set, it raises “Currently, only float32 input type is supported.”

Okay, we can check out the latest fork that Edge Impulse says is stable… rebuild the tflite library and then link it against our firmware again. Please file an issue on our GitHub bug tracker.


Thanks a lot! I will test the model as soon as it’s available.

Once it works, I’d also like to share the conversion steps as a guide.

Hi all,
I tested converting with both tf2.5 (which offers MLIR support and reached rc0 this morning) and tf2.6 (the latest nightly) yesterday; it still raises OSError: tensorflow/lite/micro/kernels/reduce.cc Currently, only float32 input type is supported. Node MEAN (number 67) failed to invoke with status 1.

The conversion script is shown below; converter.optimizations = [tf.lite.Optimize.DEFAULT] is again what triggers the crash.

converter = tf.lite.TFLiteConverter.from_saved_model(PATH)  # load from a SavedModel this time
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # the line that triggers the crash
converter.representative_dataset = representative_dataset  # calibration dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]  # restrict to int8 ops
converter.inference_input_type = tf.int8  # int8 quantized input
converter.inference_output_type = tf.int8  # int8 quantized output
tflite_model = converter.convert()  # run the conversion
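One extra sanity check before flashing (a rough sketch, reusing the tflite_model bytes from the script above): load the result with the PC interpreter and confirm the input/output tensors really are int8:

import tensorflow as tf

# tflite_model holds the converted flatbuffer bytes from the script above.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
print(interpreter.get_input_details()[0]["dtype"])   # expect numpy int8
print(interpreter.get_output_details()[0]["dtype"])  # expect numpy int8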

Hope this information helps,

Leo

Hello, could you please share the status of the upgrade? When will it land?

Hi, we will be updating the IDE heavily soon, and then Edge Impulse support in the firmware.

That’s great!