Recently I took an OpenMV H7 Plus board and built a MobileNetV2 model for detecting birds, but I ran into some issues. Could you kindly give some hints?
- How are the official MobileNet models produced?

I've noticed that the IDE offers some pretrained, quantized MobileNet models in the CNN network library, but when I compare them to the original paper I find some differences: the ReLU activation and batch-normalization layers are completely removed.
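My current guess is that those layers are not dropped but fused away by the converter (BatchNorm folded into the preceding conv weights, ReLU fused into the op's activation). Here's a small self-contained check I used to convince myself, on a tiny stand-in model (not the actual bird model): the layers disappear from the TFLite graph, yet the outputs still match.

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in model: Conv2D -> BatchNorm -> ReLU (not the real bird model)
inp = tf.keras.Input(shape=(8, 8, 3))
x = tf.keras.layers.Conv2D(4, 3, padding="same")(inp)
x = tf.keras.layers.BatchNormalization()(x)
out = tf.keras.layers.ReLU()(x)
model = tf.keras.Model(inp, out)

# Plain float conversion, no optimizations
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Run both on the same random input
x_in = np.random.rand(1, 8, 8, 3).astype(np.float32)
keras_out = model(x_in).numpy()  # inference mode, BN uses moving stats

interp = tf.lite.Interpreter(model_content=tflite_model)
interp.allocate_tensors()
interp.set_tensor(interp.get_input_details()[0]["index"], x_in)
interp.invoke()
tflite_out = interp.get_tensor(interp.get_output_details()[0]["index"])

# Outputs agree even though the TFLite graph has no separate BN/ReLU ops
print(np.max(np.abs(keras_out - tflite_out)))
```

If that reasoning is right, the models in the CNN library would be functionally equivalent to the paper's architecture despite the missing layers.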
- Is there any showcase script for converting a TF model into a TFLite model suitable for the OpenMV H7 Plus?

Also, I've implemented the model and followed the official TFLite [docs](Post-training integer quantization | TensorFlow Lite) to convert it, but failed. I've tried tf2.2 (the runtime version of MicroPython), tf2.3 (the first version with integer-only quantization support), tf2.4 (the latest release), and tf-nightly (with tensorflow_model_optimization support).
In tf2.2, the key script lines are as below:

```python
model = tf.keras.models.load_model(PATH)                     # load keras model
converter = tf.lite.TFLiteConverter.from_keras_model(model)  # init converter
converter.optimizations = [tf.lite.Optimize.DEFAULT]         # debugging shows this is the line that causes the crash
tflite_model = converter.convert()                           # do conversion
...
```
It can export the model, but the IDE reports the error "Hybrid models are not supported on TFLite Micro."

If `converter.optimizations` is removed, the output tflite model works on the board, but slowly (~0.5 fps).
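If I understand the error correctly, `Optimize.DEFAULT` on its own (with no representative dataset) produces dynamic-range ("hybrid") quantization: int8 weights with float32 activations, which is what TFLite Micro rejects. A minimal check on a toy dense model (hypothetical, not my actual network) shows both dtypes in the resulting graph:

```python
import numpy as np
import tensorflow as tf

# Toy model; the Dense layer has 64*64 = 4096 weights, enough to be
# quantized by dynamic-range quantization (small layers can be skipped)
inp = tf.keras.Input(shape=(64,))
out = tf.keras.layers.Dense(64)(inp)
model = tf.keras.Model(inp, out)

# Optimize.DEFAULT alone (no representative dataset) => dynamic-range
# ("hybrid") quantization: int8 weights but float32 activations
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
hybrid = converter.convert()

interp = tf.lite.Interpreter(model_content=hybrid)
interp.allocate_tensors()
dtypes = {d["name"]: d["dtype"] for d in interp.get_tensor_details()}
print(set(dtypes.values()))  # contains both int8 and float32 tensors
```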
To overcome this, I applied full int8 quantization with tf2.3:

```python
model = tf.keras.models.load_model(PATH)                     # load keras model
converter = tf.lite.TFLiteConverter.from_keras_model(model)  # init converter
converter.optimizations = [tf.lite.Optimize.DEFAULT]         # also the line that causes the crash
converter.representative_dataset = representative_dataset    # load dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]  # choose int8 ops
converter.inference_input_type = tf.int8                     # int8 input
converter.inference_output_type = tf.int8                    # int8 output
tflite_model = converter.convert()                           # do conversion
...
```
This raises a new exception, "tensorflow/lite/micro/kernels/reduce.cc Currently, only float32 input type is supported.", where the GlobalAveragePooling layer introduces the conflicting op.
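The workaround I'm considering (a sketch, not yet verified on the board) is to replace `GlobalAveragePooling2D` with a fixed-size `AveragePooling2D` plus `Flatten`, so the graph uses `AVERAGE_POOL_2D` instead of the `MEAN` op that reduce.cc complains about. The model head below is a hypothetical stand-in, not my actual classifier:

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in head; the pool size must equal the feature-map
# size so AveragePooling2D behaves exactly like GlobalAveragePooling2D.
inp = tf.keras.Input(shape=(96, 96, 3))
x = tf.keras.layers.Conv2D(8, 3, strides=2, padding="same", activation="relu")(inp)
# feature map here is 48x48x8
x = tf.keras.layers.AveragePooling2D(pool_size=(48, 48))(x)  # instead of GlobalAveragePooling2D
x = tf.keras.layers.Flatten()(x)
out = tf.keras.layers.Dense(2, activation="softmax")(x)
model = tf.keras.Model(inp, out)

def representative_dataset():
    # Stand-in calibration data; real code should yield training samples
    for _ in range(8):
        yield [np.random.rand(1, 96, 96, 3).astype(np.float32)]

# Same full-int8 conversion recipe as above
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()  # converts without emitting a MEAN op
print(len(tflite_model), "bytes")
```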
- The only workable model always gives the same wrong output
With the workable model from tf2.2, the model always outputs 1 (I define 0 for bird and 1 for non-bird), but when the model is tested on the PC it works perfectly. I'm wondering whether the input format (RGB565) causes this, as the training input is RGB888. I've also tested using sensor.JPEG as input, but it's incompatible with tf.classify.
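To test the RGB565 hypothesis on the PC, my plan is to round-trip the test images through RGB565 precision before feeding the h5 model; if accuracy collapses the same way, the input format is the culprit. A plain-numpy sketch of the round trip (the helper name is my own):

```python
import numpy as np

def rgb888_to_rgb565_roundtrip(img):
    """Simulate the precision loss of the camera's RGB565 format.

    img: uint8 array of shape (H, W, 3). Returns a uint8 array of the same
    shape with each channel truncated to 5/6/5 bits and expanded back to 8.
    """
    r = (img[..., 0] >> 3).astype(np.uint16)  # keep 5 bits
    g = (img[..., 1] >> 2).astype(np.uint16)  # keep 6 bits
    b = (img[..., 2] >> 3).astype(np.uint16)  # keep 5 bits
    # Expand back to 8 bits the way most decoders do (replicate high bits)
    r8 = ((r << 3) | (r >> 2)).astype(np.uint8)
    g8 = ((g << 2) | (g >> 4)).astype(np.uint8)
    b8 = ((b << 3) | (b >> 2)).astype(np.uint8)
    return np.stack([r8, g8, b8], axis=-1)

# The worst-case error is only a few LSBs per channel, so RGB565 alone
# seems unlikely to flip a confident prediction -- easy to sanity-check:
img = np.random.randint(0, 256, (96, 96, 3), dtype=np.uint8)
lossy = rgb888_to_rgb565_roundtrip(img)
print(np.max(np.abs(img.astype(int) - lossy.astype(int))))  # max per-channel error
```

If the round-tripped images still classify correctly on the PC, the fixed output would point at something else, e.g. a preprocessing or scaling mismatch between training and the on-board pipeline.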
Attached are the workable tflite file, the original h5 file, and the corresponding MicroPython script: OpenMV_share - Google Drive

Thanks in advance for your help. Looking forward to hearing from you.