Image classification with transfer learning on MobileNet issue

Hi, I am running an image classifier with 3 classes, built with transfer learning on MobileNetV1, on OpenMV. On my computer TensorFlow reports more than 90% accuracy, but on OpenMV the code runs yet gives wrong inference results.
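
For context, the training side is roughly the standard Keras transfer-learning recipe (a simplified sketch; the layer choices and names below are placeholders for my actual script):

import tensorflow as tf

# Frozen MobileNetV1 backbone (128x128 RGB input) with a small 3-class head on top.
base = tf.keras.applications.MobileNet(input_shape=(128, 128, 3),
                                        alpha=0.25,
                                        include_top=False,
                                        weights="imagenet")
base.trainable = False  # keep the pretrained features fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(3, activation="softmax"),  # bee / butterfly / rhino beetle
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])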

Looking at my model in Netron, it has Sub and Mul layers at the input, but your MobileNet example does not have such layers. I also get an error that the input should be float32, which I could fix by converting the model to a float32 input; that model runs on OpenMV, but the zip file was 11 MB so I could not upload it here.
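
For reference, the export path I am using is roughly the standard full-integer quantization flow (a sketch only; model and calibration_images are placeholders for my trained Keras model and calibration data):

import numpy as np
import tensorflow as tf

def representative_data_gen():
    # Yield a small number of sample images preprocessed the same way as during training.
    for img in calibration_images[:100]:
        yield [np.expand_dims(img.astype(np.float32), axis=0)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
# Request full-integer kernels with uint8 input/output for the on-camera runtime.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("V1_integer.tflite", "wb") as f:
    f.write(converter.convert())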

With the integer-input model (attached), layer 33 also triggers a MEAN layer error:

Node MEAN (number 33) failed to invoke with status 1

but when I convert the model to float as described above, that error goes away and it just gives wrong inference results.
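
Before copying the file to the camera, I can at least sanity-check the integer model with the desktop TFLite Interpreter (a minimal sketch; it only confirms the input dtype/shape and that invoke() succeeds on the PC):

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="V1_integer.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
print("input:", inp["dtype"], inp["shape"])   # expecting an integer type and (1, 128, 128, 3)

# Run one dummy frame with the exact dtype/shape the model expects.
dummy = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()
print("output:", interpreter.get_tensor(out["index"]))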

V1_integer.zip (2.8 MB)

Here is the OpenMV code:

import sensor, time, tf, pyb

crb_threshold = 0.8

# LED setting (1: red, 2: green, 3: blue, 4: IR)
led = pyb.LED(1)
led.off()

sensor.reset()                        # Reset and initialize the sensor.
sensor.set_pixformat(sensor.RGB565)   # Set pixel format to RGB565 (or GRAYSCALE).
sensor.set_framesize(sensor.QVGA)     # Set frame size to QVGA (320x240).
sensor.set_windowing((128, 128))      # Set 128x128 window.
sensor.skip_frames(time=2000)         # Let the camera adjust.

# Load the trained classification network from the filesystem.
net = tf.load('V1_integer.tflite', load_to_fb=True)
labels = ['bee', 'butterfly', 'rhino beetle']

# Start the clock to measure FPS.
clock = time.clock()

# Main loop.
while True:
    # Measure time.
    clock.tick()
    # Get image from camera.
    img = sensor.snapshot()

    for obj in net.classify(img, min_scale=1.0, scale_mul=0.0, x_overlap=0.0, y_overlap=0.0):
        # Print classification scores.
        print("**********\nDetections at [x=%d,y=%d,w=%d,h=%d]" % obj.rect())
        for i in range(len(obj.output())):
            print("%s = %f" % (labels[i], obj.output()[i]))
        # Highlight the identified object.
        img.draw_rectangle(obj.rect())
        img.draw_string(obj.x() + 3, obj.y() - 1,
                        labels[obj.output().index(max(obj.output()))],
                        mono_space=False)
        # Light the LED if a rhino beetle is detected.
        idx = labels.index('rhino beetle')
        if obj.output()[idx] > crb_threshold:
            led.on()
        else:
            led.off()

    pyb.delay(1500)
    print(clock.fps(), "fps")

How can I fix this issue?

Thanks

Hi, we just use Edge Impulse-based networks, so I can't say exactly what's wrong. I'll be working on updating our TensorFlow package; it's about 6 months old in the firmware.

If you need this fixed in the meantime, please use Edge Impulse to retrain.


Thanks, I was able to run classification both with Edge Impulse and with TensorFlow quantization.

We have similar issues here using a re-trained MobileNet to classify around a dozen classes of objects.

However, we cannot use Edge Impulse to train the model because of its restrictions on training time and dataset size; we achieve very low accuracy, around 70%. Edge Impulse offers some means to fine-tune the training, but it is currently impossible to use SOTA methods.

It would be awesome if we could upgrade the OpenMV runtime to TensorFlow Lite 2.5.

This will happen in the coming months. I’ve been busy with life but should have more time soon.


Sure, Kwabena, thanks for this and good luck. Just wanted to touch base.

Hi Kwabena,

How can I tell the version of the OpenMV TFLite runtime from the firmware release? And what TensorFlow Lite version does firmware v4.2.1 currently use?

Details:
I got a new image dataset, fine-tuned a MobileNetV2 on it, and exported both the .tflite and the quantized .tflite files. I am using TensorFlow 2.7; with the older TF version it was working, but now I cannot run the .tflite model on an H7 Plus R3 with firmware v4.2.1. I also have older TensorFlow versions in my Anaconda packages that I can try for building the .tflite models.
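
For what it's worth, the only version I can read directly on the board is the firmware release string (a minimal check using the standard MicroPython os module; it does not reveal which TFLite build is bundled):

import os

u = os.uname()
print(u.release)   # firmware release, e.g. '4.2.1'
print(u.version)   # full build string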

Thanks

I don't know exactly. I'll see if I can fix this over the weekend by updating the branch we pull from TensorFlow. We're using Edge Impulse's fork of TensorFlow now, since the latest release of TensorFlow crashes on any model being used.

I can't debug these things because of how complex the TensorFlow library is. I can only try different library versions.
