Image classification with transfer learning on MobileNetV1 gives wrong inference results on OpenMV

Hi, I am running an image classifier with 3 classes, built with transfer learning on MobileNetV1, on OpenMV. On my computer the TensorFlow model reaches more than 90% accuracy, but on OpenMV the code runs and gives wrong inference results.
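For reference, this is roughly how the converted .tflite can be sanity-checked on the computer with the TFLite interpreter, so its scores can be compared with what the camera prints. A minimal sketch only; the test image path, the 128x128 input size, and the /255 preprocessing are assumptions that have to match the real training pipeline.

# Sketch: run the converted model on the desktop with the TFLite interpreter
# and compare its scores with what the OpenMV script prints.
# "test.jpg" is a placeholder path for one of the training/validation images.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="V1_integer.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
print("input:", inp["dtype"], inp["shape"], "quant:", inp["quantization"])

# Load and resize a test image to the model's input size (128x128 assumed here).
img = tf.keras.utils.load_img("test.jpg", target_size=(128, 128))
x = np.expand_dims(np.array(img), axis=0)

if inp["dtype"] == np.float32:
    x = x.astype(np.float32) / 255.0          # adjust to match the training preprocessing
else:
    scale, zero_point = inp["quantization"]   # map real values into the int8/uint8 range
    x = np.round(x / 255.0 / scale + zero_point).astype(inp["dtype"])

interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
print("scores:", interpreter.get_tensor(out["index"])[0])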

Looking at my model in Netron, it has Sub and Mul layers at the input, but your MobileNet example does not have such layers. I also get an error that the input should be float32. I could fix that by converting the model to float32 input, and that model runs on OpenMV, but the zip file was 11 MB so I could not upload it here.

With the integer-input model (attached), layer 33 also fails with a Mean layer error:
Node MEAN (number 33) failed to invoke with status 1
but when I convert it to float as described above, that error goes away and the model just gives wrong inference results.

V1_integer.zip (2.8 MB)
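
For reference, below is a minimal sketch of exporting a fully int8-quantized model (int8 input and output) with the TFLite converter, which is roughly what I am trying to get running on the camera. `model` and `representative_images` are placeholders for my trained Keras model and a sample of the training images.

# Sketch: full-integer quantization with the TFLite converter.
# `model` (trained Keras MobileNetV1 transfer-learning model) and
# `representative_images` (an iterable of preprocessed training images)
# are assumed names, not part of the original post.
import numpy as np
import tensorflow as tf

def rep_data():
    for img in representative_images:           # e.g. a few hundred training images
        # img shaped (128, 128, 3), preprocessed the same way as during training
        yield [np.expand_dims(img.astype(np.float32), axis=0)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = rep_data
# Force int8 ops and int8 input/output so no float32 tensor is needed on the camera.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8        # or tf.uint8, depending on what the firmware expects
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("V1_integer.tflite", "wb") as f:
    f.write(tflite_model)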

Here is the OpenMV code:

import sensor, time, tf, pyb

crb_threshold = 0.8

# LED setting (1: red, 2: green, 3: blue, 4: IR)
led = pyb.LED(1)
led.off()

sensor.reset()                      # Reset and initialize the sensor.
sensor.set_pixformat(sensor.RGB565) # Set pixel format to RGB565 (or GRAYSCALE).
sensor.set_framesize(sensor.QVGA)   # Set frame size to QVGA (320x240).
sensor.set_windowing((128, 128))    # Set 128x128 window.
sensor.skip_frames(time=2000)       # Let the camera adjust.

# Load the trained model from the filesystem.
net = tf.load('V1_integer.tflite', load_to_fb=True)
labels = ['bee', 'butterfly', 'rhino beetle']

# Start the clock to measure FPS.
clock = time.clock()

# Main loop.
while True:
    # Measure time.
    clock.tick()
    # Get image from camera.
    img = sensor.snapshot()

    for obj in net.classify(img, min_scale=1.0, scale_mul=0.0, x_overlap=0.0, y_overlap=0.0):
        # Print classification scores.
        print("**********\nDetections at [x=%d,y=%d,w=%d,h=%d]" % obj.rect())
        for i in range(len(obj.output())):
            print("%s = %f" % (labels[i], obj.output()[i]))
        # Highlight the identified object.
        img.draw_rectangle(obj.rect())
        img.draw_string(obj.x() + 3, obj.y() - 1,
                        labels[obj.output().index(max(obj.output()))], mono_space=False)
        # Light the LED if a rhino beetle is detected.
        idx = labels.index('rhino beetle')
        if obj.output()[idx] > crb_threshold:
            led.on()
        else:
            led.off()

    pyb.delay(1500)
    print(clock.fps(), "fps")

How can I fix this issue?

Thanks

Hi, we just use Edge Impulse based networks, so I can't say exactly what's wrong. I'll be working on updating our TensorFlow package; the one in the firmware is about 6 months old.

If you need this fixed in the meantime, please use Edge Impulse to retrain.
