Triggering an Arduino Due when a person enters the frame

Sorry to ask something so mundane, but I do not want to blow up my lovely new H7 Plus so early. The hobby has taught me to ask first.
I want to add some pin control to the person detection example so that I can connect one of the GPIO pins on the H7 Plus to a 3.3 V tolerant Arduino Due pin.
I guess that I will need a GND connection as well.
I understand the Arduino Due side well, so I only need help with the H7 side of things.
Best regards and what a wonderful product.
I am trying to use the H7 to detect people so that I can shut down my autonomous mower until they walk away. This will eventually work in open sunlight, and I would want to detect people up to about 3 to 5 m away.
Do you think the Machine Learning example tf_person_detection_search_whole_window is the best place to start?
Lastly, can I add children and dogs to the model at a later date?

Yeah, the voltages are fine. The STM32 has robust 5 V tolerant I/Os.
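
For the pin control side, here is a minimal sketch of driving a GPIO pin from MicroPython on the camera (the pin name "P0" and active-high signalling are assumptions; any free I/O pin works, plus a common GND wire to the Due):

from pyb import Pin

# Assumed wiring: H7 Plus pin P0 -> Arduino Due input pin, grounds tied together.
person_pin = Pin("P0", Pin.OUT_PP)  # push-pull output, 3.3 V logic

person_pin.high()  # signal the Due that a person is in view
person_pin.low()   # clear the signal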

If you want to include children and dogs you need to train a new network using Edge Impulse. You’ll need to find a dataset of children and dogs… but, otherwise, it’s not a compute or coding issue. You just need to train a new network using transfer learning and then switch to that network.
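
Once you've trained a model in Edge Impulse, switching networks is just a matter of loading the exported .tflite file from the camera's filesystem instead of the built-in one. A sketch, assuming the export has been copied to the camera as trained.tflite with a labels.txt file (Edge Impulse's usual output names):

import tf

# Assumed: trained.tflite and labels.txt copied onto the camera's flash or SD card.
net = tf.load("trained.tflite", load_to_fb=True)  # load_to_fb frees heap for larger models
labels = [line.rstrip('\n') for line in open("labels.txt")]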

To get a zoom effect use the tf_person_detection zoom example. Our script has the ability to execute the network in a sliding window. So, use that to your advantage.
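
For reference, the sliding window is controlled entirely by the arguments to net.classify() in the example script. A sketch of settings that scan the whole image at multiple scales rather than doing a single centred detection (the exact values are assumptions to tune for your scene):

# Slide a 50%-overlapping detection window across the image and shrink it
# down to half size for multi-scale matching. More overlap = more compute.
for obj in net.classify(img, min_scale=0.5, scale_mul=0.5,
                        x_overlap=0.5, y_overlap=0.5):
    print(obj.rect(), obj.output())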

tf_person_detection zoom
I do not see this in the examples.
Also, how do I ensure the IDE and examples are current?

There’s an example that has zoom in the name. It’s not named exactly that; there are literally two person detection examples. Use the other one.

I do appreciate your help in getting me going. Thank you.

I am trying to modify your person detection code to light the LED if the label score is above 0.5 and switch it off if it is below 0.5. I think the problem relates to the variable type, i.e. string vs. float.

I am not a Python programmer, so I would appreciate some help. I have just bought a Python book and this will encourage me to learn Python at long last.

Once I/you sort this problem out I can pretty much work the rest out for myself, i.e. I will try not to ask again for coding help.


if obj.output() < 0.50:
    green_led.on()
else:
    green_led.off()

# TensorFlow Lite Person Detection Example
#
# Google's Person Detection Model detects if a person is in view.
#
# In this example we slide the detector window over the image and get a list
# of activations. Note that using a CNN with a sliding window is extremely compute-
# expensive, so for an exhaustive search do not expect the CNN to be real-time.

import sensor, image, time, os, tf
from pyb import LED

# Self-test to ensure the LEDs are working.
red_led = LED(1)
green_led = LED(2)
blue_led = LED(3)

red_led.on()
time.sleep(1000)
red_led.off()
green_led.on()
time.sleep(1000)
green_led.off()


sensor.reset()                         # Reset and initialize the sensor.
sensor.set_pixformat(sensor.GRAYSCALE) # Set pixel format to GRAYSCALE.
sensor.set_framesize(sensor.QVGA)      # Set frame size to QVGA (320x240)
sensor.set_windowing((240, 240))       # Set 240x240 window.
sensor.skip_frames(time=2000)          # Let the camera adjust.

# Load the built-in person detection network (the network is in your OpenMV Cam's firmware).
net = tf.load('person_detection')
labels = ['unsure', 'person', 'no_person']

clock = time.clock()
while(True):
    clock.tick()

    img = sensor.snapshot()

    # net.classify() will run the network on an roi in the image (or on the whole image if the roi is not
    # specified). A classification score output vector will be generated for each location. At each scale the
    # detection window is moved around in the ROI using x_overlap (0-1) and y_overlap (0-1) as a guide.
    # If you set the overlap to 0.5 then each detection window will overlap the previous one by 50%. Note
    # that the computational workload goes WAY up with more overlap. Finally, for multi-scale matching after
    # sliding the network around in the x/y dimensions the detection window will shrink by scale_mul (0-1)
    # down to min_scale (0-1). For example, if scale_mul is 0.5 the detection window will shrink by 50%.
    # Note that at a lower scale there's even more area to search if x_overlap and y_overlap are small...

    # default settings just do one detection... change them to search the image...
    for obj in net.classify(img, min_scale=1.0, scale_mul=0.5, x_overlap=0.0, y_overlap=0.0):
        print("**********\nDetections at [x=%d,y=%d,w=%d,h=%d]" % obj.rect())
        for i in range(len(obj.output())):
            print("%s = %f" % (labels[i], obj.output()[i]))

            if obj.output()<0.50:
                 green_led.on()
            else:
                 green_led.off()

        img.draw_rectangle(obj.rect())
        img.draw_string(obj.x()+3, obj.y()-1, labels[obj.output().index(max(obj.output()))], mono_space = False)
    print(clock.fps(), "fps")

output() returns an array. You have to index into it via []. See the print line.

You should put another if around the if/else you added and only run it when you are looking at index 2 (the 'no_person' score).
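
Putting that together, a minimal sketch of the corrected inner loop (per the advice above, the LED is only driven from index 2, the 'no_person' score):

for i in range(len(obj.output())):
    print("%s = %f" % (labels[i], obj.output()[i]))

    # Only act on the 'no_person' score (index 2). A low 'no_person'
    # score means a person is likely in view, so light the green LED.
    if i == 2:
        if obj.output()[i] < 0.50:
            green_led.on()
        else:
            green_led.off()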

I got it working, thanks for your help. I am going to connect this to a 3.3 volt relay for my project. I will try it outside later on today to see how it behaves. Do you think I will need a global shutter camera? I looked online in the UK and America and could not find anyone selling the shutter. Can you advise? Regards and thanks. Max

No, the global shutter camera is not needed.