counting flowers


I'm new to OpenMV and want to start a project to count flowers on plants.
We currently do this with Python and OpenCV.
And we think it can be much better and faster with OpenMV.
The only question is: how do we start at the right point?

Maybe TensorFlow is an option?

With OpenCV I get this result.

How are you counting them with OpenCV currently? find_blobs() will do what you need for that image. A simple color threshold will find those blobs.

Is it possible to filter all the green out and then find the colored blobs?
Because a flower changes color over time, but it is never green.
So if we use a green background, it would be very stable?

Any suggestions about this?

How can I filter out all the different tones of green?

This is a simple example in OpenCV to find the flowers.
It uses sliders to get the right adjustments.

import cv2

def trackCheck(x):
    pass  # trackbar callback; the values are polled in the loop instead

img = cv2.imread("Test_15.jpg")

r, c, z = img.shape
img = cv2.resize(img, (int(c*0.5), int(r*0.5)))

cv2.namedWindow("Image")  # the window must exist before adding trackbars
cv2.createTrackbar("High-H", "Image", 0, 255, trackCheck)
cv2.createTrackbar("High-S", "Image", 0, 255, trackCheck)
cv2.createTrackbar("High-V", "Image", 0, 255, trackCheck)
cv2.createTrackbar("Low-H", "Image", 0, 255, trackCheck)
cv2.createTrackbar("Low-S", "Image", 0, 255, trackCheck)
cv2.createTrackbar("Low-V", "Image", 0, 255, trackCheck)
cv2.createTrackbar("Area", "Image", 10, 1000, trackCheck)

while True:
    hsvImg = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    orig = img.copy()
    highH = cv2.getTrackbarPos("High-H", "Image")
    highS = cv2.getTrackbarPos("High-S", "Image")
    highV = cv2.getTrackbarPos("High-V", "Image")

    lowH = cv2.getTrackbarPos("Low-H", "Image")
    lowS = cv2.getTrackbarPos("Low-S", "Image")
    lowV = cv2.getTrackbarPos("Low-V", "Image")

    targetArea = cv2.getTrackbarPos("Area", "Image")

    low = (lowH, lowS, lowV)
    high = (highH, highS, highV)

    mask = cv2.inRange(hsvImg, low, high)
    mask = cv2.dilate(mask, None, iterations=1)
    mask = cv2.erode(mask, None, iterations=1)

    result = cv2.bitwise_and(img, img, mask=mask)
    cnts, hierarchy = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
    for cnt in cnts:
        area = cv2.contourArea(cnt)
        if area > targetArea:
            x, y, w, h = cv2.boundingRect(cnt)
            cv2.rectangle(orig, (x, y), (x+w, y+h), (0, 0, 255), 1)
    cv2.imshow("Image1", result)
    cv2.imshow("Im", orig)
    k = cv2.waitKey(50)
    if k == 27:  # Esc quits
        break

cv2.destroyAllWindows()
How can I count the blobs when I use keypoints?
Right now I can detect the flowers through keypoints.
The only thing is: how can I count the blobs?

You can see the camera detects the flowers.

Here is the code.

import sensor, image, time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)  # keypoints work on grayscale images
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time = 2000)
sensor.set_auto_gain(False, value=100)  # keypoints need a fixed gain

def draw_keypoints(img, kpts):
    if kpts:
        img.draw_keypoints(kpts)  # overlay the detected keypoints

kpts1 = None

clock = time.clock()
while (True):
    clock.tick()
    img = sensor.snapshot()
    kpts1 = img.find_keypoints(max_keypoints=500, threshold=10, scale_factor=1.2)
    draw_keypoints(img, kpts1)

Hi, keypoint objects are documented in the API as having an x and y position. So, you need a clustering algorithm to count the keypoint blobs.

E.g. use something like this to cluster the blobs and get the count:
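For instance, a minimal pure-Python sketch of such a clustering step: group (x, y) keypoints whose distance falls under a radius, then count the groups. The `cluster_points` helper and the radius value are assumptions for illustration; on the camera the points would come from `img.find_keypoints()` instead of a hand-written list.

```python
def cluster_points(points, radius=20):
    """Greedy single-link clustering: two points share a cluster if a chain
    of points, each within `radius` of the next, connects them."""
    clusters = []
    for (x, y) in points:
        # find every existing cluster this point is close to
        hits = [c for c in clusters
                if any((x - px) ** 2 + (y - py) ** 2 <= radius ** 2
                       for (px, py) in c)]
        merged = [(x, y)]
        for c in hits:          # fuse all touched clusters into one
            merged.extend(c)
            clusters.remove(c)
        clusters.append(merged)
    return clusters

# Three well-separated groups of keypoints -> three flowers:
kpts = [(10, 10), (14, 12), (100, 40), (104, 44), (200, 90)]
print(len(cluster_points(kpts)))  # 3
```

Each cluster's size (number of keypoints) could also serve as a crude confidence measure per flower.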

Regarding the OpenCV code. We literally have the exact same tooling but better with find_blobs() with the OpenMV Cam.

Just open one of the color tracking scripts and use the Threshold Editor under Machine Vision → Tools to edit the color thresholds for the image. We support erode and dilate operations too. Additionally, find_blobs() supports blob merging so it will give you the output you want more or less by clustering for you.
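To make the merging behavior concrete, here is a pure-Python illustration of what merging with a `margin` amounts to: two blobs merge when their bounding rectangles, grown by `margin` pixels on every side, overlap. This is a sketch of the idea, not the OpenMV implementation; the helper names are invented.

```python
def rects_touch(a, b, margin):
    """True if rectangles (x, y, w, h), each grown by `margin`, overlap."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return (ax - margin < bx + bw + margin and bx - margin < ax + aw + margin and
            ay - margin < by + bh + margin and by - margin < ay + ah + margin)

def merge_rects(rects, margin=0):
    """Repeatedly fuse touching rectangles into their bounding union."""
    merged = []
    for r in rects:
        hits = [m for m in merged if rects_touch(r, m, margin)]
        for m in hits:
            merged.remove(m)
            x = min(r[0], m[0]); y = min(r[1], m[1])
            x2 = max(r[0] + r[2], m[0] + m[2])
            y2 = max(r[1] + r[3], m[1] + m[3])
            r = (x, y, x2 - x, y2 - y)
        merged.append(r)
    return merged

# Two nearby petal blobs merge into one flower; the far one stays separate:
print(len(merge_rects([(0, 0, 10, 10), (12, 0, 10, 10), (100, 100, 5, 5)], margin=3)))  # 2
```

With `margin=0` the same input stays as three separate blobs, which is why tuning `margin` matters for counting flowers made of several petals.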


Ok, so finding blobs is the way to go!

The only question is: what can we do when the color of the flowers changes?
Can we make the threshold so that it looks between 2 thresholds?
Because we need blobs of every color except green.

And maybe an example of how to count the blobs? Is there a standard instruction for that?

Found this post from you:

Postby kwagyeman » Sat Sep 29, 2018 11:08 pm

Okay, start with the color tracking examples built into OpenMV IDE under File->Examples. You should also look into the sensor control examples, which let you control the sensor exposure and whatnot. These help greatly. Anyway, to do color tracking well you can do this…

Assuming you 100% control the lighting of your garden (if it’s indoors). Then you can do everything via color tracking. First, you just need to determine the color of the soil. You can use the Threshold Editor in OpenMV IDE to do this (see tools → machine vision). Once you have the color of the soil you can find_blobs() with the invert flag set to True to find all colors that aren’t the color of the soil. This will give you blobs that represent each plant. You can then pass options to find_blobs() to merge blobs and whatnot. Anyway, after doing this you can then proceed to look at the color within each blob using get_stats(). This will then return the color distribution for each blob. Note that you want to use a loop over the objects returned by find_blobs() and call get_stats() with the roi argument on each blob returned by find_blobs().

Anyway, for color tracking getting the lighting stable will be your enemy. Focus on that carefully.
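The soil-color "invert" approach above can be illustrated off-camera: keep every pixel that is NOT in the background color range, then count connected regions of kept pixels. On the OpenMV Cam, `img.find_blobs([...], invert=True)` does all of this in one call; the pure-Python version below, on a hypothetical tiny image, is only a sketch of the logic.

```python
def count_blobs(img, bg_lo, bg_hi):
    """Count 4-connected regions of pixels OUTSIDE the [bg_lo, bg_hi] range."""
    h, w = len(img), len(img[0])
    # inverted threshold: True for every non-background pixel
    keep = [[not (bg_lo <= img[y][x] <= bg_hi) for x in range(w)] for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if keep[y][x] and not seen[y][x]:
                count += 1
                stack = [(x, y)]
                while stack:  # flood-fill one blob
                    cx, cy = stack.pop()
                    if 0 <= cx < w and 0 <= cy < h and keep[cy][cx] and not seen[cy][cx]:
                        seen[cy][cx] = True
                        stack += [(cx+1, cy), (cx-1, cy), (cx, cy+1), (cx, cy-1)]
    return count

# 0 = soil; anything else is flower. Two separate flowers in this grid:
img = [[0, 9, 9, 0, 0],
       [0, 9, 0, 0, 0],
       [0, 0, 0, 7, 7],
       [0, 0, 0, 7, 0]]
print(count_blobs(img, 0, 0))  # 2
```

Per-blob color statistics (the get_stats() step from the quote) would then be computed over each region's pixels.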

This is what we are looking for. Maybe you have an example of these functions?

See the Examples->Color Tracking->Multicolor-tracking examples in the IDE.

Please see what find_blobs() does:

it has all the features you need in one function. Please try out the example and then modify the arguments to find_blobs() to do what you need to do.

You can get the color thresholds using the IDE by using the Tools->Machine Vision->Threshold Editor function.

I tried almost everything and it won't work.
Maybe somebody has an idea to solve this?

The camera must count 3.

Hi, is that a picture captured from the camera? I can quickly give you a script that does the job…

Yes, this is a picture from the cam.

If you can give me a start it will be great.

# Blob Counting Example (based on the OpenMV IDE Hello World template)

import sensor, image, time

FROM_FILE = True  # set to False to process live camera frames instead

sensor.reset()                      # Reset and initialize the sensor.
sensor.set_pixformat(sensor.RGB565) # Set pixel format to RGB565 (or GRAYSCALE)
sensor.set_framesize(sensor.QVGA)   # Set frame size to QVGA (320x240)
#sensor.skip_frames(time = 2000)     # Wait for settings take effect.
clock = time.clock()                # Create a clock object to track the FPS.

while(True):
    clock.tick()
    img = None

    if FROM_FILE:
        img = image.Image("test.bmp", copy_to_fb=True)
    else:
        img = sensor.snapshot()

    img.binary([(0, 60, 0, 127, 0, 127)])
    img.dilate(7, threshold=7)
    img.erode(2, threshold=5)

    blobs = img.find_blobs([(90, 100)], merge=True, margin=10)
    for b in blobs:
        img.draw_rectangle(b.rect(), color=(255, 0, 0))

Please use the feature in the product versus giving up easily. Copy the file to the disk as a bmp format image and run the above script.

You are great!

It works. Sorry for all the questions; it is just a bit different when you normally use OpenCV.

What's the best option to communicate with a PLC?
I need the number of blobs and want to trigger the cam.

We are using Modbus over IP, but there is no Ethernet port on the cam.
So maybe Modbus through the UART port, converted to RS485 with this: UART TTL to RS485 Converter - Ben's electronics.
Or is the CAN shield the way to go?


Yeah, that TTL adapter will work.

Then use the pyb module to control the UART. It's very simple to send things.
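A minimal sketch of what "sending things" looks like. Modbus holding registers are 16-bit big-endian, so the blob count is packed with `struct.pack(">H", ...)` before writing. The `pack_count` helper name and the UART number/baud rate in the commented part are assumptions; the `pyb.UART` lines run only on the camera.

```python
import struct

def pack_count(count):
    """Pack a blob count into one big-endian 16-bit Modbus register."""
    return struct.pack(">H", count)

print(pack_count(3))  # b'\x00\x03'

# On the OpenMV Cam (not runnable on a PC):
# from pyb import UART
# uart = UART(3, 115200)              # UART 3, 115200 baud (assumed settings)
# uart.write(pack_count(len(blobs)))  # send the count to the RS485 converter
```

If you use the modbus library instead (as below), the library handles the framing and you only assign the register values.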

We are close to getting a result.
The only thing is we want to trigger the cam and take just 1 picture.

Modbus works very well.
Only we get a runtime error.

import sensor, image, time
from pyb import UART
from modbus import ModbusRTU

uart = UART(3, 115200, parity=None, stop=2, timeout=1, timeout_char=4)
modbus = ModbusRTU(uart, register_num=9999, slave_id=0x03)

sensor.reset()                      # Reset and initialize the sensor.
sensor.set_pixformat(sensor.RGB565) # Set pixel format to RGB565 (or GRAYSCALE)
sensor.set_framesize(sensor.QVGA)   # Set frame size to QVGA (320x240)
sensor.skip_frames(time = 2000)     # Wait for settings take effect.
sensor.ioctl(sensor.IOCTL_SET_TRIGGERED_MODE, True)
#clock = time.clock()                # Create a clock object to track the FPS.

while(True):
    if modbus.any():
        modbus.handle(debug=True)   # service incoming Modbus requests

    if modbus.REGISTER[0] == 1:     # PLC sets register 0 to trigger a capture
        img = sensor.snapshot()

        img.binary([(0, 100, -2, 127, 6, 127)])
        img.dilate(7, threshold=10)
        img.erode(2, threshold=5)

        blobs = img.find_blobs([(90, 100)], merge=True, margin=-10)

        for b in blobs:
            img.draw_rectangle(b.rect(), color=(255, 0, 0))

        modbus.REGISTER[1] = len(blobs)  # report the blob count to the PLC
        modbus.REGISTER[0] = 0           # clear the trigger register

        print(modbus.REGISTER[0])
        print(modbus.REGISTER[1])
What's wrong?

Here is the error:

Traceback (most recent call last):
File “”, line 14, in
RuntimeError: Capture Failed: -4
MicroPython v1.12-omv OpenMV v3.6.7 2020-07-20; OPENMV4P-STM32H743
Type “help()” for more information.

-4 is a sensor timeout. Please make sure your camera module is seated correctly.

This line is making trouble:

sensor.ioctl(sensor.IOCTL_SET_TRIGGERED_MODE, True)

How can I make the trigger mode work correctly?

The OV7725 doesn’t support triggered mode. So, remove that line.