# OpenMV4 H7 PLUS: measuring distance with the OV5640, memory errors after changing framesize to sensor.FHD!

# Measure the distance
#
# This example shows off how to measure distance from the object's size in the image.
# This example in particular looks for a yellow ping pong ball.

import sensor, image, time

# For color tracking to work really well you should ideally be in a very, very,
# very, controlled environment where the lighting is constant...
yellow_threshold   = (2, 85, -128, 127, 18, 64)
# You may need to tweak the above settings for tracking yellow things...
# Select an area in the Framebuffer to copy the color settings.

sensor.reset() # Initialize the camera sensor.
sensor.set_pixformat(sensor.RGB565) # use RGB565.
sensor.set_framesize(sensor.FHD) #【ov5640】<-------------change here
sensor.skip_frames(10) # Let new settings take effect.
sensor.set_auto_whitebal(False) # turn this off.
clock = time.clock() # Tracks FPS.

K = 5000 # the value should be measured (calibrated) first

while(True):
    clock.tick() # Track elapsed milliseconds between snapshots().
    img = sensor.snapshot() # Take a picture and return the image.

    blobs = img.find_blobs([yellow_threshold], x_stride=2, y_stride=1)
    if len(blobs) == 1:
        # Draw a rect around the blob.
        b = blobs[0]
        img.draw_rectangle(b[0:4]) # rect
        img.draw_cross(b[5], b[6]) # cx, cy
        Lm = (b[2]+b[3])/2
        length = K/Lm
        print(length)

    #print(clock.fps()) # Note: Your OpenMV Cam runs about half as fast while
    # connected to your computer. The FPS should increase once disconnected.

IDE 3.0.3, Windows 7, OpenMV4 H7 PLUS.
Help!

Hi, how many blobs are you finding? Also, please verify at FHD that you see the FHD resolution in the IDE.

Thank you for your answer. I want to run an example that finds 1-2 blobs, but the code is broken.
This is a screenshot of my IDE. I want to find my phone (black).


Can you give me an example with the OV5640 at FHD for finding blobs or circles?

Hi, I get expected behavior with no issues with:

# Single Color Grayscale Blob Tracking Example
#
# This example shows off single color grayscale tracking using the OpenMV Cam.

import sensor, image, time, math

# Color Tracking Thresholds (Grayscale Min, Grayscale Max)
# The below grayscale threshold is set to only find extremely bright white areas.
thresholds = (245, 255)

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.FHD)
sensor.skip_frames(time = 2000)
sensor.set_auto_gain(False) # must be turned off for color tracking
sensor.set_auto_whitebal(False) # must be turned off for color tracking
clock = time.clock()

# Only blobs with more pixels than "pixels_threshold" and more area than "area_threshold" are
# returned by "find_blobs" below. Change "pixels_threshold" and "area_threshold" if you change the
# camera resolution. "merge=True" merges all overlapping blobs in the image.

while(True):
    clock.tick()
    img = sensor.snapshot()
    for blob in img.find_blobs([thresholds], pixels_threshold=100, area_threshold=100, merge=True):
        # These values depend on the blob not being circular - otherwise they will be shaky.
        if blob.elongation() > 0.5:
            img.draw_edges(blob.min_corners(), color=0)
            img.draw_line(blob.major_axis_line(), color=0)
            img.draw_line(blob.minor_axis_line(), color=0)
        # These values are stable all the time.
        img.draw_rectangle(blob.rect(), color=127)
        img.draw_cross(blob.cx(), blob.cy(), color=127)
        # Note - the blob rotation is unique to 0-180 only.
        img.draw_keypoints([(blob.cx(), blob.cy(), int(math.degrees(blob.rotation())))], size=40, color=127)
    print(clock.fps())

and

# Single Color RGB565 Blob Tracking Example
#
# This example shows off single color RGB565 tracking using the OpenMV Cam.

import sensor, image, time, math

threshold_index = 0 # 0 for red, 1 for green, 2 for blue

# Color Tracking Thresholds (L Min, L Max, A Min, A Max, B Min, B Max)
# The below thresholds track in general red/green/blue things. You may wish to tune them...
thresholds = [(30, 100, 15, 127, 15, 127), # generic_red_thresholds
              (30, 100, -64, -8, -32, 32), # generic_green_thresholds
              (0, 30, 0, 64, -128, 0)] # generic_blue_thresholds

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.FHD)
sensor.skip_frames(time = 2000)
sensor.set_auto_gain(False) # must be turned off for color tracking
sensor.set_auto_whitebal(False) # must be turned off for color tracking
clock = time.clock()

# Only blobs with more pixels than "pixels_threshold" and more area than "area_threshold" are
# returned by "find_blobs" below. Change "pixels_threshold" and "area_threshold" if you change the
# camera resolution. "merge=True" merges all overlapping blobs in the image.

while(True):
    clock.tick()
    img = sensor.snapshot()
    for blob in img.find_blobs([thresholds[threshold_index]], pixels_threshold=200, area_threshold=200, merge=True):
        # These values depend on the blob not being circular - otherwise they will be shaky.
        if blob.elongation() > 0.5:
            img.draw_edges(blob.min_corners(), color=(255,0,0))
            img.draw_line(blob.major_axis_line(), color=(0,255,0))
            img.draw_line(blob.minor_axis_line(), color=(0,0,255))
        # These values are stable all the time.
        img.draw_rectangle(blob.rect())
        img.draw_cross(blob.cx(), blob.cy())
        # Note - the blob rotation is unique to 0-180 only.
        img.draw_keypoints([(blob.cx(), blob.cy(), int(math.degrees(blob.rotation())))], size=20)
    print(clock.fps())

These are the default example scripts.

Anyway, regarding how the system runs out of memory… the blob tracking algorithm uses a wildfire (flood-fill) approach to scan a blob. If the blob is really big this requires a large stack. The code is designed, however, to use the 32MB of RAM for this, minus whatever the frame buffer is using for images. So, you shouldn't be running out of space for that.

The only other reason you'd run out of RAM is if way too many small blobs are found.

Anyway, I’m using an H7 Plus with firmware 4.4.3.

Thank you very much, I will test your code and post the result.
For now, turning my LED off and on works OK. I want to know whether the RAM works well. How can I check that the RAM is OK?
I think my OpenMV4 H7 Plus has some problem.
Is there any test code for the OpenMV4 hardware?
Thank you again.
:smiley:

Mr kwagyeman, your code runs well. Thank you very much.




As for my code in the first post, can you tell me where the bug is?
I will check my code again and test it more.

Yeah, it’s because you didn’t set the pixel/area thresholds… so it was finding a lot of small blobs whose results could not be allocated.

When I use sensor.set_framesize(sensor.FHD), I get this photo:


When I use sensor.set_framesize(sensor.QVGA), I get this photo:

Please note the white blob outlines. I find the recognition at QVGA better than at FHD. Is that right?

It’s finding it in both cases. It’s just that the lines are smaller at full HD. You have to increase the line width when drawing on the higher-resolution image.

Remember that the frame buffer view in the IDE is scaling the image down.
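A minimal sketch of that, based on the RGB565 example above (the threshold and the width-based scale rule are assumptions you would tune, not fixed values):

# Sketch: scale the drawing thickness with the capture resolution so the
# overlay stays visible at FHD. The threshold and the "width // 320" rule
# are placeholder assumptions, not tuned values.
import sensor, image, time

red_threshold = (30, 100, 15, 127, 15, 127)

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.FHD)
sensor.skip_frames(time = 2000)
sensor.set_auto_gain(False)
sensor.set_auto_whitebal(False)
clock = time.clock()

while(True):
    clock.tick()
    img = sensor.snapshot()
    scale = max(1, img.width() // 320) # ~1 at QVGA, ~6 at 1920x1080
    for blob in img.find_blobs([red_threshold], pixels_threshold=200, area_threshold=200, merge=True):
        img.draw_rectangle(blob.rect(), color=(255, 0, 0), thickness=2*scale)
        img.draw_cross(blob.cx(), blob.cy(), color=(255, 0, 0), size=5*scale, thickness=2*scale)
    print(clock.fps())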

Got it. Thank you :grinning:

import sensor, image, time, math

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.XGA)
sensor.skip_frames(time = 2000)
sensor.set_auto_gain(False) # must be turned off for color tracking
sensor.set_auto_whitebal(False) # must be turned off for color tracking
clock = time.clock()

while(True):
    clock.tick()
    img = sensor.snapshot().lens_corr(1.8)
    for c in img.find_circles(threshold = 3500, x_margin = 10, y_margin = 10, r_margin = 10,
            r_min = 2, r_max = 100, r_step = 2):
        area = (c.x()-c.r(), c.y()-c.r(), 2*c.r(), 2*c.r())
        # area is the region of the detected circle, i.e. the circle's bounding rectangle
        statistics = img.get_statistics(roi=area) # pixel color statistics for that region
        print(statistics)
        # (0, 100, 0, 120, 0, 120) is a red threshold, so if the mode (the most common color)
        # of the region falls within that threshold, the circle is red.
        # l_mode(), a_mode() and b_mode() are the modes of the L, A and B channels.
        if 0<statistics.l_mode()<100 and 0<statistics.a_mode()<127 and 0<statistics.b_mode()<127: # if the circle is red
            img.draw_circle(c.x(), c.y(), c.r(), color = (255, 0, 0)) # outline the detected red circle with a red circle
        else:
            img.draw_rectangle(area, color = (255, 255, 255))
            # outline non-red circles with a white rectangle
    print("FPS %f" % clock.fps())

When I use the OpenMV at XGA to look for circles and blobs, it becomes very slow.
Is that right? FPS = 0.079.
Does that mean that looking for circles is slower than looking for blobs?

Yes, to find circles the system has to run the Hough transform on the image, among many other steps. It’s vastly more computation.

Please note you’ll get the best performance at around 320x240. When you go higher than that resolution you should keep the performance cost in mind.
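If you do need a resolution above QVGA, one way to limit the cost is to pass an ROI to find_circles() so the Hough transform only runs over the part of the frame you care about. A minimal sketch, assuming OpenMV firmware 4.x; the window size, threshold and radius limits are placeholder values:

# Sketch: run find_circles() only on a centered 320x240 window of an XGA frame.
# The window position/size, threshold and radius limits are placeholder values.
import sensor, image, time

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.XGA) # 1024x768
sensor.skip_frames(time = 2000)
sensor.set_auto_gain(False)
sensor.set_auto_whitebal(False)
clock = time.clock()

while(True):
    clock.tick()
    img = sensor.snapshot()
    # Search only a 320x240 region in the middle of the frame.
    roi = ((img.width() - 320) // 2, (img.height() - 240) // 2, 320, 240)
    img.draw_rectangle(roi, color=(0, 255, 0)) # show the search window
    for c in img.find_circles(roi, threshold = 3500, x_margin = 10, y_margin = 10,
            r_margin = 10, r_min = 2, r_max = 100, r_step = 2):
        img.draw_circle(c.x(), c.y(), c.r(), color=(255, 0, 0), thickness=3)
    print("FPS %f" % clock.fps())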

OK, roger roger.
By the way, I hope the OpenMV5 will:
1) have more I/O pins we can use,
2) have accessories (such as the LED light module, servo module and RS485 module) with no conflicting pins between them,
3) let code be run step by step under the IDE, with variables that can be watched,
4) finish find_circles within 100 ms with the OV5640 at FHD or WQXGA.

# Measure the distance
#
# This example shows off how to measure distance from the object's size in the image.
# This example in particular looks for a yellow ping pong ball.

import sensor, image, time, math

# For color tracking to work really well you should ideally be in a very, very,
# very, controlled environment where the lighting is constant...
yellow_threshold = (30, 100, 15, 127, 15, 127) # <------------ yellow, you can change it
# You may need to tweak the above settings for tracking yellow things...
# Select an area in the Framebuffer to copy the color settings.

sensor.reset() # Initialize the camera sensor.
sensor.set_pixformat(sensor.RGB565) # use RGB565.
sensor.set_framesize(sensor.FHD) #【ov5640】<-------------change here
sensor.skip_frames(time = 2000) # Let new settings take effect.
sensor.set_auto_whitebal(False) # turn this off.
clock = time.clock() # Tracks FPS.

K = 5000 # the value should be measured (calibrated) first

while(True):
    clock.tick() # Track elapsed milliseconds between snapshots().
    img = sensor.snapshot() # Take a picture and return the image.

    blobs = img.find_blobs([yellow_threshold], x_stride=2, y_stride=1)
    if len(blobs) == 1:
        # Draw a rect around the blob.
        b = blobs[0]
        img.draw_rectangle(b[0:4]) # rect
        img.draw_cross(b[5], b[6]) # cx, cy
        Lm = (b[2]+b[3])/2
        length = K/Lm
        print(length)

    #print(clock.fps()) # Note: Your OpenMV Cam runs about half as fast while
    # connected to your computer. The FPS should increase once disconnected.

The Measure-the-distance code is still broken.
Mr kwagyeman, would you mind running that code and posting the result?
I have no idea what the bug is.

  1. Yep, it has more I/O pins.
  2. I designed it such that you can have the servo shield, RS485, Ethernet, and LED module at the same time.
  3. MicroPython doesn’t support such features. Whenever they do, we could add debugging, since Qt Creator, which we use as the IDE base, has a subsystem for this.
  4. Please keep the resolution for actual image-processing work under 640x480. You have to really think about what you are trying to do when you go that high. Keep in mind that when you double the resolution you are only getting +/-1 pixel of accuracy, which may not be needed for your application, while you are quadrupling the work on the CPU. Anyway, there are some performance gains I can probably apply to find_circles() to improve the speed by 2.5X in the future. It’s not really going to get much faster until we have a processor with the Cortex-M55.

As I said, you need to set an area_threshold and pixels_threshold. Right now you are producing way too many blobs for the system to handle and it’s running out of RAM. Please keep in mind there’s a limited and much smaller heap on the system, the MicroPython heap, which is only 256KB. The 32MB of RAM is used for the frame buffer and does not store the Python object results produced by the algorithms.
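A sketch of the distance script with those arguments added (the 200/200 values are just a starting point to tune, and K still has to be calibrated):

# Sketch: the same distance-measuring loop, but with pixels_threshold /
# area_threshold (and merge) set so tiny noise blobs are filtered out before
# their results fill the 256KB MicroPython heap. The 200/200 values are
# placeholders to tune, not recommended constants.
import sensor, image, time

yellow_threshold = (30, 100, 15, 127, 15, 127) # tune for your target

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.FHD)
sensor.skip_frames(time = 2000)
sensor.set_auto_whitebal(False)
clock = time.clock()

K = 5000 # calibrate against a known distance first

while(True):
    clock.tick()
    img = sensor.snapshot()
    blobs = img.find_blobs([yellow_threshold], x_stride=2, y_stride=1,
                           pixels_threshold=200, area_threshold=200, merge=True)
    if len(blobs) == 1:
        b = blobs[0]
        img.draw_rectangle(b.rect())
        img.draw_cross(b.cx(), b.cy())
        Lm = (b.w() + b.h()) / 2
        print(K / Lm)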

Roger roger.
I use the OpenMV4 to find the location of a target with an accuracy of at least 1 mm.
Could you give me some ideas?

Ah, okay, that makes sense then to have the higher resolution.

So, there’s a smart way to do this to get speed. You need to use an image pyramid.

Basically, create a second frame buffer and store a scaled-down version of the image in that frame buffer. Then call find_blobs() on that to find the initial object locations. Once you do that you can then use find_blobs() again at the original resolution, but with the ROI parameter, to look for blobs only in the particular area where you already know one is.

This will allow you to refine the results of the operation to get more precision but without a huge speed drop.

This all assumes you can see the targets in the smaller resolution.
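A minimal sketch of that idea, assuming OpenMV firmware 4.x with the OV5640 (the 1/4 scale factor, thresholds and ROI margin below are placeholder assumptions):

# Sketch of the image-pyramid approach: coarse search on a scaled-down copy of
# the frame, then a refined search at full resolution restricted to an ROI
# around each coarse hit. Scale factor, thresholds and margin are placeholders.
import sensor, image, time

target_threshold = (30, 100, 15, 127, 15, 127) # tune for your target

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.FHD)
sensor.skip_frames(time = 2000)
sensor.set_auto_gain(False)
sensor.set_auto_whitebal(False)

SCALE = 4 # pyramid level: coarse search at 1/4 resolution
small = sensor.alloc_extra_fb(sensor.width() // SCALE, sensor.height() // SCALE, sensor.RGB565)

clock = time.clock()
while(True):
    clock.tick()
    img = sensor.snapshot()

    # Level 1: scale the frame down into the extra frame buffer and do a coarse search.
    small.draw_image(img, 0, 0, x_scale=1/SCALE, y_scale=1/SCALE)
    coarse = small.find_blobs([target_threshold], pixels_threshold=50, area_threshold=50, merge=True)

    # Level 2: refine each coarse hit on the full-resolution image, restricted to a small ROI.
    for b in coarse:
        x, y, w, h = b.rect()
        margin = 8 # grow the ROI a little so the whole blob fits inside it
        rx = max(0, x * SCALE - margin)
        ry = max(0, y * SCALE - margin)
        rw = min(img.width() - rx, w * SCALE + 2 * margin)
        rh = min(img.height() - ry, h * SCALE + 2 * margin)
        for fine in img.find_blobs([target_threshold], roi=(rx, ry, rw, rh),
                                   pixels_threshold=200, area_threshold=200, merge=True):
            img.draw_rectangle(fine.rect(), thickness=4)
            img.draw_cross(fine.cx(), fine.cy(), size=20, thickness=4)

    print(clock.fps())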

Roger that. Thank you.
If I want to wake up the OpenMV4 with an RS485 interrupt, is there an example somewhere?
I hope the OpenMV5 has a solid board and rugged reliability.