Why is Grayscale image so grainy

Finally got around to installing the global shutter module on my H7. It seems to function, but why the horizontal lines in the image? Is it not seated correctly, or is something else wrong?



That looks to me like it's just a very low-resolution image.
Can you post your code/script?


Multi Color Blob Tracking Example

This example shows off multi color blob tracking using the OpenMV Cam.

(Python snapshot examples: sensor.snapshot Python Examples - HotExamples)

```python
import sensor, image, time, math

# Color Tracking Thresholds (L Min, L Max, A Min, A Max, B Min, B Max)
# L is lightness (0:100), A is Red (+)/Green (-), B is Yellow (+)/Blue (-).
# Ranges: L 0:100, A or B -127:127 (or just +/- 100).
# The below thresholds track in general red/green things. You may wish to tune them...
thresholds = [(10, 48, -10, 10, -10, 10),  # generic_red_thresholds
              (10, 48, -10, 10, -10, 10),  # generic_green_thresholds
              (10, 48, -10, 10, -10, 10)]  # generic_blue_thresholds
# You may pass up to 16 thresholds above. However, it's not really possible to segment any
# scene with 16 thresholds before color thresholds start to overlap heavily.

sensor.reset()                       # initialize the sensor (was missing from the paste)
sensor.set_pixformat(sensor.RGB565)  # LAB color thresholds need RGB565 (the global shutter module is mono)
sensor.set_framesize(sensor.QVGA)    # modulates min blob size
sensor.set_windowing((240, 240))     # Set 240x240 window; can apply offsets too

sensor.set_auto_gain(False)          # must be turned off for color tracking
sensor.set_auto_whitebal(False)      # must be turned off for color tracking; no grayscale use

# exposure time is 0 to 10000 us
sensor.set_auto_exposure(False, exposure_us=300)  # exposure_us = int(current_exposure_time_in_microseconds * EXPOSURE_TIME_SCALE)

# gain can be limited by this setting; values 2/4/8/16/32/64/128
# see NXP AN13243 - this may be a 'clone' but a very good one!
# The gain dB ceiling maxes out at about 24 dB for the OV7725 sensor.
sensor.set_auto_gain(False, gain_db=4)  # gain_db_ceiling=16.0  # Default gain.

sensor.set_vflip(True)               # Change this to False to undo the flip.
sensor.set_hmirror(True)             # Change this to False to undo the mirror.

sensor.set_brightness(3)             # range is -3 : +3
sensor.set_colorbar(0)               # Colorbar display in FrameBuffer
sensor.skip_frames(time=300)         # Use this after every register change (default is 10 frames / 300 ms)
clock = time.clock()

# print some register & configuration settings
print("New exposure == %d" % sensor.get_exposure_us())
print("Sensor ID == %d" % sensor.get_id())
print("Sensor Height == %d" % sensor.height())
print("Sensor Width == %d" % sensor.width())

# sensor.get_rgb_gain_db() returns a tuple (float, float, float);
# it does not work with the MT9M114 sensor, and is not on the H7 R2.
# Setting the same values back is how white balance is maintained.
# (What does "sensor.get_rgb_gain_db()" do exactly?)

# Only blobs with more pixels than "pixels_threshold" and more area than "area_threshold" are
# returned by "find_blobs" below. Change "pixels_threshold" and "area_threshold" if you change the
# camera resolution. Don't set "merge=True" because that will merge blobs which we don't want here.

event_count = 0  # actually a blob counter
clock.tick()     # needed before clock.fps() (was missing from the paste)
img = sensor.snapshot()
for blob in img.find_blobs(thresholds, pixels_threshold=10, area_threshold=70):
    event_count = event_count + 1
    # These values depend on the blob not being circular - otherwise they will be shaky.
    if blob.elongation() > 0.5:
        img.draw_edges(blob.min_corners(), color=(55, 255, 10))
        # img.draw_line(blob.major_axis_line(), color=(255, 255, 0))
        # img.draw_line(blob.minor_axis_line(), color=(255, 255, 0))
    # These values are stable all the time.
    # img.draw_rectangle(blob.rect())
    img.draw_cross(blob.cx(), blob.cy())
    # Note - the blob rotation is unique to 0-180 only.
    # img.draw_keypoints([(blob.cx(), blob.cy(), int(math.degrees(blob.rotation())))], size=10)
    print("FPS == %f" % clock.fps())  # %d is for integers
    print(blob.cx(), blob.cy())
print("BLOBs == %d" % event_count)  # Use %f for floats
# print(event_count)
```
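As a side note, here's a minimal pure-Python sketch of what a 6-tuple LAB threshold means (a simplification of the per-pixel test `find_blobs` applies on-camera; the function name is made up for illustration):

```python
def in_threshold(lab, threshold):
    """Return True if an (L, A, B) pixel lies inside a
    (l_min, l_max, a_min, a_max, b_min, b_max) threshold tuple."""
    l, a, b = lab
    l_min, l_max, a_min, a_max, b_min, b_max = threshold
    return l_min <= l <= l_max and a_min <= a <= a_max and b_min <= b <= b_max

threshold = (10, 48, -10, 10, -10, 10)      # same shape as the thresholds list above
print(in_threshold((30, 0, 0), threshold))  # True: mid-gray pixel inside all three ranges
print(in_threshold((80, 0, 0), threshold))  # False: L is above l_max
```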

Please reformat your post using the code formatting tags, this is almost impossible to read!

This line sets the resolution to 1/4 of VGA, i.e. 320 x 240 pixels: `sensor.set_framesize(sensor.QVGA)`

You are not showing the full picture in your initial post, but considering you are setting it to QVGA, the part I can see seems about right. You can use the global shutter module at full VGA resolution since it's grayscale, so the frame still fits into memory even on the H7; the quality should then get better (though FPS may go down).
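To put numbers on that (plain arithmetic, nothing OpenMV-specific):

```python
def frame_bytes(width, height, bytes_per_pixel):
    # raw, uncompressed frame-buffer size
    return width * height * bytes_per_pixel

print(frame_bytes(320, 240, 2))  # QVGA RGB565: 153600 bytes
print(frame_bytes(640, 480, 1))  # VGA GRAYSCALE: 307200 bytes (~300 KB, fits on the H7)
print(frame_bytes(640, 480, 2))  # VGA RGB565: 614400 bytes
```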

Please keep the focus. Are those horizontal lines in the picture sample an artifact of socketing, or is something else wrong with the sensor?

Seems I got on the wrong track…

You can always try to re-seat it (have you?) to see if that changes anything.

I'm getting similar lines (not static, more flickering) when I use a higher gain on the sensor. So far I have suspected noise from the USB, the DC-DC converter, or the MCU as the reason behind it, but have not investigated much further. For me it's not an issue, since I keep gain low to keep imager noise low, but you could try reducing the gain to check whether that changes anything.
Having a lower gain but a longer exposure time generally results in better image quality anyway.
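As a rough rule of thumb: sensor gain in dB is 20*log10 of the linear factor, so cutting gain by N dB needs about 10^(N/20) times the exposure to keep brightness constant. This is a first-order approximation that ignores sensor non-linearities; the numbers below are illustrative:

```python
def gain_db_to_linear(db):
    # 20 dB of gain = 10x linear amplification
    return 10 ** (db / 20.0)

def exposure_for_lower_gain(exposure_us, old_gain_db, new_gain_db):
    # keep (linear gain * exposure) roughly constant
    return exposure_us * gain_db_to_linear(old_gain_db - new_gain_db)

print(round(gain_db_to_linear(24), 1))             # 15.8: the ~24 dB OV7725 ceiling is ~16x
print(round(exposure_for_lower_gain(300, 18, 6)))  # 1194: 12 dB less gain -> ~4x the exposure
```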

The H7 isn't the best-designed board for image quality; we fixed this on the upcoming RT1060. There's just a lot of noise on the 3.3V supply that couples into the sensor. As such, it's not going to be perfect. However, it shouldn't affect the usability of the system. If you want better image quality, increase the exposure.
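A sketch of that advice using the OpenMV `sensor` API (the exact gain and exposure values here are illustrative, not recommendations; runs on-camera only):

```python
import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)              # global shutter module is mono
sensor.set_framesize(sensor.VGA)                    # full resolution, as discussed above
sensor.set_auto_gain(False, gain_db=0)              # keep analog gain low...
sensor.set_auto_exposure(False, exposure_us=20000)  # ...and expose longer instead
sensor.skip_frames(time=300)                        # let the settings take effect
img = sensor.snapshot()
```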