Morse Code Project

Just posting something that’s been on my desktop for far too long.

This application uses the OpenMV Cam to decode a bit pattern from a blinking array of LEDs and prints the decoded characters to the attached LCD shield.

# Untitled - By: kagyeman - Sun Nov 6 2016

import sensor, lcd

MEAN_TRIGGER_THRESHOLD = 170 # Brightness (upper quartile) above which the LED is considered on.
SAMPLE_THRESHOLD_COUNT = 10  # A pulse lasting at least this many frames decodes as a 1, a shorter one as a 0.

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA2) # 128x160 - matches the LCD shield.
sensor.skip_frames()                # Let the camera settle.
sensor.set_gain_ctrl(False)         # Lock gain so brightness readings stay stable.
sensor.set_whitebal(False)          # Lock white balance too.
lcd.init()

def get_bit():
    # Wait for the LED to turn on, then count how many frames it stays on.
    # A long pulse (>= SAMPLE_THRESHOLD_COUNT frames) is a 1, a short pulse is a 0.
    while(True):
        if(sensor.snapshot().statistics()[7] >= MEAN_TRIGGER_THRESHOLD): # [7] is the upper quartile.
            count = 0
            while(sensor.snapshot().statistics()[7] >= MEAN_TRIGGER_THRESHOLD):
                count += 1
            return count >= SAMPLE_THRESHOLD_COUNT

def get_byte():
    # Assemble a byte from eight pulses, most-significant bit first.
    byte = 0
    for shift in range(7, -1, -1):
        byte |= get_bit() << shift
        print("building %d" % byte)
    return byte

# Simple text terminal rendered onto the 128x160 LCD.
x_char_size = 8  # Font cell width in pixels.
y_char_size = 10 # Font cell height in pixels.
x_chars = 128//x_char_size
y_chars = 160//y_char_size
term_array = [[32 for x in range(x_chars)] for y in range(y_chars)] # Filled with spaces.
x_pos = 0
y_pos = 0
last_char = 0

def copy_row(rowDst, rowSrc):
    global term_array
    global x_pos
    global y_pos
    global last_char
    for i in range(x_chars):
        term_array[rowDst][i] = term_array[rowSrc][i]

def scroll(amount):
    global term_array
    global x_pos
    global y_pos
    global last_char
    if(amount >= 1):
        for i in range(y_chars-1):
            copy_row(i+0,i+1)
    elif(amount <= -1):
        for i in range(y_chars-1):
            copy_row(y_chars-i-1,y_chars-i-2)

def print_num(char):
    global term_array
    global x_pos
    global y_pos
    global last_char
    if(char == 8): # backspace
        term_array[y_pos][x_pos] = 32
        x_pos -= 1
        if(x_pos < 0):
            x_pos = x_chars-1
            y_pos -= 1
            if(y_pos < 0):
                y_pos += 1
                scroll(-1)
    elif(char == 9): # tab
        for i in range(4 - (x_pos % 4)):
            term_array[y_pos][x_pos] = 32
            x_pos += 1
            if(x_pos > (x_chars-1)):
                x_pos = 0
                y_pos += 1
                if(y_pos > (y_chars-1)):
                    y_pos -= 1
                    scroll(1)
    elif(char == 10): # newline
        x_pos = 0
        y_pos += 1
        if(y_pos > (y_chars-1)):
            y_pos -= 1
            scroll(1)
    elif(char == 13): # cr
        if(last_char != 10):
            x_pos = 0
            y_pos += 1
            if(y_pos > (y_chars-1)):
                y_pos -= 1
                scroll(1)
    elif((char >= 32) and (char <= 126)): # char
        term_array[y_pos][x_pos] = char
        x_pos += 1
        if(x_pos > (x_chars-1)):
            x_pos = 0
            y_pos += 1
            if(y_pos > (y_chars-1)):
                y_pos -= 1
                scroll(1)
    last_char = char

def print_char(char):
    print_num(ord(char))

def print_string(text):
    for i in range(len(text)):
        print_char(text[i])

def render_term():
    img = sensor.snapshot()
    img.xor(img) # XOR the image with itself to clear it to black.
    for i in range(y_chars):
        for j in range(x_chars):
            img.draw_string(j*x_char_size, i*y_char_size, chr(term_array[i][j]))
    lcd.display(img)

print_string("Kate & Kwab Term")
render_term()

while(True):
    byte = get_byte()
    print("this is the byte %d" % byte)
    print_num(byte)
    render_term()
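
The sending side isn't included here, but a minimal sketch of a matching transmitter could look like the following (hypothetical MicroPython; the pin number and timings are placeholders - the only requirement is that a 1 stays lit for noticeably more camera frames than SAMPLE_THRESHOLD_COUNT, a 0 for fewer, with a dark gap between bits).

# Hypothetical transmitter sketch (MicroPython). Pin and timings are placeholders.
import time
from machine import Pin

led = Pin(2, Pin.OUT)  # Whatever pin drives the LED array.

LONG_MS  = 500         # "1" pulse - long enough to span SAMPLE_THRESHOLD_COUNT frames.
SHORT_MS = 100         # "0" pulse - short enough to span fewer frames.
GAP_MS   = 300         # Dark gap so the receiver sees each pulse separately.

def send_byte(byte):
    # Send most-significant bit first, matching get_byte() on the camera.
    for shift in range(7, -1, -1):
        bit = (byte >> shift) & 1
        led.value(1)
        time.sleep_ms(LONG_MS if bit else SHORT_MS)
        led.value(0)
        time.sleep_ms(GAP_MS)

def send_string(text):
    for c in text:
        send_byte(ord(c))

send_string("Hello\n")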

Hi, this is a very interesting project.
I would like to have some more details about a piece of the code, starting for instance from this one:

if(sensor.snapshot().statistics()[7] >= MEAN_TRIGGER_THRESHOLD):
    count = 0
    while(sensor.snapshot().statistics()[7] >= MEAN_TRIGGER_THRESHOLD):
        count += 1
    return count >= SAMPLE_THRESHOLD_COUNT

If I understand the documentation correctly, sensor.snapshot().statistics()[7] returns the upper quartile of the image object.
The questions are:
A) What do the upper and lower quartile of an image represent, and what is the main purpose of this function?
B) Why is the upper quartile compared with a threshold value?
C) Where can I find documentation that helps a non-expert like me understand most of the functions used with OpenMV in more detail?
Thanks.

I pulled most of this from here:

See the external links on the bottom:

Basically, instead of looking at the mean, which is heavily influenced by outliers, you look at percentage-based values like the median, lower quartile, and upper quartile. These are found by looking at what percentage of the samples fall below a given value: the lower quartile is the value below which 25% of the samples fall, the median is the 50% point, and the upper quartile is the 75% point.
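
As a quick worked example (plain Python, made-up pixel values) of how much a single outlier moves the mean while barely touching the quartiles:

# Toy example: the median and quartiles barely move when a hot pixel appears, the mean does.
samples = [12, 14, 15, 15, 16, 17, 18, 20, 21, 250]  # one hot pixel at 250

def percentile(data, p):
    # Simple nearest-rank percentile, good enough for illustration.
    s = sorted(data)
    return s[min(len(s) - 1, int(p * len(s)))]

print(sum(samples) / len(samples))   # mean ~39.8 - dragged up by the outlier
print(percentile(samples, 0.25))     # lower quartile -> 15
print(percentile(samples, 0.50))     # median -> 17
print(percentile(samples, 0.75))     # upper quartile -> 20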

Anyway, since I wrote that I've made the stats package on the camera a lot better. Check out this:

https://github.com/openmv/openmv/blob/master/usr/examples/10-Color-Tracking/automatic_rgb565_color_tracking.py#L27

By getting the 1st percentile and the 99th percentile you basically get the min and max of a color area without outliers affecting the results, which lets you pick really good color-tracking bounds.
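
If I'm reading the current firmware docs right, the core of that example boils down to something like this (a sketch only, with a made-up ROI - check the linked script for the exact code):

# Sketch of percentile-based LAB thresholding, roughly what the linked example does.
import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames()

r = (140, 100, 40, 40)  # ROI assumed to cover the object you want to track.
img = sensor.snapshot()

# Take the 1% and 99% percentiles of the ROI's histogram and use them as
# outlier-free min/max LAB bounds for find_blobs().
hist = img.get_histogram(roi=r)
lo = hist.get_percentile(0.01)
hi = hist.get_percentile(0.99)
threshold = (lo.l_value(), hi.l_value(),
             lo.a_value(), hi.a_value(),
             lo.b_value(), hi.b_value())
print(threshold)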

So, in summary, all of this is done to get around outliers, which ruin your results if you just use the mean, min, and max. By using the percentage-based statistics you get much better and more stable results.

Clear! Thanks for the in-depth explanation.