SD Card Problem Migrating to H7

Everything works seamlessly after pulling the code over to the H7, except… I can open a file and add frames with no complaint, but when I try to close the file it crashes. Looking at it later, zero bytes have been written to the video file on the SD card. This all works neatly on the M7. Is there anything I should know that’s different? Here’s a snippet of the code:

def openVideo():
    global fileSeq
    fileSeq += 1                #change filename
    if fileSeq > 99: fileSeq = 1
    filename1 = "MvLineFile%i.mp4" % (fileSeq)
    global m
    m = mjpeg.Mjpeg(filename1)  #open next file
    global fileOpen
    fileOpen = True

def closeVideo():
    global m
    m.close(clock.fps())        #make sure file written to sd
    global fileOpen
    fileOpen = False
    
if fileOpen == True:
    m.add_frame(img)            #capture image with lines and text
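
One quick way to confirm the zero-byte result from the board itself is to stat the file from the REPL (a rough sketch, assuming uos.stat is available on this firmware; the filename is just the first one in my sequence):

import uos                          # "os" also works on newer firmware

def video_size(seq):
    name = "MvLineFile%i.mp4" % (seq)
    try:
        return uos.stat(name)[6]    # field 6 is the file size in bytes
    except OSError:
        return -1                   # file was never created

print(video_size(1))                # prints 0 after the failed close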

Can you attach the full script? Also, does the example mjpeg.py work?

Hi, can you give a full sample script that causes the issue? If it works on the M7 and not the H7 then it’s a bug. Ibrahim will be able to fix it.

Code follows. Thanks for the quick response, Kwabena. It needs an Arduino attached to supply the BGN or END signal, but maybe you can see how I’m using the SD card anyway.

# RedYellowH7Sd -

#94 100 -27 1 20 127
#YELLOW_THRESHOLD = (94, 100, -40, 1, 20, 127)  #Kwabena yellow line
YELLOW_THRESHOLD = (75, 100, -27, 1, 20, 127)  #Kwabena yellow line
#YELLOW_THRESHOLD = (90, 100, -20, 127, 40, 127)  #Dave yellow line
#YELLOW_THRESHOLD = (90, 100, -20, -10, 55, 127)  #Dave yellow line
RED_THRESHOLD = [(50, 65, 40, 80, 15, 127)]   # generic_red_threshold
GLARE_THRESHOLD = (90, 100, -101, 0, 30, 50) #Dave garage
MAG_THRESHOLD = 5
Top_THRESHOLD = 4
RED_SCREEN = (0xFF, 0x00, 0x00)

import sensor, image, time, math, pyb, ustruct, mjpeg
from pyb import UART

uart = UART(3, 115200, timeout_char=2000)
enable_lens_corr = True # turn on for straighter lines...

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QQVGA)  #160 by 120
#roi_sample = (left,top,right,bottom)
roi_top = (0,0,160,60)

roi_bottom = (20,30,140,110)

sensor.set_vflip(True)
sensor.set_hmirror(True)
#sensor.skip_frames(time = 0)  # per Kwabena
sensor.skip_frames(time = 200)
#sensor.set_auto_gain(True)
sensor.set_auto_gain(False)
sensor.set_auto_whitebal(False) #not the default
clock = time.clock()
fileSeq = 0
fileOpen = False
time = [0]
start = pyb.millis()
carSpeed = 0
workStr = ''
filename1 = "MvLineFile%i.mp4" % (0)
m = ""


def valmap(x, in_min, in_max, out_min, out_max):
    return int((x-in_min) * (out_max-out_min) / (in_max-in_min) + out_min)

def openVideo():
    global fileSeq
    fileSeq += 1                #change filename
    if fileSeq > 99: fileSeq = 1
    filename1 = "MvLineFile%i.mp4" % (fileSeq)
    global m
    m = mjpeg.Mjpeg(filename1)  #open next file
    global fileOpen
    fileOpen = True

def closeVideo():
    global m
    m.close(clock.fps())        #make sure file written to sd
    global fileOpen
    fileOpen = False

while True:
    clock.tick()
    img = sensor.snapshot().histeq()
    if enable_lens_corr: img.lens_corr(1.8) # for 2.8mm lens...seems to help


#  first look to see if there's a stop sign in front of us
#  ? use roi to look only at center?
#  ? seems like we dont need area_threshold if target is octagon ..area_threshold=50,
#  ? x/y stride seems like it could apply to us with solid color target

    stopSize = 0  # null value for stop sign
    for blob in img.find_blobs(RED_THRESHOLD, pixels_threshold=25, x_stride=4, y_stride=4, merge=True):
        img.draw_rectangle(blob.rect(),thickness=2)
        img.draw_cross(blob.cx(), blob.cy(),thickness=2)
        print (blob.w())  #width
        stopSize = blob.w()

    #  img.binary([GLARE_THRESHOLD],zero=True) # Chg non yellow to black
    bAngle = 99 #null value for bottom angle
    tAngle = 99 #null value for top angle
    line = img.get_regression([YELLOW_THRESHOLD], roi=roi_bottom, robust=True)
    if line and (line.magnitude() >= MAG_THRESHOLD) and line.y1()<40:
      # good line in bottom roi for steering
        img.draw_line(line.line(),thickness=2, color=(0,127,127))
        if line.y1() > line.y2():  #line is upside down
            bAngle = valmap(line.x2(), 0, 160, -45, 45) # x2 is the top
        else:   #11/24/18 start at 15 to bias toward left
            bAngle = valmap(line.x1(), 0, 160, -45, 45) # angle
    line = img.get_regression([YELLOW_THRESHOLD], roi=roi_top, robust=True)
    if line and (line.magnitude() >= Top_THRESHOLD) :
        #good line in top roi for slowing down speed
        img.draw_line(line.line(), color=(0,127,127),thickness=2)
        if line.y1() > line.y2():  #line is upside down
            tAngle = valmap(line.x2(), 0, 160, -45, 45) # x2 is the top
        else:
            tAngle = valmap(line.x1(), 0, 160, -45, 45) # angle
        send_string = "b=%d t=%d s=%d" % (bAngle, tAngle, stopSize)
        # send bottom angle for steering and top angle for look-ahead action
    else:
        send_string = "b=%d s=%d" % (bAngle, stopSize)  #only send bottom angle

    uart.write(send_string)
    uart.write("\n")    #send to arduino on serial

    if uart.any() > 0:
        workStr = uart.readline().decode() #read into string
        workStr = workStr[:-2]            # chop off cr lf
        if workStr == 'BGN' and fileOpen == False:  #look for control chars
            openVideo()
            start = pyb.millis()        # start clock over
        elif workStr == 'END' and fileOpen == True:
            closeVideo()
            start = pyb.millis()        # start clock over
        elif workStr == 'BBB':
            img.draw_string(0, 40, "BRAKES",color = RED_SCREEN, mono_space=False)
        else:
            if workStr.isdigit():         #else update carSpeed for display
                carSpeed = int(workStr)

    img.draw_string(0, 0, "ms: %i"%(pyb.elapsed_millis(start)),color = RED_SCREEN,mono_space=False)
    img.draw_string(0, 10, "top: %i"%(tAngle),color = RED_SCREEN,mono_space=False)
    img.draw_string(0, 20, "bot: %i"%(bAngle),color = RED_SCREEN,mono_space=False)
    img.draw_string(0, 30, "spd: %i"%(carSpeed),color = RED_SCREEN,mono_space=False)

    print (clock.fps(), send_string)

    if fileOpen == True:
        m.add_frame(img)            #capture image with lines and text

I was thinking the problem might stem from the fact that with the H7 I’m doubling the resolution (4x pixels), but I just ran a test on the H7 using the code from the M7 (80x60) and it failed in the same circumstance, i.e. when I tried to close the SD video file.

Yes, I can reproduce the bug with the mjpeg.py example; I’ll debug it and get back to you.

I tried the mjpeg.py example and it fails too. It starts up with the message, runs for a few seconds (200 frames, I assume) and then dies. After that I have to power-cycle the cam to get the USB to connect again.
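
For context, the stock mjpeg.py example is essentially the following (reproduced as a sketch from memory, so details like the LED handling and frame count may differ slightly from the shipped version):

import sensor, time, mjpeg, pyb

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time = 2000)

pyb.LED(3).on()                     # blue LED on while recording
m = mjpeg.Mjpeg("example.mjpeg")    # open the file on the SD card

clock = time.clock()
for i in range(200):                # record 200 frames
    clock.tick()
    m.add_frame(sensor.snapshot())
    print(clock.fps())

m.close(clock.fps())                # this is the point where it dies
pyb.LED(3).off()
print("Done! Reset the camera to see the saved recording.")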

Our messages are crossing. OK. Thanks for looking into it. You guys are the greatest!

I don’t know if this is relevant, but I tried recording a GIF on the H7 and it works fine. So maybe it’s more of a JPEG issue than an SD one.
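
For comparison, the GIF path uses the same open/add_frame/close pattern (a sketch along the lines of the stock gif.py example; the frame count and delay here are just illustrative):

import sensor, time, gif

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time = 2000)

g = gif.Gif("example.gif", loop=True)       # open the GIF on the SD card

clock = time.clock()
for i in range(100):                        # record 100 frames
    clock.tick()
    g.add_frame(sensor.snapshot(), delay=10)  # delay is in units of 10 ms
    print(clock.fps())

g.close()                                   # completes fine on the H7
print("Done.")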

Hi, this is a bug in FatFs’s exFAT implementation. I enabled exFAT recently because it’s faster than FAT, especially with large SD cards (> 32GB), but it seems broken. Your M7 probably works because it doesn’t have the firmware that enables exFAT. The latest FatFs release fixes this issue, but I can’t update FatFs right now because MicroPython has changed it a lot. So I’ll just disable exFAT for now; I’ve created an issue, and I’ll re-enable exFAT once FatFs is updated upstream.
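
If you want to check whether a card falls into the large (> 32GB) range that typically ships formatted as exFAT, something like this from the REPL reports its size (a sketch; the tuple layout is the standard MicroPython statvfs one, assuming the SD card is mounted at the root as it normally is when inserted):

import uos

st = uos.statvfs("/")
total_mb = st[1] * st[2] // (1024 * 1024)   # f_frsize * f_blocks
free_mb  = st[1] * st[3] // (1024 * 1024)   # f_frsize * f_bfree
print("Card: %d MB total, %d MB free" % (total_mb, free_mb))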

It sounds like you’re saying that I should re-load the firmware on the H7. Is there a special version I should use?

I got the firmware from the post on the “other” forum thread, Use FLIR Lepton and Camera toghether on one module - OpenMV Products - OpenMV Forums. I loaded it and it solved the SD card problem perfectly. Thank you.

Any older release should work; I’ll be doing a new release with exFAT disabled soon.