Improving segmentation from video feed

Apologies for posting so many threads, but since they're on different topics, I figured this needed a new thread.

I managed to detect, crop, and successfully segment a license plate image using a Haar cascade classifier. However, the code runs in a loop, so it keeps detecting and segmenting images/characters. The problem I'm facing is that there's no way to ensure that the segmentation is good (the Haar cascade detection is usually not the issue).
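For context, the detect-and-crop step is essentially the standard OpenMV Haar cascade flow; the cascade path, stage count, and thresholds below are placeholders, not my exact values:

import sensor, image

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

# Custom plate cascade loaded from the SD card (placeholder path and stage count)
plate_cascade = image.HaarCascade("/plate.cascade", stages=25)

while True:
    img = sensor.snapshot()
    # Each detection is an (x, y, w, h) bounding box
    for r in img.find_features(plate_cascade, threshold=0.75, scale_factor=1.25):
        crop = img.copy(roi=r)  # cropped plate that then goes into segmentation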

My license plate has 7 characters, but the segmentation sometimes finds only 3 and sometimes 8 (of which 2-3 are repeats). Is there a way to improve this process?

My first thought was to use find_rects() instead of the Haar cascade and set a high enough threshold that the license plate is only detected when it's close enough. I have erosion and dilation in my code before finding blobs, but this is rather 'image specific': depending on the image captured, the same steps might not work.
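For reference, a rough sketch of the find_rects() idea on an OpenMV image; the threshold value is only a starting point to tune, not a known-good number:

img = sensor.snapshot()

# find_rects() drops rectangles whose edge magnitude is below the threshold,
# so a high value keeps only strong, close-up plate candidates.
for r in img.find_rects(threshold=40000):
    img.draw_rectangle(r.rect(), color=255)
    plate = img.copy(roi=r.rect())  # crop the candidate plate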

Also, how would I escape the for loop once I've captured those 7 characters? Would a break statement suffice, or should I count the number of saved images? This is the relevant part of my loop:

if temp_h > temp_w:  # keep only boxes taller than they are wide (character-shaped)
    crop2.save('trial-%d' % i, roi=(temp_x, temp_y, temp_w, temp_h))
    utime.sleep_ms(500)  # needs "import utime" at the top of the script

I'd count the characters in a loop and break when done. Regarding dilate and erode… yeah, it's not going to work all the time. If you want better accuracy, you have to switch to using CNNs.
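Something like this is what I mean by counting and breaking (untested sketch; segment_plate() is a made-up placeholder for whatever your blob/segmentation step already returns, and crop2 is the cropped plate image from your snippet above):

import utime

EXPECTED_CHARS = 7

while True:
    # Hypothetical helper returning (x, y, w, h) boxes; keep tall, character-shaped ones
    char_rois = [r for r in segment_plate() if r[3] > r[2]]
    if len(char_rois) != EXPECTED_CHARS:
        continue  # bad segmentation on this frame, try the next one
    for i, (temp_x, temp_y, temp_w, temp_h) in enumerate(char_rois):
        crop2.save('trial-%d' % i, roi=(temp_x, temp_y, temp_w, temp_h))
        utime.sleep_ms(500)
    break  # exactly 7 characters saved, leave the capture loop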