Optical Laser Scanner

Dear readers,

I have recently started working with OpenMV. I have a project in which I need to scan a box containing multiple objects that have to be identified.
One of the approaches uses the OpenMV camera in combination with a line laser. The setup is as follows:
-The camera is mounted at an angle to the horizon (pointing down into the box)
-The laser is mounted straight above the box (it projects a line)
-The camera's output shows the bottom of the box with a vertical line in the middle

The essence of this setup is to measure the shift in the position of the line (the line moves left or right wherever an object is higher than the bottom). I have filtered this line out (binary) and can add blob recognition. The problem is that I need a function that can tell me, per horizontal pixel row, where the first binary (high) pixel is on that row, or a function that can split this large blob into very small blobs so that I can plot the x position of every pixel along the vertical line. I have looked through the forum and the library, but I don't think there is a standard function for this. Does anybody know a solution? I hope I have been clear; if you have questions, just ask.

Many thanks in advance

Hi, just use the “roi” feature of find_blobs. Basically, compute a list of ROIs covering every area of the image where you want to find pixels, and then call find_blobs only on those ROIs. Each ROI (region of interest) is an (x, y, w, h) tuple, in this case (x, y, 1, h), where x is incremented per ROI, y is the same for all ROIs, and h is the maximum height you expect.

find_blobs will run ultra fast when working on a 1xH pixel area. Note that you also need to set “x_stride=1” in find_blobs.

Anyway, as for the list of ROIs: I would generate it programmatically and then call find_blobs on each entry in the list to build up the list of detections.
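
For example, a rough sketch of the idea (the threshold value and QVGA resolution here are just placeholders for your setup):

import sensor
sensor.reset()
sensor.set_framesize(sensor.QVGA)
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.skip_frames(30)

LASER_THRESHOLD = (235, 255) # placeholder grayscale threshold for the bright laser line
MAX_H = 240                  # the maximum height you expect the line to span

# One 1-pixel-wide ROI per column of the image.
rois = [(x, 0, 1, MAX_H) for x in range(320)]

while(True):
    img = sensor.snapshot()
    for roi in rois:
        for blob in img.find_blobs([LASER_THRESHOLD], roi=roi, x_stride=1,
                                    pixels_threshold=1, area_threshold=1):
            print("column %d: line found at y=%d" % (roi[0], blob.y()))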

Thanks for the reply. I’ve made a script where the find_blobs function scans every row of the screen. However, since I want an accuracy of 1 pixel, the scan was far too slow (with an ROI height of 1 or 2 pixels).
Now I have made a new script using the get_pixel function. This script is faster, but still not as fast as I would like; retrieving every x position of the laser line in one frame takes about 700 milliseconds (compared to the blob finder, which took more than 5 seconds). My question now is how to make it even faster. Is there a faster approach? Does the get_pixel function take a lot of time?
Another concern is the FPS, which drops to 1 FPS after completing the cycle a couple of times.

Here are the script and an image of the problem:

import pyb, sensor, image, math, os
sensor.reset()
sensor.set_framesize(sensor.QVGA)
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.skip_frames(30) # Let new settings take effect.
sensor.set_auto_gain(False) # must be turned off for color tracking
sensor.set_auto_whitebal(False) # must be turned off for color tracking
threshold = [235, 255]
# threshold = [255,255,0,0,0,0]
while(True):
    img = sensor.snapshot()
    img.binary([threshold])
    for i in range(4): # repeat the scan a few times per frame
        for y in range(210):
            x = 0
            # Walk right until the first white pixel of the laser line (or the row ends).
            while x < 320 and img.get_pixel(x, y) == 0:
                x += 1
            xfirstwhite = x
            # Continue until the white run ends again.
            while x < 320 and img.get_pixel(x, y) == 255:
                x += 1
            xfirstblackafterwhite = x
            # Center of the laser line on this row.
            xcenter = (xfirstwhite + (xfirstblackafterwhite - 1)) / 2
    print("Done")

Thanks in advance!

Hi,

I thought you only needed to call find_blobs on every column in the image between some min row and max row. Isn’t that the case? Not on every pixel. It shouldn’t take 5 seconds. Can you post your find_blobs code? Let me take a look at it and see what can be done.

Anyway, if you want to go faster you’ll have to edit the firmware directly in C. Will you be okay with this? It’s not really that hard.

Thanks again.
Here is the blob finder script:

import pyb, sensor, image, math
sensor.reset()
sensor.set_framesize(sensor.QVGA)
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.skip_frames(30) # Let new settings take effect.
sensor.set_auto_gain(False) # must be turned off for color tracking
sensor.set_auto_whitebal(False) # must be turned off for color tracking
threshold = [254, 255]
#threshold = [255,255,0,0,0,0]
x = 0
y = 0
w = 320
h = 2
while(True):
    img = sensor.snapshot()
    img.binary([threshold])

    # Scan one 320x2 ROI per frame and move it down the image.
    for blob in img.find_blobs([threshold], roi=[x, y, w, h], pixels_threshold=1, area_threshold=1):
        #img.draw_rectangle([x, y, w, h])
        z = int(blob.x())
        #print(z)

        if (z > 0):
            y = y + h
            #print("y:", y, "x:", z)

        if (y > 239 - h):
            y = 0
            print("Done")

I will look into the firmware, thanks again!

This code scans one row in the image per frame. So, it will take a while to scan the image. If you want it to go faster then you need to check more than 1 row per frame:

import pyb, sensor, image, math
sensor.reset()
sensor.set_framesize(sensor.QVGA)
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.skip_frames(30)
sensor.set_auto_gain(False)
sensor.set_auto_whitebal(False)
threshold=(250,255)
x=0
y=0
w=320
h=1
rows_per_frame = 1 # increase to do more rows per frame
while(True):
    img=sensor.snapshot()
    for i in range(rows_per_frame):
        blobs=img.find_blobs([threshold], roi=[x,y,w,h], x_stride=1, y_stride=1, pixels_threshold=1, area_threshold=1)
        # need to find the lowest x position
        min_x = 320
        for b in blobs:
            min_x = min(min_x, b.x())
        print("x %d" % min_x)
        # goto next line
        y += 1
        if y >= 240:
            y = 0
            print("Done")

I’m still not quite sure what you are doing, so I can’t say whether C code would help. I don’t think there’s a bottleneck in the Python code.

Thanks for the answers; I am still working on the outcome.
Now I have run into another problem.
These are the steps in the process:
1 - Take a snapshot of the background (the laser is off)
2 - Turn the laser on
3 - Take a snapshot of the background + laser
4 - Subtract these images with the image.difference function
5 - Only the laser line remains
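
In code, the steps above look roughly like this (a simplified sketch; the laser pin and the background file path are just placeholders for my actual setup):

import sensor, pyb
sensor.reset()
sensor.set_framesize(sensor.QVGA)
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.skip_frames(30)

laser = pyb.Pin("P0", pyb.Pin.OUT_PP) # placeholder: however the laser is actually switched

laser.low()                           # 1 - laser off
sensor.snapshot().save("/bg.pgm")     # snapshot of the background, saved to the SD card
laser.high()                          # 2 - laser on
sensor.skip_frames(5)                 # give the sensor a few frames to catch up
img = sensor.snapshot()               # 3 - snapshot of background + laser
img.difference("/bg.pgm")             # 4 - subtract the background
# 5 - only the laser line remains in img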

This process works correctly; the only problem is that the pictures are over-exposed. I have started playing with auto_gain and gain_ceiling, but this does not seem to affect the images (enough).
So my question is: how can I stop the camera from increasing the gain on the picture? (I want the image that the camera provides in the first frame when the script starts running, i.e. as in helloworld.py.)

PS: I will post the result soon.

Thanks in advance, I hope I am being clear.

Hi, yes… So, you can turn auto gain and auto white balance on and off. This lets you stop the gain algorithm from running, or turn it back on again. See the color tracking scripts for how to do this. The number of frames you wait before disabling them determines the outcome.
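
For example, something along these lines (borrowed from the color tracking scripts; the number of skipped frames is just a starting value):

import sensor
sensor.reset()
sensor.set_framesize(sensor.QVGA)
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.skip_frames(60)          # let auto gain/exposure settle on the scene first
sensor.set_auto_gain(False)     # then freeze the gain at its current value
sensor.set_auto_whitebal(False) # and freeze the white balance too

while(True):
    img = sensor.snapshot()     # the gain no longer changes between frames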

If you want to control the gain values directly, you can do so via register reads and writes. Please see the data sheet for the camera, which is on the product page. In the future we can make this easier, but right now we only expose low-level control.

Thanks again,
My colleague, who is a software engineer, is looking into the firmware and the registers. Now I (who am not a software engineer) have run into another problem:
I want to apply a lens correction to the image, since I need an undistorted image rather than the fisheye image.
However, there are some errors along the way:
When I just take an image with sensor.snapshot() and apply lens_corr() to it, it tells me: FB Alloc Collision!!!

I then tried to solve this myself with image.Image(copy_to_fb=True), intending to load the image into the frame buffer, but now it gives the error MemoryError: Out of Memory!!!

I don't think it's relevant, but I do have an SD card in the camera itself.
Do you know of any solution?

Thanks for the help so far!!!

Wait for the OpenMV Cam M7. It has enough RAM to do this. We’re shipping a lot of them this week. Otherwise you’ll have to lower the resolution on the M4. Email me your order number and I’ll bump your shipping priority. Squeaky wheel gets the oil, etc.
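
On the M4, something along these lines should fit in RAM (the QQVGA resolution and the lens_corr strength value are just starting points to tune for your lens):

import sensor
sensor.reset()
sensor.set_framesize(sensor.QQVGA)     # lower resolution so lens_corr has room to work on the M4
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.skip_frames(30)

while(True):
    img = sensor.snapshot()
    img.lens_corr(1.8)                 # undo the fisheye distortion; tune the strength for your lens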