Cake quality control

Hi,

I am working on a quality-control solution for cakes lying on a conveyor belt, at 3 products per second. I am checking:

  • if (any of the edges of) the cake is broken
  • if the surface is burnt
  • if the surface is not flat

An example image:
rotation.png
The cake has 6 edges and I have set up a separate camera for each edge.

The steps of the process:

  1. Locate the product and “zoom in”
  2. Check the edges of the cake
  3. Check the color of the surface (burnt or not)
  4. Check the surface layer (flat or not)

Step 1.
I am able to find the object very quickly (85 fps) by temporarily downgrading the frame size to QQQQVGA grayscale and using the find_blobs method.
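A minimal sketch of that search pass (the threshold, resolutions, and timings here are placeholders, not my tuned values):

import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)

# Search pass at 40x30: very cheap, very fast.
sensor.set_framesize(sensor.QQQQVGA)
sensor.skip_frames(time=500)
search = sensor.snapshot()

# (0, 100) is a placeholder "dark product on light belt" threshold.
blobs = search.find_blobs([(0, 100)], pixels_threshold=4)
if blobs:
    blob = max(blobs, key=lambda b: b.pixels())
    # Switch back up to the inspection resolution for the detail checks.
    sensor.set_framesize(sensor.QVGA)
    sensor.skip_frames(time=500)
    img = sensor.snapshot()
    # QQQQVGA (40x30) -> QVGA (320x240) is an 8x upscale of coordinates.
    cx, cy = blob.cx() * 8, blob.cy() * 8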

a) The orientation is not correct. I was about to call img.rotation_corr(z_rotation = blob.rotation()), but the rotation attribute of the blob is a very low number (it should be in degrees, as I understood it), so it does not really help, since the actual rotation is about 45 degrees.
sample.png
b) What is the best way to zoom in?

c) By the way, I was thinking of getting the threshold for the blob detection like this. I am not sure if it is necessary, since a predefined threshold works fine, but I guess it might help in a slightly different lighting environment:

# Split the grayscale histogram automatically instead of hard-coding:
threshold = img.get_histogram().get_threshold()
blobs = img.find_blobs([(0, threshold.value())], invert=False)

Do you think it is necessary?

Step 2. (Edges)
I was thinking of the dilate function and also the find_line_segments method, but I have not come up with a proper solution. There must be something simpler, I guess. Could you please advise on the best approach to check whether an edge is broken?

Step 3. (Color)
I guess I have to use the img.get_histogram() function, calculate a mean over the whole product surface, check if it is above a threshold, and then repeat the same process on smaller parts of the product (see the sketch after the sample image below). Please advise if there is a better approach!
Sample of a burnt surface:
burnt.png
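Something like this sketch, assuming img and the located blob survive from Step 1, and using get_statistics() for the mean (the cutoff is a made-up number to calibrate on good vs. burnt samples):

# Burnt areas are darker, so a low mean is suspicious.
BURNT_MEAN = 60
x, y, w, h = blob.rect()

if img.get_statistics(roi=(x, y, w, h)).mean() < BURNT_MEAN:
    print("whole surface looks burnt")

# Repeat on a 3x3 grid of sub-regions to catch local burnt patches.
for j in range(3):
    for i in range(3):
        sub = (x + i * w // 3, y + j * h // 3, w // 3, h // 3)
        if img.get_statistics(roi=sub).mean() < BURNT_MEAN:
            print("burnt patch at", sub)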
Step 4. (Surface)
Since flatness is a 3D property, I was testing the IR lens, also with an external IR backlight shining through the product from the bottom, but it was not successful. It is not feasible anyway because of the conveyor belt. What I have found useful are the erode function and the grayscale binary filter. Could you please suggest a better idea?
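The binary-plus-erode pass I have now looks roughly like this (the threshold range and limits are placeholders; note that both calls modify the image in place):

# (0, 80) is a placeholder range for shadowed dents; tune on samples.
img.binary([(0, 80)])   # white = candidate defect pixels
img.erode(2)            # drop specks smaller than the 5x5 kernel
defects = img.find_blobs([(128, 255)], roi=blob.rect(), pixels_threshold=10)
if defects:
    print("surface not flat:", len(defects), "defect regions")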

Question 5: Throughout the process, I try to reuse the same image (instead of taking another snapshot) as long as its quality is sufficient for the next step. Is there a way to reset to the original image? Since the product may have already moved along the conveyor belt, I would like to avoid taking another snapshot.

Thank you,
Peter

Awesome! Glad to meet someone on the forums who knows what they are doing! :)

Since the object is square, it's impossible to get an orientation automatically using the .rotation() attribute of a blob. That attribute is calculated from the x/y point distribution, and unless the object has an elongated shape there's not really any signal to look for. Anyway, the rotation is in radians, not degrees. Unfortunately, our library returns radians for some methods and degrees for others, so please read the docs carefully. Um… that said… you can try rotation correction anyway and see what happens:

import math
img.rotation_corr(z_rotation = math.degrees(blob.rotation()))

As for zooming in:

Set the resolution to QQVGA and then use find_blobs()… however, increase the X and Y stride of find_blobs() to make it go faster. The camera only has two frame rates: resolutions above QVGA produce a slower FPS than resolutions at or below QVGA. Making the resolution smaller just reduces processing; it doesn't actually change the frame speed. So:

# find_blobs() still needs your thresholds list as its first argument.
blobs = img.find_blobs([(0, threshold.value())], x_stride = 8, y_stride = 8)

Play around with the strides to find values that work. Striding is basically subsampling the image. Alternatively, you can use the image.pooled() method to create a downsampled copy of the main image if you don't want to use strides:

blobs = img.pooled(4, 4).find_blobs([(0, threshold.value())])

This creates a very low-resolution image copy that find_blobs() will run very fast on. Then you can work on the original image afterward; just remember that the blob coordinates come back in the pooled image's coordinate space, as in the sketch below.
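A sketch of that flow, with a placeholder threshold and the coordinate scale-back (4 here is the pooling factor):

POOL = 4
blobs = img.pooled(POOL, POOL).find_blobs([(0, 100)], pixels_threshold=2)
if blobs:
    b = max(blobs, key=lambda x: x.pixels())
    # Scale the rect back up by POOL before using it on the full image.
    x, y, w, h = [v * POOL for v in b.rect()]
    img.draw_rectangle((x, y, w, h))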

As for thresholding: yes, that method will work great. It automatically finds the threshold by splitting the image histogram.

As for detecting whether the edge is burnt… or broken. Mmm, this is hard. For a burnt edge, you may be able to count the pixels within the blob that pass (or fail) the color threshold test. The burning in the image you gave me isn't that bad, though, so it's hard for it to be obvious to the camera. If the images are similar enough, you can set a threshold on the number of pixels detected per cake and call the cake bad if the count is outside a limit. This is basically a test on the count of burnt pixels. Um, note that through filtering… like the adaptive thresholding parameters of the mean/median/mode filters… you can make a detector that is a lot more sensitive to color changes.
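E.g., something like this (the threshold and both limits are made-up numbers to calibrate on known-good and known-burnt cakes, and cake_blob is the blob you found in Step 1):

BURNT_THRESH = (0, 50)      # placeholder "burnt" grayscale range
MAX_BURNT_PIXELS = 200      # placeholder reject limit

burnt_blobs = img.find_blobs([BURNT_THRESH], roi=cake_blob.rect(),
                             pixels_threshold=1, merge=True)
burnt_pixels = sum(b.pixels() for b in burnt_blobs)
if burnt_pixels > MAX_BURNT_PIXELS:
    print("reject:", burnt_pixels, "burnt pixels")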

As for the edge being broken… well, I guess you can use binary methods. In the next firmware, about to come out, we've really improved all the low-level functions. You'll be able to use the top_hat() method to find the binary image edges, or you can just use the find_edges() method. Once you do that, you can threshold on the number of active pixels in the image (use find_blobs() to count them). A missing piece on the edge of the cake will make the contour longer. That said, detecting small changes is very hard; if, like, half the cake is missing, then this is easy.
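A sketch with find_edges() (the Canny thresholds and the expected count are hypothetical; you'd calibrate the count on known-good cakes):

import image

EXPECTED_EDGE_PIXELS = 1200   # hypothetical, measured on good cakes
TOLERANCE = 100               # hypothetical

img.find_edges(image.EDGE_CANNY, threshold=(50, 80))  # now a binary edge map
edge_pixels = sum(b.pixels() for b in
                  img.find_blobs([(128, 255)], merge=True))
if edge_pixels > EXPECTED_EDGE_PIXELS + TOLERANCE:
    print("edge damage suspected:", edge_pixels, "edge pixels")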

As for restoring an image: yes, you can allocate another frame buffer. See sensor.alloc_extra_fb(). This will let you create another image buffer; then you can use img.replace() to copy an image into it. See the frame differencing example scripts for more information.
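The pattern looks roughly like this (a sketch of the save-and-restore flow from those examples; the binary() call just stands in for any destructive processing):

import sensor

# One-time allocation, same size and format as the main frame buffer.
extra_fb = sensor.alloc_extra_fb(sensor.width(), sensor.height(),
                                 sensor.GRAYSCALE)

img = sensor.snapshot()
extra_fb.replace(img)      # save a pristine copy

img.binary([(0, 80)])      # ...any destructive, in-place processing...

img.replace(extra_fb)      # restore the original pixels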

Thank you so much for the ideas, I will work on them!

You have mentioned the new firmware. Is it the 2.8 release that you posted in this thread?

Could you please send an updated version where I have access to the top_hat method, plus a few lines of code for using it?

Thank you,
Peter

Hi, the new firmware will be released this weekend. Please wait until then. In the meantime, you can find the method name and call values on our GitHub.