Hi,
I am working on a quality control solution for cakes lying on a conveyor belt, moving at 3 products per second. I am checking:
- if (any of the edges of) the cake is broken
- if the surface is burnt
- if the surface is not flat
An example image:
The cake has 6 edges and I have set up a separate camera for each edge.
The steps of the process:
- Locate the product and “zoom in”
- Check the edges of the cake
- Check the color of the surface (burnt or not)
- Check the surface layer (flat or not)
Step 1.
I am able to find the object very quickly (85 fps) by temporarily downgrading the frame size to QQQQVGA grayscale and using the find_blobs method.
a) The orientation is not correct. I was about to do img.rotation_corr(z_rotation=blob.rotation()), but the rotation value of the blob is a very low number (I understood it should be in degrees), so it does not really help, since the actual rotation is about 45 degrees.
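If I read the OpenMV docs correctly, blob.rotation() returns radians (0..π) while rotation_corr() expects degrees, which would explain the very low number. A minimal sketch of the conversion I have in mind (the π/4 input is a made-up stand-in for a real blob.rotation() value):

```python
import math

def blob_rotation_degrees(rotation_rad):
    # blob.rotation() appears to return radians (0..pi);
    # rotation_corr() wants degrees, so convert before correcting.
    return math.degrees(rotation_rad)

# Example: a cake rotated ~45 degrees reports roughly pi/4 radians.
rot = blob_rotation_degrees(math.pi / 4)

# On the camera this would become something like:
# img.rotation_corr(z_rotation=blob_rotation_degrees(blob.rotation()))
```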
b) What is the best way to zoom in?
c) BTW, I was thinking of getting the threshold for the blob detection like this. I am not sure it is necessary, since a predefined threshold works fine, but I guess it might help in a slightly different lighting environment:
threshold = img.get_histogram().get_threshold()
blobs = img.find_blobs([(0, threshold.value())], invert=False)
Do you think it is necessary?
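To convince myself what get_histogram().get_threshold() buys me, I mocked up the underlying idea (an Otsu-style split maximizing between-class variance, if I understand the docs correctly) in plain Python on a fake pixel list; the histogram here is a stand-in, not the camera API:

```python
def otsu_threshold(pixels, levels=256):
    # Build a grayscale histogram from a flat list of pixel values.
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_bg, weight_bg = 0.0, 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        weight_bg += hist[t]
        if weight_bg == 0:
            continue
        weight_fg = total - weight_bg
        if weight_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / weight_bg
        mean_fg = (sum_all - sum_bg) / weight_fg
        # Between-class variance; the split maximizing it wins.
        var = weight_bg * weight_fg * (mean_bg - mean_fg) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Dark belt background (~20) vs bright cake surface (~200):
pixels = [20] * 50 + [200] * 50
t = otsu_threshold(pixels)
```

On the camera the result would then feed the same call as above, e.g. blobs = img.find_blobs([(0, t)]), so the threshold adapts if the lighting drifts.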
Step 2. (Edges)
I was thinking of the dilate function and also the find_line_segments method, but I have not come up with a proper solution. I guess there must be something simpler. Could you please advise on the best approach to check whether an edge is broken?
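For reference, the kind of bookkeeping I had in mind after the line-segment step, reduced to plain Python: project the detected edge into a 1-D profile of per-column distances from the expected straight edge, and treat a sustained run of large deviations as a chip. The profile values and tolerances below are invented, not real measurements:

```python
def edge_broken(profile, tol=3, min_run=4):
    # profile: per-column distance (in pixels) of the detected edge
    # from the expected straight edge. A chipped edge shows up as a
    # run of at least `min_run` samples deviating more than `tol`.
    run = 0
    for d in profile:
        if abs(d) > tol:
            run += 1
            if run >= min_run:
                return True
        else:
            run = 0
    return False

intact = [0, 1, -1, 0, 1, 0, 0, 1]
chipped = [0, 1, 0, 8, 9, 10, 9, 1]
```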
Step 3. (Color)
I guess I have to use the img.get_histogram() function, calculate the mean over the whole product surface, check whether it is above a threshold, and then repeat the same process on smaller parts of the product. Please advise if there is a better approach!
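The whole-surface-plus-tiles mean check I describe could look like this; I sketched it on a nested list standing in for the grayscale ROI, with a burnt patch being darker than the rest (the 60-vs-180 values are invented):

```python
def burnt_tiles(img, tile, dark_thresh):
    # img: 2-D list of grayscale values; split into tile x tile cells
    # and flag any cell whose mean brightness falls below dark_thresh.
    h, w = len(img), len(img[0])
    flagged = []
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            vals = [img[j][i]
                    for j in range(y, min(y + tile, h))
                    for i in range(x, min(x + tile, w))]
            if sum(vals) / len(vals) < dark_thresh:
                flagged.append((x, y))
    return flagged

# 4x4 surface, mostly bright (180) with one dark "burnt" 2x2 corner (60).
surface = [[180, 180, 180, 180],
           [180, 180, 180, 180],
           [60,  60,  180, 180],
           [60,  60,  180, 180]]
bad = burnt_tiles(surface, tile=2, dark_thresh=120)
```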
Sample of a burnt surface:
Step 4. (Surface)
Since this is a 3D problem, I tested an IR lens, also with an external IR backlight shining through the product from below, but it was not successful. It is not really feasible anyway because of the conveyor belt. What I have found useful are the erode function and the grayscale binary filter. Could you please suggest a better idea?
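What I mean by the erode plus binary filter combination, in plain Python on a tiny binary grid: after thresholding, one erosion pass removes isolated specks, so any pixels that survive point at a larger surface defect. The grid and sizes are toy values:

```python
def erode(grid):
    # Keep a pixel only if all 4 neighbours are also set
    # (single-pixel noise disappears; larger defects survive).
    h, w = len(grid), len(grid[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if (grid[y][x] and grid[y - 1][x] and grid[y + 1][x]
                    and grid[y][x - 1] and grid[y][x + 1]):
                out[y][x] = 1
    return out

# One isolated speck at (1, 1) vs a 3x3 "defect" patch at rows/cols 3..5.
grid = [[0] * 7 for _ in range(7)]
grid[1][1] = 1
for y in range(3, 6):
    for x in range(3, 6):
        grid[y][x] = 1
eroded = erode(grid)
defect_pixels = sum(sum(row) for row in eroded)
```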
Question 5: Throughout the process I try to reuse the same image (without taking another snapshot) as long as its quality is sufficient for the next step. Is there a way to reset to the original image? Since the product may have already moved on the conveyor belt, I would like to avoid taking another snapshot.
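What I am effectively looking for, illustrated with a bytearray standing in for the grayscale frame buffer: keep an untouched copy before the destructive filters, then restore from it instead of re-snapping. (On the camera I assume this would involve img.copy(), but I have not verified how that behaves memory-wise on the board.)

```python
# A bytearray stands in for the grayscale frame buffer.
frame = bytearray([10, 20, 30, 40])

# Keep a pristine copy before any destructive filtering.
original = bytes(frame)

# ... destructive processing (binary filter, erode, ...) ...
for i in range(len(frame)):
    frame[i] = 255 if frame[i] > 25 else 0

# Restore for the next inspection step instead of re-snapping.
frame[:] = original
```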
Thank you,
Peter