Hi,
I want to blend two pictures, but only over a specific region of the image (for example, blending only on half of the picture). Based on the documentation, I expected this to be possible using the “mask” attribute, but it does not behave the way I anticipated.
For instance, I use the blend() function to average the background over time, and that works correctly. However, when I try to apply the “mask” attribute to perform the blending only on a subset of the image, I end up with something that looks more like a basic AND operation between the image and the mask — the previous blending effect is completely lost.
Does anyone know how to properly use the mask for partial blending, or if this feature behaves differently than expected?
Mmm, what blend will do right now when you pass a mask is to read in both images, perform the blend line by line, and then write out only the pixels whose mask pixel is set. However, I have this line in the helper code: openmv/modules/py_image.c at master · openmv/openmv · GitHub , which causes blend to treat the image being drawn on as if it were pure black while blending.
I think there’s a bug here when using masks. I’ll make a ticket for it, but, if possible, don’t use the mask ops. I may remove masks from the API entirely, because they cannot be SIMDed, so using them makes everything slow. It will be faster to do everything without the mask operation at all.
E.g. perform the blend operation and then OR/AND pixels into place. You can also use MIN/MAX to clear parts of an image to 0 or to 1 for this purpose.
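To make the suggestion concrete, here is a minimal pure-Python sketch of that "blend, then OR/AND pixels into place" trick. Plain lists of 0..255 values stand in for OpenMV grayscale buffers, and the blend is assumed to be the usual alpha/256 weighted average (an assumption about the firmware's exact arithmetic); masks are 0 or 255 so that AND/OR act as select operations:

```python
# Flat grayscale "buffers" are plain lists of 0..255 ints.
# Assumption: blend() is an alpha/256 weighted average, alpha in 0..256.

def blend(a, b, alpha):
    # Weighted average of two buffers, pixel by pixel.
    return [(x * alpha + y * (256 - alpha)) // 256 for x, y in zip(a, b)]

def b_and(a, b):
    return [x & y for x, y in zip(a, b)]

def b_or(a, b):
    return [x | y for x, y in zip(a, b)]

def invert(a):
    return [255 - x for x in a]

def masked_blend(img, back, mask, alpha):
    # Keep the blended value where the mask is set (255, all bits on),
    # and the untouched background where it is clear (0).
    blended = blend(img, back, alpha)
    return b_or(b_and(blended, mask), b_and(back, invert(mask)))

img = [200, 200, 200, 200]
back = [100, 100, 100, 100]
mask = [255, 255, 0, 0]  # blend only the first half

print(masked_blend(img, back, mask, 128))  # → [150, 150, 100, 100]
```

Because the mask pixels are exactly 0 or 255, `x & 255` preserves a byte and `x & 0` clears it, so the AND/OR pair is a branch-free per-pixel select, which is also why it vectorizes where a general mask argument does not.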
Thanks,
This is the second time you’ve helped me in one week — it’s always a pleasure to read your messages. It’s unfortunate that the mask parameter is planned to be removed. In my opinion, it had great potential to simplify and clean up the code, so losing it will be a drawback.
Anyway, thanks to your feedback, I changed my approach and now use OR/AND operations to achieve the same result. I’m sharing the basic code below for anyone who might need it.
#CODE
import sensor
import time
import image

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.B64X64)
sensor.skip_frames(time=2000)
clock = time.clock()

back_fb = sensor.alloc_extra_fb(64, 64, sensor.GRAYSCALE)
tmp = sensor.alloc_extra_fb(64, 64, sensor.GRAYSCALE)
mask_fb = sensor.alloc_extra_fb(64, 64, sensor.GRAYSCALE)
mask_fb.replace(image.Image("round.pgm", copy_to_fb=False))

# Precompute the inverted mask once, instead of allocating a copy
# and inverting it on every frame.
inv_mask_fb = sensor.alloc_extra_fb(64, 64, sensor.GRAYSCALE)
inv_mask_fb.replace(mask_fb)
inv_mask_fb.invert()

while True:
    clock.tick()
    img = sensor.snapshot()
    tmp.replace(img)  # keep an unblended copy of the current frame
    img = img.blend(image=back_fb, alpha=242)
    # Keep only the masked portion of the blended image.
    img = img.b_and(image=mask_fb)
    # Keep only the unmasked portion of the current frame.
    tmp = tmp.b_and(image=inv_mask_fb)
    # Combine the two to form the new background.
    img = img.b_or(image=tmp)
    back_fb.replace(img)
    print(clock.fps())
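For intuition on what the loop above computes inside the mask: blending each new frame into the background with alpha=242 is an exponential moving average. A tiny sketch, assuming (as the code suggests) that alpha/256 weights the calling image, i.e. the new frame; the pixel values are made up for illustration:

```python
def ema_step(pixel, back, alpha=242):
    # One background-update step: integer weighted average,
    # alpha/256 on the new frame, (256 - alpha)/256 on the old background.
    return (pixel * alpha + back * (256 - alpha)) // 256

# A pixel that suddenly reads 200 against an initially black background:
back = 0
for _ in range(5):
    back = ema_step(200, back)
print(back)  # → 199 (converges near 200 within a few frames)
```

With alpha that high the background tracks the scene quickly; a smaller alpha would make the average adapt more slowly to changes.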
Yeah, I implemented the mask operation trying to mimic what OpenCV can do. But you cannot actually use it with SIMD, so using it will always be super slow; slow enough that it makes sense to drop it in the future.