Issues with image format(s)

Hi, all.

I’m working on a project that will use face detection with the H7 Plus and have a few issues with image formats that I could use some guidance with.

Basically, I want to capture an image, detect the number of faces, and then share the face count and the image with other applications (via an MQTT broker). Almost everything is working fine, with one notable exception. It was recommended to use the following settings for image capture for face detection:


HQVGA and GRAYSCALE are the best for face tracking.


The face detection algorithm is working fine, but when I transfer the grayscale image bytes (captured using the .bytearray() method), I’m unable to decode them in the client application using cv2.

If I use the following setting:

sensor.set_pixformat(sensor.JPEG) # Set pixel format

…I can successfully decode the image bytes on the other end using:

img = cv2.imdecode(jpg_as_np, 0)

jpg_as_np is a Numpy array of type uint8.
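For context, the receiving side looks roughly like this (the payload variable is a stand-in for the bytes delivered by my MQTT callback):

```python
import numpy as np

payload = b"\xff\xd8..."  # stand-in for the JPEG bytes from the MQTT message
jpg_as_np = np.frombuffer(payload, dtype=np.uint8)  # 1-D uint8 array
# img = cv2.imdecode(jpg_as_np, 0)  # works, because the bytes are a real JPEG
```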

However, this doesn’t work for grayscale images. Decoding fails. I’ve tried it with:

img = cv2.imdecode(jpg_as_np, 1)

…but that fails too. (For what it’s worth, flag 1 is cv2.IMREAD_COLOR; flag 0, cv2.IMREAD_GRAYSCALE, is the one meant for grayscale.)

A few options that I can see:

  1. Capture the image as JPEG, then create a grayscale converted copy for use by the Haar Cascade algorithm
  2. Figure out how to decode the grayscale format being sent into a format that I can visualize using imshow()

I’m not really sure where to start with either option.
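For option 2, what I think the client side would look like is just a reshape, assuming the full HQVGA (240x160) frame of raw 8-bit pixels arrives intact (the variable names here are made up):

```python
import numpy as np

W, H = 240, 160           # sensor.HQVGA resolution
payload = bytes(W * H)    # stand-in for the raw grayscale bytes from MQTT
# Raw pixels are not an encoded image, so no imdecode() -- just reshape:
gray = np.frombuffer(payload, dtype=np.uint8).reshape(H, W)
# cv2.imshow("gray", gray) would then display it directly.
```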

Any insights greatly appreciated!


Leave the pix format as grayscale.

Then compress the image to JPEG on the camera before sending the data.

That way you compress the image on the camera using the built-in JPEG compression hardware. When you set the pix format to JPEG, you are getting a JPEG image from the camera IC itself.
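A minimal sketch of that flow on the camera side (MicroPython; the cascade parameters, JPEG quality, and the MQTT publish line are assumptions, not tested settings):

```python
import sensor, image

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)  # grayscale for the Haar cascade
sensor.set_framesize(sensor.HQVGA)
sensor.skip_frames(time=2000)

face_cascade = image.HaarCascade("frontalface", stages=25)

img = sensor.snapshot()
faces = img.find_features(face_cascade, threshold=0.75, scale_factor=1.25)

img.compress(quality=90)     # JPEG-compress the grayscale frame in place
jpg_bytes = img.bytearray()  # these bytes now decode with cv2.imdecode()
# mqtt_client.publish("faces/image", jpg_bytes)  # hypothetical client/topic
```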

Well that was easy! Worked perfectly. Thanks.