Load jpegs taken by H7+ for post-processing

I need to load images that were taken with the H7+ and classified using tf.classify to re-classify them with different tflite models for simulation, diagnostics and performance comparison.

I wrote a script that scans the SD card for jpegs, loads them one by one, and runs tf.classify() on them.
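For reference, a minimal sketch of such a loop, assuming the classic `tf.classify()` API and placeholder file names (`model.tflite` and `labels.txt` are assumptions, not names from this thread):

```python
import os

def jpeg_names(entries):
    # Keep only JPEG filenames from a directory listing, sorted.
    return sorted(f for f in entries if f.lower().endswith(".jpg"))

def classify_all(model="model.tflite", labels_path="labels.txt", root="/"):
    # OpenMV-only modules, imported here so jpeg_names() stays portable.
    import image, tf
    labels = [l.strip() for l in open(labels_path)]
    for name in jpeg_names(os.listdir(root)):
        img = image.Image(root + name, copy_to_fb=True)
        for obj in tf.classify(model, img):
            # Pair each score with its label and print the best match.
            scores = sorted(zip(obj.output(), labels), reverse=True)
            print(name, "->", scores[0])
```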

However, whenever I run the script, I get a memory error. I don’t understand why, since the images were generated by the H7+ itself and are loaded in the same way?

Hi, don’t call dealloc extra fb. That’s meant to be used in a very particular way; right now it’s just messing up your code.

You can only call that after an alloc.

To note, we normally try to avoid exposing how memory works, since we don’t have an OS; in this case, however, we do. If you plan to use it, you should read the docs about how memory is partitioned.
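To illustrate the "only after an alloc" rule: `sensor.alloc_extra_fb()` and `sensor.dealloc_extra_fb()` must be strictly paired. A small context manager (my own helper sketch, not part of the OpenMV API) is one way to enforce that discipline:

```python
class ExtraFB:
    """Pair one alloc_extra_fb() with exactly one dealloc_extra_fb()."""

    def __init__(self, sensor_mod, w, h, fmt):
        self.sensor = sensor_mod
        self.args = (w, h, fmt)

    def __enter__(self):
        # Allocate the extra frame buffer and hand it to the caller.
        self.fb = self.sensor.alloc_extra_fb(*self.args)
        return self.fb

    def __exit__(self, *exc):
        # Dealloc is safe here: the matching alloc definitely ran.
        self.sensor.dealloc_extra_fb()
        return False

# On the camera you would write something like:
#   import sensor
#   with ExtraFB(sensor, 320, 240, sensor.RGB565) as fb:
#       ...  # use fb; it is deallocated automatically on exit
```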

The error also occurs without the dealloc extra fb line (I only added it after getting the error, thinking it might help).
I am confused because when the image is created by the board’s snapshot it can be analysed with tf.classify, but when it is loaded from the card instead it doesn’t work. So what can I do?

Could be an issue with the JPEG decoder… Can you post the full script? Please use code tags.

EDIT: Actually please just attach the script, model, labels etc… in a zip file.

Oh, yes, actually, it’s totally an issue with the JPEG decoder. The way this is set up right now, you can’t actually read JPEGs directly; you need to decode them explicitly first.

So, do:

img.to_rgb565()

This is fixed in a new version of TF I will get out before the end of the week.
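Sketched as a complete workaround (file names are placeholders I chose for illustration); the pure-Python packing function just shows what the per-pixel conversion produces:

```python
def classify_jpeg(path, model="model.tflite"):
    # OpenMV-only modules, imported locally.
    import image, tf
    img = image.Image(path, copy_to_fb=True)
    img.to_rgb565()  # explicit decode: JPEG -> raw RGB565 bitmap
    return tf.classify(model, img)

def rgb888_to_rgb565(r, g, b):
    # RGB565 packs a pixel into 16 bits: 5 red, 6 green, 5 blue.
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)
```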

Excellent, thank you guys for fixing this!
I still get a “Warning: JPEG too big! Trying framebuffer transfer using fallback method!” but the script proceeds and completes just fine.

The first two JPEGs are decoded fine, but the next ones are weirdly garbled…
(screenshot of the garbled decoded image)
Sent a PM to @kwagyeman.

Is this a JPEG-format file? It looks like decoding failed…

Yes, the image in the screenshot above is from the IDE preview after it was decoded by the board. It is the third in a sequence of very similar images generated by the board; only the first two were decoded properly. I sent a series of 6 images, along with the script and model, to Kwabena.

That’s interesting. If I understand correctly, the images were all taken with almost the same config, and the first two were processed successfully but the third one failed? That sounds like a bug…

Yes, all images have the same config, taken only a few seconds apart.

It’s a bug in image loading and frame buffer setup; JPEG decompression itself is fine.

Fixed: Fix jpeg loading by kwagyeman · Pull Request #1478 · openmv/openmv · GitHub

The fix for being able to run TF on a JPEG image without having to convert it to RGB565 is coming soon. I got all that logic into the next release of TF along with object detection. I just have to find the right version of the TF library that doesn’t crash with a segfault (yes, Google releases code that segfaults).

That’s awesome! Thanks for your hard work! XD

Also, for your information @darrask, could you please give it a try?

I confirm that JPEG loading is fixed in the current 4.1.5 firmware under development - images now load with the proper width and height parameters - thanks!
However, this does not yet remove the need for the img.to_rgb565() conversion - I suppose that is coming next.

I can give you the not-yet-released firmware, which does away with needing that.


Gladly - it would also help us investigate other matters!

I would be very glad to get the newest firmware - there are two other bugs related to JPEG loading that I hope may already be resolved with it?

firmware.zip (1.1 MB)

Attached