Capturing and transferring lossless or RAW images via RPC from OpenMV to PC

Hi, hopefully I'm asking this correctly. I'm not a great programmer; I'm a chemist in way over his head.

I'm using the RPC library examples included in the IDE and described in this link: How to use the OpenMV RPC Library to Transfer Images to your PC - YouTube

I have no issues running this, and it works perfectly for what it's meant to do. The problem is that it transfers JPEGs, which is a lossy compression format, and given the nature of my work I'm taking pictures of what is probably one of the worst possible targets for lossy compression: battery electrodes charging and discharging in situ. In short, grey noisy blobs appearing and disappearing mostly at random.

Is there a way I can adapt the RPC examples, or use an alternative method, to get files transferred in a lossless or RAW format? Or are the images collected by the camera converted into a lossy format on capture, before they can be intercepted in software? I'm under the impression that the RAW transfers between two OpenMV cameras (described around 1:26 in the link) imply this is possible.

An extremely slow transfer of images, even a frame rate as low as 1/60 FPS, is completely acceptable for my use, as the batteries don't change very quickly.

I'm using an OpenMV Cam H7 Plus with an SD card. The code I'm using is the stock RPC example (Image Transfer - As The Controller Device, and its pair) with fewer than 10 lines of additional code.

With the following additions on the PC side (plus import io and from PIL import Image at the top):
tosave = Image.open(io.BytesIO(img))
tosave.save("test.tif")

and some other print statements and loop counters to watch for freezing, mostly for UI reasons: checking whether the camera has been disconnected, either by being bumped or by acid eating the wires (hasn't happened so far).

On the camera side:
I added a loop to reset the camera every so often (the camera times out after 12+ hours for whatever reason, and this seems to fix that):

i = 0
while True:
    if i < 1000:
        i = i + 1
        interface.loop()
    else:
        machine.reset()

and also import machine at the top.

There is additional code that picks up the saved file and sorts it, but it isn't relevant here.

Yeah, just remove the compress() part in the script on the camera. This will send the image as a RAW byte array.

However, there's no software on the PC to decode this… so, you are on your own with manually decoding RGB565 pixels one at a time.
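For what it's worth, decoding a single RGB565 pixel by hand looks something like this. This is just a sketch; the big-endian two-byte order is an assumption, so swap the `hi` and `lo` bytes if the colors come out wrong:

```python
def rgb565_to_rgb888(hi, lo):
    # Combine the two bytes into one 16-bit value (big-endian assumed).
    value = (hi << 8) | lo
    r5 = (value >> 11) & 0x1F  # top 5 bits: red
    g6 = (value >> 5) & 0x3F   # middle 6 bits: green
    b5 = value & 0x1F          # bottom 5 bits: blue
    # Scale each channel up to the full 0-255 range.
    return (r5 * 255 // 31, g6 * 255 // 63, b5 * 255 // 31)

print(rgb565_to_rgb888(0xF8, 0x00))  # pure red -> (255, 0, 0)
```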

To double check that I'm understanding you correctly:

img = sensor.snapshot().compress(quality=90)
becomes
img = sensor.snapshot()

And then it will send an unintelligible signal (to the example PC code) to the PC, and I will have to write more code to turn it into an image?

If my understanding is correct, thank you for the clear advice and the timely turnaround.

The image sent is then the raw byte buffer.

For grayscale images this is 8 bits per pixel. For RGB565 images it's 16 bits per pixel, with 5 bits for red, 6 bits for green, and 5 bits for blue. For YUV images it's 8 bits of Y alternating with 8 bits of U or 8 bits of V.

You need to understand image formats to deal with the data that will come from this.
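As a rough sketch, here's one way to decode a whole RGB565 buffer on the PC with NumPy. The function name and the big-endian byte order are my assumptions (use "<u2" if the colors look wrong); the grayscale case is simpler, just np.frombuffer(buf, np.uint8).reshape(height, width):

```python
import numpy as np

def rgb565_buffer_to_array(buf, width, height):
    # Interpret the raw buffer as 16-bit big-endian pixels (an assumption;
    # change the dtype to "<u2" if red and blue come out garbled).
    pix = np.frombuffer(buf, dtype=">u2").reshape(height, width)
    r = ((pix >> 11) & 0x1F) * 255 // 31  # 5 red bits   -> 0..255
    g = ((pix >> 5) & 0x3F) * 255 // 63   # 6 green bits -> 0..255
    b = (pix & 0x1F) * 255 // 31          # 5 blue bits  -> 0..255
    return np.dstack([r, g, b]).astype(np.uint8)  # height x width x 3

# Example: one red pixel and one green pixel in a 2x1 image.
frame = rgb565_buffer_to_array(bytes([0xF8, 0x00, 0x07, 0xE0]), 2, 1)
```

The resulting array can then be saved losslessly with Pillow, e.g. Image.fromarray(frame).save("test.tif"), which would slot into the TIFF-saving code from the first post.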