Image.copy(): "OSError: Cannot copy to fb!"

I’m getting heap allocation errors when I try to extract an ROI from the sensor image. I don’t need the primary image at this point, only the ROI, so I figured I’d run the copy() back into the frame buffer (I presume I would have to call sensor.snapshot() after a frame buffer Image.copy() to get the ROI back again), but the copy fails with “OSError: Cannot copy to fb!” and no further explanation of the cause of the problem.

How do I copy from an Image into the frame buffer (specifically to avoid heap errors) and then get the extracted image back (must I call sensor.snapshot() again) for further use?

Thanks.

As a follow-up, is there any way to give Image.copy() a preallocated heap array to copy into? I could do this manually via get_pixel() by placing extracted pixels from an ROI into a preallocated array, but I obviously want to use Image.copy() for speed.

As an extra feature, it would be cool to run pooling and copy at the same time so as to perform ROI extraction and downsampling simultaneously. Currently, copy() doesn’t have a pooling option and the pooling functions don’t have an ROI option. Either approach could work in theory, although one might be “better” for reasons specific to OpenMV.

The plan is to actually do all this very soon. I plan to make copy support an roi and scale value. Then I need to go through and fix up how image buffers can be transferred around.

Mmm: https://github.com/openmv/openmv/blob/master/src/omv/py/py_image.c#L1154

Looks like that line is wrong; it should be ==. That would be a bug. Okay, I’ll schedule work on this for this week. I’m almost done with the new blob code and can hopefully get that merged into master by Wednesday/Thursday.

Thanks.

I realize you have admitted this is upcoming work, but I’m still curious if I’m “doing something wrong”. I am getting consistent heap errors after my program runs for a minute or two. This isn’t surprising since it repeatedly stomps all of the memory as it copies out an ROI from the sensor’s latest image to process every single frame. Why has this not been a more serious problem in the past? Are most people not processing some cropped ROI from the sensor? Are most people processing the entire sensor image every single frame? I am aware of the sensor windowing function but that constrains what you can show on the LCD too, so I wouldn’t expect that to be a universally preferred approach to ROI extraction.

Are most people successfully extracting ROIs from the sensor via copy() without ruining heap in a minute or two? I’m curious if I’m not handling this all correctly.

I don’t think most folks use copy(). Generally, most folks stick to just running methods on ROIs and stay with one frame buffer. Keep in mind every method takes an ROI argument to avoid having to use copy().

Anyway, to prevent heap fragmentation, import the gc module, del the image once you are done with it, and then call gc.collect().

You have to do this because MicroPython will avoid freeing the heap for as long as possible, but keep in mind that a lot of the Python calls allocate stuff. So, what happens is that after each big allocation a bunch of small allocations for little things happen, over and over, which causes the heap to get rather fragmented. If you do big allocs you basically have to manage this yourself.
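A minimal sketch of that pattern (names are illustrative; on an OpenMV board the big per-frame allocation would be the result of img.copy()):

```python
import gc

def process_stream(n_frames, roi_bytes):
    """Make one big allocation per frame and free it eagerly."""
    total = 0
    for _ in range(n_frames):
        roi = bytearray(roi_bytes)  # stand-in for img.copy(roi=...)
        total += len(roi)           # ... process the ROI here ...
        del roi                     # drop the only reference when done
        gc.collect()                # reclaim it before the next big alloc
    return total

process_stream(100, 32 * 1024)
```

The point is ordering: the big block is returned to the heap before the next frame's big allocation is requested, so small allocations never end up stranded in the middle of it.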

Ah, I did something similar on a PyBoard project, manually triggering the GC at optimal times. I haven’t tried that in my M7 code yet. It had crossed my mind as something to try, but I avoided it because, in theory of course, we shouldn’t be calling the GC on our own, so I feel “guilty” doing it. :smiley:

I’ll see if it alleviates my situation.

BTW, the reason I can’t pass the ROI into a compiled function is that I’m sending it out over SPI to an external device.

It’s looking more and more like I’m going to have to roll up my sleeves with the compiled routines. I’m actually interested in another need there anyway. I want a compiled function that extracts an ROI and sends it directly out over SPI without coming up to the Python layer at all. That’s too esoteric to expect as a general function so I’ll have to add it myself at some point.

Cheers!

I’m working on making the copy() method better today. Note that I’m not able to fix the memory issue. You just have to call the GC methods for that. But, it will be far more functional once I’m done and it will support scaling and cropping in place.

New copy code is done.

copy(roi=(x, y, w, h), x_scale=1.0, y_scale=1.0)

You also have crop() and scale() now too, which take the same arguments. Now, the big new feature that I added is the copy_to_fb argument for copy: you can pass another image as its value, and copy will copy the pixels into that image and change that image’s shape. This allows you to move images around between image buffers. I will be rolling out a similar feature to all methods which do copy operations so you can move buffers around.

That said, this is not very pythonic.
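To illustrate the intended semantics with a toy model (ToyImage and toy_copy are hypothetical stand-ins, not the real firmware code): the destination image's storage is reused, its shape is rewritten to match the copied region, and the method's return value is the handle you should keep using.

```python
class ToyImage:
    """Minimal stand-in for an image: a shape plus flat pixel storage."""
    def __init__(self, w, h, pixels=None):
        self.w, self.h = w, h
        self.pixels = pixels if pixels is not None else bytearray(w * h)

def toy_copy(src, roi, dst=None):
    """Copy an ROI out of src; if dst is given, reuse it and reshape it."""
    x, y, w, h = roi
    out = dst if dst is not None else ToyImage(w, h)
    out.w, out.h = w, h  # the destination's shape follows the copy
    out.pixels = bytearray(
        src.pixels[(y + r) * src.w + x + c]
        for r in range(h) for c in range(w)
    )
    return out  # the returned handle is the authoritative one

src = ToyImage(4, 4, bytearray(range(16)))
buf = ToyImage(2, 2)
buf = toy_copy(src, (1, 1, 2, 2), dst=buf)  # extracts pixels 5, 6, 9, 10
```

The OpenMV equivalent would be something like `roi_img = img.copy(roi, copy_to_fb=roi_img)`, always rebinding to the returned value.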
firmware.zip (919 KB)

Can’t wait to test drive it. Thanks.

So, silly question. How do I create a blank Image of a given resolution and mode to pass into copy in the copy_to_fb parameter? I can visualize how to create a blank bytearray for raw pixel data and pass that in, but I don’t see a way to create an Image. The only Image ctor I see in the docs creates an image from a file. The only other way to create an image is through a copy, crop or other alteration function.

Is the intent that I create my buffer the old way the first time, by not passing in copy_to_fb, and then pass that buffer back in thereafter?

Having coded this approach up, it seems to work. It just prevents me from allocating my larger buffers (images and such) at the beginning of the program. I have to lazily allocate this ROI buffer the first time I copy it out of the frame buffer (hoping there’s a contiguous block of memory that will hold it at that point in time) and then pass it back in on subsequent calls. For the time being, this seems to work. Thanks.
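In outline, the lazy-allocation approach described above looks like this (a pure-Python sketch; `extract` is a toy stand-in for img.copy() with copy_to_fb):

```python
def run(frames, extract):
    """Process each frame's ROI, allocating the ROI buffer lazily once."""
    roi_img = None
    results = []
    for frame in frames:
        if roi_img is None:
            roi_img = extract(frame)               # first call: allocate
        else:
            roi_img = extract(frame, out=roi_img)  # later calls: reuse buffer
        results.append(sum(roi_img))
    return results

# Toy stand-in for the copy: the "ROI" is just the first three samples.
def extract(frame, out=None):
    out = bytearray(3) if out is None else out
    out[:] = frame[:3]
    return out

run([bytes(range(10))] * 2, extract)
```

The one large allocation happens on the first frame only; every later frame writes into the same buffer.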

Um, so, keep in mind I’m only comfortable now opening this feature set up with the H7. The idea of having multiple images in RAM before was not really possible when we first started coding this on the M4. There just wasn’t space. The memory architecture behind the scenes is kinda messed up in this regard but I think I can work through all the problems.

Anyway, for creating a blank image there’s no method for this right now. I will add one, however. You can also allocate frame buffers in the sensor module using the alloc_extra_fb() method. These are large and blank. You can also copy an image and then clear it.

Note about the memory architecture. So, back on the M4 the heap was ultra tiny and you couldn’t really fit any images on it. On the M7 it was only marginally increased because we had to stick a lot of DMA buffers in the same RAM as it. On the H7 the heap is for the first time as large as the frame buffer.

Anyway, from a computing perspective the heap and the stack share the same RAM and grow towards each other.

Next, the key insight that made the OpenMV Cam possible on the M4 was the idea of a frame buffer stack. Basically, the frame buffer is a very large region in RAM where snapshot drops its images. It grows up while the frame buffer stack grows down. Both of these data structures are in a different memory segment on the chip versus the heap and stack. The frame buffer stack allows us to allocate lots of RAM real quick for large data structures to do image processing.
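A toy model of that layout (purely illustrative, nothing to do with the real firmware code): one arena where the frame buffer grows up from the bottom, the frame buffer stack grows down from the top, and an allocation fails when the two would collide.

```python
class Arena:
    """One RAM segment: frame buffer grows up, fb stack grows down."""
    def __init__(self, size):
        self.size = size
        self.fb_top = 0            # high-water mark of the frame buffer
        self.stack_bottom = size   # low-water mark of the fb stack

    def fb_alloc(self, n):
        if self.fb_top + n > self.stack_bottom:
            raise MemoryError("frame buffer would hit the fb stack")
        self.fb_top += n

    def stack_alloc(self, n):
        if self.stack_bottom - n < self.fb_top:
            raise MemoryError("fb stack would hit the frame buffer")
        self.stack_bottom -= n

    def stack_free(self, n):
        self.stack_bottom = min(self.stack_bottom + n, self.size)

arena = Arena(1024)
arena.fb_alloc(600)     # snapshot lands in the frame buffer
arena.stack_alloc(300)  # scratch space for an image-processing pass
arena.stack_free(300)   # scratch is popped when the pass finishes
```

Because the scratch allocations are stack-ordered and popped as soon as each pass finishes, this region never fragments the way a general-purpose heap does.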

Now, the original idea was that you’d only have one image in memory at a time. However, we kept adding features that violate all of this to get more sales. So, the Frankenstein we’ve ended up with means an image can be in the heap, in the frame buffer, or on the frame buffer stack.

I’ve written copy() to deal with all of this and will update all the methods that can create copies to deal with these issues behind the scenes so that you can easily move images around. All the RAM is addressed the same, so it’s not too much of an issue to deal with, but things can get tricky when I have to handle in-place operations, etc.

I don’t think I realized I could keep multiple images in the frame buffer. I thought it only referred to the image from the sensor, such that any manipulations of the “frame buffer” altered that one “buffered most recent frame”. I had misunderstood that entirely. I’ll look into that option. It sounds like a good direction to go.

Thanks.

So I’m attempting to preallocate an image to hold an extracted ROI from the sensor’s snapshot. This seems to work:

roi_img = sensor.alloc_extra_fb(roi_w, roi_h, sensor.RGB565)

However, I have discovered that if I change the pixformat, the extracted ROI does not change with it even though it claims to update the size. The docs said copy() will change the Image size, e.g.:

img.copy((roi_ul_x, roi_ul_y, roi_src_w, roi_src_h), x_scale=roi_scale, y_scale=roi_scale, copy_to_fb=roi_img)

but, even though it may change the Image size, it doesn’t change the format (so if you switch the sensor from RGB to grayscale, the extracted ROI remains RGB and produces tuple pixels). So, I figured I would deallocate and reallocate the ROI buffer:

sensor.dealloc_extra_db()
roi_img = sensor.alloc_extra_fb(roi_w, roi_h, sensor.GRAYSCALE)

which produces the following perplexing error:

AttributeError: ‘module’ object has no attribute ‘dealloc_extra_db’

Hmm, dealloc may be called something else. On my phone right now. Please check the py_sensor.c file on GitHub for the method name.

As for the copy issue, can you elaborate? What particular code has issues? You may be right in that the bpp field is not updated correctly.

Okay, it’s fixed now. The way I wrote the function before, it created a new image handle when it ran, leaving the other ones stale. I’ve now updated it to modify the old image handles that point to the same thing. That said, I can’t handle all copy types. If you have an image in the frame buffer and you call copy(copy_to_fb=True) on an image outside the frame buffer to force it to update the main frame buffer, I am unable to update the old frame buffer handle, since it’s floating somewhere on the heap and I don’t have immediate access to it… Otherwise, the method handles all possible cases now regarding any images passed to the method.

Mmm, this stuff is rather ugly. Anyway, when using copy operations assume old handles are stale. Please use the handle the methods return.
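A toy illustration of why the returned handle matters (grow_copy is hypothetical, nothing to do with the real firmware): when an operation allocates fresh storage, the old handle silently keeps pointing at the stale data.

```python
def grow_copy(buf):
    """Return a handle to NEW storage; the argument may go stale."""
    return bytearray(buf) + b"\x00"

a = bytearray(b"\x01\x02")
b = grow_copy(a)  # b is the authoritative handle from here on
a[0] = 9          # mutates the OLD storage only; b is unaffected
```

This is why the safe idiom is always `img = img.copy(...)` style rebinding, even when you passed a buffer in.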
firmware.zip (919 KB)

Ah, that’s a typo in the docs. It’s dealloc_extra_fb() not db.

Ah, so even if I pass an image reference in to use as a preallocated buffer, I should still capture the returned reference and use that from that point forward.

Yeah, that’s the best thing to do. With the updates you don’t have to do this all the time, but using the returned reference is the safest way.