How to add a function to the image class

I'm not the most experienced developer, so I'm not sure how to do this.
I wanted to write my own machine-vision function to use on the OpenMV Cam.
I've already written the function in Python and tested that it works; it only uses loops and numpy element-wise addition and multiplication.
I made my Python code work on the OpenMV using the ulab library, and it's quick enough to get a good frame rate.
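Just to illustrate the kind of operations I mean (a toy sketch, not my actual algorithm):

import ulab as np

a = np.array([1, 2, 3], dtype=np.uint8)
b = np.array([4, 5, 6], dtype=np.uint8)

print(a + b)  # element-wise addition
print(a * b)  # element-wise multiplication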

The problem is that this is done on an ndarray of the image, which is loaded from a file on the OpenMV. I can't turn the OpenMV image class object into an ndarray because of memory constraints, and there is no function to output it to an ndarray. It would also be too slow to rewrite my algorithm around get_pixel().

Therefore I think I need to write my own initialiser for the image class object so that the data is presented as an ndarray. The problem is I have no idea how to do this: I don't know where to find the files on GitHub, and I'm not sure how to compile the firmware and deploy it to my board.

Could someone please give me some hints on what I should do to figure this out?

Many thanks.

The ulab module doesn't support images (specifically not RGB images, since it doesn't support 3-dimensional arrays). I've opened an issue about this here: ndarray from image ? · Issue #2 · v923z/micropython-ulab · GitHub. The developer of ulab said it will be supported at some point. I can make grayscale images work (as 2D arrays) if that helps, but that takes a lot of memory anyway, so it will only work on the smallest images (or on our new camera with SDRAM). I've decided to just keep ulab as a general-purpose library.

As an alternative, you can write your code in C, and if it's useful enough, send a PR and I'll add it to the firmware.

Hey,
The algorithm is for grayscale images, so ulab is working fine, and it would be great to have the image as an ndarray if that's possible! :smiley:
Why would outputting the image as an ndarray take more RAM? I may be wrong, but ulab's documentation claims that storing the data as a ulab ndarray only takes a couple of bytes more than storing the numbers individually, as it's frugal with RAM. Source: https://micropython-ulab.readthedocs.io/en/latest/ulab.html#ndarray-the-basic-container
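For example, I'd expect a quick check like this on the board to show only a small overhead beyond the raw data bytes (just a rough measurement sketch):

import gc, ulab as np

gc.collect()
before = gc.mem_free()
a = np.array([0] * 1024, dtype=np.uint8)  # 1 KiB of pixel-like data
gc.collect()                              # frees the temporary list
print(before - gc.mem_free())             # bytes held by the ndarray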

The algorithm is definitely not useful enough, as it's very specific to my application.
I can write it in C, but the problem is I have no idea how to integrate it into the existing image class firmware and deploy it to my OpenMV; I'm very much an amateur in that regard.
Would I need to edit something at master/src/omv/py/py_image.c and then use the makefile? :confused:

The easiest approach would be if it's possible to get the image class to present the grayscale data as an ndarray, maybe by passing an argument like
img = sensor.snapshot(type='ndarray') or something.

Thanks for the reply! :slight_smile:

See these methods that talk with ULAB:

https://github.com/openmv/openmv/blob/master/src/omv/py/py_image.c#L264

https://github.com/openmv/openmv/blob/master/src/omv/py/py_image.c#L283

Basically, to ulab an image appears as a new list of ints per row.
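Conceptually, for a grayscale image, it's as if ulab were handed something like this (just an illustration of the interface, not how the C code actually works):

import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)  # keep it small; this nested list is slow and large

img = sensor.snapshot()
rows = [[img.get_pixel(x, y) for x in range(img.width())]
        for y in range(img.height())]  # one list of ints per row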

Regarding adding a new algorithm… see this PR for an example of how to do this:

Generally, you need to make your firmware edits in the same files the above PR touches when adding new code. The wiki explains how to compile the code: Home · openmv/openmv Wiki · GitHub

Mmm, I think it should be possible to cast the image array as a C array in ulab, which would avoid the need to copy things as we do currently. Ibrahim, can you comment on this?

I think you just need to create an ndarray object:

typedef struct _ndarray_obj_t {
    mp_obj_base_t base;     // MicroPython object header
    size_t m, n;            // number of rows and columns
    mp_obj_array_t *array;  // underlying array holding the pixel data
    size_t bytes;           // data size in bytes
} ndarray_obj_t;

Set the base, set m/n, set array to point to the raw data, and set bytes to the pixel size in bytes. We'd just add a method to the Image class, something like ".np", which would return the above object for use with ulab. Only grayscale would work easily out of the box, but it should work.
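From Python, using such a method would then look like this (hypothetical; ".np" doesn't exist yet, this is just the idea):

import sensor, ulab as np

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)

img = sensor.snapshot()  # GRAYSCALE image
a = img.np()             # hypothetical method returning an ndarray of the pixels
print(np.mean(a))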

Actually I enabled that already for GS and RGB (RGB images return 2D arrays of uint16) and forgot all about it, here:

You should be able to run something like:

import sensor, image, time, ulab as np, gc

sensor.reset()                          # Reset and initialize the sensor.
sensor.set_pixformat(sensor.GRAYSCALE)  # Set pixel format to GRAYSCALE (or RGB565).
sensor.set_framesize(sensor.QVGA)       # Set frame size to QVGA (320x240).
clock = time.clock()                    # Create a clock object to track the FPS.

while (True):
    clock.tick()                    # Update the FPS clock.
    img = sensor.snapshot()         # Take a picture and return the image.
    a = np.array(img, dtype=np.uint8)
    print("mean: %d std: %d fps: %f" % (np.mean(a), np.std(a), clock.fps()))

I’ll add that to the examples…

It's an array of arrays (of arrays again for 3-dimensional arrays, when those are implemented) with MP int objects as elements, allocated on the heap. So it does take RAM, but not all at once: the memory is allocated when the image is accessed, which can easily fragment the heap.

Note there may be a way to avoid copying/making new objects, but the arrays would then point to the framebuffer memory, which could get overwritten or resized.
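To make the trade-off concrete (a sketch; the copy is what np.array(img) does today, the view is the hypothetical zero-copy variant):

import sensor, ulab as np

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)

img = sensor.snapshot()
a = np.array(img, dtype=np.uint8)  # copies the pixel data onto the heap
img = sensor.snapshot()            # 'a' is unaffected by the new frame
# A zero-copy view into the framebuffer would instead change (or break)
# as soon as the framebuffer was overwritten or resized.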

Oh that's good to hear!
I tried the code and it works fine, but for some reason, when I comment out the line that calculates and prints the mean and std, it throws a MemoryError.
How can this be? How does printing the mean and std use less RAM? Has this got something to do with Python realising that you aren't using the ndarray and creating a copy anyway, whereas if you do use it, it doesn't?

Also, I assume that creating this ndarray makes a copy of the one-dimensional array that's already stored in the image class object. What I meant in this thread is whether it's possible to have that img object store the image as an ndarray instead of a one-dimensional one.

I'm not sure; it probably has something to do with fragmentation.
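If it is fragmentation, explicitly freeing the array each frame might help (a workaround to try, not a confirmed fix):

import sensor, ulab as np, gc

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)

while (True):
    img = sensor.snapshot()
    a = np.array(img, dtype=np.uint8)
    print("mean: %d" % np.mean(a))
    a = None      # drop the reference to the large array
    gc.collect()  # free it before the next iteration allocates again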

No, it's not; our code needs the image to be in a 1D C array.