I’m not the most experienced developer, so I’m not sure how to approach this.
I want to write my own machine vision function to run on the OpenMV Cam.
I’ve already written the function in Python and tested that it works; it only uses loops plus NumPy element-wise addition and multiplication.
I got my Python code working on the OpenMV using the ulab library, and it’s quick enough to get a good frame rate.
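For context, the core of my function is just this kind of element-wise work (a simplified sketch with made-up array contents; on the camera the import becomes `from ulab import numpy as np`):

```python
import numpy as np  # on the OpenMV this would be: from ulab import numpy as np

# Toy stand-in for my real function: just loops plus
# element-wise addition and multiplication on arrays.
a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([[5.0, 6.0], [7.0, 8.0]])

out = np.zeros(a.shape)
for _ in range(3):        # the "iterations" part
    out = out + a * b     # element-wise multiply, then element-wise add
```

Nothing fancier than that is needed, which is why ulab runs it fast enough.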
The problem is that this all runs on an ndarray of an image loaded from a file on the OpenMV. I can’t convert the OpenMV image class object into an ndarray: there isn’t enough memory for a copy, and there’s no function that outputs one. Rewriting my algorithm around get_pixel() calls would also be far too slow.
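To be concrete, the per-pixel fallback I’m ruling out would look roughly like this (a sketch: `get_pixel()` is from the OpenMV image docs, but the stand-in image class and its pixel values are made up, and on the camera the import would be `from ulab import numpy as np`):

```python
import numpy as np  # on the OpenMV this would be: from ulab import numpy as np

class FakeImage:
    """Made-up stand-in for the OpenMV image object, just for illustration."""
    def get_pixel(self, x, y):
        return (x + y) % 256

def image_to_ndarray(img, w, h):
    # One interpreted method call per pixel -- far too slow at frame rate.
    arr = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            arr[y, x] = img.get_pixel(x, y)
    return arr

arr = image_to_ndarray(FakeImage(), 4, 3)
```

Even on a small grayscale frame that inner loop is hundreds of thousands of Python-level calls, which is why I don’t think this route is viable.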
Therefore I think I need to somehow write my own initialiser for the image class object so the data is exposed as an ndarray. The problem is I have no idea how to do this: I don’t know how to find the relevant files on GitHub, and I’m not sure how to compile the firmware and deploy it to my board.
Could someone please give me some hints on what I should do to figure this out?