Hi, is there any way I could implement semantic segmentation?
You need to train a CNN to do this. Ibrahim added a non-CNN algorithm for it, based on the paper, a while back. I can’t comment on how well it works.
We have a method in the py_tf library that will accept an image segmenter and output a segmentation mask.
Thank you, Kwabena. Where can I find more information on these topics?
Not sure, I don’t know if Edge Impulse released a framework for object segmentation.
For our SDK: if you have a CNN that takes an image and outputs a set of image layers representing per-pixel class scores, we have everything set up to handle that.
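As a rough illustration of the tensor shapes involved (plain Python, not the SDK’s actual API): a segmentation CNN maps an image to an [h][w][n_classes] grid of per-pixel class scores, and a per-pixel argmax over those scores gives you a class mask. The function names here are made up for the example.

```python
def argmax(scores):
    # Index of the largest score in a list.
    best = 0
    for i in range(1, len(scores)):
        if scores[i] > scores[best]:
            best = i
    return best

def scores_to_mask(output):
    # output: [o_h][o_w][o_c] per-pixel class scores.
    # returns: [o_h][o_w] mask of winning class indices.
    return [[argmax(px) for px in row] for row in output]

# Toy 2x2 output with 3 classes:
output = [
    [[0.1, 0.8, 0.1], [0.7, 0.2, 0.1]],
    [[0.2, 0.2, 0.6], [0.0, 0.9, 0.1]],
]
mask = scores_to_mask(output)  # [[1, 0], [2, 1]]
```

On the camera, the trained model produces the score grid; this just shows what the output means.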
Thanks, I will try it out.
We are trying to use OpenMV for a space mission. “Munal - Nepal’s First High School Satellite” Here in the camera mission, we are trying to implement classification (good/bad image) and then segmentation on good images to get the contents of the image and append it in a log. The log will be used as a reference to downlink images afterward. For segmentation, four classes: Space, Land, Sea, and Cloud are specified.
Do you have any recommendations to successfully complete this mission?
I’d ask the folks on the Edge Impulse forums how to make a segmentation CNN. I don’t know if they support this yet, but it is what you want to do. They basically already have it with their FOMO CNN, so they should be able to tell you what you need to do to train an image segmenter.
That would be very helpful. I will wait for some insights on the training image segmenter.
Thank you, Kwabena.
Hi, Kwabena. Is there any way or any new updates coming soon that will support semantic segmentation?
It’s dependent on Edge Impulse. I told the CTO about this feature and the need for it. They have the ability to do it with their models, but they don’t have a UI yet that makes it easy to create the masks.
Hi, are there any updates on this? Also, I tried the BYOM feature of EI, but the model output must comply with classification, regression, or object detection. My model currently produces an image with as many channels as there are classes, each holding the probability of the corresponding class, using a simple RCNN. The problem is I cannot find a proper way to run inference with this model on OpenMV.
Can we set a meeting to solve this? Our scheduled launch date is this July and we really want to execute this.
Sure, email me at email@example.com.
However, have you seen: tf — Tensor Flow — MicroPython 1.19 documentation
The code is already there to process a segmentation CNN as this is what FOMO is…
So, the input is [i_h][i_w][i_c], where c is 1 for grayscale and 3 for RGB, and the output is [o_h][o_w][o_c]. The code will then turn the output into a list of images of [o_h][o_w], with the class score in c as the pixel value.
This code is known to work already, since FOMO uses it. E.g., the FOMO code in the tf — Tensor Flow — MicroPython 1.19 documentation just adds blob detection on top of the segmentation output.
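That output transform can be sketched in plain Python (the real implementation lives inside the OpenMV firmware; the helper name here is hypothetical): it splits the [o_h][o_w][o_c] tensor into o_c single-channel score images, one per class.

```python
def split_channels(output):
    # output: [o_h][o_w][o_c] per-pixel class scores.
    # returns: a list of o_c images, each [o_h][o_w],
    # holding one class's score at every pixel.
    o_h, o_w, o_c = len(output), len(output[0]), len(output[0][0])
    return [
        [[output[y][x][c] for x in range(o_w)] for y in range(o_h)]
        for c in range(o_c)
    ]

# Toy 1x2 output with 2 classes:
out = [[[0.9, 0.1], [0.3, 0.7]]]
imgs = split_channels(out)
# imgs[0] == [[0.9, 0.3]]  (class-0 scores)
# imgs[1] == [[0.1, 0.7]]  (class-1 scores)
```

Blob detection (as FOMO does) would then run on each per-class image to find regions where that class scores high.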