Standard Lens FOV

Hi all!
I’m using my OpenMV Cam for a robotics application and will add a conical mirror for an increased FOV. However, the camera itself must have an FOV at or below 140 degrees horizontal and 80 degrees vertical. Will the standard lens work, or do I need a telephoto lens to decrease the FOV?

The default lens has the following FOV:

HFOV = 70.8°, VFOV = 55.6°
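As a sanity check on those numbers, the pinhole-camera relation FOV = 2·atan(d / 2f) lets you estimate the FOV from a lens's focal length and the sensor's active size. The sensor and lens dimensions below are illustrative placeholders, not datasheet values; check your actual sensor/lens specs.

```python
import math

def fov_deg(active_size_mm, focal_length_mm):
    # Pinhole-camera approximation: FOV = 2 * atan(d / (2 * f))
    return 2.0 * math.degrees(math.atan(active_size_mm / (2.0 * focal_length_mm)))

# Illustrative numbers only -- check your sensor/lens datasheets.
# A ~2.8 mm lens over a sensor ~3.9 mm x 2.9 mm lands near the quoted FOV.
hfov = fov_deg(3.9, 2.8)
vfov = fov_deg(2.9, 2.8)
print(round(hfov, 1), round(vfov, 1))  # roughly 70 x 55 degrees
```

Both values sit comfortably inside the 140 x 80 degree limit, so the standard lens should be fine without a telephoto.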

You can find the FOV (and other specs) of each lens on its product page.

Thank you so much!
I just had one more question.
Since I need to recognize blue and yellow blobs, and it would be much better if a blob did not split into multiple smaller ones, I have been looking into denoising. Simple averaging should suffice. However, I could not find any suitable image transforms for denoising of any kind. Is it possible to individually read and write pixels in the result of img.snapshot()?
Thanks again

Hi, you can get and set pixels, but it’s quite slow. What are you trying to do?

Since I will be operating the camera in suboptimal light conditions, there is some RGB static in the image. This gets in the way of blob recognition, which is vastly more accurate and reliable when the static is gone (e.g. when the camera is outside). That’s why I thought denoising via an averaging algorithm could remove most of the noise and get a better lock on the blob.
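For reference, the averaging I have in mind is just a 3x3 box blur. A minimal pure-Python sketch, operating on a grayscale image stored as a list of rows (per-pixel Python on the camera itself would be slow, so this is only to show the idea):

```python
def mean3x3(img):
    # Simple 3x3 box blur on a grayscale image given as a list of rows.
    # Edge pixels average over however many neighbors actually exist.
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) // len(vals)
    return out
```

A lone noise spike gets spread out and flattened, which is exactly what helps blob tracking.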
My project specifically is RoboCup Junior; my team is using the camera to track the goals. That means blue/yellow blob tracking, and to aim our robot at the goal I need the entire goal as a single blob, not multiple small ones, which also works better with less noise. Multiple small blobs mean that (a) programming the main computer to find the goal is much harder and (b) the communication over UART is much slower.
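For the UART side, a fixed-size binary frame keeps the link fast: one (x, y) per goal packed with struct is only 8 bytes, versus a much longer printed string. The frame layout and the 0xFFFF "not seen" sentinel here are my own choices for illustration, not a protocol the camera defines:

```python
import struct

def pack_goals(blue_xy, yellow_xy):
    # Pack one (x, y) per goal as four unsigned 16-bit little-endian
    # values ("<HHHH" = 8 bytes total). Pass None for a goal that was
    # not seen; it is encoded as the sentinel (0xFFFF, 0xFFFF).
    bx, by = blue_xy if blue_xy else (0xFFFF, 0xFFFF)
    yx, yy = yellow_xy if yellow_xy else (0xFFFF, 0xFFFF)
    return struct.pack("<HHHH", bx, by, yx, yy)

frame = pack_goals((160, 120), None)
print(len(frame))  # -> 8
```

The receiver just reads 8 bytes and unpacks with the same format string.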

Oh, this is easy to fix with the OpenMV Cam.

Just pass multiple color thresholds in the threshold list for find_blobs(). This lets you make each threshold much tighter than normal to bound the color you want to track. Then use the merge/margin parameters of find_blobs() to have the blobs automatically merged into one. Even though the camera internally finds a lot of small blobs, it will combine them all into one for you.
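To make the merging concrete: on the camera you would call something like `img.find_blobs([blue_thr, yellow_thr], merge=True, margin=10)` (the threshold names and margin value are example placeholders). Conceptually, merging unions any bounding rectangles that come within `margin` pixels of each other. A desktop sketch of that idea, using my own simplified logic rather than the actual OpenMV implementation:

```python
def overlaps(a, b, margin):
    # a, b are (x, y, w, h) rects; True if they intersect once grown by margin
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return (ax - margin < bx + bw and bx - margin < ax + aw and
            ay - margin < by + bh and by - margin < ay + ah)

def union(a, b):
    # Smallest rect covering both a and b
    x, y = min(a[0], b[0]), min(a[1], b[1])
    return (x, y,
            max(a[0] + a[2], b[0] + b[2]) - x,
            max(a[1] + a[3], b[1] + b[3]) - y)

def merge_rects(rects, margin):
    # Repeatedly union any two rects that are within `margin` of each other
    rects = list(rects)
    changed = True
    while changed:
        changed = False
        for i in range(len(rects)):
            for j in range(i + 1, len(rects)):
                if overlaps(rects[i], rects[j], margin):
                    rects[i] = union(rects[i], rects[j])
                    del rects[j]
                    changed = True
                    break
            if changed:
                break
    return rects
```

With a generous margin, the many small fragments of one goal collapse into a single bounding box.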

Ohhhhh ok thanks!
Over UART I want to return one (x, y) coordinate each for the blue and the yellow goal. Currently I’ve been testing the blob finder in single-color mode, so that it only finds the blue goal and not the other random things in the environment. If I pass extra color thresholds for blue and yellow, will the two colors be kept apart so that I can separate them into two blobs to send over UART? Intuition tells me I’d have to call find_blobs() twice, once for each color.
Also, what arguments do I need to pass to img.find_blobs() to modify the merge parameters?

See the examples and the API please. :slight_smile:
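The short version, as I understand the API: you don't have to call find_blobs() twice. When you pass several thresholds, each returned blob reports which threshold(s) it matched via blob.code(), a bitmask where bit 0 corresponds to the first threshold in the list and bit 1 to the second. A desktop sketch with blobs simulated as plain tuples (the helper name and tuple layout are mine; on the camera you would read blob.code(), blob.cx(), blob.cy(), blob.pixels()):

```python
# bit 0 -> thresholds[0] (blue), bit 1 -> thresholds[1] (yellow)
BLUE_BIT, YELLOW_BIT = 1, 2

def best_blob_per_color(blobs):
    # blobs: iterable of (code_mask, cx, cy, pixels) tuples.
    # Returns the centroid of the largest blob per color, or None if unseen.
    best = {BLUE_BIT: None, YELLOW_BIT: None}
    for code, cx, cy, pixels in blobs:
        for bit in best:
            if code & bit and (best[bit] is None or pixels > best[bit][2]):
                best[bit] = (cx, cy, pixels)
    return {bit: (b[0], b[1]) if b else None for bit, b in best.items()}
```

That gives you exactly one (x, y) per color to send over UART.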

Sorry to bother you again.
I was wondering whether it is possible to compile additional Python libraries onto the OpenMV Cam.

Yeah, but you have to edit the C code to do that. Alternatively, just drop a Python library onto the flash drive.
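To illustrate the flash-drive route: save a plain .py file in the drive's root, then import it by module name from your script. A desktop simulation of that workflow follows; the module name and contents are made up, and on the camera you would copy the file over USB rather than writing it from code:

```python
import importlib
import os
import sys

# Simulate copying a helper module onto the drive by writing it locally.
with open("mylib.py", "w") as f:
    f.write("def clamp(v, lo, hi):\n    return max(lo, min(hi, v))\n")

# On the camera the drive root is already on sys.path; here we add the
# current directory so the import resolves the same way.
sys.path.insert(0, os.getcwd())
mylib = importlib.import_module("mylib")
print(mylib.clamp(300, 0, 255))  # -> 255
```

This only works for pure-Python modules; anything with compiled C extensions has to be built into the firmware instead.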

How would I do the individual pixel get and set?
The reason is that I’m trying to make the camera set its thresholds from the goals, which have fixed positions, because hard-coding threshold numbers is risky under changing lighting conditions.

edit: also, the library import does not seem to be working. I tried putting the OpenCV source code on the flash drive, but do I just import it when I want to call its functions in my code?

Hi, the OpenMV Cam can’t run OpenCV code because it’s a microcontroller without an operating system.

As for getting pixel stats for an area: use the get_stats() method.
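To connect this back to the auto-thresholding idea: sample the known goal region with something like `stats = img.get_stats(roi=(x, y, w, h))`, read the per-channel means and standard deviations from the returned statistics object, and build a find_blobs() threshold around them. In this sketch the statistics are passed in as plain numbers, and the two-sigma band and clamping ranges are my own tuning assumptions, not OpenMV defaults:

```python
def lab_threshold(means, stdevs, k=2.0):
    # means/stdevs: (L, A, B) statistics of the sampled goal region.
    # Returns (l_lo, l_hi, a_lo, a_hi, b_lo, b_hi) for find_blobs(),
    # as mean +/- k standard deviations clamped to the LAB channel ranges.
    lims = [(0, 100), (-128, 127), (-128, 127)]
    thr = []
    for (m, s, (lo, hi)) in zip(means, stdevs, lims):
        thr += [max(lo, round(m - k * s)), min(hi, round(m + k * s))]
    return tuple(thr)

print(lab_threshold((50, 20, -30), (5, 3, 4)))  # -> (40, 60, 14, 26, -38, -22)
```

Recomputing this periodically lets the thresholds track the lighting instead of being hard-coded.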