Script autostart

I did that before… I tried it again but still no udev folder

I did:

cd openmv/openmvide

./setup.sh

on the terminal… I also tried double clicking it and executing it, but nothing changed

A udev folder will not appear. However, if you do “ls /dev” you should now see a device called “/dev/openmvcam”.

Hello! Thank you! I will try that later. In the meantime, is there a way to send each frame the camera takes in real time to display in a GUI on the Raspberry Pi?

maybe using the USB_VCP.send() function to send to the RPi through the usb port and the image.compress() function? Any help would be appreciated. Thank you!

Yes, just do:

print(img.compress(quality=90),end="")

You’ll get a JPG byte stream. You may wish to send the size of the JPG byte stream first, however:

img.compress(quality=90)
print(str(img.size()) + "\n")
print(img,end="")

You’d then scanf to get the size in bytes and then read that many bytes next.
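The size-then-payload read on the Pi side can be sketched in plain Python. This is a minimal sketch, not the thread author's code: it uses an in-memory stream as a stand-in for the serial port, and the `read_jpeg` helper name is illustrative. With pyserial you would pass a real `serial.Serial` object instead.

```python
import io

def read_jpeg(stream):
    """Read one frame: an ASCII size line first, then that many JPEG bytes."""
    size = int(stream.readline())   # the camera prints the byte count first
    return stream.read(size)        # then exactly that many JPEG bytes

# Stand-in for the serial port: here the "camera" sent b"4\n" + 4 JPEG bytes.
fake_port = io.BytesIO(b"4\n\xff\xd8\xff\xd9")
frame = read_jpeg(fake_port)
print(len(frame))  # -> 4
```

In real use, `fake_port` would be something like `serial.Serial("/dev/openmvcam")` from pyserial, which exposes the same `readline()`/`read(n)` interface.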

Thank you! I see /dev/openmvcam did appear in my /dev… I’m sorry for the additional questions, but I’m attaching my OpenMV to a gimbal and it doesn’t really fit unless it is attached horizontally, which means the view of the camera would be rotated about 90 degrees… I found image.rotation_corr(z_rotation=90), but unfortunately the rotated image was cropped because of the rotation… I’m currently using a 240x160 frame size if that’s helpful… I also tried zooming to get rid of the black margins of the rotated image (caused by the rotation), but it gets too zoomed

Is there a way to configure the sensor or change the image so that it is rotated 90 degrees without affecting the dimensions of the image or getting the image cropped. Thanks again for the help!

As of right now there is not. However, if you set the resolution to something like QVGA, then use set_windowing() to get a square resolution, rotate that, and then use an ROI of 240x160, you should get the results you want on the image.
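That pipeline can be sketched as follows. This is a hedged sketch that runs on the camera, not a tested script: the square window size, grayscale pixel format, and the exact ROI placement are all illustrative choices, though `sensor.set_windowing()`, `image.rotation_corr()`, `image.HaarCascade()`, and the `roi` parameter of `find_features()` are real OpenMV API.

```python
# Sketch only (runs on the camera): window the sensor to a square so a
# 90-degree rotation doesn't crop, then search a 3:2 ROI of the result.
import sensor, image

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)   # Haar face detection uses grayscale
sensor.set_framesize(sensor.QVGA)        # 320x240
sensor.set_windowing((240, 240))         # square window: rotation-safe
sensor.skip_frames(time=2000)

face_cascade = image.HaarCascade("frontalface")

while True:
    img = sensor.snapshot()
    img.rotation_corr(z_rotation=90)     # rotate in place; a square frame
                                         # keeps its dimensions when rotated
    # The ROI only limits where the search runs; (0, 40) centers the 3:2
    # 240x160 band vertically inside the 240x240 frame (illustrative).
    faces = img.find_features(face_cascade, roi=(0, 40, 240, 160))
```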

yes… i thought of using a square dimensions too so that rotating it wouldn’t be a problem… but I saw in the comments of the facial detection examples code that the 240x160 was the best for facial recognition… What can you suggest to be the best square dimensions for facial recognition… Thanks!

Hi, that was the case for the M4. Not anymore. Just make sure the res is some form of 3:2.

So, the method takes an ROI. Make the ROI a 3:2 area (240x160).

Hello! I’m sorry, I had no idea how to set the ROI to 240x120 after the image.snapshot() and image.rotation_corr() have been called… I couldn’t find a method that set the windowing to 240x120… I tried image.copy() and printed it on the Open Terminal, but I didn’t quite get the dimensions I needed… Thanks

Find features takes an ROI: pass roi=(0, 0, 240, 120) to find_features. Note that this ROI sits in the upper left of the image; adjust the x/y values if you want it to start elsewhere.

I tried it but unfortunately, I added the roi=[0,0,240,160] to the find_features() method, but the image is still cropped…

The ROI passed to find features is not displayed. It just tells the method where to work on in the image.

Hello! I bought multiple of these OpenMV cameras, and I wanted to use them in different ways… Since I attached my OpenMV cam to the gimbal of a drone, it is extremely difficult for the gimbal to carry the camera with a USB cable connected to it (used for real-time image streaming)… so I wanted to ask if there is a way to send the image.snapshot() images over the UART port so that the camera would no longer be heavy?

Thanks

Hi, just do:

uart.write(img.compress())

On a valid uart object (see example scripts to make one). Or, alternatively, you can do img.compressed_for_ide() if you’re sending the data back to the IDE. Note that the data rate is so high that USB is really the only thing that cuts it for sending video. WiFi may also work.
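For reference, a valid uart object on the camera looks roughly like this. A minimal sketch, not one of the official example scripts: the UART number, baud rate, and the size-prefix framing are assumptions you'd adapt to your wiring and receiver, though `pyb.UART` and `img.compress()` are real OpenMV API.

```python
# Sketch only (runs on the camera): stream size-prefixed JPEG frames
# over UART instead of USB.
import sensor
from pyb import UART

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

uart = UART(3, 115200)                # UART 3; baud must match the receiver

while True:
    img = sensor.snapshot()
    jpg = img.compress(quality=90)    # compress in place to JPEG
    uart.write(str(jpg.size()) + "\n")  # send the byte count first...
    uart.write(jpg)                     # ...then the JPEG payload
```

The receiver then reads the size line and that many bytes, the same framing as over USB VCP.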

Ibrahim is finally working on WiFi programming support.

Thanks! Will try that soon! Anyway, I’m trying to determine the distance of a human from the camera through the image it sees, and I’ve tried doing some experiments on determining the focal length of the camera, however my experiments were not that consistent… Any chance you guys know what the focal length of the camera is? I’m currently using the OpenMV M7 camera… Thanks

All the lens specs are on the product page. It’s 2.8mm.
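With the 2.8 mm focal length, a pinhole-camera estimate of distance is straightforward. This is a sketch of the standard similar-triangles formula, not a calibrated result: the sensor height below is an assumption (OV7725 with 6 µm pixels and 480 rows gives roughly 2.88 mm), and windowing or subsampling changes the effective numbers, so you should calibrate against one measurement at a known distance.

```python
# Pinhole model: distance = f * real_height * image_height_px
#                           / (object_height_px * sensor_height_mm)
FOCAL_MM = 2.8        # from the product page
SENSOR_H_MM = 2.88    # assumption: OV7725, 6 um pixels x 480 rows
IMAGE_H_PX = 480      # must match the frame height you actually use

def distance_mm(real_height_mm, object_height_px):
    """Estimated distance to an object of known real height (millimetres)."""
    return (FOCAL_MM * real_height_mm * IMAGE_H_PX) / (object_height_px * SENSOR_H_MM)

# A 1700 mm tall person spanning 100 px of the frame:
print(round(distance_mm(1700, 100)))  # -> 7933 (about 7.9 m)
```

Halving the pixel height doubles the estimated distance, which is a quick sanity check for your own calibration.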

Hello! I tried streaming the image through the USB VCP class and the byte size of the image through the UART class as seen on the first image… But when I try to read the amount of bytes in the usb connection (as seen in the 2nd image), it says the buffer is not large enough… I searched for the specs of the camera and got 300KB++ buffer size… as seen on the 2nd image, the byte size of each snapshot is around 2000+ bytes, so I’m not sure what’s going on… I have a 16GB sd card attached to the openmv. Can you help me out? Thanks!


Hi, the error is coming from your Pi. Not the camera. The code on the camera looks fine.

The issue is the frombuffer call. Please note that the image is in JPG format. So:

import io
from PIL import Image  # Pillow

# Wrap the received JPEG bytes in a file-like object and let PIL decode it.
image = Image.open(io.BytesIO(image_data))
image.show()

Hello! I tried doing what you suggested with the following revisions of the code… So the serial port delivers the length of the bytes of the image and the USBVCP delivers the image in bytes (as seen in the first picture)… The python script receiving this takes the bytes and stores it in the string… I tried to print these bytes in the openmv ide and got that the bytes were:

\xff\x00\ and further stuff like this

However these characters seem to be stored in a string instead of a byte object as seen on the 3rd picture… I tried adding the Image.open(io.BytesIO()) but it got me an error saying “cannot identify image file <_io.BytesIO object at 0x75784960>”

and I also tried replacing it with Image.open(io.StringIO()) but still nothing…

Sorry for the inconvenience… Thanks!
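A likely cause of the “cannot identify image file” error is that the serial read came back as a str rather than bytes, so the BytesIO buffer doesn’t hold the raw JPEG. A minimal sketch of the fix, with an illustrative helper name: latin-1 maps code points 0–255 one-to-one, so encoding with it losslessly recovers the original bytes from such a str.

```python
def as_jpeg_bytes(data):
    """Normalise serial-read data to bytes. A str is mapped back via
    latin-1, which is lossless for code points 0-255."""
    if isinstance(data, str):
        data = data.encode("latin-1")
    return data

# Illustrative: a read that came back as a str instead of bytes.
image_data = as_jpeg_bytes("\xff\xd8\xff\xd9")
print(image_data[:2])  # b'\xff\xd8' -- the JPEG start-of-image marker

# With real frame data you would then decode it (requires Pillow):
#     import io
#     from PIL import Image
#     Image.open(io.BytesIO(image_data)).show()
```

Reading with pyserial’s `read(n)` already returns bytes, so the cleanest fix is to avoid converting the data to str in the first place.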