I am trying to measure the distance of a small circular object (cross section of 2.2 in^2) at distances of up to 5 meters outdoors, all around the camera. To do this, I mounted an OpenMV with a super telephoto lens and a laser measuring module from https://www.jrt-measure.com/ onto a pan/tilt mechanism. The camera drives the pan/tilt mechanism as it searches for circles, then points the laser at the object to measure its distance. As I was working on it, I realized it was unlikely to work, since my environment will be very dynamic: the structure the pan/tilt would mount to is inherently unstable, and outdoor lighting will vary wildly, making the laser dot difficult to find.
I took a step back and realized that I could use a Raspberry Pi with (4) 5 MP Arducams and the Arducam multiplexer to get a 360 degree view around the camera to locate and measure the distance of the object from the camera(s). I think this multicamera configuration will be much more robust than the pan/tilt, but using the Pi is overkill. My FPS requirements are fairly low, as I'd only need to find a circle in one 5 MP image every second, so it would take 4 seconds (5 would be OK too) to search the entire area.
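To make the timing budget concrete, here's a minimal pure-Python sketch of the round-robin scan schedule described above. The function names and constants are my own placeholders, not a real Arducam or OpenMV API; the actual capture/detect calls would slot into the loop.

```python
NUM_CAMERAS = 4          # one 5 MP camera per 90-degree sector
SECONDS_PER_FRAME = 1.0  # budget: find a circle in one 5 MP image per second

def scan_order(start=0):
    """Yield camera indices round-robin, forever (0, 1, 2, 3, 0, ...)."""
    i = start
    while True:
        yield i % NUM_CAMERAS
        i += 1

def full_sweep_time(num_cameras=NUM_CAMERAS,
                    seconds_per_frame=SECONDS_PER_FRAME):
    """Worst-case time to cover the full 360-degree view once."""
    return num_cameras * seconds_per_frame
```

With 4 cameras at 1 frame per second each, a full sweep takes 4 seconds, which stays inside the 5-second budget mentioned above.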
Would you guys ever develop an OpenMV version capable of meeting these requirements (high resolution, low FPS)? I realize the multiplexing is a pretty unique use case, but I could just as easily use four 5 MP OpenMVs instead of a Pi and a mux. I love what you guys have built and really want to use it, so I figured I'd tell you what I was up to and see what you thought.
Yeah, actually, we are about to release a version with 32 MB of SDRAM and the same OV5640 5 MP camera that the Raspberry Pi uses. On this system there will be no resolution limits (speed, of course, won't be fast at high res).
We just finished developing the unit and expect to have it on sale by February; we just placed an order for 1k units. I have a bit more driver work to do before we can release a video about it and take preorders. We are still working on the OV5640 driver, but everything else is ready. We even have TensorFlow support for it, and you can run MobileNet on it. (TensorFlow is coming to all Cortex-M7 OpenMV Cams today.)
I totally understand pricing. I only recently dove deeper into the Raspberry Pi and realized how "magical" their prices truly are (especially when you consider that they have to limit how many of certain products you can buy). Having spent some time with the H7 and the IDE, you're offering quite a bit of value at your price points.
As for the multiplexing, the product I’m using doesn’t use an FPGA:
As someone without much Linux/Python/OpenCV experience, your product offers a significantly better experience. For a proof of concept, I'd just use 4 modules. Given that my FPS requirements are fairly low, I was wondering if there was any way to ultimately use one processor to drive 4 cameras when I go into production.
Um, so, the FPGA is necessary just to act as a pin mux. The product you were looking at won't work. We don't use MIPI CSI; it's an 8-bit parallel bus.
You'd have to design a board that sends the I2C bus to all four cameras and then selects one camera's video stream to forward. The lowest-end FPGAs from Lattice can be made to do this. The Verilog would be very simple: no clock required, just combinational logic.
Alternatively, any high-speed mux/demux chip would work too; I2C muxes and data-line mux chips are also available. Using an FPGA will be more cost effective, however.
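To make the combinational-logic point concrete, here's a tiny Python model of what that FPGA (or mux chip) would do. The 2-bit select input and the names are my assumptions for illustration; the real design would be a few lines of Verilog with no state at all.

```python
def video_mux(select, streams):
    """
    Model of a purely combinational 4-to-1 video mux.

    select  -- camera select (0-3), driven by the host processor
    streams -- list of 4 tuples (data8, vsync, href), one per camera
    Returns the selected camera's (data8, vsync, href) unchanged:
    no clock, no state, just wiring plus a select.
    """
    if not 0 <= select <= 3:
        raise ValueError("select must be 0-3")
    return streams[select]
```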
OK. Thanks. Is there any way that this custom board could interface with the OpenMV headers the way the existing shields do? You’d need an input connector on the OpenMV and then connectors on the shield for additional cameras. The OpenMV cam would be the main camera, and you could drive the other cameras using the processor on the main board, I2C through the headers, and input the video stream through the new connector on the board. I’m sure it’s a long shot, but it sure would be sweet for my application. Maybe it would be for others?
I understand. I was just trying to describe what I was thinking, in the hope that you might think it was cool and/or potentially had a market and consider building a mux shield. As stated, it was a long shot.
From your hints, I wasn’t sure how the video stream would be accessed, hence my comment on needing another connector on the OpenMV. Now I see that you can use the sensor module connector. So a shield could connect to that as well as the headers, and then have additional camera connectors and an FPGA to multiplex the signals.
Hi, the video data bus is quite straightforward. It's 8 bits of data with a clock and two control signals (VSYNC/HREF), all going in one direction. The FPGA muxing code just needs to capture data into a set of flops on the rising edge of the clock and then output it. In the FPGA code you'd then make a mux that selects between the four cameras' data streams.
Finally, there's an XVCLK signal that you'd fan out to all the cameras and to the FPGA to clock them.
Note that if you find 8 4-to-1 mux chips, you could use those instead. The FPGA would just be cheaper.
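A rough Python model of the capture-on-rising-edge behavior described above, purely illustrative; in a real design this would just be an 8-bit register in the FPGA fabric.

```python
class EdgeCapture:
    """Capture an 8-bit data bus into 'flops' on each rising clock edge."""

    def __init__(self):
        self.prev_clk = 0
        self.q = 0  # registered output, holds its value between edges

    def step(self, clk, data8):
        # Register the data bus only on a 0 -> 1 clock transition.
        if clk == 1 and self.prev_clk == 0:
            self.q = data8 & 0xFF
        self.prev_clk = clk
        return self.q
```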
As for the control I2C bus: that is more complex. The OpenMV Cam only expects one device on the end of the I2C bus, so you'd want to mux the I2C bus too.
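If you went the I2C-mux-chip route, a TCA9548A-style switch (an example part, not something the source specifies) selects a downstream bus by having a one-hot byte written to its control register. The channel byte is trivial to compute; the MicroPython write itself is sketched in the comments since it needs real hardware:

```python
def i2c_mux_channel_byte(channel):
    """
    One-hot control byte for a TCA9548A-style I2C switch:
    writing (1 << channel) to the mux enables that downstream bus.
    """
    if not 0 <= channel <= 7:
        raise ValueError("channel must be 0-7")
    return 1 << channel

# On the OpenMV Cam (MicroPython) the write would look roughly like:
#   from machine import I2C
#   i2c = I2C(2)  # bus number is board-specific
#   i2c.writeto(0x70, bytes([i2c_mux_channel_byte(0)]))  # 0x70 = common mux addr
```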
Hi, happy to hear that you are going to release the OV5640 high-res sensor with your next version. I hope you are considering the 'FREX' feature to avoid motion blur in your next release, as you already handle this with the MT9V034 sensor.