Global Shutter

@blogflap - Okay. We built the sensor for folks using the system in production environments. The use cases are for capturing precision images of objects moving on factory production lines. None of the example use cases required color.

@rajeshktyn - Just buy the MT9V034C sensor that supports color. That said, you’re on your own for image processing. The image has to be at VGA or higher resolution. Then, you have to de-bayer and down-sample. This is extremely CPU intensive.

I’m looking forward to the imminent arrival of my H7 and global shutter. Color is not at all important to me (well, that’s overstating it – given unlimited resources, I would happily accept color, haha). My question is about the theoretical 200fps framerate. Do the higher framerates require downsampling? Is 200fps only at, oh say, 40x30 pixels or something of that sort? Also, for all practical purposes, what sort of processing do you realistically expect the H7 to be able to perform at high framerates, even if the shutter itself can produce such rates? I mean, a 200fps sensor attached to a processor that can only run any useful algorithm at all (even just iterating through the pixels, for heaven’s sake) at a much lower rate is confusing to me. Or, alternatively, is the 200fps only achievable in blind recording, just dumping sensor data to an SD card as fast as possible without imposing any computational overhead on the processor?

Thanks.

200 FPS is at 80x60. 400 FPS is at 40x30. As for whether you can actually do something at that frame rate - that’s not too important. What’s important is that the latency is extremely low.

I.e. if you are building a line-following robot, doing quadcopter stabilization, etc., the time between the current image and the next one is <10ms. For example, with the OV7725 camera you can’t get much beyond 80 FPS. Generally, you can only achieve half of the raw rate, so your algorithm will only hit 40 FPS (25ms) with the OV7725. With a 200 FPS frame rate, dropping every other frame puts you in the 10ms area. At 400 FPS you’re in the 5ms area.
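To spell out the arithmetic behind those numbers (plain Python, just illustrating the halved-rate latency figures above):

# If your algorithm only keeps up with every other frame the sensor
# delivers, the gap between processed frames is 1 / (fps / 2).
for fps in (80, 200, 400):
    print(fps, "FPS ->", 1000.0 / (fps / 2), "ms between processed frames")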

In regards to the frame rates coming into the system, 400 FPS is the rate at which the image is present in RAM after the DCMI hardware sucks it in. Assuming you write some tight C code - yes, you can do something with the image. You’ll have to plan what you want to do carefully, however.

For our Russian line-follower friends, they will be able to use get_regression() (non-robust) to track a line under the robot at easily 100-200 FPS, as the non-robust linear regression is a very fast single-pass algorithm.
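A minimal sketch of what that loop could look like (the grayscale threshold is a placeholder you’d tune for your line versus background; robust=False selects the fast single-pass fit):

import sensor, time

GRAYSCALE_THRESHOLD = (0, 100)  # placeholder; tune for your line vs. background

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQQVGA)  # 80x60 - the ~200 FPS regime
sensor.skip_frames(time=2000)
clock = time.clock()

while True:
    clock.tick()
    img = sensor.snapshot()
    # Non-robust regression: fast single-pass fit over thresholded pixels.
    line = img.get_regression([GRAYSCALE_THRESHOLD], robust=False)
    if line:
        img.draw_line(line.line(), color=127)
    print(clock.fps())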

One note, however - to get the high frame rate you have to lower the exposure, so the image becomes dark. You can use gamma correction to make the image brighter.
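Something along these lines (the exposure value is a placeholder; sensor.set_auto_exposure() and image.gamma_corr() are the relevant calls, though exact behavior can vary by firmware version):

import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
# Shorter exposure -> higher frame rate, but a darker image.
sensor.set_auto_exposure(False, exposure_us=1000)  # placeholder value - tune it
sensor.skip_frames(time=2000)

while True:
    img = sensor.snapshot()
    # Gamma < 1.0 should brighten the dark image; adjust to taste.
    img.gamma_corr(gamma=0.5)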

I’m familiar with the exposure trade-off and gamma corrective measures. I was experimenting with a 3500 fps camera recently. Thanks.

Hi,
Has anybody tried global shutter frame capture based on the hardware exposure pin (PB4 - SPI3_MISO)?

There’s an example showing how to turn on hardware triggering. It’s built into the firmware.

I already know of a customer who used it to capture images of moving objects.

Hi, can you please share that hardware trigger tutorial/sample?

Examples → Global Shutter → Triggered Mode.
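If you can’t open the IDE right now, the core of that example is roughly this (a sketch from memory, so check the bundled example for the exact code; the key call is the triggered-mode ioctl):

import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)
# Put the global shutter sensor into triggered (snapshot) mode.
sensor.ioctl(sensor.IOCTL_SET_TRIGGERED_MODE, True)
sensor.skip_frames(time=2000)

clock = time.clock()
while True:
    clock.tick()
    img = sensor.snapshot()  # each call now triggers one exposure
    print(clock.fps())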

When you call snapshot it takes a picture. If you want to sync to an external source, just use pyb to wait for an I/O pin to go low and then high. E.g.:

import sensor, pyb

sync = pyb.Pin("P0", pyb.Pin.IN)  # external sync signal

while sync.value():      # wait for pin to be low
    pass
while not sync.value():  # wait for pin to go high
    pass
img = sensor.snapshot()

Hi,

Thanks for your help. I tried this but it didn’t give the expected result. My object is moving fast (15 FPS) and I capture only a small portion of the moving object, meaning my trigger fires every 66.6 milliseconds, but the position accuracy needs to be held within 60 microseconds. So I think I need to drive the exposure pin of the MT9V034 (which is connected to PB4 of the STM32) directly as a hardware trigger, i.e. map the STM32 PB4 pin to an OpenMV output pin and drive it directly from the trigger encoder. Does anybody have any other suggestion for achieving this with the OpenMV board?

On the whole color debate above, one very application-specific approach would be to record Bayer filter data (i.e., raw sensor data) straight onto the SD card, so that it could be deBayered and processed at a later time. In that way, one could use a global shutter to record high-framerate color data for later offline processing. This would be a totally different application from any attempt at online color-image processing, regardless of framerate, because as you said, deBayering on the H7 would ruin your performance.

I also recognize that the proposal of recording Bayer data for later offline use, without necessarily using the sensor data online at all, is too specific for you to have designed the H7 or the global shutter module toward, so no worries.

…although now that I think about it, I can imagine a use case in which you record Bayer data for offline use, but directly process the Bayer matrix in grayscale form online. A simple 2X decimation would give you a decent grayscale luminance image for, perhaps, object tracking – of an object that you might then later process in color offline.
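For the offline side, a rough sketch of that 2x decimation in desktop Python (this assumes a standard RGGB-style mosaic, uses numpy since it runs off-board, and the helper name bayer_to_gray_2x is just mine):

import numpy as np

def bayer_to_gray_2x(raw):
    # raw: 2-D array of raw Bayer sensor data (RGGB-style mosaic assumed).
    # Each 2x2 cell holds one R, two G, and one B sample, so the cell mean
    # is a rough luminance estimate at half resolution.
    h, w = raw.shape
    h, w = h - h % 2, w - w % 2  # drop a trailing odd row/column if present
    cells = raw[:h, :w].reshape(h // 2, 2, w // 2, 2).astype(np.float32)
    return cells.mean(axis=(1, 3)).astype(raw.dtype)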

…but, I can’t recall now whether it is possible to simultaneously record to SD card and process images “in-app”, or whether you have to choose between shuttling the frame buffer directly to the card or otherwise processing it on the fly.

Well, you can build a unit with a color sensor. You just have to take our files and switch out the MT9V034 part for the color one. Our code then won’t know the difference. As long as you keep the res at VGA or above you have a Bayer image.

It would be great if you could explain this a bit further - what is the MT9V034 and where are those files located?

Thanks a lot

Hi, please see the OpenMV boards repo, specifically the OMV4 board and its sensor modules. The design files for the global shutter camera are there.


Hello sir, as you suggested, I have swapped in the color version of the sensor and everything works as on the original board in grayscale mode, but when I change to color it shows a “pixel format not supported” error. Any suggestion?

It doesn’t support color.

Oh, really? Following your earlier suggestion I spent so many days and so much money to achieve this. You can find my previous queries and your answers; I have been chasing this for the last two years.

Sorry, I didn’t read your original response.

You have to edit the C code to support the color sensor. The output of that sensor is Bayer, so you need to adjust this method here:

https://github.com/openmv/openmv/blob/master/src/omv/mt9v034.c#L168

To support PIXFORMAT_BAYER, and then set the pixel format in your Python code to Bayer. Our library will then automatically debayer the image for you. Note that the output of the MT9V034 is raw, unprocessed Bayer data, so our Bayer code likely won’t give you the image result you want.

Finally, you have to run the camera at VGA res or higher and down-sample on the H7, as the Bayer output cannot be down-sampled by the camera chip.
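On the Python side, the intended usage would look roughly like this (a sketch assuming the C-side change above has been made; stock firmware will reject BAYER on this sensor):

import sensor

sensor.reset()
sensor.set_pixformat(sensor.BAYER)  # needs the mt9v034.c change above
sensor.set_framesize(sensor.VGA)    # Bayer output must be VGA or higher
sensor.skip_frames(time=2000)

img = sensor.snapshot()
# Down-sample on the H7 after capture, e.g. 2x mean pooling
# (whether this works directly on a Bayer frame depends on firmware version).
small = img.mean_pooled(2, 2)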

Can you share some example code please…

No