I am interested in building two separate histograms. Instead of accumulating all events into one snapshot, I want one histogram that accumulates all intensity increases and one that accumulates all intensity decreases. For my application, I want to compare the on vs. off events in a more detailed way than just a maxed-out sensor.set_contrast.
I am currently thinking I will create a new function in the firmware, similar to snapshot and snapshot_post_process, that will accumulate the on/off events separately. Is there a better way to accomplish this? I saw in another thread there was a project to make the direct event stream more accessible. Will that be completed soon?
Hi, I will be implementing support for direct stream mode before the end of the month, probably starting next week.
You'll have a list of events as an NDARRAY and then you'll be able to pass that to the Image() constructor to create an image.
Will the NDARRAY just be a 320x320 grid where, at my preferred sampling rate, I request the full state of all pixels and convert it to an image? Will I be able to access [x, y, polarity, timestamp] like on the Prophesee kits?
Additionally, do you suspect any problems with attempting to process and store two histograms with one modified snapshot call? Will there be any issues with the timing?
Here's the implementation:
You will be able to create two histograms as you desire. You'll have to split the positive and negative events off from each other, though.
I am a bit confused by that implementation. What triggers an update to the ndarray? Does the event buffer keep track of pixels that have no intensity changes? Additionally, does this maintain the "accumulation" utilized by the framebuffer system?
It's like this:
import sensor, image
from ulab import numpy as np

sensor.reset()
# One row per event; 2048-event buffer (row layout is described below).
evt_buffer = np.zeros((2048, 6), dtype=np.uint16)
while True:
    evt_count = sensor.ioctl(sensor.IOCTL_GENX_READ_EVENT, evt_buffer)
    #.....
    img = image.Image(evt_buffer, format=EVENT_DATA)
You call the read event ioctl. This updates the ndarray with events. Then if you want an image from the events you pass it to the Image() constructor. Otherwise, you have the RAW events and you may do with them what you want.
Understand that with this interface, if you want to split the histogram image into two, you have to do that in code by manually splitting events in the event ndarray.
The ndarray is [type, ts_s, ts_ms, ts_us, x, y]. I can make the Image() call process all non-zero events into an image. So, by duplicating the ndarray and then zeroing events which you don't want in one image, you'll get your desired result.
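Roughly like this, continuing from the snippet above (just a sketch: the type-column polarity encoding of 1 = ON / 0 = OFF is an assumption for illustration, not the final API):

on_events = evt_buffer.copy()
off_events = evt_buffer.copy()
for i in range(evt_count):
    if evt_buffer[i, 0] == 1:          # assumed ON event
        for j in range(6):
            off_events[i, j] = 0       # zero it out of the OFF copy
    else:                              # assumed OFF event
        for j in range(6):
            on_events[i, j] = 0        # zero it out of the ON copy
img_on = image.Image(on_events, format=EVENT_DATA)
img_off = image.Image(off_events, format=EVENT_DATA)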
I'm not sure about this. If a given pixel in a 5ms time frame received both an increase and a decrease in intensity, would each histogram reflect the presence of each? The ndarray is only as big as the event buffer, so wouldn't only one of the histograms be updated with the latest intensity reading? I don't want to separate the On/Off from the same capture; I want to preserve every On/Off occurrence between captures.
I am reading through the firmware and am confused by how the current snapshot method accounts for accumulation. Does it record the event stream in snapshot and then adjust the grayscale pixel intensity by polarity*contrast in snapshot_post_process?
Hi, we currently run the camera in histogram mode, where it generates the histogram for you and sends 8-bit values in which the events have already been combined.
In event mode you get the actual events. This has to be post-processed to generate an image. I'm saying the new API will allow you to control how that's done.
If you want to append the ndarray returned to another ndarray you can. You can accumulate events as long as you have RAM to store them.
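For example (a sketch only; the one subtlety is copying each batch before the next read overwrites the buffer):

history = []                                          # list of per-read event batches
for _ in range(10):                                   # e.g. accumulate ten reads
    evt_count = sensor.ioctl(sensor.IOCTL_GENX_READ_EVENT, evt_buffer)
    if evt_count:                                     # skip empty reads
        history.append(evt_buffer[:evt_count].copy()) # copy; the buffer gets reused
all_events = np.concatenate(history, axis=0)          # one [N, 6] array of all events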
In event mode I have code to do the polarity-based adjustment you describe, but we don't use that mode right now. Instead we use histogram mode. Once I add the new ndarray event support, the post-processing code will be moved to Image().
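Conceptually it's the accumulation you describe, along these lines (pure illustration; the contrast step and the polarity encoding here are placeholders, not the firmware's actual values):

CONTRAST = 16                                       # placeholder contrast step
frame = [[128] * 320 for _ in range(320)]           # 320x320 grayscale, start at mid-gray
for i in range(evt_count):
    polarity = 1 if evt_buffer[i, 0] == 1 else -1   # assumed: 1 = ON, else OFF
    x = evt_buffer[i, 4]
    y = evt_buffer[i, 5]
    frame[y][x] = max(0, min(255, frame[y][x] + polarity * CONTRAST))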
Understood. Thank you for the help!
I have been trying to build the firmware from source to turn off the histogram and recompile. I followed the instructions written out in docs/firmware.md as well as the amended instructions here. I am running into the same "undefined" errors.
In another thread I saw it mentioned that the docker instructions are recommended; is there no way to get the Linux build to work? I am failing after
make -j$(nproc) TARGET=<TARGET_NAME> (e.g. make TARGET=OPENMV4P -j).
I tried different board names and it didn't change anything.
I attempted the Docker compilation as well, I am getting very similar errors.
Are you building with the latest firmware? I just turned off histogram mode and re-compiled and got no undefined issues. It would help if you posted them.
Anyway, I'll be starting support for the event mode where you can get an ndarray of events very soon. I should be able to implement it next week.
My docker compilation is generating the following errors.
Create /workspace/build/OPENMV_RT1060/lib/micropython/flexram_config.s
Create /workspace/build/OPENMV_RT1060/lib/micropython/genhdr/pins.h
GEN /workspace/build/OPENMV_RT1060/lib/micropython/genhdr/mpversion.h
GEN /workspace/build/OPENMV_RT1060/lib/micropython/genhdr/qstr.i.last
../../extmod/modtls_mbedtls.c:45:10: fatal error: mbedtls/platform.h: No such file or directory
45 | #include "mbedtls/platform.h"
| ^~~~~~~~~~~~~~~~~~~~
compilation terminated.
mbedtls/mbedtls_port.c:35:10: fatal error: mbedtls/platform_time.h: No such file or directory
35 | #include "mbedtls/platform_time.h"
| ^~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
../../extmod/modcryptolib.c:64:10: fatal error: mbedtls/aes.h: No such file or directory
64 | #include <mbedtls/aes.h>
| ^~~~~~~~~~~~~~~
compilation terminated.
../../extmod/modhashlib.c:35:10: fatal error: mbedtls/version.h: No such file or directory
35 | #include "mbedtls/version.h"
| ^~~~~~~~~~~~~~~~~~~
compilation terminated.
and ends with
make[1]: *** [../../py/mkrules.mk:133: /workspace/build/OPENMV_RT1060/lib/micropython/genhdr/qstr.i.last] Error 1
make[1]: *** Deleting file '/workspace/build/OPENMV_RT1060/lib/micropython/genhdr/qstr.i.last'
make: *** [/workspace/ports/mimxrt/omv_portconfig.mk:202: MICROPYTHON] Error 2
make: *** [Makefile:14: build-firmware] Error 2
Are the submodules not being updated and made correctly?
I am following the instructions at docs/firmware. My board is an rt1062.
git clone https://github.com/openmv/openmv --depth=50
cd openmv/docker
make TARGET=OPENMV_RT1060
This is what I am running, with docker open in the background.
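If missing submodules are the cause of the mbedtls errors, is a manual update like this before the build the right fix? (Guessing here; I don't know what the docker build runs internally.)

cd openmv
git submodule update --init --recursive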
I wonder if the firmware is attempting to access deprecated methods of the micropython library, or is referencing it in a deprecated way more broadly.
Do you know if a fix is available? I saw a firmware update to the GENX320 was pushed today.
I mean, I can compile the firmware myself. I don't quite know what error you are running into. The build system works, and Docker is also tested on every build.
Anyway, I'm starting on the GENX320 changes now; if you don't mind waiting till the end of the week, I'll have the feature done.
Halfway done with the changes already. Will be done by Thursday.