Hi.
Is there any way to send a TTL output signal from the OpenMV when video recording starts, and then save the recorded video in MP4 format? Just like when you do this with the “record” button?
Thank You.
Respectfully,
Elia.
Hi, you can code the camera to toggle an I/O pin when you start recording video. The current systems we sell do not have H.264 hardware onboard, so it’s not possible for them to encode MP4 video. However, we will be able to do this in the future. For now, you can record MJPEG video, which just requires a higher bitrate and uses more disk space for the same quality.
Hi Kwabena,
thank You so much for Your quick response. I’ll try that and will get back with the code.
Meanwhile, a question. When I was saving video in MP4 format, the camera showed it should be, e.g., 6 min 53 s, but after conversion (I chose no rescale when saving), it was 6 min 40 s, so a lot of frames were missing, and it was out of sync with my electrophysiology recordings.
And you can see this only with longer videos; if you record for example 30 seconds, the difference won’t be visible.
But when I saved in AVI format, I think it fixed the problem; the duration stays the same.
So my question is, do You know whether the MJPEG format will have the same problem as MP4? I’m a little worried, because synchronization is pretty important for me.
Thank You ~
The current MJPEG recording code on the camera itself captures frames with microsecond accuracy. However, when converting it to an MP4, the sampling rate may change.
You can use FFMPEG outside of OpenMV IDE to control the conversion process better. It should match the frame rate though by default.
You recorded the video from the camera right? Then used the IDE to convert the MJPEG/AVI video file to MP4? Note that an MJPEG file is an AVI file. It’s the exact same stuff inside. However, the file name might be causing different behavior via FFMPEG.
I was trying just to record video from openMV IDE with “record” button.
The script was just the default “hello world” script.
import sensor
import time

sensor.reset()                      # Reset and initialize the sensor.
sensor.set_pixformat(sensor.RGB565) # Set pixel format to RGB565 (or GRAYSCALE).
sensor.set_framesize(sensor.QVGA)   # Set frame size to QVGA (320x240).
sensor.skip_frames(time=2000)       # Wait for settings to take effect.
clock = time.clock()                # Create a clock object to track the FPS.

while True:
    clock.tick()                    # Update the FPS clock.
    img = sensor.snapshot()         # Take a picture and return the image.
    print(clock.fps())              # Note: the OpenMV Cam runs about half as fast when
                                    # connected to the IDE. The FPS should increase
                                    # once disconnected.
When I hit “stop”, it proposes to save the video. It also asks whether to rescale the video; I choose no.
And I just discovered that whether I save in MP4 or AVI format, the video becomes shorter and loses some frames, I guess. For example, when I save a 5 min video, it becomes 4 min 50 s; when I save 3 min 6 s, it becomes 3 min 3 s, and so on.
I also tried to record video using this script:
import sensor
import mjpeg
import time
from pyb import Pin

pin0 = Pin('P0', Pin.OUT_PP)
pin0.low()
pin0.high()   # TTL pulse: mark the start of recording.

# Setup camera.
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
#sensor.skip_frames()
pin0.low()    # End of the TTL pulse.
c = time.clock()

# Create the mjpeg object.
r = mjpeg.Mjpeg("exampleEliatimecheckwithtimer")

# Add frames.
for i in range(6600):
    c.tick()
    r.add_frame(sensor.snapshot())

# Finalize.
r.close()
And again, the video was supposed to be 3 min 26 s, but it was ~3 min 16 s.
I’m not sure what’s happening.
How do I save the video so it keeps exactly the same duration and doesn’t lose anything?
Can You please help?
Hi, I can debug what’s going on with the IDE.
As for the camera, it could just be dropping frames.
If you want a perfect frame rate recorded on the camera, you need to call sensor.set_framerate() to fix the frame rate, and sensor.set_framebuffers(N), where N is 4 or above, to create an elastic FIFO buffer. That buffer prevents frame drops when the SD card erases blocks, which would otherwise cause you to drop frames.
I cannot test this on the camera at the moment, but I can check the IDE over the weekend.
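To see why a few spare frame buffers matter, here is a toy plain-Python simulation (not OpenMV firmware code; all numbers are invented). A sensor produces one frame per tick, and an SD-card writer normally drains faster than frames arrive but occasionally stalls, e.g. on a block erase. With a shallow FIFO the stall overflows the buffer and frames are lost; a deeper FIFO absorbs it:

```python
def simulate_capture(n_frames, n_buffers, stall_every, stall_len, drain_rate=2):
    # Toy model: each tick the sensor produces one frame; the writer
    # drains up to drain_rate frames per tick, but stalls for stall_len
    # ticks after every stall_every frames written (block erase).
    # Frames that arrive while the FIFO is full are dropped.
    fifo = 0
    dropped = 0
    written = 0
    stall = 0
    for _ in range(n_frames):
        if fifo < n_buffers:
            fifo += 1          # Sensor frame goes into the FIFO.
        else:
            dropped += 1       # FIFO full: the frame is lost.
        if stall > 0:
            stall -= 1         # Card is busy erasing; nothing drains.
        else:
            for _ in range(drain_rate):
                if fifo == 0:
                    break
                fifo -= 1
                written += 1
                if written % stall_every == 0:
                    stall = stall_len
                    break
    return dropped

print(simulate_capture(1000, 2, 30, 5))  # shallow FIFO: frames are dropped
print(simulate_capture(1000, 8, 30, 5))  # deeper FIFO: absorbs every stall
```

The real camera works the same way in spirit: `sensor.set_framebuffers(N)` with N of 4 or more gives the capture path enough slack to ride out SD-card latency spikes without losing frames.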
Hi,
I tried sensor.set_framerate() and sensor.set_framebuffers(N), and still the video was 2:40 instead of 2:45.
Is there any way to get exactly the same, full video? Or will it always just lose frames?
Thank You.
I’m at home from traveling. I can debug this now. Sorry for the delay.
There’s a bug with the MJPEG code. It seems to lose time as it computes the cumulative moving average of the frame rate.
All frames are captured in the video; it’s not dropping frames. It’s just that at the end of the video, you have to specify the frame rate to play them back at, and the number calculated is not exact enough. This results in the video playback being slightly faster than it should be and the video appearing much shorter when very long.
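To put numbers on that, here is a quick back-of-the-envelope in plain Python (the frame count and rates are invented to roughly match the 6 min 53 s clip from earlier in the thread):

```python
# Invented numbers: 12390 frames over 413 s (6 min 53 s) is exactly 30 fps.
n_frames = 12390
true_duration_s = 413.0
true_fps = n_frames / true_duration_s          # 30.0

# If the running average overshoots and 31 fps gets written into the
# file header, the player runs the same frames back slightly too fast:
stored_fps = 31.0
played_duration_s = n_frames / stored_fps      # ~399.7 s, about 6 min 40 s
drift_s = true_duration_s - played_duration_s  # ~13.3 s apparently "lost"

# The same relative error on a 30-second clip is barely visible:
short_drift_s = 30.0 - 30.0 * true_fps / stored_fps  # under 1 s
print(drift_s, short_drift_s)
```

No frame is missing in either case; the header frame rate just compresses the playback timeline, and the error grows linearly with clip length, which is exactly why only long recordings look wrong.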
The IDE seems to have the same issue when transcoding the ImageIO format from a .bin file which has a millisecond timestamp per frame to the mjpeg format which just has a frame rate for playing back frames.
…
I can work on fixes for both of these starting next week.
Anyway, the videos you are capturing include all the frames the camera sees. If you are using code to pull each frame out one at a time, you could just note the camera FPS as the capture FPS and then use that to figure out which frame corresponds to what, in the meantime until I have this fixed.
import sensor
import image
import time
import machine

record_time = 60500 # About 60 seconds, in milliseconds.

sensor.reset()                    # Reset and initialize the sensor.
sensor.set_pixformat(sensor.JPEG) # Set pixel format to JPEG.
sensor.set_framesize(sensor.VGA)  # Set frame size to VGA (640x480).
sensor.set_framebuffers(10)       # Elastic FIFO so SD stalls don't drop frames.
sensor.skip_frames(time=2000)     # Wait for settings to take effect.
clock = time.clock()              # Create a clock object to track the FPS.

led = machine.LED("LED_RED")
stream = image.ImageIO("/stream.bin", "w")

# Red LED on means we are capturing frames.
led.on()

start = time.ticks_ms()
while time.ticks_diff(time.ticks_ms(), start) < record_time:
    clock.tick()
    img = sensor.snapshot()
    # Modify the image if you feel like here...
    stream.write(img)
    print(clock.fps())

stream.close()
led.off()

raise Exception("Please reset the camera to see the new file.")
This code uses the ImageIO object to record the frame. There’s a millisecond timestamp on each frame in the file format. Once I fix the IDE conversion issue any videos recorded using this format should be able to be converted exactly.
Use Tools->Video Tools->Convert Video File to convert to MJPEG or MP4.
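The per-frame millisecond timestamps are what make exact conversion possible. A small plain-Python sketch (the timestamp values here are made up for illustration; they are not the actual ImageIO byte layout) shows the difference between keeping per-frame timing and collapsing it into one average rate:

```python
# Hypothetical timestamps (ms) for seven frames, jittering around 30 fps.
timestamps_ms = [0, 33, 66, 100, 133, 167, 200]

# The exact total duration comes straight from the timestamps:
duration_ms = timestamps_ms[-1] - timestamps_ms[0]   # 200

# Per-frame delays let a converter reproduce the timing exactly:
delays_ms = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]

# Collapsing everything into a single average rate throws that detail away,
# which is what the MJPEG header's one frame-rate field forces you to do:
avg_fps = 1000.0 * (len(timestamps_ms) - 1) / duration_ms  # 30.0
print(delays_ms, avg_fps)
```

Since the .bin stream keeps the delays, a fixed converter can always reconstruct the true duration, whereas an MJPEG file with a slightly wrong average rate cannot.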
Hi, thank You so much for Your answer and explanation; it’s good to know.
Hope You’ve had some rest after Your travels; take Your time.
I’ll be waiting; please let me know when it’s fixed.
Thank You!