Panning camera and trying to get images

I have an H7 camera mounted on a pan/tilt servo mount. The servos do not have external feedback signals, so when I send them a command to go to a certain angle, I have no feedback to confirm they got there. For this sample, I’m simply trying to pan the camera and display images on the PC after it stops at certain positions, but the images don’t show up on the PC in time at each stop. Is there any way to get something this simple to work? Here is the code I’m running.

import time, sensor, image, math, utime
from pyb import Servo

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.set_windowing((200,220))
sensor.skip_frames(time = 2000)

Pan = Servo(1)   # P7 used for panning servo
Tilt = Servo(2)  # P8 used for tilting servo
Tilt.angle(4)

print("\n1. Home Position")
Pan.angle(0,2000)
img = sensor.snapshot()
utime.sleep_ms(5000)

print("2. Pan angle -10")
Pan.angle(-5,2000)
img = sensor.snapshot()
utime.sleep_ms(5000)

print("3. Home Position")
Pan.angle(0,2000)
img = sensor.snapshot()
utime.sleep_ms(5000)

print("Finished")

Do sensor.flush() after sensor.snapshot(). snapshot() internally calls flush before taking a new picture, so you generally don’t have to think about this. When you add delays, you have to flush yourself.

Note that the script needs to be running for flush to work and that the script exits when it reaches the last line. So, flush before the sleep.
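
For example, in your loop the flush should come right after the snapshot and before the delay. A minimal sketch of the ordering (the rest of your setup stays the same):

img = sensor.snapshot()   # grab the frame at the current pan angle
sensor.flush()            # hand this frame to the IDE's JPEG buffer before blocking
utime.sleep_ms(5000)      # only then do the long delay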

Thanks. I will try it.

I changed the sample code to this and still don’t see consistency in the images transferred to the PC. I’ll see the home image, and it will still be there when the camera is at the -20 deg position; it may even shift a bit, but the -20 image doesn’t show. Sometimes the -20 image does show, but not until the camera is back facing the home position. It’s very inconsistent.

Either the flush isn’t working, or I am still doing something wrong. With the length of my sleeps and the added delays in the move commands, there should be plenty of time to transfer the images to the PC before moving again.

import time, sensor, image, math, utime
from pyb import Servo

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.set_windowing((200,220))
sensor.skip_frames(time = 2000)


Pan = Servo(1)   # P7 used for panning servo
Tilt = Servo(2)  # P8 used for tilting servo
Tilt.angle(4)

while(True):

    print("\n1. Home Position")
    Pan.angle(0,2000)
    img = sensor.snapshot()
    sensor.flush()
    time.sleep(5000)
    
    print("2. Pan angle -20")
    Pan.angle(-20,3000)
    img = sensor.snapshot()
    sensor.flush()
    time.sleep(10000)
    
    print("3. Home Position")
    Pan.angle(0,3000)
    img = sensor.snapshot()
    sensor.flush()
    time.sleep(5000)
print("\n1. Home Position")
    Pan.angle(0,2000)
    print("Snapshot")
    img = sensor.snapshot()
    img = sensor.snapshot()
    sensor.flush()
    time.sleep(5000)
    
    print("2. Pan angle -20")
    Pan.angle(-20,3000)
    print("Snapshot")
    img = sensor.snapshot()
    img = sensor.snapshot()
    sensor.flush()
    time.sleep(10000)
    
    print("3. Home Position")
    Pan.angle(0,3000)
    print("Snapshot")
    img = sensor.snapshot()
    img = sensor.snapshot()
    sensor.flush()
    time.sleep(5000)

Look at when "Snapshot" is printed. You’ll notice that Pan.angle() doesn’t block. You should probably add a delay after Pan.angle() for the same time you passed to it. I think the servos are driven by a timer module that powers them, so they move asynchronously to your code.

I did snapshot twice because I was confused too. Not sure if it’s needed.
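
Something like this is what I mean by waiting out the move yourself. Just a sketch (the helper and the fixed wait are my own, since there is no feedback to confirm the servo actually arrived; it assumes the same Pan object and utime import from your script):

def pan_to(angle, move_ms):
    # Pan.angle() starts the move and returns right away, so sleep for the
    # same time we told the servo to take before grabbing an image.
    Pan.angle(angle, move_ms)
    utime.sleep_ms(move_ms)

pan_to(-20, 3000)        # blocks for ~3 s, camera should be in position now
img = sensor.snapshot()
sensor.flush()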

I changed the code to this:

# First make sure we start at the home position
Pan.angle(0)
Tilt.angle(4)
time.sleep(5000)

# Now loop through a simple motion to get images at different angles
while(True):

    print("\n1. Home Position")
    Pan.angle(0)
    time.sleep(1000)
    print("Taking Snapshot")
    img = sensor.snapshot()
    sensor.flush()
    time.sleep(5000)
    print("Image 1 should show on PC by now")
   
    print("2. Pan angle -20")
    Pan.angle(-20)
    time.sleep(1000)
    print("Taking Snapshot")
    img = sensor.snapshot()
    sensor.flush()
    time.sleep(5000)
    print("Image 2 should show on PC by now")

    print("3. Home Position")
    Pan.angle(0)
    time.sleep(1000)
    print("Taking Snapshot")
    img = sensor.snapshot()
    sensor.flush()
    time.sleep(5000)
    print("Image 3 should show on PC by now")

It didn’t do any good. Omitting the second parameter in the Pan.angle() command tells it to get there as fast as possible. Notice that I had a time value in there before; I took it out and added a one-second delay after panning instead. I take the image, flush the buffer, and wait 5 seconds, but I don’t see the image on the PC until the camera is moving to the next position. I see a few things here:

  1. The Pan.angle() (and Tilt.angle()) commands are non-blocking, even if you pass a time to them, e.g. Pan.angle(20, 2000).
  2. If my servos had an analog output wire for feedback (essentially a potentiometer on the shaft that gives you a voltage proportional to the rotation), I’d be able to monitor it with an analog input and make the move blocking (see the sketch after this list).
  3. time.sleep() seems to be blocking the transmission of the image to the PC.
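
For reference, this is roughly what I mean in point 2 if the servo did have a feedback wire. Everything here is hypothetical (the pin, the target counts, the tolerance), since my servos don’t actually have that output:

from pyb import ADC, Pin
import utime

feedback = ADC(Pin('P6'))   # hypothetical analog feedback wire on P6

def wait_until_reached(target_counts, tolerance=20, timeout_ms=3000):
    # Poll the feedback voltage until it settles near the target, or give up
    # after timeout_ms so the loop can never hang forever.
    start = utime.ticks_ms()
    while abs(feedback.read() - target_counts) > tolerance:
        if utime.ticks_diff(utime.ticks_ms(), start) > timeout_ms:
            break
        utime.sleep_ms(10)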

Please do two snapshot calls back to back and leave the flush in. I’ll need to debug why one snapshot isn’t enough…

Didn’t really help.

while(True):

    print("\n1. Home Position")
    Pan.angle(0)
    time.sleep(1000)
    print("Taking Snapshot")
    img = sensor.snapshot()
    img = sensor.snapshot()
    sensor.flush()
    time.sleep(5000)
    print("Image 1 should show on PC by now")

    print("2. Pan angle -20")
    Pan.angle(-20)
    time.sleep(1000)
    print("Taking Snapshot")
    img = sensor.snapshot()
    img = sensor.snapshot()
    sensor.flush()
    time.sleep(5000)
    print("Image 2 should show on PC by now")

    print("3. Home Position")
    Pan.angle(0)
    time.sleep(1000)
    print("Taking Snapshot")
    img = sensor.snapshot()
    img = sensor.snapshot() 
    sensor.flush()
    time.sleep(5000)
    print("Image 3 should show on PC by now")

Mmm, will have to look into this.

The IDE just grabs whatever image is in our internal JPEG buffer. Flush should do the trick. I might have bugged the API while working on other things.

You can do this for now to force an update:

print(sensor.compress_for_ide(quality=90), end='')

Use that instead of flush. This bypasses the way the IDE pulls images and forces an update of the frame buffer. However, it’s a blocking operation, so it will reduce performance. flush() JPEG-compresses the frame buffer and puts it into a JPEG buffer that the IDE can pull with no overhead.

Also, use compressed_for_ide() in the snippet above if you need the frame buffer again afterwards… compress_for_ide() does an in-place compress.

Got an error: AttributeError: 'module' object has no attribute 'compress_for_ide'

I’m using IDE 2.4.0 and camera firmware version 3.6.1

img.compress_for_ide()

Sorry, it’s on the image object. Not the sensor module.

sensor.snapshot().compress_for_ide()
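
In the loop that would look something like this. A sketch only; quality=90 is just the value from my earlier example, and the delays match the ones already in your script:

Pan.angle(-20)
utime.sleep_ms(1000)      # give the servo time to finish the move
img = sensor.snapshot()
# Blocking: compresses the frame in place and pushes exactly this image to the IDE.
# Use img.compressed_for_ide(quality=90) instead if you still need the raw frame afterwards.
print(img.compress_for_ide(quality=90), end='')
utime.sleep_ms(5000)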