Just wondering if the pixel coords or frame size are different when viewed in the IDE frame buffer compared to when it’s purely running off the STM32? I have the camera mounted on a gimbal controlling servos to centre itself on a blob, and it works very well through the IDE; however, it behaves strangely and centres the blob towards a corner of the frame when running independently (i.e. it’s no longer looking directly at the blob, but keeping it more in ‘the corner of its eye’).
Hi, the camera computes everything on-board; the IDE just displays the results. Can you post your code? There may be an issue with how you’ve set things up.
In particular, keep in mind that once you turn the frame buffer off, your FPS goes way up. If you’re using something like integral control, it will accumulate a lot faster. To test this, click the Disable Frame Buffer button in the top right-hand corner of the IDE. After doing so the camera will run at max speed.
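To make a control loop robust to that FPS change, one common approach is to scale the integral term by the measured loop period instead of accumulating raw error once per frame. This is a minimal plain-Python sketch (not the poster’s code; the `PIController` class and gains are made up for illustration):

```python
import time

class PIController:
    """Simple PI controller whose integral term is scaled by elapsed
    time, so its behaviour doesn't change when the frame rate changes."""

    def __init__(self, kp, ki):
        self.kp = kp
        self.ki = ki
        self.integral = 0.0
        self.last_t = time.monotonic()

    def update(self, error):
        now = time.monotonic()
        dt = now - self.last_t
        self.last_t = now
        # Accumulate error * dt rather than raw error: at double the FPS,
        # dt halves, so the integral still grows at the same rate per second.
        self.integral += error * dt
        return self.kp * error + self.ki * self.integral
```

With a fixed `self.integral += error` instead, doubling the FPS would double how fast the integral winds up, which is exactly the behaviour change you’d see after disabling the frame buffer.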
Ok, figured it out now that I’ve had a chance to look. It was a simple mistake: I was computing the centre of the frame with sensor.width()/2 and sensor.height()/2 before initialising the sensor as QVGA, so the centre was for the wrong resolution. Initialising as QVGA first fixed it.
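For anyone hitting the same thing, a hypothetical OpenMV init sketch (not the poster’s exact code) showing the correct ordering: set the frame size *before* reading sensor.width()/sensor.height(), otherwise the centre is computed for the old resolution.

```python
import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)   # 320x240 -- set this first
sensor.skip_frames(time=2000)       # let the new settings take effect

# Now width()/height() reflect QVGA, so this is the true frame centre.
center_x = sensor.width() // 2      # 160
center_y = sensor.height() // 2     # 120
```

If the centre is captured before set_framesize(), the servo loop steers the blob towards what it thinks is the centre, which lands somewhere off-centre in the actual frame.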