OpenMV as Autoguider for astrophotography?

OpenMV related project discussion.
frank26080115
Posts: 32
Joined: Tue Jul 28, 2020 2:32 am

Re: OpenMV as Autoguider for astrophotography?

Postby frank26080115 » Sun Sep 06, 2020 4:02 am

I'm jumping the gun here. The polar scope's only been tested outdoors twice, and there's still room for improvement, but it's usable. There will probably be bugs and annoyances to fix later. But now I'm thinking about the guide scope idea again.

I previously mentioned that I designed a PCB with an ST-4 port for autoguiding. That port carries four signals, basically as if you had four buttons for North, South, East, and West.
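
Driving that port from MicroPython should just be four open-drain GPIOs. Here's a minimal sketch of what I mean; the pin names P0-P3 are placeholders for whatever the PCB actually routes to the RJ12 jack:

Code: Select all

import pyb

# Four active-low guide lines, open-drain, idle released (high-Z),
# exactly like four hand-controller buttons.
ST4 = {
    "north": pyb.Pin("P0", pyb.Pin.OUT_OD),
    "south": pyb.Pin("P1", pyb.Pin.OUT_OD),
    "east":  pyb.Pin("P2", pyb.Pin.OUT_OD),
    "west":  pyb.Pin("P3", pyb.Pin.OUT_OD),
}
for pin in ST4.values():
    pin.value(1)  # release all lines at startup

def st4_pulse(direction, duration_ms):
    # Pulling a line low is the same as holding that button down.
    pin = ST4[direction]
    pin.value(0)
    pyb.delay(duration_ms)
    pin.value(1)

st4_pulse("west", 250)  # e.g. a 250 ms correction in RA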

But this is less than optimal. The computerized mounts have RS232 and USB ports and support ASCOM, and over ASCOM you can send pulse durations digitally. Everybody in the astrophotography community claims this works better, and I can understand why.
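
For example, if the mount speaks the Meade LX200-style serial protocol, the pulse-guide command that ultimately goes down the wire looks something like this (the UART number and baud rate here are just guesses):

Code: Select all

import pyb

uart = pyb.UART(3, 9600)  # assumed UART and baud rate for the mount link

def pulse_guide(direction, duration_ms):
    # direction is 'n', 's', 'e' or 'w'; e.g. ':Mgn0500#' means
    # "guide north at guide rate for 500 ms".
    uart.write(":Mg%c%04d#" % (direction, duration_ms))

pulse_guide('w', 250)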

The RS232 port is actually the +/-12 V kind (god damn it guys, it's 2020). If I wanted to use it, I'd need a whole MAX232 or something on the PCB.

But the H7 has an OTG port.

Six years ago, I used ST's USB host library for a huge project: I added support for USB hubs to it from scratch, and added a Bluetooth stack on top of it too. Adding ASCOM support, which at the transport level is just a virtual UART (USB CDC), should be easy for me.

I wanna toss that into OpenMV's firmware. The first obvious problem is that the REPL and debugging wouldn't work, so I guess my code would need a timeout of some sort: shut down host mode and reinitialize device mode if nothing enumerates. Something like the sketch below.
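
Very roughly, the fallback logic I have in mind (written as Python pseudocode for readability; the real thing would live in the C firmware, and the usb_* helpers are hypothetical placeholders, stubbed out here):

Code: Select all

import pyb

def usb_host_init(): pass                 # placeholder: start OTG host mode
def usb_host_deinit(): pass               # placeholder: stop OTG host mode
def usb_host_enumerated(): return False   # placeholder: did a device attach?
def usb_device_init(): pass               # placeholder: re-init device mode (VCP/REPL)

ENUM_TIMEOUT_MS = 3000  # assumed: how long to wait for the mount to enumerate

def usb_bringup():
    usb_host_init()
    start = pyb.millis()
    while pyb.elapsed_millis(start) < ENUM_TIMEOUT_MS:
        if usb_host_enumerated():
            return "host"    # a mount showed up; keep talking to it
        pyb.delay(10)
    usb_host_deinit()        # nothing enumerated; give the port back
    usb_device_init()        # so the REPL and IDE work again
    return "device"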

What do you think?

Don't worry, I'm not asking you to do this for me. Just wondering if you spot something obviously wrong in my plan. I checked the schematic... it should work... there are OTG cables that have a split just for power, too.

Actually, a question: what kinds of USB events will interrupt the execution of a .py file? The IDE interrupt and KeyboardInterrupt I know about, but what else?
kwagyeman
Posts: 4550
Joined: Sun May 24, 2015 2:10 pm

Re: OpenMV as Autoguider for astrophotography?

Postby kwagyeman » Sun Sep 06, 2020 11:01 am

The STM32H7 supports 2 USB ports. I would design a new board using the second USB port and leave the first untouched. This way you don’t have to deal with the changes.

Those are the two events. Everything else is from the script itself.
Nyamekye,
frank26080115
Posts: 32
Joined: Tue Jul 28, 2020 2:32 am

Re: OpenMV as Autoguider for astrophotography?

Postby frank26080115 » Fri Oct 02, 2020 12:04 am

I've moved forward with the autoguider project. The camera will be mounted on a motorized equatorial mount, kinda like a pan-tilt except that the main axis is parallel with the Earth's axis of rotation. I need a way to ensure that a 1-second exposure has no motion blur at all. This sounds easy on paper: stop the motors, start the snapshot, end the snapshot. But it's not.

I ran the following code

Code: Select all

import astro_sensor
import pyb

cam = astro_sensor.AstroCam()
cam.init(shutter_us=1000000)  # request a 1 second exposure

while True:
    t = pyb.millis()
    cam.snapshot_start()              # kick off a capture...
    while not cam.snapshot_check():   # ...and busy-wait until it completes
        pass
    tpassed = pyb.elapsed_millis(t)   # wall-clock time the capture took
    img = cam.snapshot_finish()
    print("t %u" % tpassed)
    pyb.delay(pyb.rng() % 2000)       # random delay to decorrelate from the frame clock
The resulting output looks like this:

Code: Select all

t 844
t 1057
t 325
t 537
t 683
t 1112
t 1398
What I expected was a uniform list of numbers, all around 1000. The numbers I got back are essentially random. This means the camera is free-running regardless of my function calls: presumably snapshot_start() just latches onto whatever frame is already in flight, so the exposure may have begun before I called it. This will not achieve what I need. I need to control the exact moment the shutter opens and closes.

How can I do that?

Should I put the sensor to sleep after every snapshot and wake it before the next one? The sleep and wake would need to be pretty fast. I tried it: starting a snapshot immediately after wake, or even after a 100 ms delay, returns a completely black image, and the measured time span is still erratic (e.g. 95 ms or 2762 ms, nowhere near the 1000 I expect). So that approach is out.

If you have no solution for me then I'll be forced to toss every other frame, which... isn't exactly a bad idea (rough sketch below). I get one second to take a still shot, then the next 500 ms is allocated to motor movement, and the final 500 ms is to kill momentum and vibration. So it's not the end of the world.
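
Here's a rough sketch of that cadence, reusing the astro_sensor API from the snippet above (move_motors() is a stub standing in for the actual guide correction):

Code: Select all

import astro_sensor
import pyb

cam = astro_sensor.AstroCam()
cam.init(shutter_us=1000000)

def move_motors():
    pass  # placeholder: issue the guide correction (~500 ms)

def clean_snapshot():
    # The sensor free-runs, so the first frame may already be mid-exposure.
    # Grab and discard it, then keep the next, complete frame.
    for _ in range(2):
        cam.snapshot_start()
        while not cam.snapshot_check():
            pass
        img = cam.snapshot_finish()
    return img

while True:
    img = clean_snapshot()  # blur-free 1 s still (the kept second frame)
    move_motors()
    pyb.delay(500)          # kill momentum and vibration before the next shot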

Thanks!

EDIT: I read through the sensor's register list, and it looks like I need to play with FREX mode? I'll be experimenting with that for a bit; see the sketch below.
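
For reference, the shape of the experiment I have in mind, using the stock sensor module's raw register access. The two register addresses below are NOT real -- they're placeholders until I pull the actual FREX registers out of the datasheet:

Code: Select all

import sensor

FREX_MODE_REG = 0x00  # placeholder: selects free-running vs. snapshot/FREX mode
FREX_TRIG_REG = 0x00  # placeholder: software trigger for a single exposure

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)

sensor.__write_reg(FREX_MODE_REG, 0x01)  # switch the sensor into FREX mode
sensor.__write_reg(FREX_TRIG_REG, 0x01)  # request exactly one exposure, now
img = sensor.snapshot()                  # read out the triggered frame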
kwagyeman
Posts: 4550
Joined: Sun May 24, 2015 2:10 pm

Re: OpenMV as Autoguider for astrophotography?

Postby kwagyeman » Fri Oct 02, 2020 1:11 pm

Yeah, you're going to need to do manual register writes and control the camera directly with the FREX setup. You should be able to trigger the camera exposure in software.
Nyamekye,
frank26080115
Posts: 32
Joined: Tue Jul 28, 2020 2:32 am

Re: OpenMV as Autoguider for astrophotography?

Postby frank26080115 » Sun Oct 04, 2020 2:19 am

I don't think I can figure out the FREX method without an example or app note or something. So I went with the other option, which is to timestamp each frame with the HAL_GetTick() value at the moment HAL_DCMI_FrameEventCallback() fires. It turns out that with a 1 s exposure, frames arrive approximately 1.3 seconds apart, and that's not due to the garbage collector or some other MicroPython-level overhead.

This means that for my webpage interactions, I need the MicroPython side to push data to the web UI instead of having the web UI poll for it, just to make sure the data transfer finishes before a frame is captured. Luckily I think I found some MicroPython WebSocket server code that I can leverage; the push side is simple enough (see the sketch below).
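
The push side only needs the handshake plus unmasked server-to-client frames, something like this (assuming a MicroPython build with uhashlib/ubinascii available; the socket setup is omitted):

Code: Select all

import uhashlib, ubinascii

WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def ws_accept(client):
    # Parse the HTTP Upgrade request and finish the WebSocket handshake.
    request = client.recv(1024).decode()
    key = ""
    for line in request.split("\r\n"):
        if line.startswith("Sec-WebSocket-Key:"):
            key = line.split(":", 1)[1].strip()
    accept = ubinascii.b2a_base64(
        uhashlib.sha1((key + WS_GUID).encode()).digest()).strip()
    client.send(b"HTTP/1.1 101 Switching Protocols\r\n"
                b"Upgrade: websocket\r\n"
                b"Connection: Upgrade\r\n"
                b"Sec-WebSocket-Accept: " + accept + b"\r\n\r\n")

def ws_push_text(client, msg):
    # Server-to-client frames are unmasked: FIN + text opcode, then length.
    data = msg.encode()
    if len(data) < 126:
        header = bytes([0x81, len(data)])
    else:
        header = bytes([0x81, 126, len(data) >> 8, len(data) & 0xFF])
    client.send(header + data)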

Just to show off the progress physically:

[Images: five photos of the physical build]
kwagyeman
Posts: 4550
Joined: Sun May 24, 2015 2:10 pm

Re: OpenMV as Autoguider for astrophotography?

Postby kwagyeman » Sun Oct 04, 2020 12:24 pm

This is really cool man!

I don't know how the FREX feature works. But, we will eventually implement it.

Right now, my focus is on a bicubic/bilinear scaling pipeline with color table lookup and alpha blending. The code is taking a lot of time to write since we are bringing in the DMA2D feature along with using the SIMD ops on the Cortex-M4/M7. The point is to make the code as fast as possible. We've also kept everything integer-only, since converting between floats and ints back and forth costs performance.
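
To give a flavor of the integer-only approach, here's the core bilinear blend with the fractional pixel position held in 8-bit fixed point (scalar Python for illustration only; the real code is C with SIMD):

Code: Select all

def bilinear_fixed(p00, p10, p01, p11, fx, fy):
    # p00..p11 are the four neighboring pixels; fx and fy are the fractional
    # position in 0..256 fixed point (fx = int(frac_x * 256)), so no floats.
    top = (p00 * (256 - fx) + p10 * fx) >> 8   # blend along x, top row
    bot = (p01 * (256 - fx) + p11 * fx) >> 8   # blend along x, bottom row
    return (top * (256 - fy) + bot * fy) >> 8  # blend the two rows along y

print(bilinear_fixed(10, 20, 30, 40, 128, 128))  # halfway both ways -> 25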

Anyway, once this pipeline is finished I will be able to add bilinear/bicubic scaling to our rotation correction method and update that method to be basically equivalent to OpenCV's warpPerspective. Then we will also update our lens correction method to use this, along with adding the same level of support OpenCV has for tangential distortion.

The whole point of all this work is to bring our vision system up to OpenCV's level... and this means making sure the basics are solid.
Nyamekye,
