OpenMV as Autoguider for astrophotography?

Gah, different night different problems.

You feel that heat today? San Mateo has got rolling blackouts. I drove up to Davis to a friend’s place, who has an AC’ed house, carrying all my gear. The sky is great here.

Bad news #1 is that memory is still a problem, but now it seems to be more related to how I’m setting the threshold. I can deal with this. I need to make that much more configurable and think about better ways of automating exposure and calculating or adjusting thresholds.

My old threshold is the image mean multiplied by 3, which worked great on my light-polluted simulations… but now it’s obviously setting the threshold so low that noise is becoming stars.

A potential technique might be to do an initial check of a smaller region just to see if it’s detecting an unexpected number of stars, without blowing up the RAM… hmm…
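Something like this is what I’m imagining (just a sketch with the stock sensor module and arbitrary numbers, not the actual guider code):

import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.VGA)
sensor.skip_frames(time=2000)

img = sensor.snapshot()
thr = min(int(img.get_statistics().mean() * 3), 254)  # the old mean*3 rule, clamped

# pre-check a small window in the middle before doing the whole frame
roi = (img.width() // 2 - 80, img.height() // 2 - 60, 160, 120)
probe = img.find_blobs([(thr, 255)], roi=roi, pixels_threshold=2, area_threshold=2)
if len(probe) > 50:
    thr = min(thr + 16, 254)  # way too many "stars" means it's catching noise, raise the threshold
stars = img.find_blobs([(thr, 255)], pixels_threshold=2, area_threshold=2)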

The bigger bad news… the sensor’s got hot pixels! Permanent white pixels, and not just single pixels, whole blobs! And I’m also seeing amp glow.

This is probably why the more expensive cameras are all aluminum… heat sink…

I can probably write some code to register the hot pixels and filter them out. I might buy a few more OpenMVs just to see if the damage is permanent. My work has a temperature-controlled chamber I use for testing battery safety, and I can toss the camera in to see how it handles different temperatures.

I let it cool down and removed the WiFi shield and the pixels disappeared… I’m gonna buy a can of super-cold later lol

I researched small peltiers and they are all too thick and cost like $30…

Dude, can I ask you to make me an extension cable for the camera? The WiFi shield is making it impossible to engineer any way of heat sinking the camera. A cable would let me mount the camera somewhere with more room and I can stick cooling solutions on it. I’m sure a lot of other OpenMV users would appreciate an extension cable too.

We sell it already.

Thanks, I guess I didn’t see it on Elmwood or SparkFun and thought it didn’t exist.

I managed to serve scaled and compressed JPGs over HTTP, but a weird thing is happening. Sometimes the socket times out (OSError -6) because the file was being sent but not fast enough. But it seems like this event would also cause the camera to never detect that a new frame has completed.

Of course this is using my own modified firmware with the non-blocking snapshot code, so you might not be able to, or want to, help with this problem. Do you have any hints? I have no idea how exceptions (I don’t even know how exceptions work in C) and interrupts (I think it might be a DMA interrupt being missed?) interact with each other.

The workaround is a simple timer that resets the camera sensor; it’s not a fatal bug, but it’s very, very annoying.
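The workaround looks roughly like this (a sketch, with the timer logic simplified to a polling check; cam.* is my non-blocking snapshot wrapper, so treat the names as placeholders and the timeout value as arbitrary):

import pyb
import sensor

FRAME_TIMEOUT_MS = 3000
last_frame = pyb.millis()

def poll_frame(cam):
    global last_frame
    if cam.snapshot_check():               # a frame actually completed
        last_frame = pyb.millis()
        return cam.snapshot_finish()
    if pyb.elapsed_millis(last_frame) > FRAME_TIMEOUT_MS:
        sensor.reset()                     # brute-force recovery when frames stop coming
        cam.init(shutter_us=1000000)
        cam.snapshot_start()
        last_frame = pyb.millis()
    return None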

God damn this heat, sooo many hot pixels, and the PLL won’t let me use some values. Stay cool man.

edit: I did allocate an extra frame buffer before scaling, and I told the scale function to copy into that extra frame buffer, so it’s not some buffer write collision, I think…

Socket error -6 has nothing to do with the camera. It’s thrown by the WINC1500 module randomly whenever it feels like it, and you have to close the socket. It’s just a general socket error. There’s no real recovery from it other than closing and reopening the socket.
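The usual pattern looks roughly like this (a sketch with a plain usocket HTTP server; the exact setup doesn’t matter):

import usocket

server = usocket.socket(usocket.AF_INET, usocket.SOCK_STREAM)
server.bind(("", 80))
server.listen(1)

while True:
    client, addr = server.accept()
    try:
        client.settimeout(5.0)
        client.recv(1024)                       # read the request
        client.send(b"HTTP/1.1 200 OK\r\n\r\nok")
    except OSError:
        pass                                    # the random -6 lands here
    finally:
        client.close()                          # just close and go back to accept()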

I wrote a “closeall” function into the WINC and py_winc parts of the firmware. Now, when my watchdog times out, it doesn’t reboot the WINC, it just closes all the sockets regardless of whether or not they are open. It’s kind of a dirty hack.

It’s actually working great! This is much faster at recovering from a bad request, and doesn’t drop the actual WiFi connection.

Haven’t tested it in AP mode yet or on mobile.

I found a funny situation that can be encountered with the non-blocking snapshot function: I did an in-place scale of the image before a snapshot finished, and the next snapshot completed with an image at the scaled size, not the full size expected. The problem was solved with the extra frame buffer allocation. I definitely need to be more careful about that buffer.

I also changed the firmware to accept negative numbers for area_threshold and pixels_threshold: if the numbers are negative, it’ll filter out large blobs but keep small blobs. I also added height_threshold and width_threshold. This is another effort to avoid using up heap memory. Not sure if you want these changes officially? I can’t do a pull request for it right now; I stacked that change on top of other changes that you probably don’t want.
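Usage with my modified firmware looks like this (not stock OpenMV; I’m treating the magnitude of a negative value as an upper bound):

# keep only small blobs: fewer than 300 pixels and a bounding-box area under 400
stars = img.find_blobs([(thr, 255)], pixels_threshold=-300, area_threshold=-400)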

Added filtering for hot pixels, just a list of X,Y coordinates, very easy.
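The filter is basically this (a sketch; the coordinates here are examples only, the real list is measured per camera):

HOT_PIXELS = [(123, 45), (301, 210)]   # example coordinates

def reject_hot_pixels(blobs, radius=2):
    good = []
    for b in blobs:
        hot = False
        for (hx, hy) in HOT_PIXELS:
            if abs(b.cx() - hx) <= radius and abs(b.cy() - hy) <= radius:
                hot = True
                break
        if not hot:
            good.append(b)
    return good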

Added a database of 2740 stars with 500 of them identifiable; I had to fit it within 200 kB of data, and it adds about another second of HTTP load time. The JS does the pattern matching.

Kinda feature complete, need to do more testing in AP mode and mobile, and more night time tests when the clouds are gone.

Thanks again

Probably should have an explicit area max versus min. So, make a feature request and I can implement it better.

Hey, have you thought about selling something like a CS-mount-to-M12 lens adapter - ccdcmoslens.com? Or how about https://www.securitycamera2000.com/products/20mm-Board-Camera-CS-Mount-Base.html ?

I am code feature complete. I’m doing only UI tweaks right now while waiting for the smoke to clear. A CS-mount 25mm lens would be an f/1.2, like https://www.ebay.com/itm/Yohii-1-3-F1-2-25mm-0-98-Inch-CCTV-Lens-CS-Mount-Security-IR-Lens-for-CCD-CCTV/324269266267 ; that’d give me almost twice the potential frame rate, or let me lower the gain by half.

Would all of this work out? Or am I missing something? Would it be impossible to focus? If the focal plane is too far forward I can 3D print a spacer, but if it’s too far back then I’m screwed.

I ordered an entire second H7 Plus yesterday. I need the new one since mounting a CS mount or an M12-to-1.25-inch adapter requires me to put the WiFi on the bottom instead of the top.

Oh, and can you please tell me the exact screw sizes of all three screws that come with the lens mount? The two black ones for mounting and the silver one for the lens threads? I’d ideally purchase a few bags from McMaster-Carr.

Measure the screw sizes… with the camera you have. We are just buying cheap sub-$1 M12 lens mounts. They’re not really engineered by us. The screws are just self-tapping screws. You don’t need to get exactly the right size for them.

Regarding C mount: I haven’t explored it. Given that C-mount lenses are more expensive, it wouldn’t make a lot of sense for our market.

Still living in a BBQ, still can’t test outside.

I made a PCB design (OpenMV-Astrophotography-Gear/elec/guider-shield at master · frank26080115/OpenMV-Astrophotography-Gear · GitHub). It’s a fork of your WiFi shield, but I shifted the pins up so I can have more room for the CCTV camera lens, plus I added an I2C port expander and an optoisolator for ST-4 control. I’m not going to sell it or anything, but let me know if you have a problem with how I preserved your copyright text (I didn’t change it at all).

I also figured out how to put compiled versions of my MicroPython code directly into the firmware flash as frozen .mpy. But it took up so much room that I had to remove a bunch of stuff, so I stripped out everything that had to do with neural networks, the FIR, and TV. My understanding is that this significantly reduces RAM usage since execution happens from flash, so I should be able to detect more stars before running into the heap memory exception.
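For reference, in a stock MicroPython port build the frozen modules are declared with a manifest file; the OpenMV build wraps MicroPython, so the exact hook may differ, but the idea is:

# manifest.py
freeze("$(PORT_DIR)/modules")                    # freeze every .py in that directory
freeze("/path/to/my/scripts", "astro_sensor.py")  # or freeze individual files

# then build with something like:
#   make BOARD=<board> FROZEN_MANIFEST=/path/to/manifest.py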

I am working on documentation; the README is up on my repo along with some other educational pages (GitHub - frank26080115/OpenMV-Astrophotography-Gear: using OpenMV to assist in astrophotography). Let me know if you don’t want me to use that one photo or logo from your website.

Hey, really great write up. I’m going to tweet it.

Awesome work!

Also, you are free to use our logos, etc. However, if you plan on selling the product you cannot use the logos in such a way that makes people think your product is an official OpenMV product.

Yes, moving the scripts to flash frees up RAM since the bytecode is in flash now. Note that on the latest firmware we moved the stack to the ITCM in the Cortex-M7, which makes the stack 64 KB and frees up over 10 KB for the heap. So, you should have a lot more RAM.

Finally, you are free to remix the board design too if you want. That said, it’s not super easy to do the H7 Plus board routing given the SDRAM.

I’m jumping the gun here. The polar scope’s been tested outdoors only like twice. There’s still room for improvement, but it’s usable. There might be bugs and annoyances to fix later. But now I’m thinking about the guide scope idea more again.

I previously mentioned that I designed a PCB to support an ST-4 port for autoguiding. That port carries 4 different signals, basically as if you had 4 buttons for North, South, East, and West.
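On my shield the plan is to pulse those four lines through the I2C port expander and the optoisolators. Roughly like this (a sketch assuming an MCP23008-style expander at address 0x20 on I2C bus 2; the real wiring and bit assignments may differ):

import pyb

i2c = pyb.I2C(2, pyb.I2C.MASTER)
EXPANDER_ADDR = 0x20
IODIR, OLAT = 0x00, 0x0A
RA_PLUS, RA_MINUS, DEC_PLUS, DEC_MINUS = 0x01, 0x02, 0x04, 0x08

i2c.mem_write(0x00, EXPANDER_ADDR, IODIR)       # all expander pins as outputs

def st4_pulse(direction_bit, duration_ms):
    i2c.mem_write(direction_bit, EXPANDER_ADDR, OLAT)   # "press" one direction button
    pyb.delay(duration_ms)
    i2c.mem_write(0x00, EXPANDER_ADDR, OLAT)            # release it

st4_pulse(RA_PLUS, 250)   # e.g. a 250 ms correction pulse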

But this is less than optimal. The computerized mounts have RS232 and USB ports and support ASCOM; over ASCOM you can send pulse durations digitally. Everybody in the astrophotography community claims this works better, and I can understand why.

The RS232 port is actually the ±12 V kind (god damn it guys, it’s 2020). If I wanted to use it, I’d need a whole MAX232 or something on the PCB.

But the H7 has an OTG port.

Six years ago, I used ST’s USB host library for a huge project, added support for USB hubs to it from scratch, and put a Bluetooth stack on top of it too. Adding ASCOM, which is just a virtual UART, should be easy for me.

I wanna toss that into OpenMV’s firmware. The first obvious problem would be that the REPL and debugging wouldn’t work; I guess my code would need a timeout of some sort to shut down host mode and reinitialize device mode if nothing enumerates.

What do you think?

Don’t worry, I’m not asking you to do this for me. Just wondering if you spot anything obviously wrong with my plan. I checked the schematic… it should work… and there are OTG cables that have a split just for power, too.

Actually, question: what kinds of events over USB will interrupt execution of a .py file? The IDE interrupt and KeyboardInterrupt I know about, but what else?

The STM32H7 supports 2 USB ports. I would design a new board using the second USB port and leave the first untouched. This way you don’t have to deal with the changes.

Those are the two events. Everything else is from the script itself.

I’ve moved forward with the autoguider project. The camera will be mounted on a motorized equatorial mount, kinda like a pan-tilt except the main axis is parallel with the Earth’s spin axis. I need a way to ensure that a 1-second exposure frame has no motion blur at all. This sounds easy on paper: stop moving the motors, start the snapshot, end the snapshot. But it’s not.

I ran the following code

import astro_sensor   # my own wrapper around the sensor driver
import pyb

cam = astro_sensor.AstroCam()
cam.init(shutter_us = 1000000)   # 1 second exposure

while True:
    t = pyb.millis()
    cam.snapshot_start()                  # begin a non-blocking snapshot
    while cam.snapshot_check() == False:  # poll until the frame is done
        pass
    tpassed = pyb.elapsed_millis(t)
    img = cam.snapshot_finish()
    print("t %u" % tpassed)
    pyb.delay(pyb.rng() % 2000)           # wait a random 0-2 s before the next one

The resulting output looks like

t 844
t 1057
t 325
t 537
t 683
t 1112
t 1398

What I expected was a uniform list of numbers all around 1000. The numbers I got back are essentially random. What this means is that the sensor is free-running regardless of any function calls, so my “start” just latches onto whatever frame happens to finish next. This will not achieve what I need. I need to be able to control the exact moment the shutter opens and closes.

How can I do that?

Should I be putting the sensor to sleep after every snapshot and then waking it up before every snapshot? I need the sleep and wake to be pretty fast. An experiment showed that starting a snapshot immediately after wake, or even after delaying 100 ms, doesn’t work: it returns a completely black image (and the time span could be like 95 or 2762 ms, still nowhere near the 1000 I expect). So this doesn’t work.

If you have no solution for me, then I will be forced to toss every other frame, which… isn’t exactly a bad idea. I have one second to take a still shot, then the next 500 ms will be allocated for motor movement, and the final 500 ms will be for killing momentum and vibration. So it’s not the end of the world.
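So the loop would look something like this (a sketch; cam is my wrapper from above, and the drift/motor functions are placeholders for my own modules):

import pyb

while True:
    cam.snapshot_start()
    while not cam.snapshot_check():
        pass
    img = cam.snapshot_finish()          # the clean 1 s exposure -> measure drift on this
    error = measure_star_drift(img)      # placeholder for the guiding math
    move_motors(error)                   # ~500 ms budget for the correction
    pyb.delay(500)                       # ~500 ms to let momentum/vibration die down
    cam.snapshot_start()                 # this frame may straddle the motion...
    while not cam.snapshot_check():
        pass
    cam.snapshot_finish()                # ...so it gets tossed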

Thanks!

EDIT: I read through the list of registers on the sensor; looks like I need to play with FREX mode? I’ll be experimenting with that for a bit then.

Yeah, you’re going to need to manually do register writes and control the camera directly with FREX setup. You should be able to trigger camera exposure in software.

I don’t think I can figure out the FREX method without an example or app note or something. So I went with the other option, which is to timestamp each frame with the HAL_GetTick() value at the moment HAL_DCMI_FrameEventCallback() fires. It turns out that with a 1-second exposure, frames are approximately 1.3 seconds apart, and that’s not due to the garbage collector or some other MicroPython-level overhead.

This means that for my webpage interactions, I need the MicroPython side to push data to the web UI instead of having the web UI request data, just to make sure the data transfer finishes before a frame is captured. Luckily, I think I found some MicroPython WebSocket server code that I can leverage.
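Pushing from the MicroPython side is easy once the handshake is done (the WebSocket server code I found handles that part): a server-to-client text frame is just a small header in front of the payload. A rough sketch for payloads under 64 KB:

import ujson

def ws_send_text(sock, obj):
    payload = ujson.dumps(obj).encode()
    n = len(payload)
    if n < 126:
        header = bytes([0x81, n])                      # FIN + text opcode, 7-bit length
    else:
        header = bytes([0x81, 126, n >> 8, n & 0xFF])  # 16-bit extended length
    sock.send(header + payload)

# e.g. push the latest numbers instead of waiting for the browser to poll:
# ws_send_text(client, {"ra_err": 1.3, "dec_err": -0.4})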

just to show off the progress physically:

This is really cool man!

I don’t know how the FREX feature works. But, we will eventually implement it.

Right now, my focus is on a bicubic/bilinear scaling pipeline with color table lookup and alpha blending. The code is taking a lot of time to write since we are bringing in the DMA2D feature along with using the SIMD ops on the Cortex-M4/M7. The point is to make the code as fast as possible. We’ve also done everything integer-only for maximum speed, since using floats costs performance when converting between floats and ints back and forth.
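To make the integer-only point concrete, the inner interpolation step boils down to something like this (a toy illustration, not our actual code), with the fractional coordinates pre-scaled to 0..256 so everything stays in integer math:

def bilinear_u8(p00, p10, p01, p11, fx, fy):
    # p00..p11 are the four neighboring pixel values; fx, fy are fractions scaled to 0..256
    top = (p00 * (256 - fx) + p10 * fx) >> 8
    bot = (p01 * (256 - fx) + p11 * fx) >> 8
    return (top * (256 - fy) + bot * fy) >> 8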

Anyway, once this pipeline is finished I will be able to add bilinear/bicubic scaling to our rotation correction method and update that method to basically be equivalent to warpPerspective() from OpenCV. Then we will also update our lens correction method to use this, along with adding the same level of support OpenCV has for tangential misalignment.

The whole point of all this work is to bring our vision system up to OpenCV’s level… and this means making sure the basics are solid.