OpenMV as Autoguider for astrophotography?

Awesome!

Also, besides lowering the PLL you can increase the output divider. The PLL has to be within a certain frequency range to lock, so you can’t set it to just anything. But you can increase the output divider as much as you want.
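To make that concrete, here’s a rough sketch of the arithmetic (the numbers are made up for illustration, not the actual OpenMV clock tree): the multiplied VCO frequency has to stay inside its lock window, while the post-divider can slow the output down freely.

xtal = 12000000                    # hypothetical input clock in Hz
pll_mult = 40                      # VCO = xtal * pll_mult
vco_min, vco_max = 200e6, 500e6    # hypothetical lock window
vco = xtal * pll_mult
print(vco_min <= vco <= vco_max)   # the multiplier must keep the VCO in range
for div in (1, 2, 4, 8, 16):
    print(div, vco / div)          # but the output divider slows the clock freely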

You can also bypass the PLL.

Tonight I actually took images of Polaris. I couldn’t even see it with my eyes due to the light pollution, but the camera definitely picked it up. I scripted something that iterated through all potential exposures and collected a bunch of images to run image processing on later.

It turns out, the image.Image(path) constructor is REALLY picky. It would not open the .jpg files that it saved, claiming “unsupported format”. I used Affinity to save them as 8 bit RGB JPG with sRGB ICC profile and it still says “unsupported format”. So I used MS Paint to save the files as 24-bit bitmap BMPs. These are some fat 15 megabyte files, 2592 x 1944.

Well… now that OpenMV loads the images, I run img.to_grayscale(copy = False) on them to save memory and to speed up blob finding. Then I run .find_blobs and get this exception: “Out of normal MicroPython Heap Memory! Please reduce the resolution of the image you are running this algorithm on to bypass this issue!”

That’s weird… I can run the following code and not get the exception

import sensor, image, time

sensor.reset()                      # Reset and initialize the sensor.
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.WQXGA2)
sensor.skip_frames(time = 2000)     # Wait for settings to take effect.
clock = time.clock()                # Create a clock object to track the FPS.

while(True):
    clock.tick()                    # Update the FPS clock.
    img = sensor.snapshot()         # Take a picture and return the image.
    hist = img.get_histogram()      # Grayscale histogram of the whole frame.
    stats = hist.get_statistics()   # Mean brightness etc. from the histogram.
    blobs = img.find_blobs([(stats.mean() * 3, 255)], merge = True)  # Bright spots only.
    print("fps = %f    ,    blobs = %u" % (clock.fps(), len(blobs)))

(above code was not run on an image that actually contained stars, but I did manage to get like 30 blobs at one point, randomly waving the camera around my apartment)

So… either I’m loading the image into frame buffer wrong, or my demo code is about to explode the RAM and I just don’t know it yet?

Anyways… coding in a park on a picnic table at midnight isn’t that romantic, and the sprinklers came on while I was there… please help me run my code on the images while I am at home lol thanks

EDIT: oh yeah, I did try throwing gc.collect() everywhere, didn’t help

We don’t support opening jpg files. Just saving them. There’s been a GitHub enhancement issue to support this for a while.

You are running out of RAM because you tried to copy the image to the MicroPython heap. Please note that you are using a microcontroller, not a PC. We are able to do a lot, but through tricks.

When you do the copy you need to assign it to a frame buffer, which is in the 32 MB SDRAM.

extra_fb = sensor.alloc_extra_fb(2592, 1944, sensor.GRAYSCALE)

img.to_grayscale(copy=extra_fb)

Read this:

https://docs.openmv.io/openmvcam/tutorial/system_architecture.html#memory-architecture

You tried to load the 10 MB image into the 256 KB MicroPython heap.

Be very careful when dealing with large images. You have to be very explicit about using frame buffers which are stored in the 32MB SDRAM.
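For reference, a 2592 x 1944 RGB565 frame is about 10 MB and even the grayscale version is about 5 MB, so neither comes close to fitting in the 256 KB heap. A minimal sketch of keeping the file out of the heap entirely, assuming your firmware’s image.Image() supports the copy_to_fb option (the path is a placeholder):

import sensor, image

sensor.reset()
# Load the BMP straight into the frame buffer (SDRAM), not the MicroPython heap.
img = image.Image("/polaris_test.bmp", copy_to_fb = True)
print(img.width(), img.height())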

to_grayscale(copy = extra_fb) doesn’t work; the error says that extra_fb is an image object and the parameter “copy” is expecting an integer. I’m guessing I’m supposed to pass a pointer to the allocation, not the whole object?

anyways… instead of using to_grayscale, I simply used get_pixel and set_pixel to do the image conversion. It takes like 5 minutes… but now, when I do find_blobs, it says “MemoryError”, no other text.

Do I need to stream the bitmap into the sensor’s frame buffer?

I’m seriously considering pointing the camera at a paper picture of the sky now… But even that’s difficult with the telephoto lens.

Hmmm,

to_grayscale() doesn’t support frame buffer relocation with copy. I apologize.

do this:

extra_fb = sensor.alloc_extra_fb(2592, 1944, sensor.GRAYSCALE).replace(img).to_grayscale()

replace() copies an image from one frame buffer to another. So, it will transfer the main image to that frame buffer. Then to_grayscale() will reduce the frame buffer from RGB565 to GRAYSCALE.

It would be more efficient, per se, if copy supported targeting frame buffers. That’s a to-do.

https://docs.openmv.io/library/omv.image.html?highlight=replace#image.image.replace

Replace supports loading files too.
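If so, a one-liner sketch of pulling the file straight into the extra frame buffer this way (the file name is a placeholder):

extra_fb = sensor.alloc_extra_fb(2592, 1944, sensor.GRAYSCALE)
extra_fb.replace("/polaris_test.bmp").to_grayscale()  # load from file, then reduce to grayscale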

I think that worked; it says it found 14 stars pretty fast. I think the blob finding is fast enough for this.

A weird thing happened. I ran some code that looks like

        for i in stars:
            cir = i.enclosing_circle()
            img.draw_circle(cir[0], cir[1], cir[2], 255, 1, False)

notice that the color is 255, and the img object is supposed to be grayscale. I save the image as another jpg, and the circles, they are… drumroll… YELLOW lol, it looks nice but I’m super confused why it’s yellow. It’s like the image became CMYK or something?

Interestingly, I found that sensor noise isn’t a problem with this sensor. Light pollution is the major enemy. At a soccer field nearby I really wouldn’t push the gain past 50; the histogram mean would be something like 55. Up in the mountains, I can push it to 128 and still get perfectly black skies.

Honestly, at the soccer field, I couldn’t even see the Big Dipper with my eyes, let alone the Little Dipper, soooo it’s kind of a useless test case since nobody in their right mind would attempt astrophotography in that location.

Hi,

Please note the difference between a grayscale image and an RGB565 image that is just grayscale. When you take 255 and treat it as a byte-reversed RGB565 value, you get yellow.

Use tuple notation (255, 255, 255) to avoid confusion. If you pass a number, that number is treated as a raw pixel value.

You called that method on the original image you captured, which is RGB565, not on the grayscale copy in the other buffer.
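A quick sketch of the arithmetic (plain Python, not an OpenMV call): 255 is 0x00FF, which byte-reversed becomes 0xFF00, and that RGB565 value is full red plus most of the green with no blue, i.e. yellow.

raw = 255
swapped = ((raw & 0xFF) << 8) | ((raw >> 8) & 0xFF)    # 0x00FF -> 0xFF00
r5 = (swapped >> 11) & 0x1F     # 31 -> full red
g6 = (swapped >> 5) & 0x3F      # 56 -> near-full green
b5 = swapped & 0x1F             # 0  -> no blue
print(r5 * 255 // 31, g6 * 255 // 63, b5 * 255 // 31)   # roughly (255, 226, 0): yellow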

Hey, are the cx and cy of a blob the weighted center or just the bounding box center?

The lens seems to be giving the stars a slight asymmetrical glow since the focus isn’t perfect, so the cx and cy aren’t exactly centered on what should be considered the center. So I wrote some code that iterates only through the region of interest to find the weighted center instead of using cxf() and cyf(). Not sure if I’m wasting my time doing something that’s done faster in the C backend?

edit: I’m using brightness as the weight, so I’m guessing you didn’t do this in C already since it makes no sense for something that uses thresholding

cx/cy are the centroid (weighted).

They are based on the binary mask, however, not the brightness. If you want brightness weighting, just loop over the pixel region in Python using get_pixel(). The objects should be small, so performance should be fine.
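A minimal sketch of that get_pixel() loop, assuming img is the grayscale frame, blob came from find_blobs(), and thresh is whatever threshold you already used for blob finding:

def weighted_centroid(img, blob, thresh):
    x0, y0, w, h = blob.rect()
    sum_w = sum_x = sum_y = 0
    for y in range(y0, y0 + h):
        for x in range(x0, x0 + w):
            p = img.get_pixel(x, y)
            if p >= thresh:                  # only count pixels that pass the threshold
                sum_w += p
                sum_x += p * x
                sum_y += p * y
    if sum_w == 0:
        return blob.cxf(), blob.cyf()        # fall back to the binary-mask centroid
    return sum_x / sum_w, sum_y / sum_w      # brightness-weighted center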

Note, there’s a special parameter that gives you the x and y histograms of the blob. If you output these and use them you can handle larger blobs without a slowdown. The centroid can be directly determined from the x and y histograms of the thresholded blob pixels (i.e. only pixels that pass the threshold are added to the histogram).

I have an idea about calibration. The goal is to find the “center of rotation” when the camera is rotated.

For a polar-scope camera, calibration is usually done by comparing two images. If I can identify Polaris in both images, it means I have more than one star to work with. This is easy. See attached image, where X is the calculated center of rotation.

But… this is just a stretch goal. What if I could do this during the day?

Looking through the documentation, can I use find_displacement? It says it can do translation but not rotation, or rotation but not translation. I have no idea if I can… do it twice? Will the algorithm handle that? If I had two points and an angle, I could compute a triangle, with the third point being the center of rotation.

The other method would be keypoint matching, but it seems like the match object returns a list of points only for the second image, not the first? Or can I run the matching twice, the second time with the parameters swapped? This would only work if the two returned lists of match coordinates are sorted and can be correlated! I’m running the demo and I’m 99% sure this won’t work…

Just a stretch goal, it might not even be more accurate than doing it at night. The lens would be fixed at focus for stars, so any nearby scenery would be blurry and feature extraction might not work well, or be inaccurate.

find_displacement() is phase correlation on the whole image. It can find rotation displacement. However, it’s not designed for an FFT of more than 1024x1024 pixels.

I had tried to design the algorithm to handle both rotation and translation but was unable to get that to work. The translation part works well. The rotation and scale don’t really.

I think you should just put the centroids of the stars in a matrix and use the numpy-like library on board to work some linear algebra.
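For example, here is a minimal plain-Python sketch of that linear algebra: given matched star centroids from two frames related by a pure rotation, estimate the rotation angle and the center of rotation. It assumes the two star lists are already matched one-to-one with no translation mixed in, and it gets noisy when the rotation angle is tiny.

import math

def center_of_rotation(before, after):      # lists of matched (x, y) centroids
    n = len(before)
    mpx = sum(p[0] for p in before) / n
    mpy = sum(p[1] for p in before) / n
    mqx = sum(q[0] for q in after) / n
    mqy = sum(q[1] for q in after) / n
    s_cross = s_dot = 0.0
    for (px, py), (qx, qy) in zip(before, after):
        ax, ay = px - mpx, py - mpy
        bx, by = qx - mqx, qy - mqy
        s_cross += ax * by - ay * bx
        s_dot += ax * bx + ay * by
    theta = math.atan2(s_cross, s_dot)       # rotation angle between the two frames
    c, s = math.cos(theta), math.sin(theta)
    # Solve (I - R) * center = mean_after - R * mean_before, a 2x2 system.
    rx = mqx - (c * mpx - s * mpy)
    ry = mqy - (s * mpx + c * mpy)
    det = (1 - c) * (1 - c) + s * s          # near zero for tiny rotations
    cx = ((1 - c) * rx - s * ry) / det
    cy = (s * rx + (1 - c) * ry) / det
    return theta, (cx, cy)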

I don’t quite know how you’d see the stars during the day.

All methods require strong features. If you don’t have anything with corners to look at then you’re out of luck. The sky will not cut it.

that idea was meant for daytime, pointed at something like a far-away building for feature extraction

forget about that, bad idea

I’m actually 99% done implementing the Python code. My HTTP server is fully working as well, and I’m minimizing the amount of work that the Python has to do. Anything that can be done via JavaScript on the smartphone will be done by JavaScript instead. A lot of data is being sent out as JSON using ujson, and that makes it super easy for me to debug with just my computer.

BUT!!!

find_blobs is throwing MemoryErrors! Only sometimes! “Out of normal MicroPython Heap Memory!”

I have added gc.collect() to many places

I have added micropython.opt_level(2) to every file

I am using micropython.const(expr) where I can

I have commented out most useless functions

I can print out the memory data using micropython.mem_info(True)
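In case it helps, this is roughly what those calls look like gathered in one place (the constant name is just a placeholder):

import gc, micropython
from micropython import const

micropython.opt_level(2)      # applies to modules compiled after this point
_DEBUG = const(0)             # compile-time constant instead of a global lookup
gc.collect()                  # force a collection right before the big allocation
micropython.mem_info(True)    # dump heap stats plus the block map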

I might have the skills to dig into the firmware C code; maybe there are options in the makefile I can simply disable?

Can I simply find the malloc() that’s failing and give it a static buffer instead? Or is there a dynamic buffer that’s failing?

Should I try cross-compiling py files into mpy?

What can I do that doesn’t involve reducing the image resolution?

I have a lot of lines like

if self.debug: print("xxxxxxx")

Do those add up significantly on the heap?

Help me out here! Sooooo close!!! After this I just need to do HTML and JS, no more Python!

ERROR[681153]: <class 'MemoryError'>
Traceback (most recent call last):
  File "<stdin>", line 441, in main
  File "<stdin>", line 435, in task
  File "<stdin>", line 369, in solve
  File "star_finder.py", line 37, in find_stars
MemoryError: Out of normal MicroPython Heap Memory! Please reduce the resolution of the image you are running this algorithm on to bypass this issue!

stack: 1444 out of 64512
GC: total: 246080, used: 68848, free: 177232
 No. of 1-blocks: 1020, 2-blocks: 100, max blk sz: 1188, max free sz: 10123
GC memory layout; from 30003c90:

a whole bunch of stuff here

if you are curious, the latest code is up at https://github.com/frank26080115/OpenMV-Astrophotography-Gear (openmv_filesys directory, master branch)

EDIT: WOOOOOOT! I compiled my own firmware that removes many struct members from the linked-list node “find_blobs_list_lnk_data” and reduces “FIND_BLOBS_CORNERS_RESOLUTION” to 12. Also, I’m not tracking 3 lists of stars any more, just 1 list, so the GC can collect older lists. I am now very watchful of what can be GC’ed.

I also reduced the USB CDC buffer sizes from 512 to 128

The error seems to have stopped, though I haven’t stress-tested it outdoors yet. The report says 147 stars at one point (I’m pointing it at crap around my apartment, with some weird thresholds), which should work fine later on.

Fingers crossed!

Also, it seems like burning a new firmware heats it up so much that the PLL freaks out

Oh, and if you wanna see my firmware changes: https://github.com/frank26080115/openmv (branch find-blobs-lightweight)

I don’t think you need to reduce the USB CDC buffers. But, okay. Weird that you ran out of heap. Must be a lot of stars. That’s quite a lot of allocations.

The only reason why it saw that many stars is that I’ve disabled the checks against the histogram just for the sake of stress testing. Testing against my realistic samples, it’s doing fine. I’ve also added a warning mechanism for “too many stars”.

Serving an HTML page with a single-threaded HTTP server is stupidly annoying. I can’t just use script tags or style tags; I actually have to use JS to load one file at a time, and jQuery and jQuery UI are both huge even when minified. I can’t even have more than one ongoing AJAX request at once, so one AJAX completion has to trigger the next one.

I spent the first part of the day just researching ways for the server to signal that it can’t have more than one socket open at a time; I never found one. Even if the WINC supported multiple pipes, without multiple threads it wouldn’t even help. And even if the H7 were dual-core, I’d need like 5 threads lol

I’m going to have to render my detected stars onto a SVG canvas just to save the bandwidth on the image transfer.

You can have more than 1 socket open at a time. The WINC supports 6 TCP sockets at once.

6 TCP
4 UDP

Maybe have the server do less work? Just provide the data to a client and have them do all the work?

Maybe have the server do less work? Just provide the data to a client and have them do all the work?

That is what I am doing actually. The problem is that if I have a single HTML page but two JavaScript files and one CSS file, then when the HTML is loaded, 3 + 1 (for the favicon) simultaneous HTTP requests go out at once.

6 TCP

In practice this isn’t helpful if I am serving one request at a time. On each loop it serves one request; it has never accepted another socket when I call socket.accept() for the second request. It’s most reliable when I stagger my requests to one at a time.

By the way, my method of loading JS one file at a time is working great on desktop but failing on mobile, so it’s a no-go. I’m going to use the Python to pack the JS and CSS into the main body of the HTML instead and serve it all as one big file.

I got as far as rendering the stars detected onto a SVG. All the data I would ever need has been packed neatly into JSON so that was actually easy.

Sometimes the AJAX would fail: something would go wrong with the WINC and it’d never accept another socket connection until a reboot. I added my own “watchdog timer” for it that reboots the WINC, but this only works well in station mode. I haven’t tested AP mode, but I think it’d lose the WiFi client, so I’m pessimistic. Plus, my own phone would just go back to its default WiFi if I lose the fake WiFi, and everything would just not work after that.

Mmm, I’m not a web programmer. I think making the camera serve complex webpages is more than it should be doing. The MCU makes more sense as just a device sending data via a single-socket connection to a web server online, which can then supply that rich data. I wouldn’t make it the web server.

I think you misunderstood. To keep the MicroPython as lightweight as possible, I need to make the HTML pages as complex as possible. We are in agreement on this point.

But the problem is that if the HTML has many JS files, the browser sends a bunch of HTTP requests all at once. Once the JS has loaded, the browser runs the JS, not the microcontroller.

The problem is only handling those HTTP requests. This only happens once on page load.

I just solved this problem by concatenating the JS files with the original HTML page. The concatenation happens in Python; this way I can still keep separate JS files for organization. It’s saving me soooo many headaches.
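Roughly, the packing step looks like this (just a sketch; the file names are placeholders, not my actual layout):

def build_single_page(html_path, js_paths, css_paths):
    with open(html_path) as f:
        html = f.read()
    css = "".join(open(p).read() for p in css_paths)
    js = "".join(open(p).read() for p in js_paths)
    # Inline everything so the browser never fires extra requests after page load.
    html = html.replace("</head>", "<style>" + css + "</style></head>")
    html = html.replace("</body>", "<script>" + js + "</script></body>")
    return html

page = build_single_page("/index.html", ["/app.js"], ["/style.css"])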

The MCU is running a loop doing snapshots and find_blobs as fast as it can. It sends a list of stars, image statistics, and some other info to the browser when the browser asks for it at regular intervals. It accepts commands to change exposure and such. The image is an SVG generated by the browser using the list of stars and image statistics; the JPG is never actually transferred.

The Polaris pattern-matching code is still Python, but it’s not heavy at all: only 20 possible stars to match. The bigger pattern-matching code will have to be JS and optional, with a 300+ star database, each star having a “signature” made from about 4 other nearby stars. It wouldn’t even load the database into Python.

The only problem left is that I still need to make sure that the JS only sends one AJAX request at a time. So basically, I can’t just say “update the exposure when the slider is moved”; I have to make it do “when the next star list arrives, instead of requesting the next star list, check if the slider has moved and update if so; otherwise, request the next star list”. I’m now doing this by having the UI events push into a queue and having the status update pop from that queue.

The hardest part is how to handle stuff when the WINC simply stops working.

Yeah, I’m saying just send the raw data to a server in the cloud and have the camera just be a data generator.

Mmm, I guess, however, this is remote in the wilderness.

Okay, I see why it needs to be onboard.

Um, yeah, cool, so, if you see things that need to be fixed in the firmware let us know.

I will say that the WINC1500 routinely drops TCP sockets. If you look at the RTSP code I wrote, you pretty much need to assume the socket will break on you randomly and re-create it. I wish I knew how to fix this… when I was debugging the thing I noticed that after sending it data for a while it would just close the socket for no particularly good reason.
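The shape of that “assume the socket will break and re-create it” pattern looks roughly like this (a sketch, not the actual RTSP code; handle_request is a placeholder):

import usocket as socket

def serve_forever(handle_request, port = 80):
    server = None
    addr = socket.getaddrinfo("0.0.0.0", port)[0][-1]
    while True:
        try:
            if server is None:
                server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
                server.bind(addr)
                server.listen(1)
            client, client_addr = server.accept()
            try:
                handle_request(client)
            finally:
                client.close()
        except OSError:
            # The connection or listener died underneath us; rebuild on the next pass.
            try:
                if server:
                    server.close()
            except OSError:
                pass
            server = None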