Filesystem access, PC and pyb simultaneously

I’m having issues when accessing the file system simultaneously from Python and a PC over USB.
It seems the list of files is not updating properly on my PC when files are modified by the board itself.

To recreate, simply run this code on the camera and then add some files from the PC:

import os
import pyb

LED_red = pyb.LED(1)
LED_green = pyb.LED(2)
LED_blue = pyb.LED(3)

# Clear LEDs
LED_red.off()
LED_green.off()
LED_blue.off()

# Go!
while True:
    # List files in root
    files = os.listdir()

    # Remove some reserved directories...
    if "System Volume Information" in files:
        files.remove("System Volume Information")

    # If any files are present, delete one
    if files:
        # Print message about file deletion
        print("Deleting:", files[0])
        # Remove file
        os.remove("/" + files[0])
        # Toggle LED to indicate file deletion
        LED_green.toggle()
    else:
        # No files found?
        print("No files found")


Files added from the PC are seen and deleted by the board, but the deletions are not visible from the PC.
Is it related to this discussion:

Hi, you should avoid doing that, writing files from both PC and OMV will corrupt the filesystem. The OS assumes it has exclusive access to the filesystem and it will cache read/writes and won’t see changes to the filesystem. There’s nothing we can do to fix this issue.

Just to add to Ibrahim’s reply… fixing this would require changing how the operating systems’ USB mass-storage filesystem drivers behave. So, we can never get around this issue.

Thank you for the quick and straight answers!

Is there another way of getting images to the camera?
For offline testing I upload recordings and code and then reboot to have the camera run the recordings. But in this case I would like to be able to transfer images as we go to the camera (images are created in a simulation) and have it return the result immediately.
I assume the only way then is to transfer it over USB?
AFAIK it is not possible to have the image created directly in the frame buffer from the stream, meaning the image size would be limited to whatever can fit on the heap.

Just open the camera’s serial port in a program and have a script running on the camera that prints images in jpeg format to the PC. You can transfer any data structures via this.

Um, this question has been answered quite a few times on the forums and there are code snippets lying around for it.

After the Kickstarter I’ll put some effort into making a really nice python class for this type of thing since folks keep asking. However, I may just make OpenMV IDE have a command line argument that puts it into a mode where it dumps frames.

I think you misread my question; I want to go the other way. I have a simulation on the PC outputting simulated images that I want the M7 to process.
I.e., I want to send images from the PC to the M7. As far as I have been able to find, this has not been discussed; sorry if I have missed something.

Edit: The reason for all this is to create a HIL testing setup for a robot that uses M7 cameras to navigate.

Oh, use the serial port for that too. See the VCP USB class in our documentation.

That said, you can’t use OpenMV IDE while using this. However, you can technically develop the code using the hardware UART and something like an FTDI chip, and then switch the UART to the USB one when done.
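For the PC→camera direction, the same kind of framing works in reverse. This is a sketch under the same hypothetical convention as above (4-byte little-endian length prefix); the camera-side script would read the prefix over the USB VCP and then read exactly that many bytes. The serial port name and pyserial usage are assumptions.

```python
import struct

def pack_frame(image_bytes):
    """Prefix raw image bytes with a 4-byte little-endian length so the
    camera-side script knows exactly how much to read (hypothetical
    framing convention, not an official OpenMV protocol)."""
    return struct.pack("<I", len(image_bytes)) + image_bytes

# PC side with pyserial (hardware required, shown for illustration only):
# import serial
# with serial.Serial("COM3") as port:
#     with open("sim_frame.bin", "rb") as f:
#         port.write(pack_frame(f.read()))
```

On the camera side, a matching script would do the mirror-image reads from the `pyb.USB_VCP` object before handing the buffer to the image-processing code.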

Doing that, I assume image size is limited by heap size, i.e. it’s not possible to write to the frame buffer directly?
Based on what I read about creating images from files here, it seems I can only work with files ~16 kB in size?

On the OpenMV Cam M7 you should try to keep image sizes less than 16KB in size if copy_to_fb is False. Otherwise, images can be up to 320KB in size.

Absolutely, sounds reasonable to use another UART to be able to connect with OpenMV in parallel.

Edit: Typo and clarification

Um, there’s a frame buffer you can write to directly. However, you may need to edit our firmware to enable doing this.

Um, see the allocate extra fb method under sensor. Once you do that you can send data to that FB via a receive call. That said, unless the data is in the right format the camera won’t be able to make sense of it. Note that RGB565 is byte reversed on our camera because the OmniVision sensor outputs it byte reversed on the camera data bus so all our code is written to deal with that.
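Since the reply above notes that RGB565 is byte-reversed on the camera, a PC-side sender would need to swap the two bytes of every pixel before transmitting raw RGB565 data. A minimal helper for that (pure Python, written for this thread rather than taken from any OpenMV code):

```python
def swap_rgb565_bytes(buf):
    """Swap the two bytes of every 16-bit RGB565 pixel in buf.

    The OmniVision sensor outputs RGB565 byte-reversed on the camera
    data bus, so raw pixel data sent to the camera's frame buffer must
    use that byte order (per the reply above). Swapping twice restores
    the original buffer.
    """
    if len(buf) % 2:
        raise ValueError("RGB565 buffer must have an even number of bytes")
    out = bytearray(len(buf))
    out[0::2] = buf[1::2]  # high byte of each pixel moves to the low slot
    out[1::2] = buf[0::2]  # low byte of each pixel moves to the high slot
    return bytes(out)
```

The slice-assignment form swaps all pixels in two bulk copies instead of a per-pixel loop, which matters if the simulation streams full frames.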

Thanks for the info! As we are intending to look for April tags etc in the images, the documentation seems to recommend not allocating extra FBs.
I might give it a shot, but I think we will simultaneously re-evaluate our test setup and see if we can work this differently on our end :)