Newbie OpenMV Arduino I2C

I just got my brand new OpenMV Cam!!
So I tried to connect it to my Arduino via I2C (as you can see in the pic).
After that I copied the example to my cam and my Arduino.
But nothing happens.
After “waiting for Arduino” I only get the error code 16.


Can you please help me?



Did you add pull ups to the I2C lines? They are needed.

Anyway, if that doesn’t work please post some more info about what your setup is and exactly what script you ran. The I2C script should work well.

I think I have a problem in my wiring! :unamused:
Because I took the example from:

Thank you so much


Your wiring looks okay, if I’m seeing correctly. Are those 10K resistors? You should use 4.7K-10K resistors. Also double-check that long yellow wire; it looks damaged.

Your Arduino needs 5V, not 3.3V. The OpenMV Cam is 5V tolerant. I just tested the I2C now; it works fine. Just move the power wire for the I2C pull-ups to 5V.

Wait, I tested with 3.3V too and that also works.

I’m using an Arduino Mega w/ the OpenMV Cam M7 using the script. The Mega is running the Pixy I2C library code.

Thank you for your posts!!!
But I’m sorry, nothing works.
New yellow wire.
I put in a 4.7K resistor. Nothing, I still have the busy error.
5V… busy error.

Then I tried the Pixy code… new problem:
I can connect via the IDE, but when I try to start the script the camera disconnects.

Any ideas???

I attached my scripts and my OpenMV IDE version.


openmv.txt (2.49 KB)
arduino.txt (798 Bytes)

I only got this working with a Mega. The Uno didn’t work. Anyway, can you use serial instead? That works a lot better. Do you have a Mega to do that with? Also, what’s your application?

In general, having the OpenMV Cam as a slave processor is difficult…

OK, I will try the serial communication.
I just need to send some data from the camera to an Arduino UNO.
For me it is not important how to send this data.
So do you have an example?
That would be awesome!!!
Please excuse my spelling, but I have a German iPad…

Thanks a lot

So, the UNO only has one serial port. This makes it very hard to use with the OpenMV Cam unless you’ve got your serial code perfect on the Arduino…

Can you let me know what you are trying to do? Do you need the Arduino Uno at all?

Dear Mr. kwagyeman, Mr. Iabdalkader, I bought an OpenMV Cam M7 a week ago, and I have read this post and implemented its circuit wiring on an Arduino Mega, but with no luck till now.
Please could you explain a step-by-step tutorial here on the forum on how to connect your Cam M7 to an Arduino Mega and control it to take a snapshot on an event happening on the Arduino?
We need to do this urgently!

Hi, this is very easy to do if you can use one of the Arduino Mega’s serial ports. Note, we also have example code shipped with the IDE for how to connect the camera to an Arduino.

Anyway, can you connect the serial port of the OpenMV Cam to one of the extra serial ports on the Arduino Mega? I’m not home right now, but I’ll be able to provide a guide in about 12 hours.

Hi, here’s code that emulates the CMUcam5 Pixy. It shows how to use the OpenMV Cam pretty well. Finally, the Pixy library can then be used by you on the Arduino. I’ve attached the Pixy library. See the UART Pixy example.

You don’t need all the code below for whatever your application is, so, feel free to remove non-serial stuff. That said, to get up and running quickly try to keep the serial protocol the same.

# Pixy UART Emulation Script
# This script allows your OpenMV Cam to emulate the Pixy (CMUcam5) in UART mode.
# Note that you need to setup the lab color thresholds below for your application.
# P4 = TXD
# P5 = RXD
# P7 = Servo 1
# P8 = Servo 2

# Pixy Parameters ############################################################

color_code_mode = 1 # 0 == Disabled, 1 == Enabled, 2 == Color Codes Only, 3 == Mixed

max_blocks = 1000
max_blocks_per_signature = 1000
min_block_area = 20

uart_baudrate = 19200

# Pan Servo
s0_lower_limit = 1000 # Servo pulse width lower limit in microseconds.
s0_upper_limit = 2000 # Servo pulse width upper limit in microseconds.

# Tilt Servo
s1_lower_limit = 1000 # Servo pulse width lower limit in microseconds.
s1_upper_limit = 2000 # Servo pulse width upper limit in microseconds.

analog_out_enable = False # P6 -> Analog Out (0v - 3.3v).
analog_out_mode = 0 # 0 == x position of largest blob - 1 == y position of largest blob

# Parameter 0 - L Min.
# Parameter 1 - L Max.
# Parameter 2 - A Min.
# Parameter 3 - A Max.
# Parameter 4 - B Min.
# Parameter 5 - B Max.
# Parameter 6 - Is Color Code Threshold? (True/False).
# Parameter 7 - Enable Threshold? (True/False).
lab_color_thresholds = [(0, 100, 40, 127, -128, 127, True, True), # Generic Red Threshold
                        (0, 100, -128, -10, -128, 127, True, True), # Generic Green Threshold
                        (0, 0, 0, 0, 0, 0, False, False),
                        (0, 0, 0, 0, 0, 0, False, False),
                        (0, 0, 0, 0, 0, 0, False, False),
                        (0, 0, 0, 0, 0, 0, False, False),
                        (0, 0, 0, 0, 0, 0, False, False)]

fb_pixels_threshold = 500 # minimum number of pixels that must be in a blob
fb_merge_margin = 5 # how close pixel wise blobs can be before merging


e_lab_color_thresholds = [] # enabled thresholds
e_lab_color_code = [] # enabled color code
e_lab_color_signatures = [] # original enabled threshold indexes
for i in range(len(lab_color_thresholds)):
    if lab_color_thresholds[i][7]:
        e_lab_color_thresholds.append(lab_color_thresholds[i][0:6])
        e_lab_color_code.append(lab_color_thresholds[i][6])
        e_lab_color_signatures.append(i + 1)

import image, math, pyb, sensor, struct, time

# Camera Setup

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time = 2000)
sensor.set_auto_gain(False) # must be turned off for color tracking
sensor.set_auto_whitebal(False) # must be turned off for color tracking

# LED Setup

red_led = pyb.LED(1)
green_led = pyb.LED(2)
blue_led = pyb.LED(3)

# DAC Setup

dac = pyb.DAC("P6") if analog_out_enable else None

if dac:
    dac.write(0)

# Servo Setup

min_s0_limit = min(s0_lower_limit, s0_upper_limit)
max_s0_limit = max(s0_lower_limit, s0_upper_limit)
min_s1_limit = min(s1_lower_limit, s1_upper_limit)
max_s1_limit = max(s1_lower_limit, s1_upper_limit)

s0_pan = pyb.Servo(1) # P7
s1_tilt = pyb.Servo(2) # P8

s0_pan.pulse_width(int((max_s0_limit + min_s0_limit) // 2)) # center
s1_tilt.pulse_width(int((max_s1_limit + min_s1_limit) // 2)) # center

s0_pan_conversion_factor = (max_s0_limit - min_s0_limit) / 1000
s1_tilt_conversion_factor = (max_s1_limit - min_s1_limit) / 1000

def s0_pan_position(value):
    s0_pan.pulse_width(round(s0_lower_limit + (max(min(value, 1000), 0) * s0_pan_conversion_factor)))

def s1_tilt_position(value):
    s1_tilt.pulse_width(round(s1_lower_limit + (max(min(value, 1000), 0) * s1_tilt_conversion_factor)))

# Link Setup

uart = pyb.UART(3, uart_baudrate, timeout_char = 1000)

def write(data):
    uart.write(data)

def available():
    return uart.any()

def read_byte():
    return uart.readchar()

# Helper Stuff

def checksum(data):
    checksum = 0
    for i in range(0, len(data), 2):
        checksum += ((data[i+1] & 0xFF) << 8) | ((data[i+0] & 0xFF) << 0)
    return checksum & 0xFFFF

def get_normal_signature(code):
    for i in range(len(e_lab_color_signatures)):
        if code & (1 << i):
            return e_lab_color_signatures[i]
    return 0

def to_normal_object_block_format(blob):
    temp = struct.pack("<hhhhh", get_normal_signature(blob.code()), blob.cx(), blob.cy(), blob.w(), blob.h())
    return struct.pack("<HH10s", 0xAA55, checksum(temp), temp)

def get_color_code_signature(code):
    color_code_list = []
    for i in range(len(e_lab_color_signatures)):
        if code & (1 << i):
            color_code_list.append(e_lab_color_signatures[i])
    octal = 0
    color_code_list_len = len(color_code_list) - 1
    for i in range(color_code_list_len + 1):
        octal += color_code_list[i] << (3 * (color_code_list_len - i))
    return octal

def to_color_code_object_block_format(blob):
    angle = int((blob.rotation() * 180) // math.pi)
    temp = struct.pack("<hhhhhh", get_color_code_signature(blob.code()), blob.cx(), blob.cy(), blob.w(), blob.h(), angle)
    return struct.pack("<HH12s", 0xAA56, checksum(temp), temp)

def get_signature(blob, bits):
    return get_normal_signature(blob.code()) if (bits == 1) else get_color_code_signature(blob.code())

def to_object_block_format(blob, bits):
    return to_normal_object_block_format(blob) if (bits == 1) else to_color_code_object_block_format(blob)

# FSM Code

FSM_STATE_NONE = 0
FSM_STATE_ZERO = 1
FSM_STATE_SERVO_CONTROL_0 = 2
FSM_STATE_SERVO_CONTROL_1 = 3
FSM_STATE_SERVO_CONTROL_2 = 4
FSM_STATE_SERVO_CONTROL_3 = 5
FSM_STATE_CAMERA_CONTROL = 6
FSM_STATE_LED_CONTROL_0 = 7
FSM_STATE_LED_CONTROL_1 = 8
FSM_STATE_LED_CONTROL_2 = 9

fsm_state = FSM_STATE_NONE
last_byte = 0

def parse_byte(byte):
    global fsm_state
    global last_byte

    if fsm_state == FSM_STATE_NONE:
        if byte == 0x00: fsm_state = FSM_STATE_ZERO
        else: fsm_state = FSM_STATE_NONE

    elif fsm_state == FSM_STATE_ZERO:
        if byte == 0xFF: fsm_state = FSM_STATE_SERVO_CONTROL_0
        elif byte == 0xFE: fsm_state = FSM_STATE_CAMERA_CONTROL
        elif byte == 0xFD: fsm_state = FSM_STATE_LED_CONTROL_0
        else: fsm_state = FSM_STATE_NONE

    elif fsm_state == FSM_STATE_SERVO_CONTROL_0:
        fsm_state = FSM_STATE_SERVO_CONTROL_1

    elif fsm_state == FSM_STATE_SERVO_CONTROL_1:
        fsm_state = FSM_STATE_SERVO_CONTROL_2
        s0_pan_position(((byte & 0xFF) << 8) | ((last_byte & 0xFF) << 0))

    elif fsm_state == FSM_STATE_SERVO_CONTROL_2:
        fsm_state = FSM_STATE_SERVO_CONTROL_3

    elif fsm_state == FSM_STATE_SERVO_CONTROL_3:
        fsm_state = FSM_STATE_NONE
        s1_tilt_position(((byte & 0xFF) << 8) | ((last_byte & 0xFF) << 0))

    elif fsm_state == FSM_STATE_CAMERA_CONTROL:
        fsm_state = FSM_STATE_NONE
        # Ignore...

    elif fsm_state == FSM_STATE_LED_CONTROL_0:
        fsm_state = FSM_STATE_LED_CONTROL_1
        if byte & 0x80: red_led.on()
        else: red_led.off()

    elif fsm_state == FSM_STATE_LED_CONTROL_1:
        fsm_state = FSM_STATE_LED_CONTROL_2
        if byte & 0x80: green_led.on()
        else: green_led.off()

    elif fsm_state == FSM_STATE_LED_CONTROL_2:
        fsm_state = FSM_STATE_NONE
        if byte & 0x80: blue_led.on()
        else: blue_led.off()

    last_byte = byte

# Main Loop

pri_color_code_mode = color_code_mode % 4

def bits_set(code):
    count = 0
    for i in range(7):
        count += 1 if (code & (1 << i)) else 0
    return count

def color_code(code):
    for i in range(len(e_lab_color_code)):
        if code & (1 << i):
            return e_lab_color_code[i]
    return False

def fb_merge_cb(blob0, blob1):
    if not pri_color_code_mode:
        return blob0.code() == blob1.code()
    else:
        return True if (blob0.code() == blob1.code()) else (color_code(blob0.code()) and color_code(blob1.code()))

def blob_filter(blob):
    if(pri_color_code_mode == 0):
        return True
    elif(pri_color_code_mode == 1): # color codes with two or more colors or regular
        return (bits_set(blob.code()) > 1) or (not color_code(blob.code()))
    elif(pri_color_code_mode == 2): # only color codes with two or more colors
        return (bits_set(blob.code()) > 1)
    elif(pri_color_code_mode == 3):
        return True

clock = time.clock()
while(True):
    clock.tick()

    img = sensor.snapshot()
    blobs = list(filter(blob_filter, img.find_blobs(e_lab_color_thresholds, area_threshold = min_block_area, pixels_threshold = fb_pixels_threshold, merge = True, margin = fb_merge_margin, merge_cb = fb_merge_cb)))

    # Transmit Blobs #

    if blobs and (max_blocks > 0) and (max_blocks_per_signature > 0): # new frame
        dat_buf = struct.pack("<H", 0xAA55) # frame start sync word
        sig_map = {}
        first_b = False

        for blob in sorted(blobs, key = lambda x: x.area(), reverse = True)[0:max_blocks]:
            bits = bits_set(blob.code())
            sign = get_signature(blob, bits)

            if not sign in sig_map:
                sig_map[sign] = 1
            else:
                sig_map[sign] += 1

            if sig_map[sign] <= max_blocks_per_signature:
                dat_buf += to_object_block_format(blob, bits)

            if dac and not first_b:
                x_scale = 255 / (img.width()-1)
                y_scale = 255 / (img.height()-1)
                dac.write(round((blob.y() * y_scale) if analog_out_mode else (blob.x() * x_scale)))
                first_b = True

        dat_buf += struct.pack("<h", 0x0000)
        write(dat_buf) # write all data in one packet...

    else: # nothing found
        write(struct.pack("<h", 0x0000))

        if dac: # reset the analog out on no detection
            dac.write(0)

    # Parse Commands #

    for i in range(available()):
        parse_byte(read_byte())

    num_blobs = min(len(blobs), max_blocks)
    print("%d blob(s) found - FPS %f" % (num_blobs, clock.fps()))
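For reference, here is how the receiving side can decode the object blocks this script sends. This is a desktop-Python sketch of mine, not code from the Pixy library; it assumes the blocks carry the blob’s centroid coordinates after the signature, per the Pixy block layout (sync, checksum, signature, x, y, width, height, plus angle for color-code blocks), and it reads the 0xAA55/0xAA56 sync words as unsigned shorts:

```python
# Host-side decoder for Pixy-style object blocks (sketch; names are mine).
import struct

def checksum(data):
    # Sum of the little-endian 16-bit words in the payload, truncated to
    # 16 bits -- mirrors the camera-side checksum() function.
    total = 0
    for i in range(0, len(data), 2):
        total += data[i] | (data[i + 1] << 8)
    return total & 0xFFFF

def parse_block(stream, offset):
    # Returns (block_dict_or_None, next_offset). None means a bad checksum
    # or a 0x0000 end-of-frame word.
    sync, = struct.unpack_from("<H", stream, offset)
    if sync == 0xAA55:    # normal block: sig, x, y, w, h
        csum, = struct.unpack_from("<H", stream, offset + 2)
        payload = stream[offset + 4:offset + 14]
        sig, x, y, w, h = struct.unpack("<hhhhh", payload)
        block = {"sig": sig, "x": x, "y": y, "w": w, "h": h}
        return (block if checksum(payload) == csum else None, offset + 14)
    if sync == 0xAA56:    # color-code block: sig, x, y, w, h, angle
        csum, = struct.unpack_from("<H", stream, offset + 2)
        payload = stream[offset + 4:offset + 16]
        sig, x, y, w, h, angle = struct.unpack("<hhhhhh", payload)
        block = {"sig": sig, "x": x, "y": y, "w": w, "h": h, "angle": angle}
        return (block if checksum(payload) == csum else None, offset + 16)
    return (None, offset + 2)  # 0x0000 terminator or noise

# Build one block the way the camera does, then decode it.
payload = struct.pack("<hhhhh", 1, 80, 60, 20, 10)
frame = struct.pack("<HH10s", 0xAA55, checksum(payload), payload) + struct.pack("<h", 0)
block, _ = parse_block(frame, 0)
print(block)
```

Note that on the wire each new frame is prefixed with an extra 0xAA55 word, so two sync words in a row mark a frame boundary, and the block stream ends with 0x0000.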

Hi Again,
Thanks kwagyeman for your reply, but unfortunately till now I am not able to get your OpenMV Cam to work with either the Arduino UNO or the Mega.
I have bought Mega specially for your camera according to your reply in this post.

I also tried to connect to this cam using TTL serial scripts for Arduino and serial JPEG, but all failed. Please help.

All I need to do is read a QR code, then take a snapshot of some products, save them on the SD card, then send an email with attachments, and repeat these steps in a loop.
I bought your cam to read QR codes and to take a snapshot, after reading one, on detected motion.

Thanks in advance!

Hi Mustafa,

I’m on vacation for Thanksgiving right now. I will try to help you as much as I can… but I don’t have an Arduino with me. Moreover, you’re basically asking me to do your project for you, and you don’t give me any feedback on what went wrong previously. I can’t / don’t really want to help in these situations…

That said, let’s start with something simple…

Put this script on your Arduino Mega:
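(The sketch was attached to the original post and did not survive; based on the behavior described in this post, a minimal reconstruction, assuming a simple bidirectional pass-through between the Mega's USB serial and Serial1, would be:)

```cpp
// Pass-through between the Mega's USB serial (Serial) and Serial1,
// where the OpenMV Cam's P4/P5 pins are connected. 9600 baud on both.
void setup() {
  Serial.begin(9600);   // USB serial monitor
  Serial1.begin(9600);  // OpenMV Cam on TX1/RX1
}

void loop() {
  if (Serial1.available()) Serial.write(Serial1.read());  // camera -> PC
  if (Serial.available())  Serial1.write(Serial.read());  // PC -> camera
}
```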

And put this code on your OpenMV Cam:

# UART Control
# This example shows how to use the serial port on your OpenMV Cam. Attach pin
# P4 to the serial input of a serial LCD screen to see "Hello World!" printed
# on the serial LCD display.

import time
from pyb import UART

# Always pass UART 3 for the UART number for your OpenMV Cam.
# The second argument is the UART baud rate. For a more advanced UART control
# example see the BLE-Shield driver.
uart = UART(3, 9600, timeout_char = 1000)

while(True):
    uart.write("Hello World!\r")
    if uart.any():
        print(uart.read())
    time.sleep_ms(1000) # pace the output

Then connect the TX pin on the OpenMV Cam (P4) to the RX IN 1 pin on the Mega. And the RX pin on the OpenMV Cam (P5) to the TX Out 1 pin on the Mega. Finally, make sure both devices share a common ground wire.

If everything is working you should see text from the OpenMV Cam appear on the Arduino Serial Monitor at the 9600 baud rate. If you type anything into that terminal you should then see it on the OpenMV Cam.

Finally, is there a reason you need the Arduino? One of the purposes of the OpenMV Cam is to be able to do things without the Arduino. What is the Arduino doing in this situation?

Firstly, thanks Kwagyeman for your reply, happy Thanksgiving, and sorry for taking time out of your vacation.
Secondly, I did not want you to do my job for me; I made a lot of experiments before I got here and asked for support.
Thirdly, I followed the steps in your reply strictly, but I get blank data in the Arduino serial monitor; the cursor moves as if something is being written, but no chars, it seems to be just spaces!!!

Finally, you asked me why I want to use an Arduino in my project. I need to do the following:
1- read a QR code from a queue, then, if it is read,
2- take several snapshots on motion detection, then
3- save these images to a network location on a server, then
4- send every group of images under one QR code as attachments in an email message.

This is why I am using an Arduino with the OpenMV Cam, but till now I could not get the images from the camera!!!
The only case where I see images from the cam is when using your software, connected to the cam board via USB.

This is the whole story, Kwagyeman. Now if I do not find any solution to this issue, I will try to get another camera that can work properly with Arduino and help me do the job.

So, if you have something to help me, according to the above info, please do.

Okay, I don’t know what’s going wrong with the Arduino communications part - the steps I gave you above should work.

Anyway, so, question… how do you plan to send images to a server? This is not easy. What’s going to do that part? I can give you a script that detects QR codes / motion and saves images to the SD card. But beyond that I’m not going to be very helpful.
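As a starting point, something along these lines runs on the camera alone. It is an untested sketch of mine (the frame size, the five-snapshot burst, and the file naming are placeholders to tune), omitting the motion-detection step: it waits for a QR code and then saves snapshots to the SD card under the code’s payload:

```python
# Sketch: wait for a QR code, then save a burst of snapshots to the SD card
# under that code's payload. Untested; sizes/counts/paths are placeholders.
import sensor, time, os

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE) # QR codes detect best in grayscale
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time = 2000)

while(True):
    # 1. Wait for a QR code.
    payload = None
    while not payload:
        img = sensor.snapshot()
        codes = img.find_qrcodes()
        if codes: payload = codes[0].payload()

    # 2. Save a few snapshots in a folder named after the QR payload.
    try: os.mkdir(payload)
    except OSError: pass # directory already exists
    for i in range(5):
        sensor.snapshot().save("%s/%d.jpg" % (payload, i))
        time.sleep_ms(500)
```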

I suggest you start with simple examples to gain experience. Kwagyeman is giving exceptional support for free, but he cannot transfer knowledge… that takes time and determination. If you are facing a tight deadline, you should consider hiring an experienced embedded systems developer. Good luck!

Dear All,
First of all, I appreciate all the support coming from you all.
Secondly, I am not a newbie in programming; I have implemented several IoT projects using Arduino.
I am just a newbie to the OpenMV Cam and its Python scripting; this is why I tried to get help here, because this is the official manufacturer’s site.

Anyway, about the part of sending images to a server: I have done that. It is easy for me, as I am already an experienced developer in different fields.
I have not wanted OR asked anyone TO DO MY JOB FOR ME AT ANY TIME!!!
I was just clarifying the process in my project. I wanted only help in connecting the cam to the Arduino and making it take a snapshot, which unfortunately did not happen in any of my experiments with the OpenMV Cam; maybe the error is in the cam unit itself!!! :confused: :confused: :confused:

Thanks to all. I think I am going to get another, usable cam that works with Arduino :frowning: and save us all the headache…

So if you are an experienced developer, you should make the “pre-baked” code that kwagyeman posted last Tuesday work in minutes… :wink:
If it’s not working, you could try a different port on the OpenMV Cam (it’s UART 3 in the example), or at least get a reading in the IDE on your PC.

You can certainly try other options, like JeVois or OpenCV on a Raspberry Pi, but I can guarantee you, after having personally worked on all these platforms, that OpenMV is the best option for small embedded vision projects.