TB6612 Library - Stepper motor only turns in one direction

As the title states, I can only get my stepper motor to turn in one direction using the TB6612 library (add tb6612 library, DC motor and stepper example · openmv/openmv@05b1e62 · GitHub). Loading the example code works perfectly, and I’m able to adjust the step count, RPM, etc. Everything but the actual direction of the motor. Outside of swapping the wires on the board, nothing I’ve tried seems to work. I’m currently using a Big Easy Driver from SparkFun (https://www.sparkfun.com/products/12859), and that may have something to do with it, but I’m not sure. I do have a couple of OpenMV Motor Shields on the way. The end goal is to have the camera detect a blob, turn two motors in one direction, and then immediately turn them backwards.

I know there has to be a very easy and obvious answer, but I’m clearly missing something. I’m teaching myself MicroPython so everything is still a bit new to me. I did search the forum but there’s no real mention of stepper motor direction control.

I didn’t write that library, nor have I ever used it, but I assume it works.

The library assumes you have direct control of the H-bridge. The Big Easy Driver looks like a dedicated stepper driver with a STEP/DIR interface, so I think you should write your own Python code to talk to it.
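If it helps, here is a minimal sketch of what talking to a STEP/DIR board like the Big Easy Driver directly from MicroPython could look like. The P0/P1 wiring, step count, and microsecond delay are placeholder assumptions, not values from your setup:

import pyb

# Assumed wiring: P0 -> STEP, P1 -> DIR on the Big Easy Driver
step_pin = pyb.Pin("P0", pyb.Pin.OUT_PP)
dir_pin  = pyb.Pin("P1", pyb.Pin.OUT_PP)

def step_motor(steps, forward=True, delay_us=500):
    # DIR selects the rotation direction; each low-to-high edge on STEP
    # advances the motor by one (micro)step.
    dir_pin.value(1 if forward else 0)
    for _ in range(steps):
        step_pin.value(1)
        pyb.udelay(delay_us)
        step_pin.value(0)
        pyb.udelay(delay_us)

step_motor(200, forward=True)   # spin one way...
step_motor(200, forward=False)  # ...then back the other way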

Thank you for taking the time to respond. I appreciate your quick reply. It took me a while to get back on this project after posting.

Yeah, I was going about it all wrong. I ended up controlling the driver with an Arduino because that’s what I already knew how to do, and I just used the OpenMV to drive the Arduino, which controls the STEP and DIR pins. Once I get a bit more familiar with MicroPython, I’ll get rid of the Arduino. The code technically works; I just have to adjust the delays and everything once the system is in place. I’m still not entirely sure how ‘if…else’ works in OpenMV when combined with blob detection. This thread (Project Idea: Automatically point camera at target - Project Discussion - OpenMV Forums) was incredibly helpful, however. I just don’t quite understand how the ‘for blob in img.find_blobs(…)’ line works. I can’t get a blob count if blob.count() is placed before it, but the count never prints a ‘0’ in the serial monitor; it only prints ‘1’ when there is a blob. The way everything is written makes me think that nothing happens unless there is a blob, so the serial output will never show a ‘0’ when there are no blobs. Is that correct? I may just be missing something obvious.

Please excuse the immense amount of comments in the code. I tried to make it as clear as possible for future me to know what everything does.

# Automatic Grinder - Weld Bead Finder
# After a blade is placed in the clamps, the camera detects the weld bead and engages
# both grind motors via two stepper motors. The motors move forward a set distance, 
# grind both sides of the blade and retract. If there is still a visible seam on
# either side of the blade, the camera detects it and engages the motors again to get
# rid of the remaining seam lines. This is repeated until the camera no longer detects
# a seam. The operator then removes the blade from the clamps.

import sensor, image, time, pyb, math

# LEDs for visual confirmation of the code working without motors attached
red_led = pyb.LED(1)
green_led = pyb.LED(2)
# Pin0 outputs to Pin3 on the Arduino, which controls the STEP pin on the driver
pin0 = pyb.Pin("P0", pyb.Pin.OUT_PP)
# Pin1 outputs to Pin8 on the Arduino, which controls the DIR pin on the driver
pin1 = pyb.Pin("P1", pyb.Pin.OUT_PP)

thresholds = (0, 22, -23, 8, -128, 19)

# The 'Thresholds' will have to be changed once the system is fully installed due
# to a difference in the lighting environment.

sensor.reset()                      # Reset and initialize the sensor.
sensor.set_pixformat(sensor.RGB565) # Set pixel format to RGB565 (or GRAYSCALE). RGB seems to work better for this application
sensor.set_framesize(sensor.VGA)   # Set frame size to VGA (640x480)
sensor.set_windowing((80, 320))    # The actual display window size. This is the approximate width of the blade
sensor.skip_frames(time = 2000)     # Wait for settings to take effect.


while(True):

    # Take a picture and return the image
    img = sensor.snapshot()
    
    # There is always a rectangle drawn to show the "target area". This will be used
    # with the LCD Shield for setup and adjustments.
    img.draw_rectangle((0,140,80,40), color=255)
    
    # If the camera sees a weld, it draws a blob around the weld
    for blob in img.find_blobs([thresholds],roi = (0,140,80,40), area_threshold=500, merge=True):
         img.draw_rectangle(blob.rect(), color=255, fill = True)
         count = blob.count()
         print(count)
         
         # If the camera doesn't detect a weld (if the number of blobs is zero), the
         # grind motors do nothing. This will be later tied to some visual feedback for the operator
         if (count == 0):
             pin0.value(False)
             green_led.off()

         # Otherwise, if the camera does detect a weld (if the number of blobs is greater than zero),
         # the grind motors engage. This will later be tied to some visual feedback for the operator
         else:
             if (count > 0):
                 # The grind motors start to move forward (Green LED ON)
                 pin0.value(True)
                 green_led.on()
                 time.sleep(1000)

                 # The grind motors stop (Green LED OFF)
                 pin0.value(False)
                 green_led.off()
                 time.sleep(1000)

                 # The grind motors are moved out of the way (Red LED ON)
                 pin1.value(True)
                 pin0.value(True)
                 red_led.on()
                 time.sleep(1000)

                 # The grind motors stop at their original starting point (Red LED OFF)
                 pin1.value(False)
                 pin0.value(False)
                 red_led.off()

find_blobs returns a list of blobs. When you put it in the for loop, that just causes the loop to iterate over the list immediately, instead of the list being stored in a variable you can access.

If you assign the result of find_blobs() to a variable, then you can get the length of the list and also iterate over it with the for loop.
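Something like this, reusing the names and the threshold/ROI values from your code above:

blobs = img.find_blobs([thresholds], roi=(0, 140, 80, 40),
                       area_threshold=500, merge=True)
count = len(blobs)      # 0 when nothing is found
print(count)            # prints on every frame, even when there are no blobs

if count == 0:
    pin0.value(False)
    green_led.off()
else:
    for blob in blobs:  # only runs when at least one blob was found
        img.draw_rectangle(blob.rect(), color=255, fill=True)
    # ...drive the grind motors here...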

Here’s the newest (functioning!) code for the sake of documenting the process. I ditched the Arduino because I finally figured out how to use the PWM functions with the Big Easy Driver.

So it technically works. When the camera sees a blob of a specific area, it turns the stepper motor and the blue LED turns on for some visual feedback. There are still safety features and a reverse command to add, but as a proof of concept this is great. The only thing I really need to fix is the sleep command. When the camera is connected via USB, the blobs show up in the IDE and on the LCD. There’s a pause because of the sleep command, but you can see which blob the camera is detecting. When the camera is powered from an external 5 V supply and not connected over USB, the blobs don’t appear on the LCD at all. It’s as if being connected to the computer slows it down just enough for the blob to appear before the loop resets.

All of this is to say that I don’t think the ‘time.sleep’ and ‘Timer.OC_FORCED_INACTIVE’ lines are necessary. They only add a delay that makes it difficult to see what’s actually happening. The motor is only going to move at the frequency of Timer 4 (2400 Hz) and then stop turning anyway, so do I even need a sleep command? I could be wrong. (A rough sketch of that sleep-free idea follows the full code below.)

I also designed an enclosure based on the clear one that’s available for the camera, with the LCD shield attached. I’m going to refine it a bit more, but I’ll post it somewhere and link it here when it’s finished.

import sensor, image, time, pyb, math, lcd
from pyb import Pin, Timer

blue_led = pyb.LED(3)
pin5 = pyb.Pin("P5", pyb.Pin.OUT_PP)
tim = Timer(4, freq=2400)

thresholds = (0, 22, -23, 8, -128, 19)

sensor.reset()                      # Reset and initialize the sensor.
sensor.set_pixformat(sensor.RGB565) # Set pixel format to RGB565 (or GRAYSCALE). RGB seems to work better for this application
sensor.set_framesize(sensor.LCD)   # Set frame size to LCD (128x160)
sensor.set_windowing((80, 160))    # The actual display window size
sensor.skip_frames(time = 2000)     # Wait for settings to take effect.
lcd.init()


while(True):
    img = sensor.snapshot()
    img.draw_rectangle((0,60,80,40), color=255)  # target area overlay
    lcd.display(img)
    blob_list = img.find_blobs([thresholds],roi = (0,60,80,40), area_threshold=400, merge=True)
    blob_count = len(blob_list)
    tim = Timer(4, freq=2400) # Frequency in Hz
    pin5.value(False)

    for blob in blob_list:  # reuse the list from find_blobs above instead of searching again
         img.draw_rectangle(blob.rect(), color=255, fill = True)
         if (blob_count == 0):
             print(blob_count)
             tim.channel(3, Timer.OC_FORCED_INACTIVE)

         else:
             if (blob_count > 0):
                 blue_led.on()
                 tim.channel(3, Timer.PWM, pin=Pin("P9"), pulse_width_percent=50)
                 print(blob_count)
                 time.sleep(800)
                 pin5.value(True)
                 tim.channel(3, Timer.OC_FORCED_INACTIVE)
                 blue_led.off()
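For what it’s worth, here’s a rough, untested sketch of the sleep-free idea from above: the timer and channel are created once outside the loop, and the step pulses are gated with pulse_width_percent() instead of time.sleep()/OC_FORCED_INACTIVE. It reuses the P5/P9 pins and thresholds from the code above, and the DIR/reverse handling is still to be added:

import sensor, pyb, lcd
from pyb import Pin, Timer

blue_led = pyb.LED(3)
pin5 = pyb.Pin("P5", pyb.Pin.OUT_PP)   # DIR on the driver
pin5.value(False)                      # direction held constant for now

tim = Timer(4, freq=2400)              # step rate in Hz
step_ch = tim.channel(3, Timer.PWM, pin=Pin("P9"), pulse_width_percent=0)

thresholds = (0, 22, -23, 8, -128, 19)

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.LCD)
sensor.set_windowing((80, 160))
sensor.skip_frames(time = 2000)
lcd.init()

while(True):
    img = sensor.snapshot()
    img.draw_rectangle((0, 60, 80, 40), color=255)  # target area overlay
    blobs = img.find_blobs([thresholds], roi=(0, 60, 80, 40),
                           area_threshold=400, merge=True)
    if blobs:
        for blob in blobs:
            img.draw_rectangle(blob.rect(), color=255, fill=True)
        blue_led.on()
        step_ch.pulse_width_percent(50)   # step pulses on -> motor runs
    else:
        blue_led.off()
        step_ch.pulse_width_percent(0)    # no pulses -> motor stops
    lcd.display(img)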

If you are using the LCD shield, make sure you are not using a pin it needs for the motor PWM.