OpenMV camera and IDE crashing for pose estimation project

Howdy,

I am using an Arduino Uno as the controller for an OpenMV camera, which extracts the corner locations and IDs of AprilTag markers detected in camera images and passes them into a custom pose estimation algorithm that I have written. Upon entering the algorithm, the IDE and camera freeze; after some time the IDE resets, and the camera requires a power cycle (unplugging and re-plugging the USB cable) before it can be used again.

I am using the OpenMV Arduino RPC Interface tutorial to create the controller and remote scripts.

The AprilTag detection works seamlessly, and I am able to obtain the ID and corners without any trouble.

However, upon creating a PoseEstimator object and calling its determine_pose() method, the IDE and microcontroller freeze. After a few seconds, the code terminates and the microcontroller disconnects from the IDE. (I would like to attach code, but I am a new user.)

I have also checked the heap memory using the machine.info(1) command. Here is the output:

ID=3e001f00:15513034:36373730
DEVID=0x0450
REVID=0x2003
S=480000000
H=240000000
P1=120000000
P2=120000000
_etext=81ee640
_sidata=81ee648
_sdata=30000000
_edata=300001a8
_sbss=300001a8
_ebss=30002c90
_sstack=0
_estack=10000
_ram_start=30000000
_heap_start=30002c90
_heap_end=3003bc90
_ram_end=3003c000
qstr:
  n_pool=3
  n_qstr=128
  n_str_data_bytes=1140
  n_total_bytes=2212
GC:
  228096 total
  25408 : 202688
  1=498 2=57 m=47
LFS free: 3609853952 bytes
GC memory layout; from 30004190:
00000000: MDMLhhhhhhhhLhLhSTAhhhhhhhhhhhThhTh=SDhhhhhh=======h=====hLhhSh=
00000400: DShSh=hShShSShLhLhhh=hhhLhLhSh=LTh=====h==h==Th==h=h=Th=====h===
00000800: =TT=Sh=h=====SDSShhSh=DShSh=hShShSShLhLhhh=h=DLhLhSh=Lhh=====h==
00000c00: ===hhhShSh=LhLhLhLhLhLhLhLhLhLhBDLhhMDhLBh===========hMDThMLDDhL
00001000: hT=DBBBBBLB=h===T=h===hhhhSh=======h==========hBhBLDBBBB=hB=BBBL
00001400: B=hB=BBh===LDBBBh=======================h===========hLh==B=hh===
00001800: BBh===========LDBLLBBhLh===B=h====hDBLhB=h=BLBh==h===Bh=======hD
00001c00: BLB=BBBB=hhh==h===DBh=B=LLBh==h===DBLB=hLBh==DBBhLhh===LhB==Sh=L
00002000: hh==h===DBh=B=hLBh==h===LLhLhLhLhLhhhDBh===h===Lh=====h=======B=
00002400: BBBh=Dh===h===BBBLBh==h=================h===LDBBBhBh==h===LDBBBh
00002800: BLhLhLhh=hLLh==LhLhLhLhLhLhLhLhBBBB=BBLh===hDh=====BBBLBBBBB=BBh
00002c00: h===Lhh=====h==============h==================LhShLhLhSSLh==MDhh
00003000: LhLhLhLhLhh=hLDhBh=Lh===h===hLhSh=BTDBB.......hhDBBBh===h=h===h=
00003400: =h==BBBBBBBBh=BBBBBBBBBShBh===h=hhShShhhhh==h=DShShSh=======hh==
00003800: h=ThTShhTh=h==BBhh=====h==h=Thhh==hhh==Thh==Thhh==Thhhh==Th=====
00003c00: ==h========h=====h========hhh==h==h=================h=====hhh==T
00004000: hh==============h=================h=======================h==hh=
00004400: =h=hhh==hhhh==hh===============h==h=====hhhTh==...............hh
00004800: =======hh==hhhh==h==hAhhhhAhh..hTAhh==Ah..............Ah..hT....
00004c00: ......Th========.h====================================h=========
00005000: ==h==================...........................................
00005400: ....h=======....................................................
       (3 lines all free)
00006400: ..........................................................h=====
00006800: ==..............................................................
       (7 lines all free)
00008800: ...............................h=.................h=...........h
00008c00: =...............................................................
       (7 lines all free)
0000ac00: ......................................h=======..................
       (2 lines all free)
0000b800: ..............................Sh=...............................
0000bc00: .....................................................Sh=........
0000c000: .....Sh=........................................................
0000c400: ..Sh............................................................
0000c800: .......................Sh=..............Sh......................
0000cc00: ...........h==========..........................................
0000d000: ....................................................Sh.........S
0000d400: h.........Sh....................................................
       (2 lines all free)
0000e000: .......h======................h===========================h=====
0000e400: h=====h====h===.................................................
0000e800: ...............h=======================================h========
0000ec00: ======================================h==================h======
0000f000: ====h============h====h==h===h====h=======h========h============
0000f400: =============h=====h==h===================......................
       (160 lines all free)
00037800: ................................................
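
As an additional check, the free heap can also be sampled directly from the script; here is a minimal sketch (the placement of the calls is just an example, not where the crash has been isolated to):

import gc
import micropython

gc.collect()                        # force a collection so the reading only reflects reachable objects
print("free heap:", gc.mem_free())  # bytes currently available to the MicroPython GC
micropython.mem_info()              # stack/heap usage summary; mem_info(1) also prints the block map like machine.info(1)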

I was using the pose estimation code on the OpenMV camera previously without any issues, so I am confused as to what could be causing the crashes now.

Please let me know if I can provide any additional information! I have attached the main code snippets below!

Interface Script

import sensor
import image
import time
import math
import rpc
import struct
import PoseEstimator
import Camera
import json
import machine

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time=2000)
clock = time.clock()

# * cs_pin - Slave Select Pin.
# * clk_polarity - Idle clock level (0 or 1).
# * clk_phase - Sample data on the first (0) or second edge (1) of the clock.
#
# NOTE: Master and slave settings must match. Connect CS, SCLK, MOSI, MISO to CS, SCLK, MOSI, MISO.
#       Finally, both devices must share a common ground.
#
interface = rpc.rpc_spi_slave(cs_pin="P3", clk_polarity=1, clk_phase=0)

# Helper methods used by the call backs below.

def draw_detections(img, dects):
    for d in dects:
        c = d.corners()
        l = len(c)
        for i in range(l):
            img.draw_line(c[(i + 0) % l] + c[(i + 1) % l], color=(0, 255, 0))
        img.draw_rectangle(d.rect(), color=(255, 0, 0))

################################################################
# Call Backs
################################################################

def pose_estimation(data):
    clock.tick()

    verbose = False # Keep false for actual demonstration to free up compute space
    detect_april = True
    detect_led = False

    position = [0,0,0]
    quaternion = [0,0,0,1]
    camera = Camera.Camera(json.load(open('camera_properties.json',)))

    tags = sensor.snapshot().find_apriltags()

    if not tags:
        return struct.pack(
            "<hfffffff",
            0, # No detections.
            position[0],
            position[1],
            position[2],
            quaternion[0],
            quaternion[1],
            quaternion[2],
            quaternion[3],
        )
    ## Draw detections on frame buffer for debugging
    if verbose: draw_detections(sensor.get_fb(), tags)

    ## Create instance of Pose Estimator with camera props, look up table information, id, and corners
    ## set flag : detect_led to True for detection with leds
    ## set flag : detect_april to True for detection with april tags
    for tag in tags:
        print(tag.id(),tag.corners())
        pose_estimator = PoseEstimator.PoseEstimator(json.load(open('camera_properties.json',)),
                                                     json.load(open('truth_table.json',)),
                                                     detect_led,
                                                     detect_april,
                                                     tag.id(),
                                                     tag.corners())


    ## Determine Pose
    pose_estimator.determine_pose()
    print(pose_estimator.R)
    print(pose_estimator.t)

    fps = clock.fps()
    # Return pose estimate from NLLS Algorithm
    return struct.pack(
        "<hfffffff",
        1, # Obtained detections.
        position[0],
        position[1],
        position[2],
        quaternion[0],
        quaternion[1],
        quaternion[2],
        quaternion[3],
    )

# Register call backs.

interface.register_callback(pose_estimation)

# Once all call backs have been registered we can start
# processing remote events. interface.loop() does not return.

interface.loop()

This is the determine_pose() function that resides within the PoseEstimator class.

    def determine_pose(self):
        print("Begin Pose Estimation")
        machine.info(1)
        maxIterations = 50
        epsilon = 1E-6
        i = 0
        print("Populate Matrices")
        self.calculate_rectified_image_projection()
        self.calc_A1()
        self.calc_A2()
        self.calc_K()
        self.calc_C()
        self.calc_W()

        print("Compute NLLS")
        while(i < maxIterations):
            print("Iter: ",i)
            i = i + 1
            self.calc_qExtended()
            self.calc_F()
            self.calc_J()
            self.calc_H()
            self.calc_delq()
            self.correct_estimate()
            self.calc_convergence()
            if self.convergence < epsilon :
                print("Reached Tolerance")
                break

        print("Calculate Pose")
        self.calc_att()
        self.R = self.R.transpose()
        self.calc_pos()
        self.correct_results()
        self.reproject_coordinates()
        self.calc_reprojection_error()
        self.calc_321_euler_angles()
        print("q =", self.qEstimate)
        print("R =", self.R)
        print("t =", self.t)
        self.print_321_euler_angles()
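
To narrow down which call the freeze happens on, the step prints above can be extended with free-heap checkpoints; a rough sketch of that instrumentation (the checkpoint labels are just examples):

import gc

def mem_checkpoint(label):
    # Print free heap at a named point; the last label that appears before the
    # freeze narrows the failing step down to a single call.
    gc.collect()
    print(label, "free heap:", gc.mem_free())

# Example placement inside determine_pose():
#     mem_checkpoint("before calc_K")
#     self.calc_K()
#     mem_checkpoint("before NLLS loop")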

PoseEstimator Class

from ulab import numpy as np
from ulab import scipy as sp
import sensor, image, time, math
import Camera, TruthTable
import machine

def skew(q):
    if(q.size != 3):
        return -1
    q_skew = np.zeros((3,3))
    q_skew[0][1] = -1.0*q[2][0]
    q_skew[0][2] = q[1][0]
    q_skew[1][0] = q[2][0]
    q_skew[1][2] = -1.0*q[0][0]
    q_skew[2][0] = -1.0*q[1][0]
    q_skew[2][1] = q[0][0]
    return q_skew

class PoseEstimator:
    # properties
    def __init__(self, cam_props_dict, truth_table_dict, detect_led, detect_april, id, corners):
        ## Camera and Truth Table Dictionaries
        print("Read JSON Files")
        self.cam = Camera.Camera(cam_props_dict)
        self.TPODS_Truth_Tables = TruthTable.TruthTable(truth_table_dict)
        self.cam_props_dict = cam_props_dict
        self.truth_table_dict = truth_table_dict
        self.truth_table_led = self.TPODS_Truth_Tables.LED_TRUTH_TABLE
        self.truth_table_april = self.TPODS_Truth_Tables.APRIL_TRUTH_TABLE

        ## Update marker id and corners based on feature flag (led or april)
        print("Initialize Marker Id and Corners")
        if detect_led:
            self.led_detected = True
            self.id = id
            self.corners = corners
        elif detect_april:
            self.april_detected = True
            self.id = id
            self.corners = corners
        else:
            self.led_detected = False
            self.led_id = None
            self.led_corners = None
        self.numFeatures = len(self.corners)
        ## Pose Estimator Variables
        print("Initialize Pose Estimator Variables")
        self.rectImgProj = np.zeros((self.numFeatures,2))
        self.reprojectedCoords = np.zeros((self.numFeatures,2))
        self.A_1 = np.zeros((self.numFeatures*2, 9))
        self.A_2 = np.zeros((self.numFeatures*2, 3))
        self.K = np.zeros((self.numFeatures*2,9))
        self.C = np.zeros((self.numFeatures*2,12))
        self.qExtended = np.zeros((12,1))
        self.F = np.zeros((self.numFeatures*2,1))
        self.W = np.zeros((self.numFeatures*2,self.numFeatures*2))
        self.w_val = 1
        self.Jf = 1.0
        self.Ji = 0.0
        self.H = np.zeros((self.numFeatures*2,3))
        self.delq = np.empty((3,1))
        self.qInitial = np.zeros((3,1))  # zeros, not empty: qInitial is read on the first NLLS iteration before it is written
        self.qFinal = np.empty((3,1))
        self.qEstimate = np.empty((3,1))
        self.convergence = 10.0
        self.R = np.empty((3,3))
        self.t = np.empty((3,1))
        # Euler Angle (3-2-1) Sequence
        self.phi_1 = 0.0
        self.theta_2 = 0.0
        self.psi_3 = 0.0
        self.errors = 0.0

    def calculate_rectified_image_projection(self):
        #print("calculate_rectified_image_projection()...")
        fx = self.cam.fx
        fy = self.cam.fy
        cx = self.cam.icx
        cy = self.cam.icy
        # TODO: Implement the pixel adjustment on detected features
        for i in range(self.numFeatures):
            self.rectImgProj[i] = [(self.corners[i][0]-cx)/fx, (self.corners[i][1]-cy)/fy]
        #print(self.rectImgProj)

    def reproject_coordinates(self):
        #print("reproject_coordinates()...")
        fx = self.cam.fx
        fy = self.cam.fy
        cx = self.cam.icx
        cy = self.cam.icy

        Rt = self.R.transpose()
        r11 = Rt[0][0]
        r12 = Rt[0][1]
        r13 = Rt[0][2]
        r21 = Rt[1][0]
        r22 = Rt[1][1]
        r23 = Rt[1][2]
        r31 = Rt[2][0]
        r32 = Rt[2][1]
        r33 = Rt[2][2]
        t_1 = self.t[0]
        t_2 = self.t[1]
        t_3 = self.t[2]

        for i in range(self.numFeatures):
            if(self.numFeatures == 3):
                truth = self.truth_table_3_LED
                x = truth[self.LED_3_id][i][0]
                y = truth[self.LED_3_id][i][1]
                z = truth[self.LED_3_id][i][2]
                self.reprojectedCoords[i][0] = cx + fx*(r11*x + r12*y + r13*z + t_1)/(r31*x + r32*y + r33*z + t_3)
                self.reprojectedCoords[i][1] = cy + fy*(r21*x + r22*y + r23*z + t_2)/(r31*x + r32*y + r33*z + t_3)
            elif(self.numFeatures == 4 and self.april_detected):
                truth = self.truth_table_april
                x = truth[self.id][i][0]
                y = truth[self.id][i][1]
                z = truth[self.id][i][2]
                self.reprojectedCoords[i][0] = cx + fx*(r11*x + r12*y + r13*z + t_1)/(r31*x + r32*y + r33*z + t_3)
                self.reprojectedCoords[i][1] = cy + fy*(r21*x + r22*y + r23*z + t_2)/(r31*x + r32*y + r33*z + t_3)
            elif(self.numFeatures == 6):
                truth = self.truth_table_6_LED
                x = truth[self.LED_6_id][i][0]
                y = truth[self.LED_6_id][i][1]
                z = truth[self.LED_6_id][i][2]
                self.reprojectedCoords[i][0] = cx + fx*(r11*x + r12*y + r13*z + t_1)/(r31*x + r32*y + r33*z + t_3)
                self.reprojectedCoords[i][1] = cy + fy*(r21*x + r22*y + r23*z + t_2)/(r31*x + r32*y + r33*z + t_3)

    def calc_A1(self):
        #print("calc_A1()...")
        for i in range(self.numFeatures):
            if (not(self.april_detected) and self.numFeatures == 3):
                truth = self.truth_table_3_LED
                pointB = truth[self.LED_3_id][i]
                pointR = self.rectImgProj[i]
                self.A_1[2*i][0] = pointB[0]
                self.A_1[2*i][2] = -pointR[0]*pointB[0]
                self.A_1[2*i][3] = pointB[1]
                self.A_1[2*i][5] = -pointR[0]*pointB[1]
                self.A_1[2*i][6] = pointB[2]
                self.A_1[2*i][8] = -pointR[0]*pointB[2]
                self.A_1[2*i+1][1] = pointB[0]
                self.A_1[2*i+1][2] = -pointR[1]*pointB[0]
                self.A_1[2*i+1][4] = pointB[1]
                self.A_1[2*i+1][5] = -pointR[1]*pointB[1]
                self.A_1[2*i+1][7] = pointB[2]
                self.A_1[2*i+1][8] = -pointR[1]*pointB[2]
            elif(self.numFeatures == 4 and self.april_detected):
                truth = self.truth_table_april
                pointB = truth[self.id][i]
                pointR = self.rectImgProj[i]
                self.A_1[2*i][0] = pointB[0]
                self.A_1[2*i][2] = -pointR[0]*pointB[0]
                self.A_1[2*i][3] = pointB[1]
                self.A_1[2*i][5] = -pointR[0]*pointB[1]
                self.A_1[2*i][6] = pointB[2]
                self.A_1[2*i][8] = -pointR[0]*pointB[2]
                self.A_1[2*i+1][1] = pointB[0]
                self.A_1[2*i+1][2] = -pointR[1]*pointB[0]
                self.A_1[2*i+1][4] = pointB[1]
                self.A_1[2*i+1][5] = -pointR[1]*pointB[1]
                self.A_1[2*i+1][7] = pointB[2]
                self.A_1[2*i+1][8] = -pointR[1]*pointB[2]
            elif(not(self.april_detected) and self.numFeatures == 6):
                truth = self.truth_table_6_LED
                pointB = truth[self.LED_6_id][i]
                pointR = self.rectImgProj[i]
                self.A_1[2*i][0] = pointB[0]
                self.A_1[2*i][2] = -pointR[0]*pointB[0]
                self.A_1[2*i][3] = pointB[1]
                self.A_1[2*i][5] = -pointR[0]*pointB[1]
                self.A_1[2*i][6] = pointB[2]
                self.A_1[2*i][8] = -pointR[0]*pointB[2]
                self.A_1[2*i+1][1] = pointB[0]
                self.A_1[2*i+1][2] = -pointR[1]*pointB[0]
                self.A_1[2*i+1][4] = pointB[1]
                self.A_1[2*i+1][5] = -pointR[1]*pointB[1]
                self.A_1[2*i+1][7] = pointB[2]
                self.A_1[2*i+1][8] = -pointR[1]*pointB[2]
        #print(self.A_1)

    def calc_A2(self):
        #print("calc_A2()...")
        for i in range(self.numFeatures):
            pointR = self.rectImgProj[i]
            self.A_2[2*i][0] = 1.0
            self.A_2[2*i][2] = -pointR[0]
            self.A_2[2*i+1][1] = 1.0
            self.A_2[2*i+1][2] = -pointR[1]
        #print(self.A_2)

    def calc_K(self):
        #print("calc_K()...")
        A_1 = self.A_1
        A_2 = self.A_2
        A_2_T = A_2.transpose()
        self.K = A_1 - np.dot(A_2,np.dot(np.linalg.inv(np.dot(A_2_T,A_2)),np.dot(A_2_T,A_1)))
        #print(self.K)

    def calc_C(self):
        #print("calc_C()...")
        K = self.K
        self.C = np.zeros((2*self.numFeatures,12))
        for i in range(2*self.numFeatures):
            self.C[i][0] = K[i][0] - K[i][4] - K[i][8]
            self.C[i][1] = K[i][4] - K[i][0] - K[i][8]
            self.C[i][2] = K[i][8] - K[i][0] - K[i][4]
            self.C[i][3] = 2.0*(K[i][1]+K[i][3])
            self.C[i][4] = 2.0*(K[i][2]+K[i][6])
            self.C[i][5] = 2.0*(K[i][5]+K[i][7])
            self.C[i][6] = 2.0*(K[i][7]-K[i][5])
            self.C[i][7] = 2.0*(K[i][2]-K[i][6])
            self.C[i][8] = 2.0*(K[i][3]-K[i][1])
            self.C[i][9] = K[i][0]
            self.C[i][10] = K[i][4]
            self.C[i][11] = K[i][8]
        #print(self.C)

    def calc_qExtended(self):
        #print("calc_qExtended()...")
        self.qExtended = np.zeros((12,1))
        q_1 = self.qInitial[0][0]
        q_2 = self.qInitial[1][0]
        q_3 = self.qInitial[2][0]
        self.qExtended[0][0] = q_1**2.0
        self.qExtended[1][0] = q_2**2.0
        self.qExtended[2][0] = q_3**2.0
        self.qExtended[3][0] = q_1*q_2
        self.qExtended[4][0] = q_1*q_3
        self.qExtended[5][0] = q_3*q_2
        self.qExtended[6][0] = q_1
        self.qExtended[7][0] = q_2
        self.qExtended[8][0] = q_3
        self.qExtended[9][0] = 1.0
        self.qExtended[10][0] = 1.0
        self.qExtended[11][0] = 1.0
        #print(self.qExtended)


    def calc_F(self):
        #print("calc_F()...")
        self.F = np.dot(self.C,self.qExtended)
        #print(self.F)

    def calc_W(self):
        #print("calc_W()...")
        self.W = np.eye(2*self.numFeatures)*self.w_val

    def calc_J(self):
        #print("calc_J()...")
        self.Jf = (1.0/2.0)*np.dot(self.F.transpose(),np.dot(self.W,self.F))
        self.Jf = self.Jf[0][0]
        #print(self.Jf)

    def calc_H(self):
        #print("calc_H()...")
        self.H = np.zeros((2*self.numFeatures,3))
        q_1 = self.qInitial[0][0]
        q_2 = self.qInitial[1][0]
        q_3 = self.qInitial[2][0]
        C = self.C
        for i in range(2*self.numFeatures):
            self.H[i][0] = 2.0*(C[i][0]*q_1 + C[i][3]*q_2 + C[i][4]*q_3 + C[i][6])
            self.H[i][1] = 2.0*(C[i][1]*q_2 + C[i][3]*q_1 + C[i][5]*q_3 + C[i][7])
            self.H[i][2] = 2.0*(C[i][2]*q_3 + C[i][4]*q_1 + C[i][5]*q_2 + C[i][8])
        #print(self.H)

    def calc_delq(self):
        #print("calc_delq()...")
        H = self.H
        HT = H.transpose()
        W = self.W
        F = self.F
        self.delq = np.dot(np.linalg.inv(np.dot(HT,np.dot(W,H))),np.dot(HT,np.dot(W,F)))
        #print(self.delq)

    def correct_estimate(self):
        #print("correct_estimate()...")
        self.qFinal = self.qInitial - self.delq
        self.qInitial = self.qFinal
        self.qEstimate = self.qFinal

    def calc_convergence(self):
        #print("calc_convergence()...")
        self.convergence = abs((self.Jf-self.Ji)/(self.Ji + 1.0))
        self.Ji = self.Jf

    def calc_att(self):
        #print("calc_att()...")
        eye3 = np.eye(3)
        self.R = np.dot(np.linalg.inv(eye3 + skew(self.qEstimate)),(eye3-skew(self.qEstimate)))
        #print(self.R)

    def calc_pos(self):
        #print("calc_pos()...")
        ## Took out the transpose() for r_vec
        r_vec = self.R.flatten()
        self.t = (-1.0)*np.dot(np.linalg.inv(np.dot(self.A_2.transpose(),self.A_2)),np.dot(self.A_2.transpose(),np.dot(self.A_1,r_vec)))
        #print(self.t)

    def calc_reprojection_error(self):
        #print("calc_reprojection_error()...")
        self.errors = 0
        numMsmts = 0
        if(self.reprojectedCoords.size != len(self.corners)):
            self.errors = -1
        for i in range(self.numFeatures):
            self.errors = self.errors + np.sqrt((self.reprojectedCoords[i][0]-self.corners[i][0])**2.0 + (self.reprojectedCoords[i][1]-self.corners[i][1])**2.0)
        self.errors = self.errors/self.numFeatures
        print(self.errors)

    def correct_results(self):
        #print("correct_results()...")
        if (self.t[2] < 0):
            print("correcting for -t")
            self.t = -2.0*self.t
            angle = math.pi
            R = np.zeros((3,3))  # zeros so the entries not set below stay 0 rather than holding uninitialized memory
            R[0][0] = math.cos(angle)
            R[0][1] = -1.0*math.sin(angle)
            R[2][2] = 1.0
            R[1][0] = math.sin(angle)
            R[1][1] = math.cos(angle)
            self.R = np.dot(R,self.R)
        else:
            self.R = self.R.transpose()

    def determine_pose(self):
        print("Begin Pose Estimation")
        maxIterations = 50
        epsilon = 1E-6
        i = 0
        print("Populate Matrices")
        self.calculate_rectified_image_projection()
        self.calc_A1()
        self.calc_A2()
        self.calc_K()
        self.calc_C()
        self.calc_W()

        print("Compute NLLS")
        while(i < maxIterations):
            print("Iter: ",i)
            i = i + 1
            self.calc_qExtended()
            self.calc_F()
            self.calc_J()
            self.calc_H()
            self.calc_delq()
            self.correct_estimate()
            self.calc_convergence()
            if self.convergence < epsilon :
                print("Reached Tolerance")
                break

        print("Calculate Pose")
        self.calc_att()
        self.R = self.R.transpose()
        self.calc_pos()
        self.correct_results()
        self.reproject_coordinates()
        self.calc_reprojection_error()
        self.calc_321_euler_angles()
        print("q =", self.qEstimate)
        print("R =", self.R)
        print("t =", self.t)
        self.print_321_euler_angles()

    def calc_321_euler_angles(self):
        self.phi_1 = (180.0/math.pi)*np.arctan2(self.R[1][2],self.R[2][2])
        self.theta_2 = (180.0/math.pi)*np.asin(self.R[0][2])
        self.psi_3 = (180.0/math.pi)*np.arctan2(self.R[0][1],self.R[0][0])

Hi, what’s happening is a system crash.

There will be a line of code that is 100% causing this. Can you narrow down which line in your code it is? If so, and if you can produce a simple script that generates the issue and it’s a call into our API, I can then debug and fix it.

If the code was working previously and then broke after a firmware update, that will also help. We do occasionally have regressions, so I can find the bug more easily if that is the case.
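
Something like the sketch below is what I mean by a simple script: no camera, hard-coded inputs, only the suspect calls. It assumes the problem is somewhere in the ulab linear-algebra chain, which is only a guess at this point; the corner values are made up and the shapes mirror the 4-corner AprilTag case.

from ulab import numpy as np

# Made-up stand-ins for four rectified tag corners.
corners = [(-0.1, -0.1), (0.1, -0.1), (0.1, 0.1), (-0.1, 0.1)]
n = len(corners)

A_1 = np.zeros((2 * n, 9))
A_2 = np.zeros((2 * n, 3))
for i in range(n):
    x, y = corners[i]
    # A_2 is filled the same way calc_A2() does; A_1 just needs non-trivial values.
    A_2[2 * i][0] = 1.0
    A_2[2 * i][2] = -x
    A_2[2 * i + 1][1] = 1.0
    A_2[2 * i + 1][2] = -y
    A_1[2 * i][0] = x
    A_1[2 * i + 1][1] = y

# Same chain of operations as calc_K(); if this alone reproduces the lockup,
# that is the kind of standalone test case I can debug against.
A_2_T = A_2.transpose()
K = A_1 - np.dot(A_2, np.dot(np.linalg.inv(np.dot(A_2_T, A_2)), np.dot(A_2_T, A_1)))
print(K.shape)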