I've followed the CMSIS-NN tutorial, but it doesn't work

I just followed the tutorial step by step, and after some adjustments it still doesn't work. The latest error is:

    current_blob = self.bottom_blob[current_layer][0]
IndexError: list index out of range

I don't know how to fix it. (I'm doing a project to recognize digits, but the LeNet network can't run on the M7, so I want to train a smaller network to use on the OpenMV.)

Is there someone who can help me with this?

Hi, is this in the script that generates the CNN? It will be hard to debug what is wrong without some context. Can you please provide info on how to reproduce it?

First I wanted to get the conversion working, so I compiled Caffe, trained the MNIST example, and got the LMDB data, the prototxt file, and lenet_iter_10000.caffemodel. After that I ran nn_quantizer.py and got the error IndexError: list index out of range; it never generated the pkl file. I'd really like to show a screenshot of the run, but I don't know how to upload a picture :blush:

Here's the full error information:

Traceback (most recent call last):
  File "C:\Users\caipanhe\Downloads\caffe-master\examples\mnist\nn_quantizer.py", line 608, in <module>
    my_model.get_graph_connectivity()
  File "C:\Users\caipanhe\Downloads\caffe-master\examples\mnist\nn_quantizer.py", line 226, in get_graph_connectivity
    current_blob = self.bottom_blob[current_layer][0]
IndexError: list index out of range

Can you attach the prototxt and the model?

Is that our code?

Yeah, I changed the file path.

Yeah, I got the code from this URL: https://github.com/openmv/openmv/tree/master/ml/cmsisnn
I re-ran it yesterday and got a new error:

 Cannot copy param 0 weights from layer 'conv1'; shape mismatch.  Source param shape is 20 1 5 5 (500); target param shape is 20 3 5 5 (1500). To learn this layer's parameters from scratch rather than copying from a saved net, rename the layer.

When I used it to quantize the CIFAR-10 caffemodel with the attached cifar10_train_and_test.prototxt, it worked and generated the pkl file. So I guess the quantization can't work on the LeNet caffemodel because the training images are grayscale?

No, it works on both RGB and grayscale. It looks like you trained on grayscale while the prototxt is set up for RGB. I can't help without the model and prototxt.
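
If you want to double-check what the snapshot was actually trained on, something like this will print conv1's stored weight shape (a rough sketch; it assumes your snapshot is named lenet_iter_10000.caffemodel and that pycaffe's protobuf bindings are importable):

    # Read the trained snapshot directly and print conv1's weight shape.
    # A second dimension of 1, e.g. (20, 1, 5, 5), means 1 input channel (grayscale
    # training data); a prototxt that feeds 3-channel RGB data will then fail with
    # the shape-mismatch error above.
    from caffe.proto import caffe_pb2

    params = caffe_pb2.NetParameter()
    with open('lenet_iter_10000.caffemodel', 'rb') as f:
        params.ParseFromString(f.read())

    for layer in params.layer:  # very old snapshots store this under params.layers instead
        if layer.name == 'conv1' and layer.blobs:
            print(tuple(layer.blobs[0].shape.dim))  # e.g. (20, 1, 5, 5)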

Sorry to necro this thread. I’ve been running into the same problem.
I am running the nn_quantizer script on a slightly modified LeNet from the Caffe examples. I've converted the .caffemodel file to .caffemodel.h5 via the convert_caffemodel script in caffe/python.

current_blob = self.bottom_blob[current_layer][0]
IndexError: list index out of range

My model definition prototxt:

name: "Siknet"
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/mnist/mnist_train_lmdb"
    batch_size: 64
    backend: LMDB
  }
}
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/mnist/mnist_test_lmdb"
    batch_size: 100
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    pad: 2
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 48
    pad: 2
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 128
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}

While running the MNIST dataset I got the same error:

  File "nn_quantizer.py", line 229, in get_graph_connectivity
    current_blob = self.bottom_blob[current_layer][0]
IndexError: list index out of range

How do I solve it?

You just need to replace
name: "mnist" → name: "data"
in the lenet_train_test.prototxt file.
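
So the TRAIN data layer in the prototxt above would read as follows (the TEST-phase data layer gets the same one-word rename; everything else stays unchanged):

layer {
  name: "data"      # was "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/mnist/mnist_train_lmdb"
    batch_size: 64
    backend: LMDB
  }
}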

Thanks.

Hi, I've got TensorFlow almost ready to add to our firmware. I was able to get the person detection network by Google working.