lshift and rshift in nn_quantizer

Hi,
When quantizing my own model, I was confused about what l_shift and r_shift are.

I have converted the float32 numbers to 8-bit Q-format fixed-point numbers
and obtained the int_bits and dec_bits of the weights and bias.
Maybe two numbers are enough: weights_dec_bits and bias_dec_bits (weights_int_bits = 7 - weights_dec_bits; bias_int_bits = 7 - bias_dec_bits;).
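To make the Q-format conversion concrete, here is a minimal sketch of quantizing a float to a signed 8-bit fixed-point value with a given dec_bits (the helper names are my own, not from the quantizer script):

```python
def float_to_q7(value, dec_bits):
    """Quantize a float to signed 8-bit Q-format with dec_bits fractional bits.

    For a signed 8-bit number, int_bits + dec_bits = 7 (one bit is the sign),
    which is where the 7 - dec_bits relation above comes from.
    """
    q = int(round(value * (1 << dec_bits)))
    return max(-128, min(127, q))  # saturate to the int8 range

def q7_to_float(q, dec_bits):
    """Dequantize back to float, to inspect the rounding error."""
    return q / (1 << dec_bits)

# Q2.5 format: 2 integer bits, 5 fractional bits
print(float_to_q7(0.5, 5))    # 16
print(float_to_q7(-0.25, 5))  # -8
print(q7_to_float(16, 5))     # 0.5
```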

However, OpenMV's network stores l_shift and r_shift:
https://github.com/openmv/openmv/blob/master/src/omv/nn/nn.c#L150
OpenMV reads l_shift and r_shift in the CONV and IP layers.

I saw the code in https://github.com/openmv/openmv/blob/master/ml/cmsisnn/nn_quantizer.py#L564
but I don't know what l_shift and r_shift are, or how to calculate them.
Could you help me? :smiley:

This is unbelievably complex. Basically, just read the ARM quantizer script; we didn't get anywhere until they released it. https://github.com/openmv/openmv/blob/master/ml/cmsisnn/nn_quantizer.py
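As far as I understand the CMSIS-NN convention the quantizer follows, the shifts align the Q-formats inside a layer: the accumulator of input * weight has input_dec_bits + weight_dec_bits fractional bits, so the bias gets left-shifted up to that format before the add, and the result gets right-shifted down to the output format. A sketch (function and parameter names are mine, not from the script):

```python
def layer_shifts(input_dec_bits, weight_dec_bits, bias_dec_bits, output_dec_bits):
    """Derive per-layer shifts for a CONV/IP layer from the dec_bits values.

    The int32 accumulator of input * weight carries
    input_dec_bits + weight_dec_bits fractional bits, so:
      l_shift lifts the bias into the accumulator's Q-format,
      r_shift drops the accumulator back to the output's Q-format.
    """
    acc_dec_bits = input_dec_bits + weight_dec_bits
    l_shift = acc_dec_bits - bias_dec_bits    # bias left shift
    r_shift = acc_dec_bits - output_dec_bits  # output right shift
    return l_shift, r_shift

# e.g. input Q4.3, weights Q0.7, bias Q2.5, output Q4.3 (hypothetical values)
print(layer_shifts(3, 7, 5, 3))  # (5, 7)
```

This is also why a single weights_dec_bits/bias_dec_bits pair isn't enough: the shifts depend on the input and output dec_bits of each layer, which the quantizer determines from the activation ranges.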

I will study this code again; in fact, it is really difficult to understand.

I also read the paper:
CMSIS-NN: Efficient Neural Network Kernels for Arm Cortex-M CPUs
https://arxiv.org/abs/1801.06601

But it seems that I couldn't find the answer there.

See issues here:

Thanks. :smiley: