Need explanation of nn.c

Hello, here I am again; thanks for your patience over the last several posts.
My NN model still doesn't work, so I dug into openmv/src/omv/nn/nn.c to figure out how it works.
In the nn_transform_input method,

for (int y = 0, i = 0; y < data_layer->h; y++) {
    int sy = (y * y_ratio) >> 16;
    for (int x = 0; x < data_layer->w; x++, i++) {
        int sx = (x * x_ratio) >> 16;
        int p = (int) IMAGE_GET_GRAYSCALE_PIXEL(img, sx + roi->x, sy + roi->y);
        input_data[i] = (q7_t) __SSAT((((p - (int) data_layer->r_mean) << 7) + (1 << (input_scale - 1))) >> input_scale, 8);
    }
}

Can you give an example of

input_data[i] = (q7_t)__SSAT((((p - (int) data_layer->r_mean)<<7) + (1<<(input_scale-1))) >> input_scale, 8);

?
Is this a normalization?
Thanks!

Yes, it removes the mean from each channel and scales the input into the q7 range in one step.

What's the range of p? Is it in pixel units, like 0-255?
If so, input_data = (((p - mean) << 7) + (1 << (input_scale - 1))) >> input_scale equals (((242 - 90) << 7) + (1 << 8)) >> 9 when p = 242, mean = 90, and input_scale = 9.
The term involving p is supposed to be the variance term, and the 1 << (input_scale - 1) term is the bias term (I guess). But the former looks much larger than the latter, which leaves the bias term with almost no effect, right? This is why I asked you for an example.

Yes, the input range is 0-255, as mentioned before; you can always print the input/output values, build the code, and see them for yourself. BTW, the scaling part comes from the ARM ML examples; you may find more help there: