Normalization of Floats in TFLite

I am just getting my feet wet with ML, so I may be missing some obvious things. It took me a while to figure out that OpenMV handles normalization when a model's input type is float32. If I am reading this right ( https://github.com/openmv/openmv/blob/master/src/omv/py/py_tf.c#L263 ), it looks like float inputs get converted from 0-255 to 0-1.

Everything works great when I normalize my training data the same way.
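
For my own sanity, here is a tiny Python sketch of what I understand the firmware to be doing (the function name is mine), which is also how I ended up normalizing my training data:

```python
import numpy as np

# My understanding of what py_tf.c does for float32 model inputs:
# each uint8 pixel (0-255) is divided by 255, so the model sees 0-1.
def normalize_like_openmv(img_uint8):
    return img_uint8.astype(np.float32) / 255.0

# train_images = normalize_like_openmv(train_images)
```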

I noticed that the default preprocessing in Keras for MobileNet normalizes from 0-255 to -1 to 1.
Here is the preprocessing function: tf.keras.applications.mobilenet_v2.preprocess_input  |  TensorFlow v2.9.1
And the code that eventually gets called:
tensorflow/imagenet_utils.py at dc08ad80c8e65ac3e245035213a5cef861206aa8 · tensorflow/tensorflow · GitHub
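
In other words, as far as I can tell preprocess_input in "tf" mode is just x / 127.5 - 1, which maps 0-255 onto -1 to 1:

```python
import numpy as np
import tensorflow as tf

x = np.array([0.0, 127.5, 255.0], dtype=np.float32)

# Keras MobileNetV2 preprocessing scales 0-255 down to -1..1
print(tf.keras.applications.mobilenet_v2.preprocess_input(x.copy()))  # [-1.  0.  1.]

# which matches the simple formula in imagenet_utils.py:
print(x / 127.5 - 1.0)  # [-1.  0.  1.]
```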

Does this mean that models trained on data that was not normalized to 0-1 will not work?

I was following the Keras transfer learning tutorial, Transfer learning and fine-tuning  |  TensorFlow Core, which has you bake the preprocessing right into the model itself. I ended up double-normalizing the data during inference, which totally confused me. A simplified sketch of what I built is below.
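
Roughly this (image size from the tutorial, data augmentation left out):

```python
import tensorflow as tf

preprocess = tf.keras.applications.mobilenet_v2.preprocess_input
base = tf.keras.applications.MobileNetV2(input_shape=(160, 160, 3),
                                         include_top=False,
                                         weights='imagenet')
base.trainable = False

inputs = tf.keras.Input(shape=(160, 160, 3))
x = preprocess(inputs)          # 0-255 -> -1..1 happens *inside* the model
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)

# My mistake: I was also scaling the images myself before inference,
# so they ended up normalized twice.
```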

You should use Edge Impulse for deep learning training: Edge Impulse

They let you write Keras code but take care of the normalization process for you.

-1 to 1 is the float range… but values of 0-255 are only normalized to 0 to 1. You'd need to pass a negative value in to get the negative half.
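
If your network was trained with the -1 to 1 MobileNet-style preprocessing, one workaround (untested sketch, input size is just a placeholder) would be to bake the shift into the model itself, so the 0-1 floats the firmware produces get remapped on-device:

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(96, 96, 3))   # placeholder input size
# Rescaling maps the firmware's 0-1 floats to -1..1: y = x * 2 - 1
x = tf.keras.layers.Rescaling(scale=2.0, offset=-1.0)(inputs)
# ... rest of the network goes here ...
```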