Hello,
I have created my own TFLite model for audio classification, and I am trying to load and run it with OpenMV's micro_speech_2.py example. I am using the Nicla Vision for this. However, I am running into a problem: the output is always index 2, no matter what the audio is. I have double-checked to make sure the labels are correct, so I am not sure what the issue is. I am also wondering whether the micro_speech library only supports models that classify "yes" or "no"; maybe that is part of the problem? My model is trained to classify four classes: clap, no clap, silence, and saying "hello". A simplified version of my script is below. Any tips would be much appreciated!
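This is roughly what I am running, adapted from the micro_speech_2.py example. The model path and labels list are mine, and I am writing the keyword arguments from memory, so they may differ slightly from the shipped example:

```python
# Adapted from OpenMV's micro_speech_2.py example -- only the model
# path and the labels list are mine.
import audio
import time
import tf
import micro_speech

labels = ["clap", "no clap", "silence", "hello"]  # my four classes

model = tf.load("/model.tflite")      # my custom audio model
speech = micro_speech.MicroSpeech()

# 16 kHz mono audio, same settings as the example script.
audio.init(channels=1, frequency=16000, gain_db=24, highpass=0.9883)
audio.start_streaming(speech.audio_callback)

while True:
    # listen() returns the index of the highest-scoring label.
    idx = speech.listen(model, timeout=5000, threshold=0.70)
    print(labels[idx])  # always prints the label at index 2, whatever the input
```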
I am also working on an IMU model for the Nicla Vision, and I am wondering if there is any way to load a TFLite model in OpenMV to classify different gestures. I tried using tf.lite.Interpreter, but I keep getting "module 'tf' has no attribute 'lite'", so I am not sure how to proceed (minimal repro below). Thank you in advance!
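Here is the minimal version of what I tried, in case I am simply calling the wrong API (the model filename is just a placeholder):

```python
import tf

# This is the desktop TensorFlow Lite API, which apparently does not
# exist in OpenMV's tf module -- this line raises:
#   AttributeError: 'module' object has no attribute 'lite'
interpreter = tf.lite.Interpreter(model_path="gesture_model.tflite")
```

I suspect OpenMV's `tf` module is not the same as the desktop `tensorflow` package, but I have not found the equivalent way to feed it raw IMU samples instead of an image.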