Hi, good day. I am trying to run inference using a pretrained TensorFlow Lite model. When I load the data from a text or JSON file and save it as an array, the code executes completely and I get a result. However, when I send the same data via UART to the OpenMV and store it as an array, the OpenMV disconnects automatically while extracting features from the data, before the code finishes executing. Any idea how to fix this issue?
Not sure what you are trying to do. It would be helpful to post a reduced code snippet that shows the error.
That’s the thing: it doesn’t give any error, it just disconnects while executing the code and reconnects almost immediately afterwards. The first image is the result when it runs completely. The second image is a snapshot of how I load the data into the OpenMV. The third image is the result when it disconnects before completely running the code, and the fourth image is a snapshot of the code that loads the data. The only difference between the two scripts is how the data is loaded (one over UART, the other from a text file), but the data loads successfully in both cases, so I’m not sure what the problem is. I don’t know if this gives better context to the problem.

Hi, you can just post the code in the forums. Use the code brackets option.
Anyway… to debug this you need to give me a minimal example that causes the issue. So, if you could, please keep removing things from the code until you find the minimal case that breaks. I cannot debug a giant code dump. You basically need to narrow it down to a specific function that doesn’t work as advertised.
Finally, the numpy library is a third party library. If you google about issues with it you may find someone else hit the same bug.
Sorry, I don’t understand what you mean by posting in the forums and using the code bracket option.
There’s a button in the text box when replying that inserts code tags so that code is formatted correctly in a reply.
Thanks for the clarification. However, it’s not that a function doesn’t work as expected, because I can run these lines of code in a different script and that code executes completely. I wonder if it is a memory issue? Like the OpenMV runs out of memory? Anyway, in the current script the disconnection occurs while it is executing the function below, specifically in the line that makes the prediction (tf.regression); the disconnect happens before this line finishes executing.
def classification(features, classifier, peakX, peakY, peakZ):
    fs = 100  # sampling rate
    dt = 1/fs
    t_ADC = numpy.linspace(0, (adc.shape)[1]*dt - dt, num=(adc.shape)[1])  # time
    features = features.tolist()
    new_features = array.array("f", features)
    preds = tf.regression(classifier, new_features)
    #print(preds)
    preds = preds[0]
    #print(preds)
    peak_res = numpy.max([peakX, peakY, peakZ])
    #print("Peak resultant acc: ", peak_res)
    if (t_ADC[-1]) > 30.0:  # Classify as Freight Train if duration more than 30 sec
        cresult = 'Freight Train'
    elif preds >= 0.5:  # sigmoid activation fcn (greater than 0.5 = impact, otherwise = train)
        if peak_res > 100:
            cresult = 'Impact'
            confidence_level = preds*100
            print("confidence level: ", confidence_level)
            # img = sensor.snapshot()
        else:
            cresult = 'Other'
    else:
        cresult = 'Train/other'
    print("Result: ", cresult)
Are you using a supported tf op? The tf library doesn’t… handle unsupported ops with errors. It just dereferences a null pointer and crashes the system.
I enabled all the ops I could in what’s in main… but, it’s not in our firmware yet:
If you are using an unsupported op then you get a crash.
So this is supposed to be the op for making predictions on 1D arrays; I remember someone added it earlier this year… is it no longer supported? That would be weird because, as I mentioned, it executes in another script I have.
I think I fixed it: tf.regression was upgraded to take a numpy array as input, so I think passing it the old array.array is where the error (crash) came from.
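For anyone hitting the same thing, a minimal sketch of the conversion fix. This is hypothetical placeholder data, and it runs here with desktop numpy as a stand-in for the board's ulab numpy module (they behave the same for this conversion); the tf.regression call itself is commented out because it only exists on the OpenMV board.

```python
import array
import numpy  # ulab's numpy-compatible module on the OpenMV board

features = [0.12, 0.34, 0.56, 0.78]  # placeholder feature vector

# Old approach: a float array.array, which the upgraded tf.regression
# apparently no longer accepts (leading to the crash).
old_features = array.array("f", features)

# New approach: build an ndarray instead, matching the upgraded API.
new_features = numpy.array(features, dtype=numpy.float32)

# preds = tf.regression(classifier, new_features)  # board-only call
print(type(new_features).__name__, len(new_features))
```

The point is only that the input type changed from `array.array` to an ndarray; the rest of the classification function stays the same.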