# Image Statistics

Hi,

I have a question regarding the image.get_statistics function and the mean value it returns. I'm currently using the H7 with a FLIR Lepton 3.5 connected to it. I'm trying to assess how accurate the thermal camera is. To do this I have set the camera's temperature range to its maximum (which I believe is -10 to 140 degrees Celsius), and I have also set the greyscale min value to 0 and the greyscale max value to 255. I'm doing this because I want the camera to consider the entire image to be a hot spot and thus return a temperature value for the whole image.

I have sent the raw thermal images, without any bounding box etc., back to my host device, where I used NumPy to calculate the average greyscale value for each image and converted that back to a temperature. I have found that the temperature I calculate on the host is consistently about 3 degrees Celsius below the temperature reported by the camera.

Any ideas why this is the case? I am doing essentially the same thing in both cases: getting the average greyscale value for the entire image and converting it to a temperature using the map_g_to_temp(g) function you provide in some of the Lepton example code. I would have expected the results to match.
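For reference, my host-side calculation is essentially the following (a simplified sketch; the synthetic frame here stands in for a raw greyscale image received from the camera):

```python
import numpy as np

def map_g_to_temp(g):
    # Same scaling as the Lepton example code for a -10..140 C range.
    return ((g * 150) / 255.0) - 10

# Synthetic 8-bit frame standing in for a received raw thermal image.
frame = np.full((120, 160), 128, dtype=np.uint8)

mean_g = frame.astype(np.float64).mean()
print("%.2f C" % map_g_to_temp(mean_g))  # 128 -> 65.29 C
```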

I have included an image to illustrate my point. The green line at the bottom shows the difference between each data point on the blue and red lines.

Any help would be appreciated.

Hi, with a range of -10 to 140 Celsius the scaling function would be:

```python
def map_g_to_temp(g):
    return ((g * 150) / 255.0) - 10
```

That is about 0.588 C per greyscale count, so a 3 C offset is well beyond rounding error.

Stats are computed using a histogram:

```c
memset(out->LBins, 0, out->LBinCount * sizeof(uint32_t));

int pixel_count = roi->w * roi->h;
float mult = (out->LBinCount - 1) / ((float) (COLOR_GRAYSCALE_MAX - COLOR_GRAYSCALE_MIN));

if ((!thresholds) || (!list_size(thresholds))) {
    // Fast histogram code when no color thresholds list...
    for (int y = roi->y, yy = roi->y + roi->h; y < yy; y++) {
        uint8_t *row_ptr = IMAGE_COMPUTE_GRAYSCALE_PIXEL_ROW_PTR(ptr, y);
        for (int x = roi->x, xx = roi->x + roi->w; x < xx; x++) {
            int pixel = IMAGE_GET_GRAYSCALE_PIXEL_FAST(row_ptr, x);
            ((uint32_t *) out->LBins)[fast_floorf((pixel - COLOR_GRAYSCALE_MIN) * mult)]++;
        }
    }
} else {
    // Reset pixel count.
    pixel_count = 0;
    for (list_lnk_t *it = iterator_start_from_head(thresholds); it; it = iterator_next(it)) {
        color_thresholds_list_lnk_data_t lnk_data;
        iterator_get(thresholds, it, &lnk_data);

        for (int y = roi->y, yy = roi->y + roi->h; y < yy; y++) {
            uint8_t *row_ptr = IMAGE_COMPUTE_GRAYSCALE_PIXEL_ROW_PTR(ptr, y);
            for (int x = roi->x, xx = roi->x + roi->w; x < xx; x++) {
                int pixel = IMAGE_GET_GRAYSCALE_PIXEL_FAST(row_ptr, x);
                if (COLOR_THRESHOLD_GRAYSCALE(pixel, &lnk_data, invert)) {
                    ((uint32_t *) out->LBins)[fast_floorf((pixel - COLOR_GRAYSCALE_MIN) * mult)]++;
                    pixel_count++;
                }
            }
        }
    }
}

float pixels = IM_DIV(1, ((float) pixel_count));

for (int i = 0, j = out->LBinCount; i < j; i++) {
    out->LBins[i] = ((uint32_t *) out->LBins)[i] * pixels;
}
```

And then that histogram is used to compute the stats:

```c
float mult = (COLOR_GRAYSCALE_MAX - COLOR_GRAYSCALE_MIN) / ((float) (ptr->LBinCount - 1));

float avg = 0;
float stdev = 0;
float median_count = 0;
float mode_count = 0;
bool min_flag = false;

for (int i = 0, j = ptr->LBinCount; i < j; i++) {
    float value_f = (i * mult) + COLOR_GRAYSCALE_MIN;
    int value = fast_floorf(value_f);

    avg += value_f * ptr->LBins[i];
    stdev += value_f * value_f * ptr->LBins[i];

    if ((median_count < 0.25f) && (0.25f <= (median_count + ptr->LBins[i]))) {
        out->LLQ = value;
    }

    if ((median_count < 0.5f) && (0.5f <= (median_count + ptr->LBins[i]))) {
        out->LMedian = value;
    }

    if ((median_count < 0.75f) && (0.75f <= (median_count + ptr->LBins[i]))) {
        out->LUQ = value;
    }

    if (ptr->LBins[i] > mode_count) {
        mode_count = ptr->LBins[i];
        out->LMode = value;
    }

    if ((ptr->LBins[i] > 0.0f) && (!min_flag)) {
        min_flag = true;
        out->LMin = value;
    }

    if (ptr->LBins[i] > 0.0f) {
        out->LMax = value;
    }

    median_count += ptr->LBins[i];
}

out->LMean = fast_floorf(avg);
out->LSTDev = fast_floorf(fast_sqrtf(stdev - (avg * avg)));
break;
```

All I can think of is that the floor operations used above lose some precision. These methods were designed to be fast rather than extremely accurate.
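A quick host-side sketch (plain NumPy, not the firmware) bounds how much the flooring can matter for an 8-bit image with the default 256 bins: with mult equal to 1 the binning is exact, so the only loss is the final floor of the mean, which is under one greyscale count:

```python
import numpy as np

# Synthetic 8-bit frame; with 256 bins each greyscale value maps to its own bin.
frame = np.random.default_rng(0).integers(0, 256, size=(120, 160))

hist = np.bincount(frame.ravel(), minlength=256) / frame.size
hist_mean = sum(i * p for i, p in enumerate(hist))  # "avg" in the C code above
firmware_mean = np.floor(hist_mean)                 # out->LMean = fast_floorf(avg)

err_c = (frame.mean() - firmware_mean) * 150 / 255.0
print(err_c)  # stays below 0.59 C, so flooring alone cannot explain a 3 C offset
```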

Do you have the ability to edit the firmware? If so, you can tweak these methods to fix this. Otherwise, can you give me a target image in BMP format and tell me what the expected output of get_stats() should be? I can then tweak the method until that's correct.

Hi Nyamekye,

It may be a foolish question on my part, but where did you get the figure of 0.588 C?
Also, I will get you an image in BMP format as soon as I can.

Gar

See the top of my post and the mapping function: ((g * 150) / 255.0) - 10.
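Concretely, the slope of that mapping is just the temperature span divided by the number of 8-bit steps:

```python
# Temperature span of the camera's range divided by the 8-bit greyscale steps.
per_count = (140 - (-10)) / 255.0
print(round(per_count, 3))  # 0.588 C per greyscale count
```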

Hi,

Update: I appear to have found the cause of the difference. In the graph I included in my original post, the blue line represented images that had a bounding box drawn around the perimeter of the image, whereas the red line represented the temperature calculated on images with no bounding box.

This raises another question: if the camera finds a hotspot in the middle of the image, for example, and draws a bounding box around it, does the reported temperature of that bounding box include the white pixels used to outline the box, or is the temperature calculated only from the pixels inside it? Obviously, if it includes the pixels used to draw the box, the average temperature would be inflated.

You should calculate the temperature before drawing the bounding box on the image.

The bounding box is an ROI; when you draw it, you modify the source image. The example script draws the bounding box after getting the temperature, I think.

Hi,

I have looked at the lepton_get_object_temp.py file which shows the following code:

```python
for blob in img.find_blobs(threshold_list, pixels_threshold=200, area_threshold=200, merge=True):
    img.draw_rectangle(blob.rect())
    img.draw_cross(blob.cx(), blob.cy())
    stats = img.get_statistics(thresholds=threshold_list, roi=blob.rect())
    img.draw_string(blob.x(), blob.y() - 10, "%.2f C" % map_g_to_temp(stats.mean()), mono_space=False)
```

Am I correct in saying that blob.rect() represents the hot spot, without the bounding box, and that this is what is passed to the get_statistics() function?
I'm confused about whether, when I pass blob.rect() to get_statistics(), the result includes the bounding box drawn by the previous draw_rectangle() call.

Thanks,
Gar

You should move the get_statistics() call before the drawing on the image. The drawing is corrupting the image data.

There’s just one frame buffer. You are overthinking what is happening.
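If it helps, here is a quick host-side illustration (NumPy standing in for the frame buffer, not OpenMV code) of why drawing the rectangle before measuring inflates the mean: the white border pixels land inside the ROI and get averaged in.

```python
import numpy as np

frame = np.full((60, 80), 100, dtype=np.uint8)  # uniform synthetic hotspot
x, y, w, h = 10, 10, 40, 30                     # pretend blob.rect()

clean_mean = frame[y:y + h, x:x + w].mean()

# Simulate draw_rectangle(): paint a 1-pixel white border around the ROI.
frame[y, x:x + w] = 255
frame[y + h - 1, x:x + w] = 255
frame[y:y + h, x] = 255
frame[y:y + h, x + w - 1] = 255

drawn_mean = frame[y:y + h, x:x + w].mean()
print(clean_mean, drawn_mean)  # the border pixels pull the mean upward
```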

So am I correct in saying that the example code in the IDE, which draws before generating the stats, is incorrect, at least for my use case where I need the temperature? I understand that you feel I'm overthinking this, but from my point of view I need to know whether the data I have been gathering is incorrect.

Thanks,
Gar

Yes, that would be a bug in the example code. It should be switched around.
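For anyone following along, the corrected loop would look something like this (a sketch of the reordered lepton_get_object_temp.py logic, wrapped in a hypothetical process() helper so it can be exercised off-camera; on the camera, img and the blobs come from the OpenMV firmware):

```python
def map_g_to_temp(g):
    # Same scaling as the Lepton example code for a -10..140 C range.
    return ((g * 150) / 255.0) - 10

def process(img, blobs, threshold_list):
    for blob in blobs:
        # Measure first, while the ROI still holds unmodified sensor data...
        stats = img.get_statistics(thresholds=threshold_list, roi=blob.rect())
        # ...then draw the overlays, which write into the same frame buffer.
        img.draw_rectangle(blob.rect())
        img.draw_cross(blob.cx(), blob.cy())
        img.draw_string(blob.x(), blob.y() - 10,
                        "%.2f C" % map_g_to_temp(stats.mean()), mono_space=False)
```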

Ok thanks for your help, it’s much appreciated

Is it possible to have the image detection running on a video stream (to execute code when an image is detected) without drawing the boxes on the underlying video footage?

Yes. Just don’t execute the draw commands.

Perfect, thank you sir.