Doubt with grayscale to bitmap

Hello, I am testing methods to convert from grayscale to binary and I have found two methods; this is my example code:
import sensor, time

sensor.reset()
sensor.set_framesize(sensor.QQQVGA)
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.skip_frames(time = 2000)
clock = time.clock()

while(True):

    clock.tick()
    img = sensor.snapshot()
    value = img.get_histogram().get_threshold().value()
    img.grayscale_to_binary(value)
    #img.binary([(0,value)], to_bitmap=True)
    print(value, sensor.get_fb())

If I uncomment img.binary the code works perfectly. With grayscale_to_binary I get this error: AttributeError: ‘Image’ object has no attribute ‘grayscale_to_binary’.
So I have two questions:
1. What is the difference between one method and the other, and why doesn't one of them work for me?
2. Why is my size 720 when the logical value would be (80*60)/8 = 600?

There's no grayscale_to_binary() method on the Image object in our API.

As for the size, it's 80*60/32: that should be 150 longs, or 600 bytes.

Sorry for the confusion. I meant: what is the difference between
image.grayscale_to_binary(value) and img.binary([(0,value)], to_bitmap=True)?
I have seen that image.grayscale_to_binary(value) does not transform the image to binary type, while img.binary does: with to_bitmap=True it converts it to bitmap type (Pictures 1 and 2).
This is my code:

import sensor, image, time

sensor.reset()
sensor.set_framesize(sensor.QQQVGA)
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_windowing(32,24)
sensor.skip_frames(time = 2000)
clock = time.clock()

while(True):
    for i in range(100):
        clock.tick()
        img = sensor.snapshot()
        value = img.get_histogram().get_threshold().value()
        #image.grayscale_to_binary(value)

        img.binary([(0,value)], to_bitmap=True)  #, invert=1

        print(value, sensor.get_fb())
        print(img.bytearray())
        sensor.flush()

In the output of sensor.get_fb() we can see that with image.grayscale_to_binary(value) the image is still grayscale type, and in the image shown by the IDE I do not see binary either, I still see grayscale (Pictures 1 and 2).

Regarding the dimensions, I have used windowing to make them divisible by 8.
For example, if I use sensor.set_windowing(32,24), the size value in the fb is (32*24)/8 = 96 (Picture 3). But if I set it to (16,12), the size value the fb returns is 48; following the previous logic it should be 24 (Picture 4). I wanted to know the limits of the bitmap conversion so I can handle it better, because I think this is the problem.
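To check the numbers, here is a small sketch under the assumption that each bitmap row is padded out to a whole number of 32-bit words (an assumption about the storage layout, not something confirmed in this thread). With that rule, all three sizes I observed would follow:

```python
# Sketch: bitmap size if each row is padded to whole 32-bit words.
# This padding rule is an assumption used to check the observed sizes,
# not a confirmed detail of the firmware.

def bitmap_size_bytes(w, h):
    # bytes per row = number of 32-bit words needed for w bits, times 4
    words_per_row = (w + 31) // 32
    return words_per_row * 4 * h

print(bitmap_size_bytes(80, 60))  # 720 (the size I saw for QQQVGA)
print(bitmap_size_bytes(32, 24))  # 96
print(bitmap_size_bytes(16, 12))  # 48 (not 24: 16 bits still need one word)
```

If this is the real layout, it would also explain the original 720 instead of 600 for 80x60.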

Hi, grayscale_to_binary() is just a function to convert a grayscale scalar value to a binary scalar value: image — machine vision — MicroPython 1.19 documentation

It’s not part of the Image object. It’s part of the image module. So, it doesn’t actually care about the image object and doesn’t operate on it.

E.g. grayscale_to_binary() just does:

return 1 if x/255.0 > 0.5 else 0
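In other words, it operates on a single value, not on an image. A minimal stand-in (my own sketch, not the firmware source) behaves like this:

```python
# Stand-in for image.grayscale_to_binary(): maps one grayscale value
# (0-255) to a binary value (0 or 1). Sketch only, not firmware source.

def grayscale_to_binary(x):
    return 1 if x / 255.0 > 0.5 else 0

print(grayscale_to_binary(200))  # 1
print(grayscale_to_binary(50))   # 0
```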

The first part is solved, thank you very much. Let me give you a bit of context so you understand my problem.
What I really want from the program is the value of the pixels through the bytearray.
If I have this line: img.binary([(0,value)], invert=1)
and I print img.bytearray(),
all values are either 0 or FF, which is correct.
When I write img.binary([(0,value)], invert=1, to_bitmap=True),
the bytearray gives different values. How can I get the binarized information in bitmap format without these problems, and then try to reconstruct the image?
So there are two problems: why the size does not seem to be set correctly, and why the bytearray values are not just 1 or 0.

When you make it into a bitmap it's stored as 1 bit per pixel, packed into 32-bit longs. So you won't see one 0/1 value per byte anymore; instead each byte holds 8 pixels at once.
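So to read individual pixels back out of the packed bytearray, you have to extract the bits yourself. A rough sketch of the unpacking, assuming pixels are packed least-significant-bit first within each byte and each row is padded to a whole 32-bit word (both assumptions, so double-check against your own data):

```python
# Sketch: unpack a bitmap-format bytearray back into 0/1 pixel values.
# Assumptions (verify against your own data): pixels are packed LSB-first
# within each byte, and each row is padded out to a whole 32-bit word.

def unpack_bitmap(buf, w, h):
    bytes_per_row = ((w + 31) // 32) * 4  # row stride in bytes
    rows = []
    for y in range(h):
        row = []
        for x in range(w):
            byte = buf[y * bytes_per_row + (x // 8)]
            row.append((byte >> (x % 8)) & 1)
        rows.append(row)
    return rows

# Tiny example: an 8x1 image; the row stride is still 4 bytes
# because of the word padding. Byte 0b00000101 -> pixels 1,0,1,0,...
buf = bytearray([0b00000101, 0, 0, 0])
print(unpack_bitmap(buf, 8, 1))  # [[1, 0, 1, 0, 0, 0, 0, 0]]
```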

Use get_pixel() if you still want single-pixel access. However, generally, if you ever use get_pixel() or set_pixel() in your program you are doing something wrong, as you should really never be doing single-pixel access in Python.