Hello, I am testing methods to convert from grayscale to binary and I have found two of them. This is my example code:
import sensor, time

sensor.reset()
sensor.set_framesize(sensor.QQQVGA)
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.skip_frames(time=2000)
clock = time.clock()
If I uncomment img.binary, the code works perfectly. With grayscale_to_binary I get this error: AttributeError: 'Image' object has no attribute 'grayscale_to_binary'.
So I have two questions:
1. What is the difference between the two methods, and why does one of them not work for me?
2. Why is the size 720 when, logically, it should be (80*60)/8 = 600?
Sorry for the confusion. I meant: what is the difference between
image.grayscale_to_binary(value) and img.binary([(0, value)], to_bitmap=True)?
From what I have seen, image.grayscale_to_binary(value) does not transform the image to binary type, while img.binary with to_bitmap=True does convert it to bitmap type (Pictures 1 and 2).
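For anyone following along, my understanding of what a threshold tuple like (0, value) does can be sketched in plain Python. This is only an illustrative simulation, not the OpenMV implementation: pixels whose grayscale level falls inside the range become white (255), everything else becomes black (0), and invert flips that test. The function name here is made up for the sketch.

```python
def apply_threshold(pixels, lo, hi, invert=False):
    # Simulate thresholding one row of grayscale pixels:
    # inside [lo, hi] -> white (255), outside -> black (0).
    out = []
    for p in pixels:
        inside = lo <= p <= hi
        if invert:
            inside = not inside
        out.append(255 if inside else 0)
    return out

row = [10, 100, 200, 33]
print(apply_threshold(row, 0, 50))               # [255, 0, 0, 255]
print(apply_threshold(row, 0, 50, invert=True))  # [0, 255, 255, 0]
```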
This is my code:
while(True):
    for i in range(100):
        clock.tick()
        img = sensor.snapshot()
        value = img.get_histogram().get_threshold().value() #image.grayscale_to_binary(value)
In the output of sensor.get_fb we can see that with image.grayscale_to_binary(value) the image stays grayscale type, and the image shown in the IDE is also still grayscale, not binary (Pictures 1 and 2).
With respect to the sizes, I have used windowing to make the dimensions divisible by 8.
For example, if I use sensor.set_windowing(32, 24), the size value in the fb is (32*24)/8 = 96 (Picture 3). But if I set it to (16, 12), the size value that the fb returns is 48; following the previous logic it should be 24 (Picture 4). I wanted to know the limits of the bitmap conversion so I can handle it better, because I think this is the problem.
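One pattern that would reproduce all of these sizes is each bitmap row being padded up to a whole 32-bit word. I have not confirmed this in the firmware source, but the arithmetic matches every number above, including the 720 from the first post:

```python
def bitmap_size(w, h):
    # Assumed layout: 1 bit per pixel, each row rounded up to a
    # multiple of 32 bits (4 bytes). This is a guess that happens to
    # match the fb sizes reported in this thread.
    bytes_per_row = ((w + 31) // 32) * 4
    return bytes_per_row * h

print(bitmap_size(32, 24))  # 96, matches Picture 3
print(bitmap_size(16, 12))  # 48, not 24: a 16-pixel row still pads to 4 bytes
print(bitmap_size(80, 60))  # 720, not 600, for the same reason
```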
The first part is solved, thank you very much. Let me give you a bit of context so you understand my problem.
What I really want from the program is the pixel values via the bytearray.
If I have this line: img.binary([(0, value)], invert=1)
then when I print img.bytearray()
all the values are either 0 or FF, which is correct.
When I write img.binary([(0, value)], invert=1, to_bitmap=True)
the bytearray values come out different. How can I get the binarized information in bitmap format without these problems, and then try to reconstruct the image?
So there are two problems: why the size does not seem to be set correctly, and why the bytearray values are not just 1 or 0.
When you make it into a bitmap it’s stored as 1 bit per pixel in 32-bit longs. So, you won’t see 0 and 1 per byte anymore but instead 8 pixels at once.
Use get_pixel() if you still want single-pixel access. However, generally, if you ever use get_pixel() or set_pixel() in your program you are doing something wrong; you should never be doing single-pixel access in Python.
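To make the packing concrete, here is a plain-Python sketch of pulling one pixel out of a 1-bit-per-pixel buffer. The 32-bit row alignment and the LSB-first bit order within each byte are assumptions for illustration; on the camera, img.get_pixel(x, y) handles this for you.

```python
def get_bit(buf, w, x, y):
    # Assumed layout: rows padded to 4-byte (32-bit) boundaries,
    # least-significant bit of each byte = leftmost pixel of that group.
    bytes_per_row = ((w + 31) // 32) * 4
    byte_index = y * bytes_per_row + x // 8
    return (buf[byte_index] >> (x % 8)) & 1

# An 8x2 image packed as 4-byte rows:
# row 0 = 0b00000101 -> pixels 0 and 2 set; row 1 = 0b10000000 -> pixel 7 set.
buf = bytearray([0b00000101, 0, 0, 0,
                 0b10000000, 0, 0, 0])
print(get_bit(buf, 8, 0, 0))  # 1
print(get_bit(buf, 8, 1, 0))  # 0
print(get_bit(buf, 8, 7, 1))  # 1
```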