I need to calculate the x and y location of an object in real time. I know the camera's maximum coverage area and also the depth.
How do you plan to track the object? By what feature?
I am using an OpenMV H7 cam to find multiple blobs. The camera will be in front of a plant. If there is a fruit on the plant, the camera detects it, and that part is okay. I need to pick the fruit as we move the cam up, down, left, right, and in depth, and use a tool to pick the fruit.
Okay, use the cx() and cy() properties of the blobs returned by find_blobs(), which I noticed you are using in another forum post.
I am moving the camera along a slide to take video. The slide will move for some distance only. Can I take a picture with the camera positioned at the centre of the slide, and then draw a line on the picture by moving the camera from the slide's left end to its right end, to find how much range the camera can cover (in other words, to find the ROI)? I can detect when the camera reaches the left end and the right end through switches.
Thanks in advance
Yes, but the vision part of this is small… you just have to write code that accumulates the movement you see. I.e., compare the location in the past frame to the location in the next frame and accumulate the difference.
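The accumulation idea above can be sketched in plain Python (not OpenMV-specific; the `centroids` list stands in for the (cx, cy) values you would collect from find_blobs() each frame):

```python
# Accumulate apparent motion by comparing a tracked blob's centroid
# between consecutive frames and summing the per-frame differences.
def accumulate_motion(centroids):
    total_dx, total_dy = 0, 0
    prev = None
    for cx, cy in centroids:
        if prev is not None:
            total_dx += cx - prev[0]  # difference vs. the previous frame
            total_dy += cy - prev[1]
        prev = (cx, cy)
    return total_dx, total_dy

# A blob drifting right by 5 px per frame over three frames:
print(accumulate_motion([(10, 40), (15, 40), (20, 41)]))  # (10, 1)
```

On the camera you would feed this the centroid of the same blob from each new snapshot instead of a precomputed list.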
How can I draw the line? Or can I use dots to make a line on the picture I've taken?
Like in the image; I need to get the ROI.
Actually, I know the speed at which the camera is moving (the speed of the x-slide).
I can also get the FPS.
The draw_line() method?
I don’t understand what you are asking.
We have an x-slide of length 25 cm. The camera will move from the left end to the right end, that is, from 0 to 25 cm, facing a wall. The camera's coverage area on the wall increases with distance, right? I have to find the area that the camera can cover. When the camera is moved, the centroid of the camera's view moves from left to right. I don't know the distance to the wall, and I need to know the coverage area on the wall. I can get it like this: take a snapshot with the camera at the centre (12.5 cm), then move the camera to the left end while drawing a line on the snapshot, and stop the line when I know the left end is reached. Then immediately move the camera to the right end, drawing a line on the right side of the snapshot, and stop drawing when the right end is reached. Now, with the camera placed at the centre, I know the limits, so I can count the number of pixels along the x-axis, convert them to centimetres, and get the object's position in cm, right? I am using blob.cx to find the centroid of the blob. The coverage area varies with the distance from the wall. Using this, we can find the x-coordinate of the object's position in cm too.
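The pixel-to-centimetre conversion described above can be sketched in plain Python. This assumes the two end switches let you record the x-pixel positions corresponding to 0 cm and 25 cm of slide travel (the function name and values are illustrative, not from the original post):

```python
# Once you know which pixel columns correspond to the two ends of the
# 25 cm slide travel, you have a cm-per-pixel scale and can convert a
# blob centroid (blob.cx()) into centimetres along the slide axis.
SLIDE_LENGTH_CM = 25.0

def px_to_cm(blob_cx, left_px, right_px):
    # left_px / right_px: x-pixel positions recorded at the end switches
    scale = SLIDE_LENGTH_CM / (right_px - left_px)  # cm per pixel
    return (blob_cx - left_px) * scale

# Example: end switches seen at x=20 and x=300; a blob at x=160 sits
# exactly mid-travel, i.e. at 12.5 cm.
print(px_to_cm(160, 20, 300))  # 12.5
```

Note the scale is only valid for one camera-to-wall distance; as the post says, the coverage area (and therefore the cm-per-pixel scale) changes with depth, so the calibration would need redoing if the depth changes.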
Okay, I think I see what you are trying to do.
Um, so, basically, you want the pixels you've previously drawn in the last frame to appear in the next frame. So, do this (pseudo code):
    points = []
    while True:
        img = sensor.snapshot()
        for b in img.find_blobs(...):
            points.append((b.cx(), b.cy()))
        # Keep the last img.width() points - this may require too much RAM,
        # so use a smaller value if it's too much.
        points = points[-img.width():]
        for p in points:
            img.set_pixel(p[0], p[1])
The list of points grows until it hits the image width, and then only that many of the most recent points are kept. I don't know if you want to store that many values in the heap, so if you get memory allocation failures, lower the max point-list size. If you need to reset anything, just clear the point list again.
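As an aside, the bounded-list pattern above can also be expressed with collections.deque and a maxlen, which drops the oldest entries automatically instead of re-slicing every frame (this is standard CPython; MicroPython's deque has a slightly different constructor and more limited API, so check the firmware docs before using it on the camera):

```python
from collections import deque

# Cap of 4 chosen just for the demo; on the camera you would size this
# like the point list above (e.g. the image width, or smaller).
points = deque(maxlen=4)
for x in range(6):          # pretend these are (cx, cy) pairs
    points.append((x, 0))
print(list(points))  # [(2, 0), (3, 0), (4, 0), (5, 0)]
```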
… The above code is not the complete solution, but it should give you the idea of what you want to do. Note that the above question was answered with general Python coding knowledge. All I did was keep a limited-size list of point objects collected from the blobs found every frame.