Let's begin this project study with a simple, fundamental approach: exploring computer vision via color. What is orange? No, no, no, not the fruit, but the color. How would one describe this hue to an infant, or to someone who has never perceived it? That is the fundamental theoretical challenge behind teaching a computer... computer vision.

Shades of Orange
Alright, let's define orange first. From art class, you recall that colors can be built from a combination of three primary channels: Red, Green, and Blue (RGB). Going down that path, let's take what we perceive to be true orange, (255, 165, 0), treat it as a mean, and go three standard deviations away from it to capture the full 99.7% range of "orange" via the Empirical Rule. However, look at the image to the left; notice all the different shades of orange. In particular, take the example in the top left corner. Would you say that's orange or brown? Notice, critically, that even among us humans who have seen orange(s), there are discrepancies. This highlights another obstacle: as you create more and more ranges around different color means, there will be shades that live in two, three, or even four color "worlds" (ranges). So how would you help the computer decide which color realm a shade belongs to? Maybe this isn't the best and most ideal path to teaching computer vision. Perhaps a better approach would be teaching the computer where objects are relative to it and to other objects.
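To make that idea concrete, here is a minimal sketch of the 3-standard-deviation approach. The per-channel sigma of 25 is an assumed, illustrative value, not something measured in this project.

```python
import numpy as np

# Hypothetical "orange" classifier: treat true orange as the per-channel mean
# and accept anything within +/- 3 standard deviations. The sigma of 25 is an
# assumed value chosen purely for illustration.
ORANGE_MEAN = np.array([255, 165, 0], dtype=np.int16)   # (R, G, B)
SIGMA = np.array([25, 25, 25], dtype=np.int16)

def is_orange(pixel_rgb):
    """Return True if an (R, G, B) pixel falls inside the 3-sigma 'orange' box."""
    pixel = np.asarray(pixel_rgb, dtype=np.int16)
    lower = np.clip(ORANGE_MEAN - 3 * SIGMA, 0, 255)
    upper = np.clip(ORANGE_MEAN + 3 * SIGMA, 0, 255)
    return bool(np.all((pixel >= lower) & (pixel <= upper)))

print(is_orange((250, 160, 10)))   # clearly orange -> True
print(is_orange((150, 90, 40)))    # the ambiguous brownish shade -> False
```

Notice how brittle this is: the brownish shade from the top-left corner of the image falls just outside the box, and widening the box only makes it overlap with other color "worlds."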
Edge Detection: The Shift to Structure
So, since color is a headache, we switch to a much smarter strategy: Edge Detection. The whole theory is a lot like how a blind person "sees." Think about closing your eyes and running your finger along a table. You don't see the edge; you feel the sudden, rapid shift where the table ends and empty space begins. That's what we're teaching the computer! We hunt for that sharp change, which we call the gradient (just a fancy word for the rate of change in brightness), between one pixel and the next. But first we need to convert the footage to grayscale.
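(We'll get to grayscale in a moment.) For now, here is a tiny sketch of that "feel the edge" idea on a single, made-up row of already-grayscale intensities; the numbers are invented just to show where the gradient spikes.

```python
import numpy as np

# One row of grayscale pixels: a bright table surface that suddenly drops off
# into dark empty space. The values are invented for illustration.
row = np.array([200, 201, 199, 198, 40, 38, 39], dtype=np.int16)

# The "gradient" here is simply the difference between neighbouring pixels;
# a large magnitude marks the sudden shift, i.e. an edge.
gradient = np.diff(row)
print(gradient)                     # [   1   -2   -1 -158   -2    1]
print(np.argmax(np.abs(gradient)))  # 3 -> the edge sits between pixels 3 and 4
```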
Why Grayscale?

The requirement to convert to grayscale isn't just some dusty rule; it's vital for efficiency and for making sure the algorithm works properly. Think about it: when the computer looks at an RGB image, it sees three separate, high-maintenance layers of data: Red, Green, and Blue. If our edge detection algorithm had to calculate the gradient across all three layers simultaneously and then try to mash those three results together, the entire process would be slow, riddled with noise, and computationally complicated. Grayscaling is the key! It slashes that complexity by merging all three color channels into a single, clean channel that represents only intensity (luminosity). This single, simplified data stream lets the edge-finding algorithm calculate the brightness change much faster and far more reliably.
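Here is a rough sketch of that grayscale-then-gradient pipeline using OpenCV; the file name and Canny thresholds are placeholders, not values from this project.

```python
import cv2

# Load a frame (path is a placeholder), collapse it to one intensity channel,
# then measure the brightness gradient in x and y with Sobel filters.
frame = cv2.imread("frame.png")                      # 3-channel BGR image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)       # single luminosity channel

grad_x = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)  # horizontal rate of change
grad_y = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)  # vertical rate of change
edges = cv2.magnitude(grad_x, grad_y)                # combined gradient strength

# Canny is the classic end-to-end edge detector built on this same gradient idea.
canny = cv2.Canny(gray, threshold1=100, threshold2=200)
```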
Adapting to the Real World
Up to this point, edge detection only tackles one issue: where objects are in relation to each other within a given frame. However, it is just as important to determine where the object(s) are in relation to the sensor, or in our case, an autonomous drone. The end goal of this project was not only to explore the ideas of autonomous navigation but to implement this code on a drone, with sensors, so it can self-navigate. To reach that goal it is critical for us to learn about parallax. Given the distance between two known "eyes" (cameras) and the pixel disparity between the left and right images, the onboard computer can triangulate the distance between an object and itself.
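A minimal sketch of that triangulation, assuming the focal length (in pixels) and the camera baseline are known from calibration; the numbers below are hypothetical, not from the drone used in this project.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulate the distance to a point seen by both cameras.

    focal_px     -- focal length in pixels (assumed known from calibration)
    baseline_m   -- distance between the two camera centres, in metres
    disparity_px -- horizontal pixel shift of the point between left and right images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: 700 px focal length, 10 cm baseline, 35 px disparity.
print(depth_from_disparity(700, 0.10, 35))   # -> 2.0 metres away
```

The closer an object is, the larger its disparity between the two images, which is exactly the parallax effect the drone relies on to judge distance.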
