Photography as a Technology: Evolution and Concepts
To appreciate any art, digital or otherwise, it is essential to look at its roots and see where the concepts evolved from. This helps the learner align his thinking with the evolution of the technology and its current tools. Photography is no exception, so I would like to shed some light on its origins to give readers a better insight into it. The word "photo" comes from the ancient Greek "phōs" (genitive "phōtos"), meaning light, and "-graphy" from "graphein", meaning to draw or write. Photography therefore literally means "drawing with light": the use of light and optics to produce images. Now, images were made very differently in former times and were a form of art in their own right.
The art of making pictures evolved from cave paintings to canvas paintings to paper portraits. The era of canvas and paper was long-lasting and is still highly appreciated, but people wanted greater accuracy and more detailed versions closer to reality. Now, the most important thing common to all these pictorial arts (painting, drawing and, eventually, photography) is the recognition of light and colour. These are the foundation stones of depicting anything: to depict, you need to recognise variations in shape, colour, size, distance and depth.
To gain a basic knowledge of photography, or of any pictorial art, it is vital to know how the human visual system recognises a scene. This is important because the human eye and a camera work in broadly similar ways, but the dissimilarities and limitations of a camera sensor can only be understood once you know how the eye sees. Before getting into the technicalities, it is essential to know that the primary requirement for vision is the presence of light. We can see because there is light, and camera sensors capture light. Our colloquial habit makes us say we "see an image", but in truth we see light, and our brain constructs the image.
The Human Eye and the Camera Sensor:
So let us have a brief look at the eye and the camera. Vision in humans begins at the eye and is completed in the visual cortex, the part of the brain at the back of the head that processes the incoming signals, much as a camera's image processor does. The eyelids are like lens caps, and the cornea together with the crystalline lens acts as the camera lens, focusing incoming light. The iris and pupil behave like the aperture: the pupil dilates or contracts when you move between near and distant subjects or when light conditions change sharply, regulating how much light enters, exactly as a camera aperture does. (Accommodation, the lens changing shape to focus at different distances, is the eye's equivalent of focusing rather than zooming.) The whole mechanism helps the eye bring in the right amount of light, which then falls on the retina at the back of the eyeball. The retina is the eye's equivalent of the camera sensor.
The retina is stimulated when incident light falls on an object and is reflected into your eyes. Photoreceptor cells spread across the concave inner surface of the eyeball convert the incoming light into electrical signals, which travel along the optic nerve to the brain, where the responses to the various light frequencies are combined into an image. There are two types of photoreceptor cells in the human eye, viz. rod-shaped and cone-shaped cells. The cone cells contain light-sensitive pigments (opsins, encoded by our genetic material), and there are three kinds of cones, most sensitive to red, green and blue light respectively; the brain combines their responses, and this is what enables us to see colours. Light has many wavelengths, but the ones we can see span the familiar rainbow, viz. red, orange, yellow, green, blue, indigo and violet, with every other colour we perceive being a mixture of these.
In a camera, on the other hand, light comes in through the lens and passes the aperture; in a DSLR a mirror reflects it up to the viewfinder until the shutter is released, at which point the mirror flips out of the way and the light reaches the sensor (or film). In earlier film cameras the light made an impression on film coated with light-sensitive silver halide crystals, which recorded the incident light to form an image. Digital sensors instead have predefined colour definitions: different light frequencies correspond to different colours, and each pixel is filled with a colour depending on the frequency of the light incident upon it.
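As a rough illustration of that last idea, the mapping from a dominant wavelength to a colour band can be sketched in a few lines of Python. This is only a sketch: real sensors use colour filter arrays and demosaicing rather than a lookup table, and the band boundaries below are approximate.

```python
def wavelength_to_colour(nm):
    """Classify a visible wavelength (in nanometres) into an approximate colour band."""
    bands = [
        (380, 450, "violet"),
        (450, 495, "blue"),
        (495, 570, "green"),
        (570, 590, "yellow"),
        (590, 620, "orange"),
        (620, 750, "red"),
    ]
    for lo, hi, name in bands:
        if lo <= nm < hi:
            return name
    return "outside visible spectrum"

print(wavelength_to_colour(530))  # green
print(wavelength_to_colour(600))  # orange
print(wavelength_to_colour(900))  # outside visible spectrum (infrared)
```

A real pixel, of course, records only an intensity behind its red, green or blue filter; the final colour is computed from neighbouring pixels.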
These facts play an important role in understanding how you see, and then how a camera sees, this world. The average human retina has about five million cone receptors. Since the cones are responsible for colour vision, you might suppose this equates to a five-megapixel equivalent for the human eye. But there are also roughly a hundred million rods that detect monochrome contrast, which plays an important role in the sharpness of the image you see. And even these 105 "megapixels" are an underestimate, because the eye is not a still camera: it constantly scans the scene while the brain assembles a far richer picture. So always remember that your eyes are far superior to any camera sensor, and the skill is to bring what you see into a limited methodical tool, i.e. the camera. At the time of writing, around 60 megapixels is the most any camera offers, and even a sensor at that level is a specialist one, such as a telescope sensor used to process space images, which are relatively still.
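The back-of-the-envelope arithmetic behind that comparison can be checked directly. The receptor counts are the approximate figures quoted above, and the 9500 x 6330 pixel grid is simply a hypothetical layout for a 60 MP sensor, not a real camera specification.

```python
# Rough comparison of the eye's receptor count with a camera sensor.
cones = 5_000_000       # colour receptors in an average human retina (approx.)
rods = 100_000_000      # monochrome contrast receptors (approx.)

eye_megapixels = (cones + rods) / 1_000_000
print(eye_megapixels)   # 105.0

# A hypothetical ~60 MP sensor laid out as 9500 x 6330 pixels
camera_megapixels = 9500 * 6330 / 1_000_000
print(round(camera_megapixels, 1))   # 60.1

# Even by this crude count the eye comes out well ahead
print(eye_megapixels > camera_megapixels)   # True
```

Remember this count ignores the eye's scanning motion, so the real gap is larger still.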
So see the world with open eyes and let the camera provide the innovative vision. In the next article I shall focus more on the technical aspects of the digital canvas which the world calls "the camera"!
Sincere Regards,
Rohan.