Photography is poised on the edge of another major transition, one whose impact could be even greater than the arrival of digital cameras. When we look back 25 years from now, the real digital revolution in photography won’t have been the digital camera itself; it will have been the shift to computational photography.
Current digital cameras work much the same way as film cameras. They do have processors, and some image processing happens in the camera, but compared with what today’s computers can do, in-camera processing is relatively minimal.
Computational photography combines innovative uses of image sensors with complex software to create images that no conventional camera can readily capture.
For example: imagine that after taking a shot you can choose your viewpoint, refocus the image, choose the depth of field you want, extend the image’s dynamic range, change the instant of capture, and more. Sure, all of this manipulation can be done on your desktop or laptop. But eventually you’ll be able to do it all in your camera, while you’re still in the field.
Some of this is here now. Software for High Dynamic Range Imaging (HDRI) is widely available and incorporated into Photoshop. Refocus and depth-of-field software is out there too, but few photographers are using it yet.
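To give a sense of how accessible this already is, here is a minimal sketch of merging a bracketed exposure set on the desktop, using OpenCV’s exposure-fusion (Mertens) routine. The file names are hypothetical placeholders for any aligned bracket.

```python
# A minimal sketch of desktop HDR-style merging with OpenCV's exposure fusion.
# The file names are hypothetical; any aligned set of bracketed exposures works.
import cv2
import numpy as np

# Three hypothetical bracketed exposures of the same scene (under, normal, over).
files = ["scene_under.jpg", "scene_normal.jpg", "scene_over.jpg"]
images = [cv2.imread(f) for f in files]

# Fuse the bracket into one well-exposed image; Mertens fusion needs no exposure times.
fused = cv2.createMergeMertens().process(images)   # float image, roughly 0..1

# Convert back to 8-bit and save.
cv2.imwrite("scene_fused.jpg", np.clip(fused * 255, 0, 255).astype("uint8"))
```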
Some of these possibilities (the ones available now) work with existing camera designs; other functions will require new designs and developments. Cameras built for computational photography may need custom sensors that capture at very high speeds and frame rates, multiple sensors and lens systems, or camera-controlled focus racking during exposure. Some designs may integrate GPS positioning and orientation tracking, or multi-spectral sensors that capture everything from UV to IR in a single pass.
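Camera-controlled focus racking hints at focus stacking, where frames focused at different distances are combined by keeping, at each pixel, whichever source is locally sharpest. A rough sketch of that idea, again with hypothetical file names and assuming the frames are already aligned:

```python
# A rough focus-stacking sketch: for each pixel, keep the source frame that is
# locally sharpest (largest Laplacian response). File names are hypothetical,
# and the frames are assumed to be already aligned.
import cv2
import numpy as np

files = ["focus_near.jpg", "focus_mid.jpg", "focus_far.jpg"]
frames = [cv2.imread(f) for f in files]

# Measure local sharpness of each frame via the absolute Laplacian of its luminance.
def sharpness(img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    lap = cv2.Laplacian(gray, cv2.CV_64F, ksize=5)
    return cv2.GaussianBlur(np.abs(lap), (9, 9), 0)   # smooth to avoid noisy selection

scores = np.stack([sharpness(f) for f in frames])      # shape: (n, h, w)
best = np.argmax(scores, axis=0)                       # index of sharpest frame per pixel

# Assemble the result by picking each pixel from its sharpest source frame.
stack = np.stack(frames)                               # shape: (n, h, w, 3)
h, w = best.shape
result = stack[best, np.arange(h)[:, None], np.arange(w)[None, :]]
cv2.imwrite("focus_stacked.jpg", result)
```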
Imagine the scenario: you go out to a beautiful location, perhaps a treed landscape with rock outcroppings. You set up your camera gear and press the shutter three times from three different positions along a rough lateral line. You move to another part of the scene and do the same, and perhaps a third time. Each shutter press triggers a burst of frames at differing exposures and focus distances, which the camera combines into an HDRI, multi-focal image format with full GPS information recorded in the file. When you return home, your camera wirelessly uploads the files to your computer, and this triggers an automated script that runs a 3D analysis of the scene, building an extremely high-resolution 3D model.
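That “automated script” is already a plausible thing to write. The sketch below assumes the uploaded images land in a watched directory and then hands them to COLMAP’s automatic structure-from-motion pipeline; the paths are hypothetical, and COLMAP is only one example of freely available reconstruction software.

```python
# A plausible skeleton for the "camera uploads, script reconstructs" step:
# wait for a new batch of images, then hand them to an off-the-shelf
# structure-from-motion tool (COLMAP here, as one example). Paths are hypothetical.
import subprocess
import time
from pathlib import Path

UPLOAD_DIR = Path("~/uploads/latest_shoot").expanduser()        # hypothetical upload target
WORKSPACE = Path("~/reconstructions/latest_shoot").expanduser() # hypothetical output folder

def batch_is_complete(folder: Path, settle_seconds: int = 30) -> bool:
    """Treat the batch as complete once no new files have arrived recently."""
    files = list(folder.glob("*.jpg"))
    if not files:
        return False
    newest = max(f.stat().st_mtime for f in files)
    return (time.time() - newest) > settle_seconds

# Poll until the wireless upload has finished, then run the reconstruction.
while not batch_is_complete(UPLOAD_DIR):
    time.sleep(10)

WORKSPACE.mkdir(parents=True, exist_ok=True)
subprocess.run(
    ["colmap", "automatic_reconstructor",
     "--workspace_path", str(WORKSPACE),
     "--image_path", str(UPLOAD_DIR)],
    check=True,
)
print("3D reconstruction finished:", WORKSPACE)
```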
By the time you’ve made a coffee and gone to the toilet, this is done. You sit down at your computer and are presented with a 3D scene (this might be real 3D, considering the new displays in the pipeline). Using a touch panel, you move through the scene, turning around and looking up and down as you wish. At various points you press a virtual button and a ‘frame’ is saved.
Later, at your leisure, you recover these saved ‘frames’ and work on each one. You will be able to choose the exposure, depth of field and focal position that work best for you. You may even open other ‘frames’ from different locations, select objects, and drag and drop them into your scene, blending them into place and letting the software automatically rotate them to match the lighting. You then select your output form, which could be a 3D scene, a video walk-through, or an image to print. You choose your resolution and off you go.
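Each saved ‘frame’ in that workflow amounts to little more than a render request: a viewpoint in the 3D model plus the photographic choices made after the fact. One hypothetical way to represent it:

```python
# One hypothetical way to represent a saved "frame": a render request that
# records where the virtual camera was and the photographic choices made
# afterwards. All field names here are illustrative, not from any real product.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class VirtualFrame:
    position: Tuple[float, float, float]     # camera location in the 3D scene (metres)
    orientation: Tuple[float, float, float]  # pan, tilt, roll in degrees
    focal_length_mm: float                   # chosen after capture
    focus_distance_m: float                  # refocus point
    aperture_f: float                        # controls simulated depth of field
    exposure_ev: float                       # exposure offset applied in rendering
    output_width_px: int
    output_height_px: int

# Example: a wide view refocused on a nearby rock outcrop, rendered at print resolution.
frame = VirtualFrame(
    position=(12.0, 1.6, -3.5),
    orientation=(40.0, -5.0, 0.0),
    focal_length_mm=24.0,
    focus_distance_m=4.2,
    aperture_f=8.0,
    exposure_ev=0.3,
    output_width_px=7200,
    output_height_px=4800,
)
```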
The futuristic scenario I’ve just described isn’t necessarily that far away. Many of the software pieces have been worked out, as has some of the hardware. The rest will come, probably in less time than has passed since digital cameras first became readily available.