For those new to it, computational photography combines very smart software on your computer with your camera images, plus a shooting workflow adapted to what that software needs. The two most widely used areas of computational photography are panoramas and high dynamic range imaging, or HDRi. But it is not limited to those.
Because the software to do it is readily available, computational photography today encompasses the following areas:
· Panorama stitching and exposure balancing by blending multiple shots to cover the desired field of view;
· High dynamic range imaging by blending multiple images taken at different exposure settings;
· Increasing depth of field by blending shots taken at different focal points;
· Re-computing depth of field, creating a shallower depth of field and simulating out-of-focus lens effects in a single image; and
· Image noise reduction and control.
Other capabilities that I talked about in my previous post on computational photography, such as post-shot point of view choice, are still in the pipeline.
Panorama stitching has become so mainstream that many compact cameras either do it in camera or have modes to make it an easier process. Cameras that have exposure bracketing make shooting for HDRi easier, though many cameras still provide an inadequate bracket range.
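If you are curious how approachable stitching has become on the desktop side as well, here is a minimal sketch using OpenCV's bundled Stitcher class. The file names are hypothetical, and the frames are assumed to overlap generously:

```python
import cv2

# Hypothetical overlapping frames, shot left to right.
frames = [cv2.imread(f) for f in ("pano_1.jpg", "pano_2.jpg", "pano_3.jpg")]

# OpenCV's high-level Stitcher handles feature matching, warping, and
# exposure compensation -- the same jobs an in-camera panorama mode does.
stitcher = cv2.Stitcher_create()
status, pano = stitcher.stitch(frames)
if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", pano)
else:
    print("Stitching failed; check that the frames overlap enough.")
```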
Photoshop is part of most photographers’ lives these days, and what it does causes people to pay attention. Photoshop has supported HDRi for some years, though not as fully as other software; still, it has probably contributed to the rise in interest in HDRi. Now Photoshop CS4 has added the capability to stack images shot at different focal points to increase the depth of field. My testing shows that Photoshop does a great job on tasks such as combining two or three shots to gain greater depth of field in interiors, but falls down in extreme situations such as macro work, where many images are being combined. Just as with HDRi, then, Photoshop handles the common cases of increasing depth of field well, but you can probably push further with other software.
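For the curious, the core idea behind focus stacking is simple enough to sketch: for each pixel, keep the frame in which that neighbourhood is sharpest. The following is a rough Python illustration with OpenCV and NumPy, not what Photoshop CS4 actually does internally; it assumes pre-aligned, tripod-shot frames with hypothetical file names:

```python
import cv2
import numpy as np

def focus_stack(paths):
    """Naive focus stack: for each pixel, keep the frame with the
    strongest local detail (Laplacian response). Frames are assumed
    to be the same size and already aligned."""
    frames = [cv2.imread(p) for p in paths]
    sharpness = []
    for f in frames:
        gray = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
        lap = np.abs(cv2.Laplacian(gray, cv2.CV_64F))
        # Smooth the sharpness map so the selection does not speckle.
        sharpness.append(cv2.GaussianBlur(lap, (31, 31), 0))
    best = np.argmax(np.stack(sharpness), axis=0)   # winning frame per pixel
    stack = np.stack(frames)                        # shape: (n, h, w, 3)
    rows, cols = np.mgrid[0:best.shape[0], 0:best.shape[1]]
    return stack[best, rows, cols]

# Usage with hypothetical frames focused near, middle, and far:
# cv2.imwrite("stacked.jpg", focus_stack(["near.jpg", "mid.jpg", "far.jpg"]))
```

Dedicated tools add frame alignment and seam blending on top of this, which is where they pull ahead in the extreme macro cases mentioned above.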
You have always been able to decrease the apparent depth of field in Photoshop by using blur, layers and layer masks. But, again, third-party software takes this capability further.
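The same blur-and-mask recipe is easy to express outside Photoshop, too. A minimal sketch, assuming a hypothetical hand-painted mask image that is white over the subject and black where the simulated out-of-focus area should be:

```python
import cv2
import numpy as np

img = cv2.imread("interior.jpg")                      # hypothetical file
blurred = cv2.GaussianBlur(img, (0, 0), 15)           # crude "lens blur" stand-in

# Hand-painted mask: white = keep sharp, black = use the blurred version.
mask = cv2.imread("subject_mask.png", cv2.IMREAD_GRAYSCALE)
alpha = (mask.astype(np.float32) / 255.0)[..., None]

# Blend sharp and blurred layers through the mask, like a layer mask.
out = (alpha * img + (1 - alpha) * blurred).astype(np.uint8)
cv2.imwrite("shallow_dof.jpg", out)
```

A plain Gaussian blur does not reproduce real bokeh, which is why the third-party tools that model actual lens behaviour take this further.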
The capabilities that Adobe chooses to include in Photoshop eventually seem to work their way into the brains of photographers. So with the inclusion of depth-of-field-increasing technology in CS4, I am expecting an increase in the awareness and adoption of this extremely useful approach.
Photoshop truly is a direction setter for photographers. As you would expect of software that is so all-encompassing, it does many things well, but you can do better at the extremes with specialized programs. This is also true in computational photography: Photoshop will meet the needs of most photographers, but those who want to push further will extend their capabilities with other software.
I am so convinced that computational photography will become ever more important to an increasing number of photographers that I have added a whole new section on computational photography to one of my sites. I plan to spend December and January testing and writing up reviews and tutorials on a huge range of software in this area.
What we need now is for the camera manufacturers to add a new range of capabilities to their cameras. For example, focus bracketing would be a great help. Perhaps you would be able to set the near and far “must-be-in-focus” points and how many steps in between. Or, the camera could use the aperture and focal length information to calculate how many shots are needed to achieve optimal overlap of sharp zones. Then, on a motor drive setting, the camera could take a burst of shots, refocusing as it goes. Aperture bracketing can already be done by putting the camera in shutter speed priority mode and using exposure bracketing.

For those of us who use these options frequently, it would be handy if the camera could save these settings and add them to a ready Function menu or such, so we could quickly switch everything necessary to do HDRi, aperture bracketing or focus bracketing in one hit.

These capabilities should not just be on the top-end models. What I see in discussions among photographers is that while professionals do use these techniques, serious amateurs are probably more into them. So these features should be on the pro models, but also on at least the serious amateur models.
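To make that calculation concrete, here is a rough sketch of how a camera (or you, with a calculator) could work out the focus steps from the standard depth-of-field formulas. The focal length, aperture, and distances below are arbitrary example values:

```python
# All distances in millimetres.
def dof_limits(f, N, s, c=0.03):
    """Near/far limits of acceptable sharpness for focal length f,
    f-number N, focus distance s, and circle of confusion c
    (0.03 mm is a common full-frame value)."""
    H = f * f / (N * c) + f                       # hyperfocal distance
    near = s * (H - f) / (H + s - 2 * f)
    far = s * (H - f) / (H - s) if s < H else float("inf")
    return near, far

# Walk the focus point outward so each zone of sharpness begins where
# the previous one ended -- one way a camera could decide how many
# focus-bracketed shots a scene needs.
f, N = 50.0, 8.0                 # 50 mm lens at f/8
s, far_target = 500.0, 2000.0    # everything from 0.5 m to 2 m in focus
shots = 0
while s < far_target:
    near, far = dof_limits(f, N, s)
    shots += 1
    print(f"shot {shots}: focus {s:.0f} mm, sharp {near:.0f}-{far:.0f} mm")
    s = far                      # refocus at the far limit of this shot
```

Even this modest half-metre-to-two-metre range takes a surprising number of frames at f/8, which is exactly the tedium a motor-drive focus bracket setting would remove.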
We are in a blossoming time for photography, as we find new ways to do old things and gain completely new capabilities we never had before. It is a great time to be a photographer.
Last month I wrote about creating HDR (High Dynamic Range) photographs. This is a process in which you merge several different exposures together to get an expanded amount of detail. There’s been a lot of interest on the web lately about using HDR to create what many people are calling the “Grunge” look. These are images that are processed beyond the ordinary to have a nearly illustrative look.
While HDR still gives you the most flexibility in creating this type of imagery, you can create a similar effect with a single image. You just need to process it with extreme settings.
Please note that I make no claim to having created this method, and I’m really not sure who did, but it’s a look I found interesting and decided to explore.
I’m using Adobe Photoshop Lightroom for this example, but you could do the same thing in Adobe Camera Raw or, most likely, many of the other RAW converters. I prefer to work with RAW images for this type of editing as there is more information to work with, and the edits are not destructive – I can always go back and process the image in a more normal fashion.
To start with, select the image you want to process. Now, do something you’d never otherwise consider doing:
Move the Recovery slider all the way to the right so it reads 100.
Move the Fill Light slider all the way to the right so it reads 100.
Move the Clarity slider all the way to the right so that it reads 100.
Move the Vibrance slider, you guessed it, all the way to the right.
Right now, the image is looking pretty bad, and you’re probably thinking I’m nuts. But, this is where the magic begins.
Move the Saturation slider to the left (I threw you off on that one, right?) to bring the saturation way down. It looks best if you leave a little color in the image, so don’t go all the way to -100.
Now, increase the Blacks to build some black back into the image. (You might have to play with the Exposure setting to get something that looks right.)
The final step for me is to use the Vignette control to darken the corners, which really enhances the feel.
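Lightroom’s slider math is proprietary, so there is no exact code equivalent, but a rough NumPy/OpenCV approximation of the recipe above (heavy local contrast in place of maxed-out Clarity, strong desaturation, rebuilt blacks, and a vignette) looks something like this. The file names and constants are hypothetical:

```python
import cv2
import numpy as np

img = cv2.imread("rusty_truck.jpg").astype(np.float32) / 255.0  # hypothetical file

# Stand-in for extreme Clarity: a large-radius unsharp mask on luminance.
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
L = lab[..., 0]                                   # L channel in 0..100
low = cv2.GaussianBlur(L, (0, 0), 25)
lab[..., 0] = np.clip(L + 1.5 * (L - low), 0, 100)
img = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

# Desaturate most of the way, but not fully, as the recipe suggests.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)[..., None]
img = 0.8 * gray + 0.2 * img

# Build some black back in with a simple shadow-clipping curve.
img = np.clip((img - 0.08) / (1 - 0.08), 0, 1)

# Darken the corners with a radial vignette.
h, w = img.shape[:2]
ys, xs = np.mgrid[0:h, 0:w]
r = np.hypot(xs - w / 2, ys - h / 2) / np.hypot(w / 2, h / 2)
img *= (1 - 0.5 * r ** 2)[..., None]

cv2.imwrite("grunge.jpg", (np.clip(img, 0, 1) * 255).astype(np.uint8))
```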
While this works great with some images, you’ll need to evaluate what you’re trying to accomplish. The samples shown here were good subjects since they had an old and neglected look to begin with. I certainly wouldn’t try this on a portrait.
Finally, I print the image on a luster or gloss paper. This is one of the few times I’ll use gloss media, but I find that it gives a nice contrast and usually helps to show the fine details in the image.
I’ve been seeing more and more interest in high dynamic range images online and in the workshops I teach. Judging by some of the books I’ve checked out, you might think it requires a degree in physics or at least rocket science to create this type of image. Like many things digital though, it doesn’t have to be difficult and it can be a great new way to express yourself.
Cameras, both digital and film, can’t record all of the information we can see with our eyes. While our eyes automatically adjust to show us detail in shadows and highlights at the same time, a camera often forces us to make exposure decisions based on which areas of the image contain the most important information, and we risk losing highlight or shadow detail as a result.
With high dynamic range (HDR) imaging, you can get around this shortcoming in equipment and go beyond what our eyes see to record something special.
Let’s take a look at how easy this can actually be in practice. To start with, you’ll obviously want a scene with a wide dynamic range. A tripod will make the processing effort much easier, and a camera that lets you control exposure is required.
I shot this series of three images at Joshua Tree National Park at dawn. Using exposure bracketing, I recorded one shot at the suggested exposure to record the midrange detail, another at two stops under to get the most detail possible from the sky, and a final image at two stops over to open up the shadow detail. Photoshop CS2 and CS3 include a “Merge to HDR” function (found under the File > Automate menu), but I prefer to use Photomatix because it does a better job and gives me more creative options in processing the images.
When you work in HDR, you’re working with a 32-bit file. In other words, you have plenty of information to work with. But Photoshop requires images to be in 8- or 16-bit mode to do any processing work, and many printers can only deal with an 8-bit image.
In Photomatix, I open the three images (Figures 1, 2, and 3) and tell the program to merge them together. The result is not what you’d expect, as the preview looks like a dark mess. But now the magic starts: when I go into the Tone Mapping dialog, I can control how this extra detail is going to be displayed.
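Photomatix and Merge to HDR are closed programs, but the underlying pipeline — recover the camera’s response curve, merge the brackets into a 32-bit radiance map, then tone-map down to 8 bits — can be sketched with OpenCV’s HDR module. The file names and exposure times below are placeholders for a -2/0/+2 bracket:

```python
import cv2
import numpy as np

# Hypothetical bracketed files and their exposure times, in seconds.
files = ["under.jpg", "normal.jpg", "over.jpg"]
times = np.array([1 / 125, 1 / 30, 1 / 8], dtype=np.float32)
imgs = [cv2.imread(f) for f in files]

# Recover the camera response curve, then merge to a 32-bit radiance map.
response = cv2.createCalibrateDebevec().process(imgs, times)
hdr = cv2.createMergeDebevec().process(imgs, times, response)

# Tone-map the linear radiance map down to a displayable 8-bit image.
tonemap = cv2.createTonemapDrago(gamma=1.0, saturation=1.2)
ldr = tonemap.process(hdr)
cv2.imwrite("hdr_result.jpg", np.clip(ldr * 255, 0, 255).astype(np.uint8))
```

Varying the tone-mapping operator and its settings is where the accurate-versus-surreal decisions get made, which is exactly the control the Tone Mapping dialog exposes.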
You can get as accurate or as creative as you like at this point. For this particular image, I liked the surreal look generated by enhancing the lighting, saturation, and contrast (Figure 4).
For final output, I sent this to my Designjet Z3100 using HP Instant Dry Satin photo paper. With the Gloss Enhancer on this paper I get excellent results with great vibrant color – just like my vision for this image when I processed it.
Photography is poised on the edge of another major transition. Its impact could perhaps be even greater than that caused by the arrival of digital cameras. When we look back 25 years from now, the real digital revolution in photography won’t have been the digital camera. It will be the upcoming progression to computational photography.
Current digital cameras pretty much work the same way as film cameras do. Digital cameras do have processors, and some image-processing occurs in the camera. But compared to what is currently possible with today’s computers, in-camera image processing is relatively minimal.
Computational photography combines innovative uses of image sensors and complex software to create images that no camera can readily capture.
For example: Imagine that after taking a shot you can choose your viewpoint, refocus an image, choose the depth of field you want, extend your image dynamic range, change the instant of capture and more. Sure, all of this manipulation can be done on your desktop or laptop. But eventually you’ll be able to do it all in your camera, while you’re still in the field.
Some of this is here now. Software for High Dynamic Range Imaging (HDRI) is now widely available and incorporated into Photoshop. Refocus and depth-of-field software is out there, but few are using it yet.
Some of these possibilities (the ones available now) can work with existing camera designs. Other functions will require new camera designs and developments. For example, cameras for computational photography may need custom sensors that can capture at very high speeds and frame rates. Or, they may need multiple sensors and lens systems, or camera-controlled lens focus racking during exposure. Some designs may include integrated GPS 3D positioning and pointing monitoring or multi-spectral sensors to capture from UV to IR in one hit.
Imagine the scenario: you go out to a beautiful location, perhaps a treed landscape with rock outcroppings. You set up your camera gear and press the shutter three times from three different positions across a rough sideways line. You move to another part of the scene and do the same. Perhaps you do this a third time. Each of those shutter presses triggers a burst of frames at differing exposure settings and focus distances, and the camera converts these to an HDRI, multi-focal image format. Full GPS information is recorded in the file. When you return home, your camera wirelessly uploads the files to your computer, and this triggers an automated script that runs a 3D analysis of the scene, building an extremely high-resolution 3D model of it.
By the time you’ve made a coffee and gone to the toilet this is done. You sit down at your computer and are presented with a 3D scene (this might be real 3D, considering the new displays in the pipeline). Using a touch panel you move through the scene, turning around and looking up and down as you wish. At various points you press a virtual button and a ‘frame’ is saved.
Later, at your leisure, you recover these saved ‘frames’ and then work on each. You will be able to choose the exposure, depth of field and focal position that works best for you. You may even open up some other ‘frames’ from different locations, select objects, and drag and drop them into your scene. You will blend them into place, allowing the software to automatically rotate them to match the lighting. You then select your output form, which could be a 3D scene, a video walk-through, or an image to print. You choose your resolution and off you go.
The futuristic scenario I’ve just described isn’t necessarily that far away. Many of the software pieces have been worked out, as has some of the hardware. The rest will come, probably in less time than it took digital cameras to become readily available.