Monthly Archives: July 2011

I’m sorry but there is just something inherently wrong with camera design

* Unfortunately this post will provide more questions than answers, but they need to be asked

On Sunday I went to the beach and didn't intend to shoot, but it was such an absolutely perfect day, with blue skies, white puffy clouds, and intense emerald-colored water, that I knew I just had to shoot even if it was mid-day. It was that pretty.

Normally I wouldn't shoot mid-day, for a number of reasons all of us photographers know. But I could see with my own eyes how beautiful it was, and it was surely something my camera could easily capture. To cut down on the glare off the beautiful emerald water, I placed a B+W circular polarizer on my 17-40 lens to keep the sun's glare on the water out of the shot, which also helps tame the dynamic range of the scene by eliminating those specular highlights. One look through the viewfinder and I saw pure magic. The scene in front of my camera was gorgeous.

Snap

Normally I don't go right home and download my images; it may wait until late that night, or the next day, to review them. But I had the time, and I was anxious to review the images that had looked so beautiful through the viewfinder.

So I downloaded them, opened the images and... What? Are you @#$%ing kidding me? That's what I got?! REALLY?!


I mean, it's OK, it's visually and compositionally interesting... but that's NOT what I saw.

The clouds are dingy and dull, the sky is a gray/blue mix. Where is that emerald water? Why do the rocks look almost monotone and desaturated? I hate it. I would never show this shot to anyone.


So what DID the scene look like? Well... this:


Now, on this day I didn't set out to shoot HDRs. I assessed the conditions and saw no need to, and if there isn't a need, I won't. Was it a full-dynamic-range scene? Absolutely. But was it beyond what should have been capturable in a single exposure? Absolutely not. So I shot one single exposure of the scene. How, then, did I end up with the second shot above? Using a little-talked-about feature of Photomatix Pro: single-exposure pseudo-HDR with tone mapping. In the past I haven't been that fond of pseudo-HDRs, whether made by combining different RAW conversions of one frame or with any of the number of pseudo-HDR programs out there (Topaz, Lucis, etc.). I believed that if you wanted to do it, you should do it right and take the bracketed exposures.

But in this case I didn't want to increase the dynamic range at all. The camera captured it fine, as the histogram confirms. The exposure is fine too, within less than 1/3 of a stop of perfect. It's not the range from bright to shadow that is all wrong; it's what's in between. By tone mapping the single image in Photomatix Pro, I was able to correct for what the camera just could not get right. But WHY do I have to?

This is something I have seen for a good portion of my 40-plus years of shooting. In fact, in the mid '90s I stopped shooting for a full year because I could never capture with my camera what I saw. But at the end of that year I realized what the problem had become: the popularity of 1-hour photo labs, with their automated processing and no real person looking at the images to determine the correct exposure for the print. That automation drove a wedge between what was shot and what the final print looked like. It became evident to me when I shot a scene of Christmas lights right at dusk and the prints I got back looked like I had shot it mid-day. Was I that far off? Fortunately for me, the lab that developed them printed on the back of each print the adjustments they had made, and the light bulb went on when I saw the huge amount of exposure compensation they had applied. I had a perfect exposure; they messed it all up. So I returned to shooting.

But even with that, and even with better, more hands-on labs with quality technicians, there were still problems. I would often see beautiful things through the viewfinder and never be able to get them on film. I had, and still have, high hopes for digital. Eliminating that person at the photo lab, someone who knew nothing about the shot, from the development process was a hugely freeing thing for me. I could make sure that what I wanted was what was printed. But still, what I shot was not what I got straight out of the camera. And I blame this on the engineers who designed digital cameras, because I believe they used film as the model for how a digital sensor should react, and NOT our eyes.

Now, I realize the above statement is not wholly true. We do know that part of sensor design was based on the fact that the human eye is more sensitive to green than it is to red or blue; this is why there are twice as many green photosites on a sensor as there are blue or red. I also fully realize that neither any camera sensor nor any film is as sensitive across the dynamic range as the human eye. I get that; I know that. But there is something inherently wrong in the in-between, the way our eyes place brightness values and color values on everything in between. Is it because we try to make sensors linear when our eyes are anything but?
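That 2-to-1 green weighting is easy to see in the common RGGB Bayer layout. A tiny sketch (RGGB is the usual arrangement, though some sensors differ):

```python
# One 2x2 tile of the common RGGB Bayer pattern: half of all
# photosites are green, mirroring the eye's stronger green response.
bayer_tile = [["R", "G"],
              ["G", "B"]]

def count_color(tile, color):
    """Count the photosites of one color in a Bayer tile."""
    return sum(row.count(color) for row in tile)

print(count_color(bayer_tile, "G"))  # 2 green sites per tile
print(count_color(bayer_tile, "R"))  # 1 red
print(count_color(bayer_tile, "B"))  # 1 blue
```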

I don't know the answer to these questions. I'm neither a scientist nor an engineer. I don't know much about how our eyes physically work, nor do I know a lot about optical design, sensor science, or film science. I just know something isn't right.

As I sat on my couch this morning contemplating this article, I looked at the morning light coming into my house, lighting up my family room and my living room. My camera was beside me, as was my laptop. Could I capture an image at just that moment, immediately load it onto my calibrated laptop, and have it look like what was before my eyes at that moment?

Here, let's find out.

The shot

 

Was this what I saw?

No. There is barely any color in the walls, and they are a sandstone tan.

The red artwork on the far wall is dark and doesn't look anything like what I am looking at.

Is there a red pillow on that couch? I clearly see it with my eyes. Where did it go?

The sun is really bringing out the color of the wood in the TV stand to my eyes. In the shot it looks like nothing but black, except for the foot.

Where did everything I see right now go? There's nothing wrong with the exposure; the dynamic range was again captured, except for some slight blowout on the floor molding from the sunlight. There was plenty of light coming into the room. Why is it NOT what I see?


But a quick, and I mean very quick, run through Photomatix tone mapping got me this:

 

This looks EXACTLY like what I see before me. Why does my camera get this sooo wrong, yet Photomatix gets it so right? And again, I don't think it's a dynamic range problem. The histograms for these two images are very similar at each end of the spectrum. It's what's in between that is handled differently.
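To illustrate what "the in-between" means, here is a toy sketch. This is NOT Photomatix's actual algorithm, just a plain gamma curve, but it shows how a tone curve can redistribute midtones while leaving the histogram's endpoints exactly where they were:

```python
def lift_midtones(x, gamma=2.2):
    """A simple gamma tone curve on normalized luminance x in [0, 1].
    The endpoints stay fixed (0 -> 0, 1 -> 1); only the midtones move."""
    return x ** (1.0 / gamma)

# Black point and white point (the histogram's ends) are untouched,
# while middle gray (~0.18) is lifted substantially.
for v in (0.0, 0.18, 0.5, 1.0):
    print(v, round(lift_midtones(v), 3))
```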


So, like I said in the beginning, this post really doesn't provide many answers, if any. It is mostly questions. The only thing I know is that I WILL use single-image processing in Photomatix much more often. If what I got just isn't what was there, at least I have some recourse for the end result.

 

Hope that…ummm…Helps?

 

PT

 

Anatomy of a shot – Harbor Lights

So I recently did a shoot at San Diego Harbor. I was looking for a city-lights shot with some boats in the scene.

Trying to do this with normal photography provides enough challenges by itself: capturing the dynamic range between the water and the building lights, and using a shutter speed fast enough to stop any motion in the boats on the water, which necessitates a higher ISO and can translate into more noise in the image. Even if you didn't have to worry about the boat movement, capturing city lights can be difficult because it may take long exposures, and digital sensors suffer from noise problems in long exposures.

But on top of this I wanted to do HDRs, which added more problems: now I had to worry about the movement of the boats not just in one image but across three images at very different shutter speeds. Even if one of the exposures had a shutter speed fast enough to stop motion, surely I couldn't get three that did AND have them catch that motion all at the same spot.

For this shot, and for most city-light shots, I normally wouldn't want to shoot when it is fully dark. I try to shoot during the dusk period, or from sunset to a 1/2 hour after (dusk lasts longer the farther from the equator you are). But I had already used up that period trying to get the other shots I wanted that evening. I did not see this shot until later, on the way back to my truck.

It was a difficult shoot and also a lot of work in post, but I think I accomplished what I wanted: to show it as it was in person. So let's break it down and see just what it took.

Of course, as always, my Canon 5D was mounted on a sturdy tripod, and using the timer and Av modes I fired off three shots.

Because of the darkness and the need to stop the boats in motion in at least one of my frames (hopefully the 0EV one), I set my ISO to 500. Using a depth-of-field calculator (there are some great phone apps for this), I determined that with my 24mm lens and my distance to subject, I could shoot as open as f/5 and still maintain a DOF from 6 feet to infinity (the hyperfocal distance, for those who follow this stuff, was 12'7", and everything in my frame was past that distance). Being able to shoot that wide open helped immensely, since I could shoot at a much lower ISO.
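For those who want to check the math, the standard hyperfocal formula reproduces that 12'7" figure. A quick sketch, assuming the usual circle of confusion of about 0.030 mm for a 35mm full-frame sensor:

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm=0.030):
    """Hyperfocal distance in mm: H = f^2 / (N * c) + f.
    coc_mm is the circle of confusion (~0.030 mm for 35mm full frame)."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

h_mm = hyperfocal_mm(24, 5)   # 24mm lens at f/5
h_ft = h_mm / 304.8           # millimeters to feet
print(round(h_ft, 1))         # 12.7 -- right at the 12'7" in the text
```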

So here are the three images I shot, at shutter speeds of 1/6, 0.6, and 2.5 seconds respectively:
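Those shutter speeds work out to roughly a plus/minus 2 EV bracket; a quick check:

```python
import math

shutters = [1 / 6, 0.6, 2.5]  # the three bracketed shutter speeds, in seconds

# Exposure difference between consecutive frames, in EV:
# each doubling of shutter time adds one stop of exposure.
steps = [math.log2(b / a) for a, b in zip(shutters, shutters[1:])]
print([round(s, 1) for s in steps])  # [1.8, 2.1] -- about a 2 EV spacing
```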


My next step was to take the three images into Photomatix Pro 4.0 and use its powerful de-ghosting tool in the first menu (see my tutorial on how to do that HERE).

I selected the dinghies only and used the 0EV shot as the de-ghosting source. Even though that frame was 0.6 seconds, the bay waters were still enough that I didn't get any motion blur in the dinghies. With that area de-ghosted, I moved on to tone mapping the image.

Using my usual tone mapping preset recipe of 70 strength, 70 saturation, high smoothing, and -1.20 gamma, I finished out in Photomatix with this image:


OK, that looks pretty nice: good detail, fairly nice range, lots of color. But the truth is, that wasn't how I saw it. The colors were way too poppy, and I didn't have the detail in the buildings I really wanted. So I needed a solution to fix both problems. I tried desaturating the color and various levels and curves adjustments, but they really didn't fix what I wanted, or if they did, they caused other problems.

So to fix my "color" problem I turned to our old friend... black & white. Black & white is great for detail and contrast, so I am going to turn to it for some help.

In Photoshop, I opened the image and made a duplicate layer, which I converted to black & white using a gradient map process (* Google it). The result was this:
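For the curious, a plain black-to-white gradient map behaves roughly like sending each pixel's luminance straight to gray. A rough sketch using Rec. 709 luma weights (an approximation for illustration, not Photoshop's exact internals):

```python
def to_bw(rgb):
    """Approximate a black-to-white gradient map: send each pixel's
    luminance (Rec. 709 weights) straight to a gray of that value."""
    r, g, b = rgb
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    return (y, y, y)

print(to_bw((1.0, 0.0, 0.0)))  # pure red maps to a fairly dark gray
```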


Perfect, just what I wanted. Now here is where the magic comes in. I duplicated the bottom color layer again and moved that copy above the black & white layer. Magic time: I changed the blend mode of that top color layer to "Darken".

Wow, now that was exactly what I was looking for. The colors, while much more muted, were faithful to what was actually there. The sky became more of the black it was at that time of night, and the detail and intensity of the skyline buildings came back to where they should be, as they were to the eye.
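The "Darken" blend mode itself is simple: for each channel, it keeps whichever layer is darker. A minimal sketch of why the poppy colors get pulled back toward the B&W layer (the pixel values are made up for illustration):

```python
def darken_blend(top, bottom):
    """Photoshop's 'Darken' blend: per channel, keep the darker layer."""
    return tuple(min(t, b) for t, b in zip(top, bottom))

# A too-poppy color pixel over its black & white version: the overly
# bright channel is pulled down to the B&W value, muting the color.
color = (0.9, 0.3, 0.2)
bw = (0.45, 0.45, 0.45)
print(darken_blend(color, bw))  # (0.45, 0.3, 0.2)
```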

Then, after taking the image into Neat Image to clean up a little bit of noise on the boats, and then sharpening with a low-pass-filter sharpening technique (I will have a quick tutorial on how to do this soon), my image was done. Just what I saw that night.

Quick Tip: Tidy up those crops

When we take multiple images into an HDR program, during the registration (alignment) process the program will twist and turn each image to align it with the image below, using either common features or horizontal/vertical lines. When it does so, at the end (if enabled) the program will then crop the image so that there are no ragged edges.

The problem is that those crops may not be a standard size or aspect ratio. Part of the image gets cut off; this may be just a few pixels if the camera was on a steady tripod, but it can easily amount to an inch or two if you attempted a handheld 3-shot image (I have, successfully).

So, after you finish the entire HDR process and have your final TIFF/JPEG, what needs to be done is to take the image into your photo-editing software of choice (you can crop in Photomatix, but I prefer other editors) and crop it to a standard size.

This standard size can be one of two things: a standard image size that comes out of a camera, which is a 2:3 aspect ratio (or 3:4 for some point-and-shoots), or a standard frame/mat size, which can be very different.

Standard camera sizes in inches would be 6 x 9, 8 x 12, 12 x 18, 16 x 24, 20 x 30, etc.

Standard frame sizes are: 8 x 10, 11 x 14, 13 x 19, 16 x 20, etc.

Your choice, your decision. I always crop to standard camera sizes, and I will then do special crops if I decide to put an image in a certain frame size. Be aware that you can get some frames in the same sizes as camera sizes.
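A quick way to see the difference between the two lists: reduce each size to its lowest terms. All the camera sizes share the sensor's 2:3 ratio, while the frame sizes mostly don't:

```python
from fractions import Fraction

camera_sizes = [(6, 9), (8, 12), (12, 18), (16, 24), (20, 30)]
frame_sizes = [(8, 10), (11, 14), (13, 19), (16, 20)]

for w, h in camera_sizes:
    print(f"camera {w} x {h} -> {Fraction(w, h)}")  # all reduce to 2/3
for w, h in frame_sizes:
    print(f"frame {w} x {h} -> {Fraction(w, h)}")   # 4/5, 11/14, 13/19, 4/5
```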

When you do crop an image, you really only want to throw away the pixels that aren't necessary; you want to avoid changing the pixels that are left behind, either by making fewer of them or by creating more than were there originally. Doing that requires interpolation on the part of the software, and that interpolation can cause some (mostly invisible) softness or color shift in our images. We want to minimize that.

So when we crop, try to do it so that no pixels inside the crop area are harmed. To do that in Photoshop, set a crop size in the tool bar, say 12 x 18, and leave the resolution box blank. This will crop the image to a document size of 12 x 18 but will not alter the pixels within it in any way.
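The same no-resampling idea can be expressed as a little arithmetic: given the ragged post-alignment dimensions, find the largest whole-pixel crop at the target ratio. A sketch (the example dimensions are made up):

```python
def largest_crop(width_px, height_px, ratio_w=2, ratio_h=3):
    """Largest whole-pixel crop at the target aspect ratio.
    Only pixels outside the crop are discarded; nothing is resampled."""
    if width_px > height_px:                      # landscape: use 3:2
        ratio_w, ratio_h = ratio_h, ratio_w
    if width_px * ratio_h > height_px * ratio_w:  # too wide: trim width
        width_px = height_px * ratio_w // ratio_h
    else:                                         # too tall: trim height
        height_px = width_px * ratio_h // ratio_w
    return width_px, height_px

# e.g. a slightly ragged post-alignment frame trimmed back to 3:2
print(largest_crop(4310, 2868))  # (4302, 2868)
```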

All this will tidy up your images and make for an easier time when it comes to printing them. And I DO want you to print your images, either at home or through a high-quality lab. You will love the results and the sense of finish that a print brings.

Hope that helps!

PT