Category Archives: Advanced

Maybe we need to RETHINK how we think about Depth of Field

It’s really about Magnification

We know some truths about Depth of Field (DOF). We know there are three things that affect it (actually four, but the fourth is controversial, not everyone agrees on it, and it would only apply if you switched cameras between shots).

Anyway, there are 3 things that affect Depth of Field

  • Aperture
  • Distance to Subject
  • Focal Length

We know:

  • The larger the aperture (lower f/ number), the shallower the Depth of Field
  • The closer we are to our subject, the shallower the Depth of Field
  • The longer the Focal Length, the shallower the Depth of Field

and of course the inverse of each would be true. Continue reading »
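To make those three relationships concrete, here is a minimal Python sketch of the standard thin-lens depth-of-field approximation. The 0.03 mm circle of confusion and the sample apertures, distances, and focal lengths are illustrative assumptions, not values from this post.

```python
import math

def depth_of_field(focal_mm, f_number, distance_m, coc_mm=0.03):
    """Near/far limits of acceptable focus using the thin-lens approximation.

    coc_mm is the circle of confusion; 0.03 mm is a common full-frame value.
    """
    f = focal_mm
    s = distance_m * 1000.0                       # work in millimetres
    hyperfocal = f * f / (f_number * coc_mm) + f
    near = s * (hyperfocal - f) / (hyperfocal + s - 2 * f)
    far = s * (hyperfocal - f) / (hyperfocal - s) if s < hyperfocal else float("inf")
    return near / 1000.0, far / 1000.0            # back to metres

# Same subject, changing one factor at a time:
print(depth_of_field(50, 8.0, 3.0))    # baseline: 50mm, f/8, 3 m away
print(depth_of_field(50, 2.0, 3.0))    # larger aperture -> shallower
print(depth_of_field(50, 8.0, 1.5))    # closer to the subject -> shallower
print(depth_of_field(100, 8.0, 3.0))   # longer focal length -> shallower
```

Run it and each change on its own (wider aperture, shorter distance, longer focal length) shrinks the near-to-far range, just as the list above says.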

Breaking the Rules - Why you are not the Rebel you think you are

The following is a repost from my business and portfolio website and blog. I’m posting it here in the hopes of reaching a wider audience.

Breaking the Rules

Why you are NOT the Rebel you think you are

The Rules

Not a day goes by that I don’t hear a fellow photographer proclaim, "I know the rules so I can break them," and then add, "I’m a Rebel." I see this so often it’s actually become an annoyance, and it’s really turned into group speak. Someone said it and then everyone just repeats it without actually thinking about what it means. Well, I have. So let’s take a little trip and see what’s what with this.

But first a note. This article assumes you make photographs as art. The "rules" don’t apply to other photographic genres. Photojournalism is about telling a story; Sports, about capturing the moment; Commercial, about selling the product. While they may possibly contain art, it’s not a necessity. Continue reading »

By Starlight in ON1 Photo 10

By Star Light – Milky Way Astrophotography Compositing in ON1 Photo 10  Final Composite Milky Way Image

Soooo, you know those amazing images you see of the Milky Way with that awesome foreground, and the caption next to it says…Lit by Starlight. OK, ehheemmm. Sorry to break this to you, but they are 100% bull@#$%…Yep. Having spent many a night during a New Moon (no Moon) out in the middle of the desert, I can tell you the stars don’t light up much. In fact, you don’t really know how dark dark is till you’ve done just that. Continue reading »

Why we use different lenses and why we move

Zoom with my Feet


Why do we have so many lenses or zoom lenses? Why do we move or not move?

I’m often surprised when I hear photographers talk about lenses, and even more surprised when I read photography blog articles about lenses and what they think "does what" when it comes to choosing one focal length over another. After all, can’t we just "zoom with our feet"? So why do we even have more than one lens, because really, what would be the point? This is a small excerpt from a larger article I had in mind to write to show "what does what," why we need more than one lens, and even why we may not. Continue reading »

Topaz Labs brings Clarity to the Milky Way – Photographing the Milky Way

Milky Way Shoot ABDSP

I’ve always loved looking up at a star filled sky in wonder. When I was a kid we would go camping in Canada. It was so dark up there away from all civilization I don’t think there was a star you couldn’t see. I remember seeing satellites fly across the sky when satellites were still new. 

But the truth was/is I’ve lived most of my life in or near big cities, first NYC and Philadelphia and now San Diego. So in my normal everyday life there wasn’t much star gazing. But there was something special about it. 

In my photography there wasn’t much of it either. Sure I did a few long exposures when I was out in the National Parks like Zion or Yellowstone. But that was when I shot film and film didn’t always offer the same possibilities or capabilities, whether it was max ISO or reciprocity. This was especially true when it came to shooting the Milky Way in color and having the stars static. 

Continue reading »

Anatomy of a Shoot – Long Exposures

Anatomy of a Shoot – Long Exposures 

Long exposure shots are all the rage right now. You’ve all seen them: a pier, cotton-candy water, black and white. We are not just talking about silkening the water with 10 second exposures on waterfalls. We are talking a cotton-candy, fog, or mist look to water (or clouds), with exposures of 5 minutes or even an hour. It’s the newest thing to catch on in landscapes, so if you want to know how, follow me through a recent shoot. Continue reading »

I should have known this…but I didn’t – AEB and Manual Mode

AEB and Manual Mode

I pride myself on knowing my equipment, so this hurts

I was fiddling with my camera because I needed to answer someone’s question about Aperture Priority and AEB (Automatic Exposure Bracketing). Of course I know, and had suggested to people, that they use Aperture Priority + AEB to get their 3 exposure bracket. I also knew that you could use AEB in Shutter Priority mode, which we don’t suggest for HDR because we want a constant aperture and therefore a constant Depth of Field.

What I never realized was that on my camera (Canon 5D) and other Canon models, along with Nikons (as far as I know; I checked with a Nikon user but would like another confirmation), AEB is possible in Manual mode too. On Canons in Manual, you can choose an aperture and the camera will bracket just as it does in any of the semi-auto modes. I’ll be darned. I should have known this, but I didn’t, and in the 40 years I spent shooting manual film cameras, AEB wasn’t even an option on those fully manual mechanical wonders.

Continue reading »

Shooting the Sun – Blobs and Stars

Shooting the Sun 

Caution: Never look directly into the sun. Never meter on the sun. Never point your camera directly at the sun. Never!

HDR has opened up a lot of shooting possibilities; one of those is shooting in the direction of the sun and not having to settle for a silhouette. But what about shooting the sun itself? Well that is a little harder. 

The first problem is that the dynamic range from the sun to a shadow is beyond even what the human eye can handle in one glance. We would (BUT WE SHOULDN’T) look at the sun, and then our eyes would need to adjust for the dark subject area. The human eye is capable, in one glance, of seeing a dynamic range of about 10,000:1; the sun would be about 100 times that. (For reference, a good LCD monitor’s dynamic range is about 1,000:1, a print much less than that.) The sun is too bright for even the human eye to see.
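To put those ratios in photographic terms, here is a tiny Python sketch (my own illustration) that converts a contrast ratio into stops of dynamic range, one stop per doubling, using the figures quoted above.

```python
import math

def ratio_to_stops(contrast_ratio):
    """Convert a contrast ratio (e.g. 10000 for 10,000:1) into stops of dynamic range."""
    return math.log2(contrast_ratio)

print(ratio_to_stops(10_000))      # human eye in one glance: ~13.3 stops
print(ratio_to_stops(1_000_000))   # ~100x that, with the sun in the frame: ~20 stops
print(ratio_to_stops(1_000))       # a good LCD monitor: ~10 stops
```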

And what would the sun look like to our eyes even if we did look at it? Would it be a perfect round white ball? Not really. Since our eyes really can’t see something that bright, a midday sun would appear as a large, diffuse white object in the sky with no clear delineation.

As the sun gets close to the horizon when rising or setting, the atmosphere, diffusion, and particles (water and dust) in the air cut its brightness considerably. While the dynamic range may still be high, the sun itself is closer to being viewable, and we are able to capture more definition in the edges of that "circle."

So are we able to "shoot" the sun? Yes, it would be possible, but we need to use some special means, such as Neutral Density filters, because even at our camera’s maximum (f/22, ISO 100, 1/8000) we may not get the "ball" of the sun. But again, is that what we truly want? It would not be "as the eye sees," and in fact it may actually look odd.

 Sunsets themselves are not hard to do and can be an easy capture. Midday shots will be the tough ones.

What kind of sun do YOU want? 

We can capture the sun midday one of two ways: as a large blob or with a star effect. And even though "blob" may not sound that good, it may, in images with a certain look, be the right choice. But it is a choice you need to make before shooting, because your camera settings will depend on that choice.

Now you may say: well, a star effect really isn’t how we see the sun. True, but it is how we visualize a bright object, if only in our mind. After all, when we drew the sun as a kid we always drew those points around it; we never just drew a circle. This is because it is an effect we can get when viewing any point source light that may not be as bright as the sun, such as stars (which of course are just as bright as the sun, just farther away), or even things like white Christmas lights, street lights, headlights, etc., when we view them at night.

To get a star effect we can do it one of two ways. The easy way is buying a star filter. They are available with 4, 6, and 8 points in many filter sizes. The nice part about these is you can use them with any aperture, though the aperture may dictate how long the star points are. Or we could do it the hard way, which of course I always choose: we can do it with aperture.

To get a star pattern on ANY point source light we need to use a very small, or tight, aperture. Now, I wanted to show you some examples of that by shooting the sun at different apertures. But of course today in "Sunny" Southern California it is completely cloud covered. So I will instead use a point source light, a halogen lamp, since this effect will happen with any point source light. So for today we will call our halogen light Happy Mr. Sunshine.

To give a star pattern to a point source light we want to use the smallest aperture available on our lens, which in most cases is f/22 (some telephotos stop down to f/32 or smaller).

Let’s look at the different effects that aperture has on this. Same light, same exposure, just changing aperture.

Now let’s look at what the effect of exposure is on the star. As we go from underexposure to overexposure, the size of the star increases. We also see that if we underexpose the overall scene enough, we lose the star effect completely, another reason we may not want to get a "perfect" exposure on the sun itself.

 

OK, so now let’s go real world with a real example.

Shooting for a Star Effect

The effectiveness will depend entirely on atmospheric conditions the day you shoot. If it is a clear blue sky you will have much better definition; add any haze or light cloud cover and you may not get this effect at all.

I’m going to make it easy for you because I really don’t want you looking into the sun trying to figure this out. 

For your initial exposure in your series of exposures for HDR, your first exposure should be f/22, 1/400, ISO 100 (if the lowest ISO on your camera is ISO 200, use 1/800). You could also use 1/800 for a tighter pattern if you would like. A good rule of thumb is to have your sun exposure 3–4 stops lower than the ambient light. This 3–4 stops lower will work in the middle of the day as well as for sunsets, when the sun becomes less bright but so does the ambient light.

For those of you that want to know, the ambient light during the day works out to about f/16, 1/100, ISO 100, so the above f/22, 1/400, ISO 100 works out to 3 stops less exposure.

For my example shoot I shot this series

6 images, 1 stop apart. I knew the sun exposure and just needed a reading of the shadow area, which I spot metered and got f/22, 1/13, ISO 100. So I just had to work between those two in 1 stop increments. You need to shoot enough frames to cover the dynamic range of the scene, and 1 stop apart is important in this case. We are going to have a tough enough time processing this image in the first place; we don’t want to have to worry about posterization or banding around the sun due to too-large steps between exposures in those areas.
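If you want to sanity-check those numbers, here is a minimal Python sketch (my own illustration, not part of the original shoot notes) that computes the difference in stops between two exposures and the number of 1 stop frames needed to span the sun and shadow readings.

```python
import math

def stops_between(f1, t1, f2, t2):
    """Exposure difference in stops between two settings at the same ISO.

    Positive means the second exposure lets in less light than the first.
    """
    return math.log2((f2 / f1) ** 2) + math.log2(t1 / t2)

# Sun exposure (f/22, 1/400) vs. daytime ambient (f/16, 1/100): about 3 stops darker.
print(stops_between(16, 1 / 100, 22, 1 / 400))

# Frames needed, 1 stop apart, to span the sun reading and the shadow reading at f/22.
span = abs(stops_between(22, 1 / 400, 22, 1 / 13))
print(round(span) + 1, "frames")   # -> 6 frames
```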

One word of note: shooting under these conditions is ripe for lens flare. We can choose to try to minimize it or to celebrate it. If you want to minimize it, try changing your angle to the sun and also remove any filters from your lens, as low quality ones can compound the problem. In this case a lens hood won’t do anything to help, since what we normally would be shading (the sun) is included in the frame.

 

Here are the exposures 

Now comes the tough part: processing in Photomatix Pro 4.1. The biggest problem any HDR processing program has is areas of extreme contrast (this is why we get halos around the edges of buildings against the sky) and areas of white (it’s why we get gray clouds that should be white). So here we are throwing both problems at it at once.

So we have to do some things that we normally may not do or want to do. Those of you that like grunge or painterly effects, I will tell you right off that you will have a hard time with your normal workflow. As much as the normal settings for Lighting Effects and Strength are what give you the effect you like, they will do what they normally do and attempt to make everything a midtone, and that will cause a lot of graying on your sun and the sky that surrounds it.

Why this is a difficult process comes down to two things: we want to keep a tight center for the sun and distinct star points. If we get that look right, the overall image is dark. As we try to lighten the image, we lose the tight center of the sun and its distinct points.

In this case we use some extreme settings that we normally would not use; well, I guess I should say, that I never use. I used the Surreal Lighting Effect button, something I normally never touch, and I brought the Strength back to 50. This kept our sun’s circle tight but didn’t cause the rest of the image to get super dark, which would have been tough to correct for even if we took it into Photoshop.

There was a little haloing around the Hopper and a little graying of the area around the sun, but nothing I couldn’t fix in post.

Here are the complete settings for this image:

Strength 50
Saturation 70
Luminosity 0
Detail Contrast 0
Lighting Effect Surreal
Smooth Highlight 0
White Point 0.250%
Black Point 0
Gamma 1.20 

You may want to try a little Highlights Smoothing in these cases, moving the slider towards the middle to get the look you want.

And that’s it for Photomatix Pro.

I then took the image into Photoshop and touched it up with a Levels layer and some dodging and burning. I burned the edge of the Hopper with a midtone Burn tool set to 10% to take care of some of the haloing, and then dodged the highlights and burned the shadows a bit on the hopper body itself.

Then I sharpened the image just a bit using Nik Sharpener Pro 3.0 and I was done…well, except for one more tiny trick.

There was still a little graying in the rays of the sun that I just wasn’t happy with. So I added another blank layer to the image, grabbed a soft paint brush set to 20% Opacity and 20% Fill, sampled the blue sky next to the sun, and just painted over the gray area till it became a little more blue. Not super necessary, but it just bothered me a bit.

This is the final image 

Here are a few other examples of shooting the sun

In this one, I used f/8 and went for the blob look. I wanted the sun to look more oppressive in the harsh environment of the Salton Sea.

These two don’t show the difficulty of shooting mid-day, but rather the star effect used on sunset suns.

I’ll leave you with one little bit of trivia. The number of points on your star effect tells you whether you have an even or odd number of aperture blades in your lens, and how many blades.

If you have an even number of blades, say 8, you will see eight points to the star. Sixteen points are actually produced, but they overlap each other and look like 8. If you have 7 blades you will see 14 points, because with an odd number they don’t overlap. (Generally, the more blades, the better the lens and the better the bokeh.)
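As a quick illustration of that rule, here is a one-function Python sketch; the blade counts are just the examples used above.

```python
def star_points(aperture_blades):
    """Visible points in the star for a given blade count.

    Each blade edge produces a pair of opposing spikes; with an even blade
    count the pairs overlap, so you see only as many points as blades.
    """
    return aperture_blades if aperture_blades % 2 == 0 else aperture_blades * 2

print(star_points(8))   # 8 points (16 spikes overlapping in pairs)
print(star_points(7))   # 14 points
```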

 

Hope that helps,

PT

Shooting Automobiles – Part 2 – Processing

Shooting Automobiles – Part 2 – Processing

 Yesterday we covered the shooting of automobiles. Today we will concentrate on the post processing of those images and more specifically post processing the images as High Dynamic Range images. 

As promised, I will take you through this step by step just as I would do the image, so you get to see everything that "I" put into it. Just bear one thing in mind: what I do on my image may not be what you need to do on yours. Even though I will give my settings in Photomatix, that doesn’t mean those will be correct for your image, because every image is different.

They may be a good starting point but I tweak even my starting point to get what I need out of that particular image. Plus you may not even want to have the same effect that I want. If you want a more painterly effect your starting points would be way different than mine. 

Processing In Photomatix Pro 4.1 

Starting with the 3 images I showed you yesterday I open them in Photomatix Pro 4.1. Even though ghosting should not be an issue, I still brought it into the manual de-ghosting screen for a check. This image didn’t need any help but as we will see in the image I shot with OCF, there were about 6 areas with Blinkie-Blackies that needed to be fixed. More on that later. 

So, opening the image in the tone mapping screen and moving down the list, I used: Detail Enhancer, Strength 40, Saturation 70, Luminosity -2, Detail Contrast +6.0, Lighting Effect Medium.

Other settings I adjusted;

  • Smooth Highlights 28, I used this to have a smoother gradation of the sky and took some of the gray out of it that can happen in highlights.
  • White Point: 2.000%, this actually has a much larger effect on overall brightness of the image than Luminosity ever has. Still not sure why they call it that.
  • Black Point: 0.092% just to bring back some of the shadows and blacks in the image
  • Gamma: 1.20, this brings the midtones where I want them. If you watch the histogram of your image, you will see a center peak in almost every image; this controls where that peak is. I prefer it slightly to the left of center, but in the end I look at my image more than the histogram to see what looks right. It’s just an interesting correlation you may like to see.
  • Saturation Highlights: 7.0 this controls the saturation on the highlights only. They appeared a bit washed out so I wanted to add a bit more to them. 

This got the image as far as I would get with the controls of Photomatix. The image now needs some more local adjustments so I will bring the Image into Photoshop or you could bring it back into Lightroom if that is where you like to work. 

This is the image as finished in Photomatix 4.1

For those of you using Nik HDR Efex Pro, I achieved similar results using these settings:

  • Compression: 43%
  • Saturation: 22%
  • Structure: 9%
  • Blacks: 12%
  • Whites: 19%
  • Warmth: 26%
  • HDR Method: Natural

Final adjustments in Photoshop 

The first thing I noticed, and should have noticed when shooting, is that the horizon line is not straight. We want to use the horizon line and not our vehicle as the reference; because we shot at an angle to it, the front should appear lower than the rear. So, using the measuring tool and Rotate Canvas > Arbitrary, I straightened the horizon. (Note there are other ways to get this done in later versions of Photoshop and in Lightroom.)

While I am at it since I have to crop the image anyway I will crop in a bit to eliminate some of the periphery of the background.

With our image now level and cropped, at this point I will zoom in to 100% and take care of any sensor spots that may be visible in the sky or other areas. It’s best these are taken care of now, and I use my Spot Healing Brush tool to fix them.

 Problem Areas 

Now it’s time to move on to examining the image and seeing what areas may need work.

 

The first thing I wanted to tackle was the sky and the mountains in the background. Since this is a large area, I decided to use a Curves adjustment layer and mask it just to that area. In the Curves box, I brought the highlights across a bit to lighten them, then used my eyedropper to determine where the mountains were on the curve and brought those levels down. I then painted out the rest of the image in the layer mask so that this adjustment only affected the sky and bright mountains. Just to tweak those mountains ever so slightly more, I burned the shadows on them just a bit.

The rest of the work was just dodging and burning the problem areas. Keep in mind that if you want to take down a highlight, you burn the highlights; you don’t add more shadow. Sometimes burning and dodging is not as intuitive as we want it to be, so you need to work on the right tonal range. To bring out the wheels and headlights more, I set the Dodge tool to Highlights and 10%.

After all my dodging and burning I finished off the image with a sharpening layer using Nik Sharpener Pro 3.0 set to Display: Adaptive Sharpening and 60%.

Final Image 

Here is the final image as I see fit

 

You’ll probably notice these are not HUGE changes to our image but rather just the finishing details that make it the best it can be.

Our Advanced Shoot HDR + OCF 

Finishing our OCF image off was a very similar process, so I don’t think I should bore you with that recap. The one thing that WAS very different was the beginning stage, when I was merging the files. As I said earlier, there were areas where I needed to get rid of the Blinkie-Blackies (for an explanation of Blinkie-Blackies, see this post).

These occurred because we had some bright highlights in the 0 exposure from the off camera lights. They didn’t occur in our +2 and -2 frames because the lights did not fire then (on purpose), so it caused a severe difference that the software didn’t know how to handle without some intervention by me.

So I selected the problem areas in the De-Ghosting section of Photomatix Pro 4.1 and selected the 0 image as the image to use to de-ghost.

After that, the workflow continued just as it did for the other shot: determining my problem areas and addressing them as needed.

This is the final HDR + OCF image. (You may note a difference in the truck’s color; this is because the color of the light was so different after twilight. I decided to keep that pink hue, as that is what was there at the time. I am not a big fan of over-correcting white balance to something that wasn’t there.)

Now you may ask, couldn’t you have done the same without OCF? Not really, because you have to remember one thing. This image was shot well past sunset. It was dark!… as I was reminded by the two packs of coyotes that started their twilight serenade…which led me to pack up and leave. We never would have gotten the specular highlights on the truck’s body without using some artificial light.

Now of course we could have, as we did, just shot earlier when that light was there. But the mountains in the background would have had a totally different look as we can see. 

So I hope this helps you to go out and try shooting automobiles. Again, you may want a totally different look to your HDR, as many people do. So do what you want in Photomatix to get YOUR desired effect. But then take a moment to analyze that result and see where some touch up is needed. You don’t need to do my workflow or my adjustments, just understand them and what does what.

 

Here are a couple more shots from the night with varying degrees of success

Hope that helps,

PT

 

Shooting Automobiles – Part 1 – The Shoot

Shooting Automobiles – Part 1 – The Shoot 

Today we are going to look at yet another subject that can benefit greatly from shooting and processing in HDR (High Dynamic Range): the automobile.

Automobiles are almost like shooting portraits outdoors: shot wrong, or at the wrong time of day, they can lead to disappointment. So let’s take a close look at what it takes to get a truly pleasing shot. Today we will focus on setting up the shoot itself, and tomorrow we will work on the processing.

I am also going to do this in two parts, a basic shoot and then an advanced set-up for those that may want to take this above and beyond. 

Location, Location, Location 

Shooting an automobile is as much about the background as it is the car itself. In the wrong environment the car will lose the appeal that we as photographers, or more importantly the client (classic car owner, auto manufacturer, etc.), desire. So first we have to find the location; that may be a twisting mountain road, along the shore of the ocean or a lake, in front of a cityscape day or night, or, in our case, the oft used desert dry lake bed.

For this shoot I chose the Clark Lake dry lake bed in the Anza-Borrego desert of California. My Favorite place to shoot. 

My choice of location and the desire to shoot HDR were confirmed today when I opened up Road & Track magazine and saw a shot of a 2012 Dodge Charger shot in HDR IN the Anza-Borrego desert. I regularly run into their team doing tests along the way from their Newport Beach headquarters to the desert. In fact, for inspiration for your shoot, check out the better automotive publications and even the websites of car manufacturers like Porsche and Lamborghini. They often have downloadable wallpapers with some stunning photography.

I chose the spot I wanted because, having been there and shot many times, I knew how the light would be at all times of the day. I knew at a certain time of day the lake bed would be pushed into shadow while the mountains behind it would still be lit, and nicely lit come the golden hour. One note when shooting near large mountain ranges: you need to know that sunset behind those mountains can occur 1–2 hours before actual sunset, depending on the altitude and your proximity to those mountains.

The good thing is that it provides for a very long twilight period where the sky provides plenty of light yet without any direct light on your subject. This is kind of like working with a giant softbox in the sky: plenty of soft natural light to make our subject look good. This lake bed has mountains on 3 sides, so I knew I had to be there at 4PM even though actual sunset was 6:15PM, but I was actually able to work past sunset with the aid of something else in the advanced setup of this tutorial.

Shooting earlier in the day is not desirable; the light is too contrasty with harsh shadows, and even if we could capture that dynamic range, it isn’t pleasing to our subject at all.

So we want to shoot later when our subject is not in direct sunlight. 

Place the vehicle in the location you want. Again, this may take some pre-scouting so you know where the light will be at what time and location.

Having a clean vehicle

This image is going to be sharp and full of detail, so a clean vehicle is of the essence. Any blemish will show up. But we may not have the luxury of a covered trailer to bring the vehicle to the location, and it may get dusty just getting there, or even on location if winds are high. If the vehicle is not your own, DON’T touch it. Leave it to the car owner to clean. Any scratch you put into a $10,000 paint job will be your fault.

 If the vehicle is your own or if the owner needs advice on how to clean the car on location, I recommend a California Car Duster to get the big stuff off and then wiping the car down with a Micro fiber cloth using a detailing lubricant such as Meguiar’s Car Detailer. This will prevent the tiny scratches you can get from wiping a car with a dry cloth.

The setup

Once the vehicle is clean and in place you can begin to play with your setup as the light gets where you want it. Don’t wait for the light to be where you want it to start setting up; the light will change very quickly, and you may only get 15 minutes with each lighting scenario, so you have to be ready.

You will need to determine angle and focal length for the shoot. In general we don’t want to shoot straight on to a side or the front or rear. We may want to have those shots as alternative angles but that won’t be our money shot. In general we want to be at a 30-45° angle to the side and encompassing either the front or the rear of the vehicle. Once we determine a general shooting area we need to consider the focal length we will shoot at. 

Focal Lengths

Again I will go back to the "portrait" analogy. Just as in shooting a portrait, we want to choose a focal length that is pleasing to our subject’s face or body. We don’t want any part particularly emphasized, especially if it makes the subject look odd. We want as much beauty as possible and to emphasize only the positive. For this shoot I chose my Canon 24-105mm f/4L IS. It gave me the range that best suited this shoot.

On my full frame Canon 5D, I like to use focal lengths of 50–70mm. On APS-C bodies this may be in the 35–50mm range on your camera. This gets me close enough to see the detail I want, yet still gives me the perspective I need to include a good amount of the scenic background. I have used up to 200mm at times, but remember that with a long focal length we lose some of the background due to perspective. If you are a fan of the Nifty Fifties (Canon 50mm f/1.8, Nikon 50mm f/1.8), this may be a great time to break it out.

I don’t like to use wider angle lenses because we start to get distortion in the size perspective of the parts of the vehicle closest to the camera, and that leads to a less pleasing look, such as this one shot at 24mm.

 Notice how the front fender and wheel are disproportionate to the rest of the vehicle. This would be akin to making a person’s nose look big in a portrait. Not good. 

Also note this is a standard photograph in the natural light. It doesn’t have the dynamic range we want, with a blown out sky and no detail in the mountains.

The same shot at 50mm provided a much nicer perspective for our vehicle. But again note how the standard image, while now getting the mountains better lit, plunges our vehicle into darkness. Good thing we know about HDR.
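The reason the 24mm shot exaggerates the near fender is working distance, not the lens itself: for the same framing, a wider lens puts the camera closer, and nearby parts of the car render larger relative to farther parts. Here is a rough Python sketch of that effect; the distances are illustrative assumptions, not measurements from this shoot.

```python
# For the same framing, a wider lens forces a closer camera position, and the
# closer you are, the larger nearby parts render relative to farther parts.
def near_far_ratio(camera_to_near_m, near_to_far_m=2.0):
    """How much larger the nearest part of the car renders vs. the farthest part."""
    return (camera_to_near_m + near_to_far_m) / camera_to_near_m

print(near_far_ratio(3.0))   # ~1.67x at a close, wide-angle working distance
print(near_far_ratio(6.5))   # ~1.31x from farther back with a longer lens
```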

We’ve got our location, we’ve got our vehicle placed there, we have it clean and we’ve chosen our angle and focal length. So now let’s shoot our HDR.

 Shoot!

I measured the dynamic range and knew it was well within the normal 3 shot, 2 stops apart shoot. So I set the camera to Aperture Priority and Exposure Bracketing and took 3 shots: 0, +2, -2.

These 3 shots get the midtones, the highlights in the sky and mountains, and the shadows of the vehicle all covered.

Tomorrow in part 2 I will cover in its entirety the processing of these images.

 

Advanced shooting 

The previous was our normal HDR shoot and will be perfect for almost everything we want to do. But there are conditions where we may need to take it to the next level. 

In photography we either need to "find the light" or "create the light." I wanted to shoot later into the actual twilight. The only problem with this is that I lose some of the natural softbox lighting I get earlier in the evening, especially low on the body and in the wheel and tire area. So to fix that…

HDR + OCF = OMG 

OK, so let’s decipher those acronyms. We know HDR, High Dynamic Range. OCF is Off Camera Flash. If two things are all the rage in photography right now, it is HDR and OCF. So why not combine the two? OCF is a way to tame dynamic range. You use the natural or ambient light to light your background and then provide strobe lighting for your subject, and in a lot of cases that is good enough to get the image you want. But of course not for me. I want to take it one step further.

Here is my Basic Set-up. Two Flashes on stands, One Canon 580EX and One Vivitar 285HV. And Cactus wireless triggers to fire the flashes remotely.  I used 42” Shoot-through Umbrellas (I added the second after I shot this shot on the Vivitar).  I also moved the flashes closer to the subject later to create a larger light source.

 

Of course we could do an entire lesson or website just on OCF, so I won’t. I will just show you some possibilities of using this set-up. But I will give you some pointers that can help.

 

  • Make your light source as large as possible. This means having the lights as close to your subject as you can without being in the shot, and also using a large diffuser to eliminate hotspots. This can be a softbox or an umbrella, or even shooting through a large diffusion panel; remember we are trying to evenly light a large object, so we need a lot of nice diffuse light.
  • Watch for reflections. We are shooting a highly reflective object, so we have to watch for distinct reflections of the lights. We do this primarily by using "angle of incidence equals angle of reflection": if the light is at the same but opposite angle to the subject as our camera, we will see a reflection. So if the camera is at a 45° angle to the car, we don’t want the light at an opposite 45° angle to it.

One lucky part of doing this shoot for HDR, which would be a bad thing in regular OCF shooting, is that the flash takes a second to recharge. In a normal shoot this would mean some missed shots if you shot too quickly. I used this to my advantage because I only wanted the flash to fire on the 0 exposure shot. If I quickly took the +2 and -2 shots afterwards, the flash did not have enough time to recharge and fire. If I really needed more time, I could easily shut the trigger off on the camera after the first shot.

To give you an idea what the shot looks like lit by the OCF flashes, here is an example. What should be noted here is that this shot was taken well past sunset and it was in fact quite dark. If you look at the shot settings you will see that it is ISO 400, f/10, and 1.6 seconds of exposure! But also note that the strobe light matches the ambient, which is something we would want.

Tomorrow we will look at this image processed with the other two for our final HDR. I know this doesn’t really delve into how to do OCF. It’s not meant to other than just give you a feel for it and see if it is something you might like to attempt. 

We still can get a great image using HDR alone so this may not be worth YOUR time. 

So be back tomorrow for part two of this tutorial. Post processing where I will take you step by step on how I finished two images and the final results.

I know, you don’t want to wait, but my typing finger is sore.

Later

PT

Color Management and Monitor Calibration

Okay, so you just shot and processed the most amazing HDR ever and you decided to get that 40” x 60” print for your wall, which set you back a few hundred bucks. But even though you sent the lab a shot of the Taj Mahal, you get back something that looks like a turd on a crap pile. Or you post that same image, which you think is all wonderful, on Google+, and people are wondering why you even posted such an underexposed shot.

What’s at work here? Poor color management and no monitor calibration. One of the most important and perhaps confusing parts of a photographer’s workflow, yet one of the most overlooked.

So today let’s examine this and make some sense of it so that you can get the ultimate results out of all your HDRs and even your regular photographs.

Color Management

Color management is what assures that the color we see is the same that others, and other devices, will see. What appears red to us should look red to others and print or display as red on other devices. If it looks red to us and ends up looking orange to everyone else, that would be a problem.

Color Spaces and Profiles

Image created by Jeff Schewe (CC)

A color space, or gamut, is the range of colors visible and the number and variations of hues within it. There are 3 main color spaces used in photography: ProPhoto RGB, Adobe RGB, and sRGB, representing color gamuts from widest to narrowest, in that order. There is also CMYK, which is used by photographers that send their images in for print on a printing press, but that would be a whole separate article in itself and most people don’t run into it.

While it may seem that we want to use the widest gamut possible, it’s not always the most desirable, and in the end it may not even be visible on either your monitor or the final print. Only some of the highest end monitors are able to display the full Adobe RGB color space, but as monitors increase in quality we may want to have the widest gamut possible.

There are occasions where using too wide a gamut can lead to problems later on down the line when we have to convert that gamut to a lesser one and that lesser one can’t contain all that the wide gamut produced, which can lead to posterization (banding) in our images.

Color Management in Photoshop

We choose a color space for our working profile or space in our editing program. In Photoshop this is under Edit > Color Settings.

Working our way down this screen, we would want to set our settings to custom so we can make the choices we want.

Working Space

Working space is the space we choose to work under in our editing program (in this case Photoshop). Under Working Spaces, the one we are concerned with is RGB, and this is where you will make a personal choice. I use Adobe RGB. I think it is the Momma Bear choice of just right. Others may choose to work in ProPhoto, others sRGB. Truth of the matter is, if you really don’t know, choose sRGB. It will get you in the least trouble along the way, and it really is just fine for most situations.

Does it really matter what I use?

Well, it can. What you decide to use as your working space doesn’t really matter that much. BUT what you use as the color space that is embedded into the image you send to someone else can make a difference, depending on the use of the final image.

If you are going to use the image on a website, you should have that file in sRGB. sRGB is the standard for the internet. Although some web browsers are capable of reading color profiles, most people don’t know how or where to even set them, so if your image is in sRGB you will have the widest compatibility.

If you have your images printed by a commercial lab, see what profile they use or, if not, whether they read embedded profiles. If you send an Adobe RGB file to a lab that requires the image in sRGB, you will get some color shifts, leading to a loss of quality in the print.

But what if I want to work in a different space than the internet or a lab?

You absolutely can. In Photoshop there is a feature called "Convert to Profile" under the Edit menu. Using this feature will successfully translate one color space to another. You may lose some color, but it will translate it in a way that things don’t go all wonky.

Lightroom will automatically do this for you, just make sure when you export the final image that you set the color space correctly for its use.
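For those who like to see it outside of Photoshop, here is a hedged Python sketch of the same idea using Pillow’s ImageCms module. The Adobe RGB ICC file name and the image file names are assumptions; Pillow only builds an sRGB profile itself, so you would point it at an ICC file you have on disk. The rendering intent parameter corresponds to the Relative Colorimetric choice discussed under Conversion Options below.

```python
from PIL import Image, ImageCms

# Pillow can build an sRGB profile itself; the Adobe RGB ICC file name below is
# an assumption -- point it at whatever wide-gamut profile you have on disk.
srgb = ImageCms.createProfile("sRGB")
adobe_rgb = ImageCms.getOpenProfile("AdobeRGB1998.icc")

im = Image.open("photo_adobergb.tif")   # hypothetical file saved in Adobe RGB

# Convert Adobe RGB -> sRGB, remapping the numbers so the appearance is kept,
# which is what Photoshop's Convert to Profile does. (On older Pillow versions
# the intent constant is ImageCms.INTENT_RELATIVE_COLORIMETRIC.)
converted = ImageCms.profileToProfile(
    im, adobe_rgb, srgb,
    renderingIntent=ImageCms.Intent.RELATIVE_COLORIMETRIC,
)

# Embed the destination profile so the next program knows what the numbers mean.
converted.save("photo_srgb.jpg", icc_profile=ImageCms.ImageCmsProfile(srgb).tobytes())
```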

Color Management Options

Once that choice is made we move down to the color management area, which is REALLY important. Correct settings here will assure that if you bring in an image you didn’t shoot or edit, it will be adjusted for the space you are working in. So the first thing I do is make sure that the three boxes for Profile Mismatches, Missing Profiles, and Pasting Profiles are all checked. This ensures that a warning dialog will pop up if you try to open an image with a profile other than your working space, and allows you to choose what you want to do if a mismatch occurs.

Under Policies, in the dropdown for RGB, you will see three choices: Off, Preserve Embedded Profiles, and Convert to Working Space. I have mine set to preserve embedded profiles because I work on a lot of other people’s photos. You may want to have yours set to convert to working space for simplicity. Just know that since you have the warning check boxes checked, you will still have a choice upon opening an image if you decide to do something other than the default action.

If you don’t want to be bothered at all with this, Leave the default action as Convert to working space and turn off all the warning check boxes. Then Photoshop will just automatically convert a different profile than what is your working space. I just like the options.

So what do I say if I get a Pop-up telling me of a Mismatch or Missing profile?

If you get the warning box for a Profile Mismatch:

You can make a choice that best suits you. I usually honor the embedded profile because most likely someone has sent me an image to work on and I want to work on it and send it back the same way.

If the Profile is missing:

If you can’t contact the person to ask what was used the best bet is to just play it safe and assign a space of sRGB.

Conversion Options

Under Conversion Options: Engine, leave it as Adobe (ACE); anything else is really for advanced users.
Under Intent (Rendering Intent), I use Relative Colorimetric to keep the relationships between colors on conversion. You could also choose Perceptual if you are going from a very wide gamut to a narrow one, but for the most part Relative Colorimetric works just fine.
Keep the box for Black Point Compensation checked, and I also check the box for Dither. Dither can reduce the banding I talked about earlier in our images.

Once you are done, click OK and your settings for your color management in Photoshop are set.

Color management in Lightroom

For some odd reason, Adobe wants to keep the color management within Lightroom a secret; in fact, they don’t even want you to be able to mess with it, so you can’t. But, through a little investigation, I will at least tell you how it works.

When you are viewing files in the Library module, they are displayed in Adobe RGB. When you are in the Develop module working on a RAW file, the working space is ProPhoto RGB. When you open a TIFF, JPEG, or PSD file, Lightroom will honor the embedded profile for that image. (We’ll talk about embedding and saving profiles in a bit.)

That’s the way it is, you can’t change it. But it really is fine.

For those of you that use Photoshop Elements, your choices are a little more limited, and quite honestly this is one reason it is not a professional editing program, even though it works really well for editing. In the settings of Elements, you basically have two choices: Optimized for Print (Adobe RGB) or Optimized for Display (sRGB).

Next step

Okay, so now that we have all our setting correct in Photoshop or Lightroom how do we assure that what you see on your monitor is what I see on mine. We do this by calibrating our monitors.

Monitor Calibration

Monitor Calibration is calibrating our monitor, through the use of a Hardware & Software device, to a certain standard. When two monitors are calibrated to the same standard they will look relatively the same. I say relatively because in the real world there still may be some slight differences.

Right out of the box, LCD monitors are almost always too bright and the color temperature is way too cool, approximately 9300K. Manufacturers do this for two reasons: to make the monitor pop on the showroom shelf (brighter appears better), and because these settings make it easier to read text, especially in a brightly lit room. So if you do a lot of text editing, this may be great. But we are photographers, and it’s not.

We could try to calibrate our monitors by eye, but our eyes are very bad at doing this. Everyone’s eyes see color and brightness a little differently; in fact, my left eye sees color a little warmer than my right. So instead of using our eyes it’s best to use a hardware calibration device that uses either a colorimeter or a spectrophotometer.

The two most popular are the i1 Display Pro by X-Rite and the Spyder 3 Pro by Datacolor. It’s kind of a Nikon/Canon thing, everyone having their favorite. I’ve used a Spyder for years now, but I don’t think I would have a problem using X-Rite’s either.

Choose a Standard

Just like we needed to determine a "standard" with our editing program’s working space, so too do we need to determine a standard for our calibration. There are two common standards used most often. The first calls for a white temperature of 6500K and a gamma of 2.2. The second calls for 5000K white and 2.2 gamma. Choosing one depends on the use of your images and also the need to match someone else’s standard, such as a photo print lab’s.

If you normally edit in an average lit room and your images are destined only for the web, 6500K, gamma 2.2 would probably be your best choice. In my case, however, I am trying to match the standard used by the labs that do my prints, and I also edit in an almost dark room as far as ambient light is concerned, so the best choice for what I am doing is 5000K, gamma 2.2. You may also see a luminance standard specified, which can range from 80 to 140 cd/m²; 100 is usually called for by most labs.
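If the gamma number feels abstract, this small Python sketch (an idealized model that ignores the monitor’s black level) shows what a 2.2 gamma target means: how much of the calibrated white luminance the screen emits for a given pixel value.

```python
# Idealized gamma-2.2 response: fraction of the calibrated white luminance
# (here a 100 cd/m2 target) produced for a given 8-bit pixel value.
def luminance(pixel_value, gamma=2.2, white_cd_m2=100.0):
    return white_cd_m2 * (pixel_value / 255.0) ** gamma

print(luminance(255))  # 100.0 cd/m2 -- the white point target
print(luminance(128))  # ~22 cd/m2  -- middle gray sits well below half of white
print(luminance(64))   # ~5 cd/m2
```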

Once you determine the standard you want as a target, you follow the directions for your particular device. It will display some standard colors and then compare what it knows each color to be with what it is reading, and adjust. You may need to make some adjustments to the monitor’s own settings for color temperature, gamma, brightness, and contrast. The profile the calibration generates can make adjustments to color and brightness, but it has its limits, so if you make the initial adjustments yourself, the calibrator may not need to work as hard. Make sure you calibrate the monitor under the conditions you will be editing under.

Once the calibration process is done, the calibration software will generate an ICM profile which will load when your computer starts up. Make sure it loads on startup; you usually can tell because at a certain point during startup you can see the monitor’s color and brightness change. Because of a change in the Windows operating system with Vista and Windows 7, there have been occurrences of the monitor losing its profile when it goes to sleep or even when a warning window pops up. For the most part this has been cleared up in updates to calibration software, but it still may occur.

Putting it all together

So now we have our settings set and our monitor calibrated. We can feel better that what we see is what other people that have calibrated their monitors and set their settings will see. But in order to let others know what standard you are actually working to, when we save an image we need to embed the color profile we worked in, or converted to, into the image data itself. We do that in the Save As dialog box by checking the box for Color and the profile used.

In Lightroom, we would do this on Export and choose the color space that we want the image exported with.

Now with the profile embedded in your image, not only will this make your workflow, color, and consistency better; anyone else that opens your file will know how it should look too.

Okay, I’m sorry but this is just way too confusing and complicated for me.

You’re right. Color management and calibration is one of the most confusing parts of photography for all photographers.

So let me just break it down into a couple of points; just try to follow them and you will still be way ahead of everyone.

* Calibrate your monitor. I know even this part is hard but try your best. It really is just that important. Some of the calibrators have a basic and advanced mode. Use the basic mode to just get you up and running quickly. There are some lower priced calibrators out there too
* Use sRGB as your working space and embed or export with that.
* Turn off the warning check boxes and just have Photoshop convert to the working space. You won’t have to worry about this in Lightroom

And just leave it at that. That is the best default, least worrisome of all options. Then go out and take some great photos and sleep at night.

Hope that helps,

 

PT

 

Shooting Architectural Interiors – Processing with Nik HDR Efex Pro

In this post we are going to talk about shooting and processing Architectural Interiors.

The reason why

Many of you have probably looked at ads for homes on real estate websites or in the books you pick up for free at the grocery store. The images are usually taken by the agent to save money, or maybe even taken by professionals…well, ones that just don’t know any better. They all have the tell-tale look. They were shot during the day with tons of light coming in the windows, and you get one of two things because of the wide dynamic range present: super bright, blown out windows and a properly exposed room with quite a bit of flare around those windows, or properly exposed windows and a room so dark you can’t tell if it is a bedroom or the kitchen.

Now, a good photographer would know better and shoot at night when you have more control over light, or they could bring in a huge amount of artificial light and get the scene to work. But the truth is, either the realtor has no budget for this big bucks photographer with a truck full of grip equipment, or they don’t have the time for shooting at night when the homeowners are home. Enter HDR.

Shooting

So let’s discuss how to shoot an interior using HDR, and then we will go over how to best process that shoot in Nik’s HDR Efex Pro.

Those of you that know me know I am not a big advocate of shooting a gazillion exposures. People think if 3 is good, 12 must be amazing. That just isn’t true. Sometimes it is a waste of time and computing power, and it may lead to lesser images because of registration errors, shooting images beyond the dynamic range that is there (which leads to soft or noisy images), and a host of other reasons. Some of my most successful landscape HDR images have been shot with only 3 exposures.

But for this lesson I am going to go against my usual wisdom, for two reasons. One is a matter of dynamic range. As much as we may have shooting outdoors, sometimes we can have even more shooting an interior. We may have EV 15 (Exposure Value) light coming through a window, yet we also may have light as low as candlelight in the room, or EV 4: 11 full stops of exposure. (For an explanation of Exposure Value, see this great explanation and charts at Fred Parker Photography.) So that is one reason we will want to shoot quite a number of exposures, just to cover the dynamic range.

Reason two: detail. As detailed as the outdoors is, we are viewing it from a distance, and you may not see all the nuances of texture that every object has in that scene. In interior photography, everything is closer and more defined, and with that we need texture that we can see and, well, almost feel. The nap of the carpet, the texture of the upholstery. We’re closer; we need to see that.

For this example I shot 9 exposures, 1 stop apart. Nine exposures because that was the dynamic range I measured; 1 stop apart because of the desire for tonal detail.

Determine your dynamic range

First I determined the dynamic range I needed to cover. I could not have done this just from where the camera was on the tripod, because the camera’s meter averages, even in spot mode; it would not have known the correct exposure for the window light. So I brought my camera to the window itself and metered the light outdoors. This was my beginning exposure, and no, I didn’t need to shoot an underexposure of the outdoor light, I just needed to get it right. This exposure was f/16, 1/125, ISO 250. I then moved to the darkest area of the room and metered there; this would be my final exposure, and I just needed to get between the two readings in one stop intervals (I didn’t do the math, I used the three-clicks-of-the-dial-equals-one-stop trick). My end exposure was f/16, 2 sec., ISO 250. It took 9 images to get from one to the other.
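Here is a minimal Python sketch (my own illustration, not from the shoot notes) that does the math I skipped with the three-clicks trick: it computes the EV of the two spot readings and lists the 1 stop shutter ladder between them. The printed speeds are exact doublings; the camera displays the nearest nominal values (1/60, 1/30, and so on).

```python
import math

def exposure_value(f_number, shutter_s, iso=100):
    """EV referenced to ISO 100: EV = log2(N^2 / t) - log2(ISO / 100)."""
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)

window = exposure_value(16, 1 / 125, iso=250)   # reading taken at the window
shadow = exposure_value(16, 2, iso=250)         # darkest-corner reading
stops = round(window - shadow)                  # ~8 stops between the readings

shutters = [(1 / 125) * 2 ** i for i in range(stops + 1)]
labels = [f"1/{round(1 / t)}" if t < 1 else f"{t:.0f} s" for t in shutters]
print(f"{window:.1f} EV down to {shadow:.1f} EV -> {len(labels)} frames:")
print(labels)
```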

Do YOU need to do 9 exposures? It depends on what the final destination for your images is. I did a test with this shoot and shot HDRs with 9, 7, 5, and 3 exposures. 9 had the best detail, 7 was very, very close, 5 was good, 3 was eh. If your image is just destined for web size on a realtor’s website or in one of those small grocery store magazines, 3, 5, whatever, you’ll be fine and far above those that shoot the windows blown out. But say your image is destined for a big glossy architectural magazine or a large print on the wall of an interior designer. You want the 9 shots.

So once I determined what I needed for dynamic range, I returned the camera to the tripod and composed the scene. Now, I like to turn on as many of the room’s lights as possible to give it a more natural, or "as lived in," look. I will try to have only one color temperature of light on, Tungsten, Halogen, or Fluorescent, because we will have enough problems with white balance with two different light temperature sources; we don’t need 5. For this shoot I was in luck, since the lights in the room were CFLs balanced for 5000K, or daylight.

My scene was set and I shot the 9 frames. Here they are in contact sheet form. (Click to enlarge) The image sequence runs from the bottom left up and down to top right.

Processing

Now that we have our images shot, It’s time to merge and tone map them into our HDR image.

For this shoot, I knew the right tool for the job was Nik HDR Efex Pro. Anyone that has seen the workshop in my garage knows I always have more than one tool for any job. For this job HDR Efex Pro was the correct tool because of the amount and quality of detail.

Selecting my 9 images in Lightroom 3, I exported them to HDR Efex Pro. In the first part of the tone mapping, I wanted to get my overall look, so I worked on the right panel and started with the following settings: Tone Compression 22%, Saturation 20%, Structure 4%, Blacks 6%, and Whites 8%.

This yielded me this image

Using Control Points

Not a bad starting point for overall balance. But the windows just aren’t right. This is going to be hard for any HDR program to get right because the software will look for the brightest points  and the darkest points and put them where it thinks best. It just gets them wrong here. All is not lost though, enter the beauty of Nik HDR Efex Pro’s Control Points. I placed 9 control points in this image. In the windows, on the Photos on two walls, on the ceiling and on the fireplace. I adjusted these all individually to get the best balance for all the areas and most importantly,  to bring back the detail to the windows.

Here is what the control points looked like, and also how it looks when you click on a control point’s mask so you can really see all the areas the control points are affecting.

 

Once I had this as good as I could get it, I took the image into Photoshop for some final touches, and this yielded our final image.

I wish you could see the detail in the full resolution file. The grain of the leather and the nap of the carpet are incredible, and the print this made looked just as the room did. Truth be told, if I were going to submit this to a high end magazine I might work the windows even further, which would have taken a lot of time and may not be worth it just for realtor submissions.

Getting the correct White Balance

As I spoke about earlier, we also need to consider white balance when working with interior shots.  In a big budget shoot, we could of course  use some gels on all the different light sources to make them all the same color temperature. But we may not have the time nor budget to do such things. We could change bulbs. But most homeowners probably don’t want you messing around with all the light fixtures in their home. So let’s just go simple.

In most instances, I recommend doing a white balance for the predominant light source in our scene. But let’s look at the room I shot here and see what the real life experience will be. This also is why shooting RAW is so important; besides giving us the ultimate dynamic range and color latitude, it also allows us to go in later and easily change the white balance of our shoot.

So for this image, the predominant light was outdoor light coming in from the windows, along with 3 sources of incandescent light as accents only. The day was cloudy and rainy, so setting the white balance for Cloudy yielded these results.

Not bad, and since I am a landscape shooter I tend to like warm light, but I think this is just too much.

Let’s try adjusting for the Tungsten Light and see what that returns

Yeah, that’s not any better; in fact I think it’s quite a bit worse. The lights themselves look good, but the tone overall is much too cool.

Hmmm…OK. Let’s try just as it was shot with the Auto-White balance

To me this is the best of all worlds and the best balance that could be had. Comparing a print of this image to the actual room that day was pretty much spot on for "as the eye sees," my favorite reference. Funny, I guess Auto White Balance doesn’t suck as much as some seem to think.

I hope this has helped you understand how to shoot and post-process architectural interior images. Maybe it could even provide you with a new income stream selling to real estate agents, who need every tool they can muster in such a down market.

Equipment used for this shoot: Canon 5D, Canon 17-40mm f/4.0 L, Canon remote control, Manfrotto tripod and head, and of course Nik HDR Efex Pro.

Hope that helps,

PT

I’m sorry but there is just something inherently wrong with camera design

* Unfortunately this post will provide more questions than answers, but they need to be asked

On Sunday I went to the beach and didn't intend to shoot, but it was such an absolutely perfect day, with blue skies, white puffy clouds and intense emerald-colored water, that I knew I just had to shoot even if it was mid-day. It was that pretty.

Normally I wouldn't shoot mid-day, for a number of reasons all of us photographers know. But I could see with my eyes how beautiful it was, and it was surely something easily captured by my camera. To cut down on the glare of the beautiful emerald water, I placed a B+W circular polarizer on my 17-40 lens to keep the glare of the sun on the water out of the shot, which also helps tame the dynamic range by removing those specular highlights. One look through the viewfinder and I saw pure magic. The scene in front of my camera was gorgeous.

Snap

Normally I don't go right home and download my images; it may be late that night or the next day before I review them. But I had the time, and I was anxious to review the images that had looked so beautiful through the viewfinder.

So I downloaded them, opened the images and… What? Are you @#$%ing kidding me? That's what I got?! REALLY?!


I mean, it's OK, it's visually and compositionally interesting… but that's NOT what I saw.

The clouds are dingy and dull, the sky is a gray/blue mix. Where is that emerald water? Why do the rocks look almost monotone and desaturated? I hate it. I would never show this shot to anyone.


So what DID the scene look like? Well… this:


Now, on this day I didn't set up to shoot HDRs. I assessed the conditions and saw no need to, and if there isn't a need, I won't. Was it a full-dynamic-range scene? Absolutely. But was it beyond what should have been capturable with a single exposure? Absolutely not. So I shot one single exposure of the scene. How, then, did I end up with the second shot above? By using a little-talked-about feature of Photomatix Pro: single-exposure pseudo-HDR with tone mapping. In the past I haven't been that fond of pseudo-HDRs, whether that meant making different RAW conversions and combining those, or any of the number of pseudo-HDR programs out there (Topaz, Lucis, etc.). I believed that if you wanted to do it, you should do it right and take the exposures.

But in this case I didn't want to increase the dynamic range at all. The camera captured it fine, as the histogram confirms. The exposure is fine too, within LESS than 1/3 of a stop of perfect. It's not the range from bright to shadow that is all wrong, it's what's in between. But by tone mapping the single image in Photomatix Pro I was able to correct for what the camera just could not get right. But WHY do I have to?
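For the curious, here is a toy example of what "fixing the in-between" means numerically. This is not Photomatix's actual algorithm, just a simple gamma curve that leaves pure black and pure white where they are while redistributing the midtones, which is why the histogram endpoints barely move even though the image looks completely different:

```python
import numpy as np

def tone_map_midtones(img, gamma=0.7):
    """img: float array in [0, 1]. gamma < 1 lifts the midtones,
    gamma > 1 darkens them; 0.0 and 1.0 map to themselves."""
    return np.clip(img, 0.0, 1.0) ** gamma

# 0 and 1 are unchanged, but a middle gray of 0.5 becomes ~0.62.
sample = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
print(tone_map_midtones(sample, gamma=0.7))  # [0.    0.379 0.616 0.818 1.   ]
```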

This is something I have seen for a good portion of my over 40 years of shooting. In fact, in the mid-'90s I stopped shooting for a full year because I could never capture with my camera what I saw. But at the end of that year I saw what had become the problem: the popularity of 1-hour photo labs. With their automated processing, there was no real person looking at the images to determine the correct exposure for the print, and that automation put distance between what was shot and what the final print looked like. This became evident to me when I shot a scene of Christmas lights right at dusk and the prints I got back looked like I had shot it mid-day. Was I that far off? Fortunately for me, the lab that developed them printed on the back the adjustments they had made to the image. The light bulb went off when I saw the huge amount of exposure compensation they had applied to that print. I had a perfect exposure; they messed it all up. So I returned to shooting.

But even with that, and even with better labs that were more hands-on with quality technicians, there still were problems. I still often would see beautiful things through the viewfinder and never be able to get them on film. I had, and still do have, high hopes for digital. Eliminating that person at the photo lab who knew nothing about the shot yet controlled its development was hugely freeing for me. I could make sure that what I wanted was what was printed. But still, what I shot was not what I had straight out of the camera. And I blame this on the engineers who designed digital cameras, because I believe the basis they used for developing the digital sensor was film as the model for how it should react, and NOT our eyes.

Now, I do realize that the above statement is not wholly true. We do know that part of the sensor design was based on the fact that the human eye is more sensitive to green than it is to red or blue. This is why there are twice as many green photosites on a sensor as blue or red ones. I also fully realize that no camera sensor or film has the dynamic range of the human eye. I get that, I know that. But there is something inherently wrong in the in-between, in the way cameras place the brightness and color values of everything between the extremes compared with how our eyes do. Is it because we try to make sensors linear when in fact our eyes are anything but?
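Two of those facts can be shown in a few lines of code. This is a sketch only, not a claim about any particular camera: the Bayer pattern really does have two green photosites per red and blue one, and perceived lightness (CIE L*) really is roughly a cube-root function of the linear luminance a sensor records:

```python
import numpy as np

# 1) A Bayer RGGB tile, repeated across the whole sensor.
bayer_tile = np.array([["R", "G"],
                       ["G", "B"]])
print({c: int((bayer_tile == c).sum()) for c in "RGB"})  # {'R': 1, 'G': 2, 'B': 1}

# 2) Linear luminance vs. perceived lightness (CIE L*).
def cie_lightness(Y):
    """Relative luminance Y in [0, 1] -> CIE L* in [0, 100]."""
    Y = np.asarray(Y, dtype=float)
    return np.where(Y > 0.008856, 116 * np.cbrt(Y) - 16, 903.3 * Y)

# Half the light is nowhere near "half as bright" to the eye,
# and 18% gray sits near the perceptual middle.
print(cie_lightness([1.0, 0.5, 0.18]))  # ~[100, 76, 49]
```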

I don't know the answers to these questions. I'm neither a scientist nor an engineer. I don't know much about how our eyes physically work, nor do I know a lot about optical design or sensor or film science. I just know something isn't right.

As I sat on my couch this morning contemplating this article, I looked at the morning light coming into my house, lighting up my family room and my living room. My camera was beside me, as was my laptop. Could I capture an image at just that moment, immediately load it onto my calibrated laptop, and have it look like what was before my eyes at that moment?

Here, let's find out.

The shot


Was this what I saw?

No. There is barely any color in the walls, and they are a sandstone tan.

The red artwork on the far wall is dark and doesn't look anything like what I am looking at.

Is there a red pillow on that couch? I clearly see it with my eyes. Where did it go?

To my eyes, the sun is really showing off the color of the wood in the TV stand. In the shot it looks like little more than black, except for the foot.

Where did everything I see right now go? There's nothing wrong with the exposure; the dynamic range was again captured, except for some slight blowout on the floor molding from the sunlight. There was plenty of light coming into the room. Why is it NOT what I see?


And yet a quick, and I mean very quick, run through Photomatix tone mapping got me this:

 

This looks EXACTLY like what I see before me. Why does my camera get this sooo wrong, yet Photomatix gets it so right? And again, I don't think it's a dynamic range problem. The histograms for these two images are very similar at each end of the spectrum. It's what's in between that is handled differently.


So, like I said in the beginning, this post really doesn't provide many answers, if any. It is more just questions. The only thing I know is that I WILL use single-image processing in Photomatix much more often. If I know that what I got just isn't what was there, at least I have some recourse for the end result.


Hope that… ummm… helps?


PT


Anatomy of a shot – Harbor Lights

So I recently did a shoot at San Diego Harbor. I was looking for a city-lights shot with some boats in the scene.

Trying to do this with normal photography provides enough challenges by itself: capturing the dynamic range between the water and the building lights, and using a shutter speed fast enough to stop any motion in the boats on the water, which calls for a higher ISO that can translate into more noise in the image. Even if you didn't have to worry about the boat movement, capturing city lights can be difficult because it may require long exposures, and digital sensors suffer noise problems with long exposures.

But on top of this I wanted to do HDRs, which added more problems: now I had to worry about the movement of the boats not just in one image but across 3 images with very different shutter speeds. Even if one of the exposures had a shutter speed fast enough to stop motion, I surely couldn't get three that did AND have them all catch the boats in the same spot.

For this shot, and for most city-light shots, I normally wouldn't want to shoot when it is fully dark. I try to shoot during the dusk period, from sunset to about a half hour later (dusk lasts longer the farther you are from the equator). But I had already used up that period trying to get the other shots I wanted for the evening, and I did not see this shot till later, on the way back to my truck.

It was a difficult shoot and also a lot of work in post, but I think I accomplished what I wanted: to show it as it was in person. So let's break it down and see just what it took.

Of course, as always, my Canon 5D was mounted on a sturdy tripod. Using the self-timer and Av mode, I fired off three shots.

Because of the darkness and the need to stop the boats' motion in at least one of my frames (hopefully the 0EV one), I set my ISO to 500. Using a depth of field calculator (there are some great phone apps for this), I determined that with my 24mm lens and my distance to the subject I could shoot as open as f/5 and still maintain a DOF from about 6 feet to infinity (the hyperfocal distance, for those who follow this stuff, was 12'7", and everything in my frame was past that distance). Being able to shoot that wide open helped immensely, since it let me use a much lower ISO than I otherwise would have needed.
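If you are curious what those calculator apps are doing under the hood, here is a sketch using the standard hyperfocal formula, assuming a circle of confusion of about 0.03 mm for a full-frame body like the 5D. It lands within an inch or so of the 12'7" figure above:

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm=0.03):
    """Hyperfocal distance: focus here and the DOF runs from ~H/2 to infinity."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

MM_PER_FOOT = 304.8
H = hyperfocal_mm(24, 5.0)
print(f"Hyperfocal: {H / MM_PER_FOOT:.1f} ft")                            # ~12.7 ft, about 12'8"
print(f"Near DOF limit when focused at H: {H / 2 / MM_PER_FOOT:.1f} ft")  # ~6.3 ft
```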

So here are the three images I shot, at shutter speeds of 1/6, 0.6 and 2.5 seconds respectively.
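For anyone checking the math, those shutter speeds work out to roughly 2-stop steps, since in Av mode the aperture (and my ISO) stayed fixed while the camera varied only the shutter speed:

```python
from math import log2

speeds = [1 / 6, 0.6, 2.5]  # seconds; the 0.6 s frame is the 0EV exposure
for fast, slow in zip(speeds, speeds[1:]):
    print(f"{fast:.3g}s -> {slow:.3g}s: {log2(slow / fast):.2f} EV")
# ~1.85 EV and ~2.06 EV, i.e. roughly a nominal 2-stop bracket spacing
```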


My next step was to take the three images into Photomatix Pro 4.0 and use its powerful de-ghosting tool in the first menu (see my tutorial on how to do that HERE).

I selected only the dinghies and used the 0EV shot as the de-ghosting source. Even though that image was 0.6 seconds, the bay waters were still enough that I didn't get any motion blur in the dinghies. With that area de-ghosted, I moved on to tone mapping the image.

Using my usual tone mapping preset recipe of Strength 70, Saturation 70, high Smoothing and Gamma -1.20, I finished out in Photomatix with this image:


OK, that looks pretty nice: some good detail, fairly nice range, lots of color. But the truth was, that wasn't how I saw it. The colors were way too poppy, and I didn't have the detail in the buildings I really wanted, so I needed a solution that fixed both problems. I tried desaturating the color and various levels and curves adjustments, but they really didn't fix what I wanted, or if they did, they caused other problems.

So to fix my "color" problem I turned to our old friend… black & white. Black & white is great for detail and contrast, so I am going to turn to it for some help.

In Photoshop, I opened the image and made a duplicate layer, which I converted to black & white using a gradient map process (Google it). The result was this:


Perfect, just what I wanted. Now here is where the magic comes in. I duplicated the bottom color layer again and moved that copy above the black & white layer. Magic time: I then changed the blend mode of that top color layer to "Darken".

Wow, now that was exactly what I was looking for. The colors, while much more muted, were faithful to what was actually there. The sky became more of the black it was at that time of night, and the detail and intensity of the skyline buildings came back to where they should be, as they were to the eye.
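If you want to see what that layer stack is doing to the pixels, here is a rough sketch. It stands in a plain luminance conversion for the gradient map (a simplification) and uses a per-channel minimum, which is what Photoshop's "Darken" blend mode computes:

```python
import numpy as np

def to_bw(rgb):
    """rgb: float array (..., 3) in [0, 1] -> grayscale, still 3-channel."""
    lum = rgb @ np.array([0.299, 0.587, 0.114])  # classic luma weights
    return np.repeat(lum[..., None], 3, axis=-1)

def darken_blend(top, bottom):
    """Per-channel minimum: wherever the top color layer is darker it wins;
    elsewhere the black & white layer's tone shows through."""
    return np.minimum(top, bottom)

color = np.random.default_rng(0).random((4, 4, 3))  # stand-in for the tone-mapped image
result = darken_blend(color, to_bw(color))
print(bool(np.all(result <= color)))  # True: the blend can only mute, never brighten
```

That "can only mute, never brighten" behavior is consistent with why the over-poppy colors calm down while the darker sky and building tones from the black & white layer show through.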

Then, after taking the image into Neat Image to clean up a little bit of noise on the boats, and after some sharpening with low pass filter sharpening (I will have a quick tutorial on how to do this soon), my image was done. Just what I saw that night.
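Until that tutorial is up, here is a generic sketch of the "blur and subtract" family this kind of sharpening belongs to (essentially unsharp masking built from a low-pass copy). It illustrates the idea, not the specific recipe from the upcoming tutorial:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lowpass_sharpen(img, sigma=2.0, amount=0.8):
    """img: float array in [0, 1]. Blur a copy (the low-pass layer),
    subtract it to isolate detail, then add a portion of that detail back."""
    low = gaussian_filter(img, sigma=sigma)   # the low-pass (blurred) copy
    detail = img - low                        # what the blur removed
    return np.clip(img + amount * detail, 0.0, 1.0)
```

Whatever variant you use, run it after noise reduction, as above, or the sharpening will amplify the noise you just removed.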