Does The Color Temperature Of Your Light Affect Skin Tones in Photography?

OK, we’re going to get nerdy here. There are many different kinds of artificial lights, and then there is the light from our sun, commonly referred to as daylight. If you want to read up on light color temperature, Wikipedia has a great article.

Artificial lights generally fall into three classes: warm, neutral, and cool.

Warm lights include tungsten and incandescent bulbs, candles, and the like. Neutral lights are what you generally see in most retail stores, which nowadays usually means fluorescent or LED lighting; neutral light usually sits between 3800K and 5000K. Cool lights are generally any light with a color temperature above 5000K.

For photography, if you’re shooting ambient light, the color temperature is going to be all over the place. If you’re shooting with continuous “hot lights,” it’s generally on the warmer side (2800-3200K). If you’re shooting with the newer LEDs, the good ones are tunable, but otherwise they’ll be on the cooler side. And if you’re shooting with a speedlight or studio strobe, it will most definitely be on the cooler side, usually at least 5500K, but sometimes upwards of 6500K depending on the quality and power setting.

This all leads to an interesting question: when shooting people, is there a best color temperature that renders more pleasing skin tones? Searching Google turns up lots of articles on how to light for skin tones, but very little on whether to use warmer or cooler color temperatures for your lighting. That said, pros generally lean towards warmer color temperatures (i.e., tungsten) because they tend to look better. This is why much of the indoor lighting in our homes is pretty warm.

I thought I would put this to the test and take some pictures using a studio strobe at its native color temperature, which is advertised to be ~5600K, and then the same strobe gelled with a CTO gel to bring the color temperature down to roughly what a tungsten hot light would be, ~3200K, and compare the two. Note: the color temperature will vary a little depending on the power level of the strobe, but it should be roughly correct.
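As an aside, you can sanity check the gel math yourself. Gels are rated in mireds (one million divided by the Kelvin temperature), and the arithmetic fits in a few lines of Python; the +160 mired full CTO figure in the comments is a typical rating, so treat it as a ballpark number:

```python
# Color correction gels are rated in mireds: 1,000,000 / Kelvin.
def mireds(kelvin: float) -> float:
    return 1_000_000 / kelvin

# How big a warming shift gets a 5600K strobe to 3200K?
shift = mireds(3200) - mireds(5600)
print(f"required shift: +{shift:.0f} mireds")  # about +134 mireds

# A full CTO gel is typically rated around +160 mireds, so on a 5600K
# strobe it actually lands a touch warmer than 3200K tungsten.
print(f"5600K + full CTO: ~{1_000_000 / (mireds(5600) + 160):.0f}K")
```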

So let’s start with a standard bare strobe with no color correction:

Here’s the color checker chart:

[Image AB7A5458: the color checker chart under the bare 5600K strobe]

So we can see we’re correctly white balanced for the light in our software. Now let’s see what yours truly looks like under the exact same light with the exact same white balance in software:

[Image AB7A5457: yours truly under the bare 5600K strobe]

I look like a Caucasian guy. If anything, my skin renders a bit on the flat or gray side, partially because I’m one of those awful people with a neutral skin undertone. I am, however, cursed with a pretty bad complexion, so parts of my face render as blotches of color. I guess this is what happens when you’ve been a lifelong sufferer of super severe eczema.

Let’s look at the same light, but gelled to 3200K:

[Image AB7A5459: the color checker chart under the strobe gelled to 3200K]

Again, we can see that we are correctly white balanced in our software. Let’s see what I look like under this same light:

[Image AB7A5455: yours truly under the strobe gelled to 3200K]

Hmm… that’s interesting.

Let’s look at the two color checker cards side by side. The one on the left is 5600K; the one on the right is 3200K:

[Image: the two color checker cards side by side, 5600K left, 3200K right]

We can see that even though the white balance between the two matches, the colors don’t render exactly the same way.

Let’s look at me side by side; again, the left is 5600K and the right is 3200K:

[Image: my face side by side, 5600K left, 3200K right]

Again, very interesting. Keep in mind some of the color you’re seeing is due to my very unfortunate complexion.

So does the color temperature of your light affect skin tones in photography? I would say yes. Which one looks better? That’s really more subjective than anything else.

The important takeaway is to be cognizant that some people might look better in warmer light and some might look better in cooler light, and to adjust accordingly.

Till next time.

Special: 10% Off Session Fee for New Customers

For the month of February 2019, Simple Photography Services is running a special. All new customers who book a portrait or headshot session with me will receive a 10% discount when they pay for their session.

This is for new customers only who have never shot with me before, and applies to headshots, individual portraits, couples portraits, and kids/families/groups sessions. Sessions can be either in my studio or on location.

Book your session today!

The Humble Egg

A walkthrough of how a still life composition of an egg is lit with studio strobes.

The Egg

It’s been a while since I’ve shot anything in the studio outside of film profiles, so I thought I’d spend a couple of hours today and shoot a proper fine art/still life in black and white.

This was shot digitally, however, I also went ahead and shot a number of images on Ilford HP5+ 120 roll film.

This is obviously lit, so as a learning exercise, let’s walk through the lighting setup.

I used two lights. The first and most important was the base fill light. I took full advantage of the super reflective white wall behind the camera and turned it into a giant fill light by pointing a Paul Buff White Lightning X1600 strobe at it with an umbrella reflector that throws light 180 degrees. This was metered to f/2.0.

With that done, I then took another Paul Buff White Lightning X1600 strobe, mounted a 36 inch Octo-box on it from LumoPro and placed it camera left. I positioned and rotated it until I had the light feathering right across the background and then metered it to f/8.0.
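As a quick aside, if you want to double check the key-to-fill ratio those meter readings imply, the stop math is simple (a small sketch; the helper function is just mine):

```python
import math

# Each full stop multiplies the f-number by sqrt(2), so the difference
# in stops between two apertures is 2 * log2(N2 / N1).
def stops_between(n1: float, n2: float) -> float:
    return 2 * math.log2(n2 / n1)

stops = stops_between(2.0, 8.0)    # fill metered f/2.0, key metered f/8.0
print(f"{stops:.0f} stops apart")  # 4 stops
print(f"{2 ** stops:.0f}:1 ratio between the two lights")  # 16:1
```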

For the composition, I kept it simple. A basic white egg cup, an egg, and a seamless white paper backdrop from Savage Universal.

Once that was done, I shot it at f/16.0 on a tripod with an APS-C camera and a 50mm prime lens. The make of the camera/lens system is pretty irrelevant, as you can do this with pretty much any camera that has a flash hot shoe and interchangeable lenses.

If you want to do almost the same thing on the cheap, you can substitute smaller, significantly less expensive portable speedlights for the real studio strobes. Put the main light behind a nice big photo umbrella, though you’ll still need a white wall. For the seamless backdrop, you can substitute white poster board for a significant cost savings; however, it tends to have a shinier, almost glossy finish compared to the matte finish of real backdrop paper from Savage, so keep that in mind. You may need to use a flag (black foam board) to feather the light on the backdrop.

Enjoy!

Ilford FP4+ Film Review Published

Just a quick note, I’ve completed and published the tech sheet for Ilford FP4+ Film.

If you send your film in to Simple Film Lab, you can now see what you’ll get if you shoot Ilford FP4+ film and send it to us to process and scan. Check the review out here. I’ve included a characteristic curve, a slideshow of sample images, and downloadable sample Adobe Digital Negatives showing what you could get if your film was handled by us.

Enjoy!

Coming in 2017: Simple Film Lab

Photographic film has taken quite a beating in the last decade or so. Film labs have been closing left and right for some time now. This is unfortunate, and something I’ve struggled with myself, as I’m primarily a film photographer.

This led me down the path of processing and digitizing my own film, and of developing tools to do so that also give me my images in a way that is complementary to film.

I’ve finally reached a point where I can offer my services to other film photographers.

A Few Things To Note

You shoot film because of the color and look you get with it, not because it gives you a lot of resolution or is inexpensive. So, what do I bring to the table with Simple Film Lab that is better than the other film labs out there? If you look at what other labs charge and what I will be charging, I’m certainly not less expensive from a purely monetary standpoint. I also won’t be delivering the highest of resolutions, either.

In order to really take advantage of what film has to offer, one must beef up the entire imaging chain. Almost every lab I’ve looked at and tried out scans with a Noritsu or Frontier scanner and delivers JPEGs. You hear a lot about how a Frontier scanner delivers this or that kind of color, and how some film scanner is beloved by a certain type of photographer. OK. I mean no disrespect to other film labs; however, a process that delivers JPEGs of film scans to customers is not doing the customers, or film, any favors.

It’s all about the color. While I do have a dedicated 35mm film scanner that is very recent and can scan 35mm film at really high resolutions, and a very high resolution flatbed scanner that can scan 120 film at crazy high resolutions, I also have a way to digitize film using a very controlled light source, very good optics, and a reasonably high resolution imaging sensor. The setup I prefer could be called a DSLR film scanner, but it’s actually more complicated than that. Photographic film by its nature has very high dynamic range and a lot of color. When you digitize film, what you are essentially doing is taking a picture of the film emulsion. You can take that picture with a dedicated film scanner, a flatbed scanner, or a digital or film camera. It’s what you do with the digitized image afterwards that makes all the difference.

Typically, the color negative is inverted by either the film scanner itself, the scanning software, or manually in Adobe Photoshop. While one can get good results that way, I’ve brought my skills as a computer programmer to bear and developed code that significantly beefs up the entire imaging and color chain, digitizing to full 64 bit floating point in a linear color space. What does that even mean? It means that the process of turning the color negative into a color positive, along with the subsequent color modifications needed to get a usable image, happens in very high resolution 64 bit floating point linear color space. I’d love to be able to deliver 64 bit floating point linear light images to customers; however, practically no software that customers have access to supports that, so the next best thing is 16 bits per sample (48 bits per pixel) TIFF files in the ProPhoto color space.
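My production code isn’t something I’m publishing, but purely as an illustration of what “inverting a negative in linear floating point” might look like, here is a toy sketch. The function, the film-base division, and the simple reciprocal inversion are my own stand-ins for this post, not Simple Film Lab’s actual pipeline:

```python
import numpy as np

def invert_negative(scan: np.ndarray, film_base: np.ndarray) -> np.ndarray:
    """Toy negative-to-positive inversion in linear float64.

    scan: linear-light RGB scan of the negative, shape (H, W, 3), in (0, 1].
    film_base: linear RGB of the unexposed film base (the orange mask).
    """
    # Cancel the orange mask by dividing out the film base color.
    masked = scan.astype(np.float64) / film_base
    # Film density is logarithmic, so in linear light the inversion is a
    # reciprocal rather than a simple "1 minus pixel".
    positive = 1.0 / np.clip(masked, 1e-6, None)
    # Crude normalization; a real pipeline would set per-channel black and
    # white points, apply tone curves, and convert to an output color space.
    positive -= positive.min()
    return positive / positive.max()
```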

Because the high precision digitization workflow requires a calibrated film profile for every film we support, Simple Film Lab will not accept just any film for processing and digitizing. While we can process pretty much any C-41 film (we use standard Kodak C-41 chemicals), the service we offer is coupled together: when you send film in, it is to be both processed and digitized. The cost, therefore, might seem high per roll, but when you factor in that you’re getting processing plus a very high quality film scan, delivered as 48 bit TIFF files in the ProPhoto color space with enough resolution to make 16×24 prints, it’s worth it. At least, we think there’s a market for it.

The Plan

The plan is to start accepting processing orders for Kodak Ektar 100 film in 35mm and 120 roll in the first quarter of 2017. We’ll add Kodak Portra 160, Portra 400, and Portra 800 in 120 roll film in the second or third quarter, then 35mm Portra 160, 400, and 800 later in the year if there is demand, along with the 4×5 sheet versions at some point in the second half of 2017. We’re also keeping things simple in terms of the resolutions we offer: there will only be two options, standard resolution and custom scan. Standard 2:3 resolution will be 7200×4800 pixels, with other aspect ratios having 4800 pixels on the short side; a custom scan is exactly what it sounds like, a scan output to your specifications. The target price for standard processing and scanning will be $20 per roll, not including shipping, and custom scans will be priced according to how much time and effort Simple Film Lab has to put in. At the end of the day, it all boils down to image processing time and who is spending that time.

All film will always be processed with fresh chemicals, and the target turnaround time will be 5-7 business days. As things pick up, we’ll add additional films to the catalog we support. There are a couple of emulsions that are pretty popular with wedding photographers (Fuji 400H, looking right at you), and we do plan to support them; however, that comes with some challenges. Most labs that cater to processing and scanning that film use Fuji Frontier scanners and already deliver pretty good results, so the biggest issue will be getting those customers to move away from their current labs and start using Simple Film Lab instead.

Additionally, you can expect very good customer service. As my own customer, I have very high standards, and I’m a firm believer in holding my service to my customers to those same standards. Because Simple Film Lab is a small operation, as a customer you’ll be dealing directly with me, and it will be my eyeballs that look at every single one of your images before they’re sent to you.

In short, Simple Film Lab is the Film Lab that I would want as a customer. Keep watching this space, good things are on the way.

How Much Image Resolution Do You Need?

It depends. Don’t you hate those kinds of answers? Unfortunately, there is no simple answer, because you have input resolution and output resolution, and sub-types within those two high-level categories.

Cut to the chase

If all you do is share your pictures online or make 4×6, 5×7, or 8×10 prints, then pretty much any camera made within the last 5-10 years will give you more image resolution than you need, and you shouldn’t even have to think about it. Carry on snapping away. If you want to learn why, read on.

Keeping it simple

In the interest of distilling information down to useful bite size chunks, let’s start with output resolution.

Output resolution

The vast majority of us don’t actually print our pictures, however, print resolutions are typically a lot higher than screen resolutions for a given amount of surface area, so we’ll talk in terms of print resolution.

For this discussion, we’ll equate dots and pixels as the same thing, so terms like dpi (dots per inch) and ppi (pixels per inch) mean the same thing. Likewise, we’ll say a pixel and a dot are a single discrete full color entity that we can see. It may be made up of one or more smaller sub-parts, like a red element, a green element, and a blue element in a computer display, or multiple ink droplets on a piece of paper, but for this discussion it’s one discrete visible full color unit.

It turns out that the human eye actually has a finite amount of resolution, in terms of how we would describe it if it were a digital sensor. If we print fine detail onto a piece of paper and look at it up close, the magic number where we stop being able to discern that detail sits right around 300 dots per inch, or 150 line pairs per inch on the paper (or screen). That means if we take 150 black lines that are 1 pixel wide and 300 pixels tall, and 150 white lines that are also 1 pixel wide and 300 pixels tall, alternate between them, and print them so that all 300 lines fit into a 1×1 inch square, the result would actually look like a gray square to most of us rather than alternating black and white lines. This is why most magazines print at 300 dpi, and why your iPhone’s Retina display is also about 300 dpi. Spatially speaking, our vision starts to poop out beyond that, and adding more image resolution generally does not make the picture have more detail or look sharper to our eyes.

So, with this 300 dpi number, it’s pretty easy to do some simple math and extrapolate out how much output resolution we need for the various ways we look at our pictures: for a 4×6 inch print, 300 dpi times 4 inches is 1200 pixels on the short edge, and 300 dpi times 6 inches is 1800 pixels on the long edge, or an image that is 1800×1200 pixels. That’s a measly 2.1 megapixels.

The same math for an 8×12 (or 8×10) print comes out to 3600×2400 pixels, or a very modest 8.6 megapixels. A 16×24 (or 16×20) print is 7200×4800 pixels, or 34.5 megapixels. Now we’re starting to get into some serious resolution.
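If you want to run that math for other sizes, it’s a couple of lines (a tiny sketch of the 300 dpi arithmetic above):

```python
def print_pixels(long_in: float, short_in: float, dpi: int = 300):
    """Pixel dimensions and megapixels needed for a print at a given dpi."""
    w, h = int(long_in * dpi), int(short_in * dpi)
    return w, h, w * h / 1_000_000

for size in ((6, 4), (12, 8), (24, 16)):
    w, h, mp = print_pixels(*size)
    print(f"{size[0]}x{size[1]} in -> {w}x{h} px ({mp:.2f} MP)")
# 6x4 in   -> 1800x1200 px (2.16 MP)
# 12x8 in  -> 3600x2400 px (8.64 MP)
# 24x16 in -> 7200x4800 px (34.56 MP)
```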

The 16×24 print size notwithstanding, if we take display sizes into account, we soon discover some correlations. The average computer display or HD TV can comfortably display the 1800×1200 image with little to no scaling and look quite good. A newer 4K display can do the same with the 3600×2400 image. It should be noted that the aspect ratios of print and screen aren’t the same, so if you never intend to print, you can crop your images to a 16:9 aspect ratio to match your display, size your output to either HD (1920×1080) or UHD (3840×2160) resolution, and call it a day.

What I do

I’ve standardized my “house format,” if you will, on the 16×24 (or 16×20) print size, meaning all of my keeper images, regardless of their input resolution, get scaled up or down to 7200×4800 pixels. That’s my working/master resolution (as of 2016).

For paying clients with standard uses, the deliverable is 3600×2400 (or 3840×2160) pixels, scaled down from the 7200×4800 master, unless they’re going to print larger than 8×10. In that case, the conversation shifts to commissioning me to capture the image and make the print for them, since I’ll typically want to capture more resolution than normal and work with a print service that specializes in larger print sizes, which involves renting gear appropriate to the output size the client wants. Depending on the size or aspect ratio of the print, I may break from 7200×4800 and go larger, but that is getting into higher end output, and very low volume. Keep in mind that a 16×20 or 16×24 print, while not really huge, is not small. It’s four times the size of an 8×10. It’s big enough that you have a frame built for it and hang it on a wall.

Input resolution

This is where it starts to get a little techie and can be a bit confusing if you’re not a tech head, so let’s take it a little at a time. I saved input resolution for last, since the resolution you want to output tends to drive the resolution you need to acquire as your input.

Input resolution can be greatly affected by lots of different factors, so now is a great time to talk about the concept of “effective resolution.” When you take a picture, how much visible resolution ends up in your image as fine detail is affected by things such as: mirror slap (if you’re shooting an SLR or DSLR); shutter shock; how long your shutter is open (which determines how much movement happened, and movement shows up as blurring); your hand shaking the camera when you press the shutter button (more blurring); how deep your depth of field is (which affects how much of the image is in really sharp focus); how much noise the sensor introduces; how much diffraction is happening in your lens (which depends on your f-stop); and how much spatial resolution your lens is actually capable of putting onto the sensor. All of these affect how much resolution you’re effectively putting into your image, regardless of whether you’re shooting film or digital, full frame, medium format, large format, APS-C, Micro 4/3, or smaller. And we haven’t even started talking about the raw image sensor resolution yet.

In short, that nice crispy 24 megapixel camera you just picked up? Unless you’re using a really high resolution lens (which is incredibly expensive) and practicing a pretty rigorous shooting process to keep camera movement and vibration under control, you’re not getting anywhere near 24 megapixels of resolution when you take a picture. And even then, your camera sensor is hiding a dirty little secret.

You see, a 24 megapixel camera outputs an image that is 6000×4000 pixels. It does not actually have 6000 red pixels, 6000 green pixels, and 6000 blue pixels across, and the same goes vertically. Nope. What it has is 6000×4000 light detecting photosites with a color filter array placed over them (usually in a Bayer pattern). The color filter array divides those 6000×4000 photosites up between red, green, and blue. Since human vision is most sensitive to green, a full half of the sensor resolution gets filtered to green, and the remaining half goes to red and blue, which each get a quarter of the resolution. To get an image you can actually see, this then goes through a demosaicing process to produce an image that is 6000 full color pixels across by 4000 full color pixels high.

What this means is that for a 24 megapixel image captured with a 24 megapixel camera, you are effectively seeing 12 megapixels of green and 6 megapixels each of red and blue. Even though, spatially speaking, you have 24 million light sensing elements on your sensor, you are not getting 24 megapixels of full color information. It’s actually more like 6 to 9 megapixels of full color information, which, interestingly enough, is right in the ballpark of making a really nice 8×10 print.
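If the color filter array split feels abstract, you can convince yourself with a few lines of code (a quick sketch of an RGGB Bayer mosaic on a 24 megapixel sensor):

```python
import numpy as np

h, w = 4000, 6000                        # a 24 MP sensor
bayer = np.tile(np.array([["R", "G"],    # the repeating 2x2 RGGB pattern
                          ["G", "B"]]), (h // 2, w // 2))

for color in "RGB":
    mp = (bayer == color).sum() / 1_000_000
    print(f"{color}: {mp:.0f} MP of photosites")
# R: 6 MP, G: 12 MP, B: 6 MP -- demosaicing then interpolates the two
# missing colors at every photosite to produce a full color image.
```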

This is why medium format cameras have been 40+ megapixels for a while now. It’s less about raw spatial resolution and more about effectively getting more full color spatial resolution. This is why an 8×10 print from a 50 megapixel Canon 5Ds looks so much better than the same picture taken with a ten year old, 10 megapixel Canon Digital Rebel XTi. It’s not about the raw spatial resolution, since we can’t really see more than 300 dpi on the page anyway; it’s about getting 300 dpi of full color information.

What about large prints?

But wait a minute: photographers have been making large prints with cameras that don’t have anywhere near that resolution for a while now, and they look great. What gives? Well, as it turns out, the larger you print, the less spatial resolution you actually need. It sounds counterintuitive, but once you get into 16×20 or larger print sizes, you stop looking at the print up close like you would a smaller one, and instead stand back to take it in. The further away from the print you stand, the fewer dpi your eyes can actually resolve on it. This is why a 65″ 1080p HDTV, which is only 2 megapixels, still looks good: you sit further away from it than you would a smaller TV. Combine that with the fact that our brains are very good at filling in missing information, and all the photographer has to do is make sure the image is scaled up in a way that pixelation isn’t obvious when inspected up close. Our brains will do the rest.
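You can put rough numbers on that, too. Assuming the commonly quoted figure of about one arcminute for human visual acuity, the dpi your eye can resolve falls off linearly with viewing distance (back-of-the-envelope only):

```python
import math

ONE_ARCMIN = math.radians(1 / 60)  # ~1 arcminute of visual acuity

def resolvable_dpi(distance_inches: float) -> float:
    """Finest dot pitch the eye can distinguish at a given distance."""
    return 1 / (distance_inches * math.tan(ONE_ARCMIN))

for d in (12, 24, 60, 120):
    print(f"{d:>3} inches -> ~{resolvable_dpi(d):.0f} dpi")
# 12 inches  -> ~286 dpi (the familiar ~300 dpi at reading distance)
# 120 inches -> ~29 dpi  (why a 2 MP 65-inch TV looks fine from the couch)
```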

With that being said, for larger print sizes, more camera resolution will generally result in a better looking output image, simply because we’re putting more raw spatial and full color resolution into the image and have a lot more real estate to fill with it, up to the point where we’re effectively putting more than 300 dpi on the print surface.

Conclusion

So how much image resolution do we really need? For the average person sharing online and making 8×10 or smaller prints, a camera that is at least 6 or 7 megapixels will provide totally usable images. The larger you print, the more resolution you’ll want. Digital cameras have just recently gotten to the point where we can actually capture and put all the full color resolution that we can see into an 8×10 print, which makes for super fabulous prints, so this is a great time to be taking pictures.

Image Sharpening Explained, Simply

What is image sharpening? We’ve all heard about it and have undoubtedly heard about various image sharpening tools like unsharp mask or smart sharpen, but I’ve found that very few of us actually understand what image sharpening is.

So what is image sharpening?

No matter what image sharpening tool or algorithm you use, they all have the same end result: increasing the contrast of the lines and edges of objects in the image. That’s all image sharpening is. The primary difference between the various algorithms and methods isn’t the end result, but rather how the lines and edges of objects in the image are detected. Likewise, the various sliders and controls each method gives you control the amount of sharpening and fine-tune the underlying line and edge detection.

So there you have it. Image sharpening explained in 4 simple sentences. It’s not that difficult. The human visual system is extremely good at detecting lines and edges, so when we sharpen an image, all we’re doing is making what we’re visually sensitive to more pronounced. It’s a very effective visual perceptual trick that we’re playing on our brains when we sharpen an image.
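To make that concrete, here’s what the classic unsharp mask recipe boils down to: blur a copy of the image, then add back a scaled difference between the original and the blur. That difference is only large at lines and edges, so that’s exactly where contrast goes up. A minimal sketch using Pillow and NumPy, not what any particular tool does internally:

```python
import numpy as np
from PIL import Image, ImageFilter

def unsharp_mask(img: Image.Image, radius: float = 2.0,
                 amount: float = 0.5) -> Image.Image:
    blurred = img.filter(ImageFilter.GaussianBlur(radius))
    orig = np.asarray(img, dtype=np.float64)
    blur = np.asarray(blurred, dtype=np.float64)
    # (orig - blur) is near zero in smooth areas and large at edges,
    # so adding it back boosts edge contrast and little else.
    sharp = orig + amount * (orig - blur)
    return Image.fromarray(np.clip(sharp, 0, 255).astype(np.uint8))
```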

The soapbox

I’ve noticed some image sharpening trends over the last 5-6 years that really bother me, and they make me think that all these people on the internet dispensing photography information and advice, who are supposed to be photography experts, don’t really know what they’re talking about or doing. I can’t help myself. I have to say something about it.

The image sharpening aesthetic

This is a huge pet peeve of mine. All too often, people equate a sharp picture with lots of detail. As a result, they sharpen their images way, way, way too much. The Internet is riddled with posts on how and when to sharpen. You have input sharpening, creative sharpening, and output sharpening. You have tons of sharpening algorithms, plugins, and tools to increase the clarity of your images. There are companies out there whose entire business model is literally based on selling you something to help you sharpen your images. On the camera hardware front, lately it seems that if a newly released camera doesn’t output a ridiculously over-sharpened image, the Internet declares it a piece of garbage. Ugh.

On top of that, the internet is flooded with images that are painfully over-sharpened (usually as a result of said company that sells image sharpening tools), all in the name of having a nice sharp image that is “crispy”. You know what else makes a nice sharp image? A nice moderately high resolution image that has a depth of field that is large enough so that the whole subject of the image is entirely in focus, assuming the person taking the picture actually nailed the focus.

Image focus and resolution to the rescue

I can’t believe how many people get a super fast prime lens, proceed to shoot with it wide open, then sharpen the everliving daylights out of the resulting images in an effort to get the subject sharp. It’s almost as if they don’t realize that when you’re shooting an 85mm+ lens on a full frame camera at f/1.4 or f/1.2, the depth of field is so shallow that the only thing in nice sharp focus is one eye, or the tip of the subject’s nose, or one of the subject’s cheekbones, or their lips, or whatever the camera actually happened to lock focus on. Having a really shallow depth of field definitely has its uses, but if you want a nice sharp image, try stopping down to something like f/8. You’d be amazed at how much more resolution and fine detail is there, and how much sharper your photos are as a result once they’re scaled to the final output resolution. Again, assuming you actually nailed the focus.
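To put rough numbers on just how thin that depth of field is, here’s the standard thin-lens approximation, assuming a full frame circle of confusion of 0.03mm and a subject at 2 meters (ballpark figures, not a lens simulator):

```python
# Approximate depth of field when the subject distance u is much greater
# than the focal length: DOF ~= 2 * N * c * u^2 / f^2
def dof_cm(focal_mm: float, n: float, u_mm: float, c_mm: float = 0.03) -> float:
    return 2 * n * c_mm * u_mm**2 / focal_mm**2 / 10  # mm -> cm

for n in (1.2, 1.4, 8.0):
    print(f"85mm at f/{n}, subject at 2m: ~{dof_cm(85, n, 2000):.1f} cm in focus")
# f/1.2 -> ~4.0 cm, f/1.4 -> ~4.6 cm, f/8.0 -> ~26.6 cm
```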

Have you ever seen a picture that actually had as much resolution and fine detail as what could be natively represented by the medium displaying the image? Probably not, but you’ll know it when you see it.

I’ll give an example: have you ever watched an HD movie on your iPhone? You should try it some time. It looks incredible. The reason isn’t image sharpening, but rather that you’re displaying the maximum amount of resolution and fine detail that your iPhone screen can natively represent.

Another example: ever seen Christopher Nolan’s “The Dark Knight” on Blu-Ray? He shot parts of the movie on very high resolution IMAX cameras and cut those scenes in with the rest of the film, which was shot on standard 35mm film. Even at Blu-Ray resolution (a whopping 2 megapixels), the difference in resolution and fine detail between the 35mm footage and IMAX’s 65mm film is stunning. The IMAX sequences just look a lot sharper, not because of image sharpening, but because they contain as much resolution and fine detail as can be packed into a 1920×1080 pixel image, which results in a picture that looks very sharp with very little actual image sharpening applied or needed.

Huh. We just came full circle back to image sharpening. Imagine that.

OK. Sooo… When do you do image sharpening?

Ideally, you sharpen at the very end, when your image is at its final output size. If you have a good image with good resolution and fine detail, you’ll discover that sharpening is like salt and pepper on a great meal: a little bit goes a long way, and used just right it greatly enhances things, but more is rarely better.

There are other places in your workflow where you can sharpen, like input sharpening and creative sharpening, and those do have their uses; however, they tend to be really overused and abused, so for the sake of simplicity we’ll leave them off the table for now and maybe visit them in separate posts.

You may have noticed that I brought up image resolution and fine detail a number of times in this post while talking about image sharpening. How much resolution you actually need is a subject for a different post, so we’ll get into that later. Suffice it to say, you don’t need nearly as much resolution as you think you do; the trick is acquiring that resolution in a way that makes for sharp images that need very little sharpening to look good.

Till next time.

Enlarge Images Without Pixelation

Why?

Usually, we want to reduce the size of images, not enlarge them. However, there are times where we have a reasonably small source image that we need to make bigger for one reason or another.

I actually routinely enlarge images because I create images with a variety of cameras, all with different resolutions and bit depths, so I normalize the keepers to one larger master resolution and bit depth.

It should be noted that my method described below is not a miracle worker. You won’t be making 10x or 20x enlargements that look good, but you can easily make 3-4x enlargements that will look totally passable.

The best part? You don’t have to pay for any software beyond just having Adobe Photoshop, so no plug-ins or extra software to buy (Perfect Resize, I’m looking right at you).

How?

The how is actually pretty simple. Before I explain the specifics of what I do in my version of Photoshop, I’ll explain the generic version so that you can adapt it to your own software, which may not be the same as what I’m running (there’s a code sketch of the whole recipe right after the steps, too). OK? Let’s get started.

Before doing this, start with your source image and save it as a 16 bit tiff file at its original resolution.

  1. Figure out what your resulting resolution is to be and multiply both edges of the image by four. For example, I generally normalize to 7200×4800 pixels, so I’d end up with 28,800×19,200 pixels. Resize your source image to that new, really huge resolution.

  2. Add some noise to the newly resized, really huge image. The best way is monochromatic Gaussian noise, which looks more organic, but if your software can’t do that, any noise is better than no noise. I generally add between 5% and 10%.

  3. Add a Gaussian blur to the really huge image. Its radius should be 2 pixels.

  4. Resize the really huge image down to your resulting image size. Save it as a 16 bit tiff file and do the rest of whatever post processing you’re going to do.
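Here’s that sketch of the four steps, using Pillow and NumPy (8 bits for brevity, whereas the real workflow stays in 16 bits, and I’m interpreting the noise amount as a standard deviation of roughly 7.5% of full scale, which you should tune by eye):

```python
import numpy as np
from PIL import Image, ImageFilter

def enlarge(src: Image.Image, target_w: int, target_h: int) -> Image.Image:
    """Steps 1-4 above; assumes an RGB source image."""
    # 1. Upscale to 4x the target's linear resolution.
    huge = src.resize((target_w * 4, target_h * 4), Image.LANCZOS)
    # 2. Add monochromatic Gaussian noise (the same noise on all channels).
    px = np.asarray(huge, dtype=np.float64)
    noise = np.random.normal(0.0, 0.075 * 255, px.shape[:2])  # ~7.5%
    noisy = Image.fromarray(
        np.clip(px + noise[..., None], 0, 255).astype(np.uint8))
    # 3. Gaussian blur with a 2 pixel radius.
    blurred = noisy.filter(ImageFilter.GaussianBlur(2))
    # 4. Scale back down to the final size.
    return blurred.resize((target_w, target_h), Image.LANCZOS)

# enlarged = enlarge(Image.open("source.tif"), 7200, 4800)
```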

That’s it! This method also works like a charm for upgrading 8 bit images to nice smooth 16 bit images that you can really push around in Lightroom/Photoshop without any ugly banding or posterization popping out at you.

Why it works

What?!?! You’re adding noise and blur to the image! Doesn’t that destroy image quality and detail and make the image blurry?

If we were to do that to an image at its native size, then yes, all we would be doing is adding noise and making it blurry. The thing to keep in mind is that we’re doing this to an image that is four times the linear resolution (sixteen times the pixel count) of our final output.

When we scale the image back down, that noise and blur has the effect of filling in and smoothing out what would otherwise be pixelation in the image.

The key to this working so well is the bit depth and the ratios we use. The small amount of noise added at 4x resolution gets reduced on the way down and visually provides a smoothing effect at the new image size without making it look soft or blurry.

Likewise, the Gaussian blur we added was two pixels on an image that is 4x the linear resolution of what it ultimately will be, meaning that when we scale the image back down, a 16×16 block of pixels gets turned into a 4×4 block of pixels. In other words, we’re scaling down more than we blurred, and when we do that, the blur starts to do interesting things for us: it visually fills in and smooths out what would otherwise be pixelation in the image.

Combined with the added noise in step 2, it’s a very dramatic one-two punch to an image that would otherwise look pixelated and awful.

How I do it

We all use different software. I happen to use the latest versions of Adobe Lightroom CC and Adobe Photoshop CC. This isn’t a tutorial on how to use Lightroom or Photoshop; it’s just a basic walk-through of what I do. You can and should modify it to suit your needs.

All of my images start off in Lightroom at their original resolution and bit depth. I have a Lightroom catalog that I use for staging these images for processing, where all of my keepers make their first stop. In this catalog, I add all my metadata to each image (it makes tracking easier later), and the only image adjustment I make is removing the color and/or luminance noise introduced by the camera. I’m very conservative with this: I look at the image at a 1:1 or 2:1 ratio in the area where noise is most prominent, and do just barely enough noise removal to tone it down, since the more noise removal you do, the more the fine detail gets muddled. This is done on an image by image basis, and how much noise reduction gets applied varies greatly depending on which camera took the image and what ISO it was shot at.

From there, I export the image as a 16 bit tiff file at the “super-sized” resolution (28,800×19,200 pixels or thereabouts, depending on the image aspect ratio).

I then open that tiff file in Photoshop and add a layer over the background layer (which is the image). I change the layer’s blending mode to “Overlay” and fill it with 50% gray. From there, I convert the layer to a smart object.
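Why 50% gray? In the Overlay blend mode, a layer value of exactly middle gray leaves the underlying image untouched, so noise centered on middle gray becomes gentle local contrast variation rather than a gray veil over the image. If you’re curious about the math, here’s a sketch of the standard Overlay formula with values in the 0..1 range:

```python
import numpy as np

def overlay(base: np.ndarray, layer: np.ndarray) -> np.ndarray:
    # Standard Overlay blend: darkens below 0.5, lightens above, and is
    # the identity wherever the layer is exactly 0.5 gray.
    return np.where(base < 0.5,
                    2 * base * layer,
                    1 - 2 * (1 - base) * (1 - layer))

rng = np.random.default_rng(0)
gray_noise = np.clip(0.5 + rng.normal(0, 0.075, (4, 4)), 0, 1)
print(overlay(np.full((4, 4), 0.6), gray_noise))  # wobbles around 0.6
```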

With the smart object selected, I go to the ‘Filter’ menu and select ‘Noise’->’Add Noise’. In the dialog box that pops up, select ‘Gaussian’ and check the ‘Monochromatic’ check-box. Change the amount to a value between 5% and 10%. I’ve found that less than 5% tends not to be enough and more than 10% is too much, so I generally start at 7.5% and tweak up or down as needed. You should experiment to find what values work best at the resolution you’re working at.

From there, I go to the ‘Filter’ menu again and select ‘Blur’->’Gaussian Blur’. In the pop up dialog box I select a radius of 2 pixels.

From there, tweak the amount of noise up or down for best results (you can do this because it’s a smart object).

When you’re happy with the image, go to the ‘Layer’ menu and select ‘Flatten Image’.

Now resize the image to your final enlarged size (in my case 7200×4800 pixels) and save it as a 16 bit tiff that you’ll pull into your real Lightroom catalog.

Import the new enlarged tiff file into your Lightroom catalog that you use for managing your media, convert it to a DNG file, then finish the rest of your post processing on the file.

Isn’t this a lot of work?

Yes and no. We only do this to our keeper images, which for most photographers are only a fraction of the images they take. And the only thing we’re doing differently is the Photoshop bit; you should already be doing noise reduction, adding metadata, and post processing. The only real difference is a quick middle step in Photoshop that takes a couple of minutes per image, if that.

Everybody should do what works for them, and what works for me isn’t necessarily for everybody. However, the process I outlined above allows me to shoot everything from iPhone JPEGs, to DSLR raw or JPEG images, to JPEG frame grabs from video, and end up with one reasonably sized standard image resolution that is actually very usable.

As proof, almost all of the images I’ve shot digitally recently have had this treatment, and if I hadn’t told you, you’d be none the wiser while looking at them. It makes differentiating between JPEG and raw, or between lower iPhone/video resolution and DSLR resolution, extremely difficult, which is the point.

Caveat Emptor

This works best with an image that already has reasonably good resolution content to begin with. This does not magically add detail where there is none, nor does it really add resolution or rescue images that already look terrible.

What it does do is add very high frequency, broadband information to the image in a way that our brains find very pleasant, which lets the brain do the heavy lifting of ‘seeing’ the detail that is there in the enlarged image without seeing the unpleasant visual effects of scaling up. In short, we’re playing a very effective visual trick on our brains, in much the same way that adding dither to digital audio lets us hear further down into the sound.

Our brains are very good at filtering out high frequency broadband noise to get to the detail in the noise, as long as that noise isn’t overwhelming to the point of being distracting. The trick is riding that balance between helping the perceived image quality and hurting it.

Till next time.