2018 Changes For Simple Film Lab

2018 is going to be a great year!

We’ve updated, or are in the process of updating, the pages for Simple Film Lab, and the new order form should be online and available within the next couple of days.

Here are the highlights:

Standardization

We’ve introduced a new standardized film development regime based on XTOL and standardized our scanning protocol, so that film you send in to us can easily be either printed onto photosensitive paper in a darkroom or scanned in using standard contrast indexes that correlate to black and white paper grades. This makes things much simpler and leads to the other things listed below.

All Black and White Negative Films

We can now develop and scan all commonly available black and white negative films in 135, 120, and 4×5 sheet formats, so send them in and get them processed! This is huge for us, and we couldn’t realistically have done it without standardizing our development environment.

Custom Film Development

Yep, we do that too. In addition to XTOL, you can request that your film be developed in D76, HC110, or Rodinal, with custom dilutions, development temperatures, agitation schemes, and development times. You can pretty much go nuts, though be aware that doing so can lead to unpredictable results.

Custom Film Scanning

Want your film scanned in with the equivalent of a grade 3 paper instead of the standard grade 2? No problem. We have a range of available contrast indexes that you can have your film scanned in at. It’s the digital equivalent of printing on that paper in the darkroom, except you get a Digital Negative file instead. Combine this with custom film development and you can get really creative if you want to.

Other File Formats

Don’t like Digital Negatives? No problem. You can now request other formats without actually going the custom scan route.

There’s more than this, so check the Lab pages as we’ll be getting them updated with what’s going on for 2018!

Kodak Tri-X vs Ilford HP5+ Film

Introduction

Spending some time with Google shows that there are numerous comparisons between Kodak Tri-X (400TX) and Ilford’s HP5+ film. Are they the same? Are they different? Is one better than the other? On and on and on. Let’s take as objective a look as possible at the two emulsions and see what the deal is here.

Cut to the chase

If you don’t want to read any further, and just want a fast answer, then here it is: Kodak Tri-X and Ilford HP5+ are so close to each other that I can say that they are totally, completely, one hundred percent interchangeable. This means that you can shoot and process them exactly the same way in the same chemicals for the same development time. The end result will be close enough that you can’t tell the difference.

The evaluation

We are going to evaluate the two emulsions on two criteria: tonal range and granularity.

Tonal range

To evaluate the tonal range, we’ll shoot an 18% grey card on each emulsion, and shoot it from 7 stops under to 11 stops over normal exposure in full stop increments with a studio strobe. The exposure will be set via a Sekonic light meter incident reading and shot through a T-Stop rated lens, with the light meter reading within 0.1 stops of the actual amount of light hitting the grey card. To evaluate the density values for each stop of light, the emulsion will then be digitized with a DSLR using a studio strobe through the same T-Stop rated lens.

From there, the raw captures are evaluated and the average sample value from a 256×256 square in the middle of the scanned frame is calculated. This is done for each emulsion. This will give us a good idea of the density level of the emulsion for a given exposure value.
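For the curious, the measurement step boils down to something like this sketch (the file name and the rawpy decoder are my assumptions for illustration, not necessarily what was actually used):

```python
import numpy as np
import rawpy  # assumed raw decoder for the DSLR captures

# Decode the capture linearly (no gamma, no auto brightening) so the
# pixel values track film density rather than display tone.
with rawpy.imread("capture_plus3_stops.dng") as raw:
    img = raw.postprocess(gamma=(1, 1), no_auto_bright=True, output_bps=16)

# Average sample value of a 256x256 square centered in the frame.
h, w = img.shape[:2]
patch = img[h // 2 - 128 : h // 2 + 128, w // 2 - 128 : w // 2 + 128]
print("average sample value:", patch.mean())
```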

This is done exactly the same way for each emulsion. The camera position relative to the grey card and the focal point do not change between emulsions.

Granularity

For granularity, this is actually pretty straightforward: look at the scan of the correctly exposed 18% grey card for each emulsion in Adobe Lightroom at 1:1. The scans are just over 4200 dots per inch, which is more than enough resolution to actually digitize individual grains.

Development

To ensure that we’re as close as possible for each emulsion, both were developed in Kodak D76 1:1 at 20 degrees Celsius ±0.1 degree, in the same daylight tank at the same time, for 13:00 with 1 fast inversion every 15 seconds. The tank and emulsions got a several minute pre-soak at 20 degrees to get everything up to temperature. A 1:4 vinegar/water stop bath was used to stop development. Both emulsions were fixed in Kodak Fixer for 10:00 with constant agitation.

Concessions

Obviously, this is not up to scientific standards; however, it is within the tolerances I can bring to bear with the equipment available to me, and I feel those tolerances are tight enough to place a reasonable amount of certainty in the results.

Results

Below are the results for each item being evaluated.

Tonal Range

Here is the chart of the two emulsions.

[Chart: 400TX vs. HP5+ density curves]

When looking at this, there are a couple of things to remember. It’s not the actual values of each density step that matter, because those will vary a bit due to variations in the power of the strobe firing during the exposures, variations in the power of the strobe firing during the scanning, and how many specks of dust and fibers there are on the emulsion in the scanned sample area, all of which affect the average calculated sample value. In fact, I’ve repeated this test twice exactly the same way, and have even done multiple scanning passes of each emulsion for each test, and gotten different but similar results for every single density step. This is the nature of the medium. There are a lot of moving parts and things that can affect the outcome.

The key takeaway here is the shape of the curve for each emulsion. I’ve included a combined curve that is the average of all the scanning passes of both tests for both emulsions with each end slightly extended beyond sampled values.

In short, both emulsions have the same tone curve and tonal range if developed in the same developer, at the same temperature, for the same amount of time, with the same agitation.

Granularity

OK, what about the grain? I’ll let the image below speak for itself. You can right click on it and download the full image if you want to look at it really closely.

[Image: 400TX vs. HP5+ grain comparison at 1:1]

So, what are we looking at? A comparison of each emulsion scanned in at 4200+ dpi side by side at 1:1 in Adobe Lightroom. The grain structure is readily evident, and frankly, to me, the two emulsions are close enough in their granularity that at sane enlargement levels, they’re nearly if not completely indistinguishable.

Conclusion

With black and white film, tonal range and granularity are really the only two things that matter, and as I said in the cut to the chase section, if Tri-X and HP5+ are shot and developed the same way, they’re interchangeable in terms of both.

Coming in 2017: Simple Film Lab

Photographic film has taken quite a beating in the last decade or so. Film labs have been closing left and right for some time now. This is unfortunate, and something I’ve struggled with myself, being primarily a film photographer.

This led me onto a path of processing and digitizing my own film, and developing tools to do so that also give me my images in a way that is complementary to film.

I’ve finally reached a point where I can offer my services to other film photographers.

A Few Things To Note

You shoot film because of the color and look that you get with it, not because it gives you a lot of resolution or is inexpensive. So with that being said, what do I bring to the table with Simple Film Lab that is better than the other film labs out there? If you look at what other labs charge and what I will be charging, I’m certainly not less expensive from a purely monetary stance. I also won’t really be delivering the highest of resolutions either.

In order to really take advantage of what film has to offer, one must beef up the entire imaging chain. Almost every lab I’ve looked at and tried out typically scans with a Noritsu or Frontier scanner and delivers jpegs. You hear a lot about how a Frontier scanner delivers color like this or that, and how some film scanner is beloved by such-and-such type of photographers. OK. I mean no disrespect to other film labs; however, a process where you deliver jpegs of film scans to customers is not doing the customers or the film any favors.

It’s all about the color. While I do have a dedicated 35mm film scanner that is very recent and can scan 35mm film at really high resolutions, and I do have a very high resolution flatbed scanner that can scan 120 film at crazy high resolutions, I also have a way to digitize film using a very controlled light source, with very good optics, and a reasonably high resolution imaging sensor. The setup I prefer could be called a DSLR film scanner, but it’s actually more complicated than that. Photographic film by its nature has very high dynamic range, with a lot of color. When you digitize film, what you are essentially doing is taking a picture of the film emulsion. You can take that picture with a dedicated film scanner, a flatbed scanner, or with a digital or film camera. It’s what you do with the digitized image afterward that makes all the difference.

Typically, the color negative is inverted by either the film scanner itself, the scanning software, or manually in Adobe Photoshop. While one can get good results that way, I’ve brought my skills as a computer programmer to bear and developed code that significantly beefs up the entire imaging and color chain after digitizing, to full 64 bit floating point in linear color space. What does that even mean? It means the process of turning the color negative into a color positive, along with the subsequent color modifications to get a usable image, happens in very high resolution 64 bit floating point linear color space. I’d love to be able to deliver 64 bit floating point linear light images to customers; however, virtually no software customers would have access to actually supports that, so the next best thing is 16 bits per sample (or 48 bit) TIFF files in the ProPhoto color space.
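The pipeline itself stays in-house, but as a toy sketch of the general idea — inverting a color negative in floating point linear space while dividing out the orange mask — it might look something like this (the file names, the tifffile library, and the border-sampling trick are all illustrative assumptions on my part):

```python
import numpy as np
import tifffile  # assumed I/O library for 16 bit TIFF scans

# Load a linear scan of the negative into 64 bit floating point.
neg = tifffile.imread("negative_scan.tif").astype(np.float64) / 65535.0

# Estimate the orange film base from an unexposed border patch and
# divide it out per channel, neutralizing the mask in linear space.
base = neg[:64, :64].mean(axis=(0, 1))
positive = 1.0 - np.clip(neg / base, 0.0, 1.0)  # invert the negative

# Deliverable: 16 bits per sample (ProPhoto tagging not shown here).
tifffile.imwrite("positive.tif", (positive * 65535).astype(np.uint16))
```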

Because the high precision digitization workflow requires a calibrated film profile for every film we support digitizing, Simple Film Lab cannot accept just any film to be processed and digitized. While we can pretty much process any C-41 film (we use standard Kodak C-41 chemicals), the service we offer is coupled together, so when you send film in, it is to be both processed and digitized. The cost might therefore seem high per roll, but when you factor in that you’re getting processing, a very high quality film scan, and 48 bit TIFF files in the ProPhoto color space with enough resolution to make 16×24 prints, it’s worth it. At least, we think there’s a market for it.

The Plan

The plan is to start accepting processing orders for Kodak Ektar 100 film in 35mm and 120 roll in the first quarter of 2017, then add Kodak Portra 160, Portra 400, and Portra 800 in 120 roll film in the second or third quarter, and add 35mm Portra 160, 400, and 800 later in the year if there is demand for it, along with the 4×5 sheet versions at some point in the second half of 2017. We’re also going to keep things simple in terms of what resolutions we offer: there will really only be two options, standard resolution and custom scan. Standard 2:3 resolution will be 7200×4800 pixels, with other aspect ratios having 4800 pixels on the short side; a custom scan is exactly what it sounds like, a scan with output to your specifications. The standard processing/scan target price will be $20 per roll, not including shipping, and custom scans will be priced according to how much time and effort Simple Film Lab has to put in. At the end of the day, it all boils down to image processing time and who is spending that time.

All film will always be processed with fresh chemicals, and the target turnaround time will be 5-7 business days. As things pick up, we’ll be adding additional films to the catalog that we support. There are a couple of emulsions that are pretty popular with wedding photographers (Fuji 400H, looking right at you), and we do plan to support them; however, that comes with some challenges: most labs that cater to processing and scanning that film use Fuji Frontier scanners and already deliver pretty good results, so the biggest issue is going to be getting those customers to move away from those labs and start using Simple Film Lab instead.

Additionally, you can expect very good customer service. As my own customer, I have very high standards, and I’m a firm believer in delivering to that same standard for my customers. Because Simple Film Lab is a small operation, as a customer, you’ll be dealing directly with me, and it will be my eyeballs that look at every single one of your images before they’re sent to you.

In short, Simple Film Lab is the Film Lab that I would want as a customer. Keep watching this space, good things are on the way.

How Much Image Resolution Do You Need?

It depends. Don’t you hate those types of answers? Unfortunately, there is no simple answer, because you have input resolution and output resolution, and sub-types inside those two high-level categories.

Cut to the chase

If all you do is share your pictures online or make 4×6, 5×7, or 8×10 prints, then pretty much any camera made within the last 5-10 years will give you more image resolution than you need. You don’t have anything to worry about and shouldn’t even need to think about it; carry on snapping away. If you want to learn why, then read on.

Keeping it simple

In the interest of distilling information down to useful bite size chunks, let’s start with output resolution.

Output resolution

The vast majority of us don’t actually print our pictures; however, print resolutions are typically a lot higher than screen resolutions for a given amount of surface area, so we’ll talk in terms of print resolution.

For this discussion, we’ll equate dots and pixels as the same thing, so terms like dpi (dots per inch) and ppi (pixels per inch) mean the same thing. Likewise, we’ll say a pixel and a dot are a single discrete full color entity that we can see. It may be made up of one or more smaller sub-parts, like a red element, a green element, and a blue element in a computer display, or multiple ink droplets on a piece of paper, but for this discussion it’s one discrete visible full color unit.

It turns out that the human eye actually does have a finite amount of resolution, in terms of how we would describe it if it were a digital sensor. If we were to print resolution onto a piece of paper and look at it up close and personal, the magic number where we stop being able to discern fine detail sits right around 300 dots per inch, or 150 line pairs per inch on the paper (or screen). That means if we take 150 black lines that are 1 pixel wide and 300 pixels tall, 150 white lines that are also 1 pixel wide and 300 pixels tall, alternate between them, and print them on the paper so that all 300 lines fit into a 1×1 inch square, it would actually look like a grey square to most of us instead of alternating black and white lines. This is why most magazines print at 300 dpi, and your iPhone’s Retina display is also about 300 dpi. Spatially speaking, our vision starts to poop out, and adding more image resolution than that generally does not make the picture have more detail or look sharper to our eyes.
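If you want to see this for yourself, the test pattern is trivial to generate; here’s one way in Python with Pillow and NumPy (my tool choices, purely for illustration):

```python
import numpy as np
from PIL import Image

# A 1x1 inch square at 300 dpi: 150 black and 150 white one-pixel-wide
# vertical lines, alternating, which is 150 line pairs per inch.
pattern = np.zeros((300, 300), dtype=np.uint8)
pattern[:, 1::2] = 255  # every other column is white

# Tagging the file at 300 dpi makes it print as exactly one square inch.
Image.fromarray(pattern, mode="L").save("line_pairs.png", dpi=(300, 300))
```

Print it at actual size, hold it at a normal viewing distance, and the square reads as uniform grey.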

So, with this 300 dpi number, it’s pretty easy to do some simple math and extrapolate out how much output resolution we need for the various ways we look at our pictures: for a 4×6 inch print, 300 dpi times 4 inches is 1200 pixels on the short edge, and 300 dpi times 6 inches is 1800 pixels on the long edge, or an image that is 1800×1200 pixels. That’s a measly 2.1 megapixels.

The same math for an 8×12 (or 8×10) print comes out to 3600×2400 pixels, or a very modest 8.6 megapixels. A 16×24 (or 16×20) print is 7200×4800 pixels, or 34.5 megapixels. Now we’re starting to get into some serious resolution.
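That math is simple enough to script; a tiny helper makes the pattern obvious (the function is mine, just for illustration):

```python
def print_pixels(width_in, height_in, dpi=300):
    """Pixel dimensions and megapixels needed for a print at a given dpi."""
    w, h = width_in * dpi, height_in * dpi
    return w, h, w * h / 1_000_000

for size in [(6, 4), (12, 8), (24, 16)]:
    w, h, mp = print_pixels(*size)
    print(f"{size[1]}x{size[0]} in print -> {w}x{h} px ({mp:.1f} MP)")
# 4x6 -> 1800x1200 px (2.2 MP)
# 8x12 -> 3600x2400 px (8.6 MP)
# 16x24 -> 7200x4800 px (34.6 MP)
```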

The 16×24 print size notwithstanding, if we take display sizes into account, we soon discover some correlations. The average computer display or HD TV can comfortably display the 1800×1200 image with little to no scaling and look quite good. A newer 4K display can display the 3600×2400 image with little to no scaling and look quite good. It should be noted that the aspect ratios between print and screen aren’t the same, so if you never intend to print, you can crop your images to a 16:9 aspect to match your display, simply size your output to either HD (1920×1080) or UHD (3840×2160) resolution, and call it a day.

What I do

I’ve standardized my “house format”, if you will, on a 16×24 (or 16×20) print size, meaning all of my keeper images, regardless of their input resolution, get scaled up or down to 7200×4800 pixels. That’s my working/master resolution (as of 2016).

For paying clients with standard uses, the deliverable is 3600×2400 (or 3840×2160) pixels, scaled down from the 7200×4800 master, unless they’re going to print larger than 8×10. In that case, the conversation shifts to commissioning me to get the image and do the print for them, since I’ll typically want to capture more resolution than normal and work with a print service that specializes in larger print sizes, which involves renting gear that is appropriate for what the client wants in terms of output size. Depending on the size or the aspect ratio of the print, I may break from the 7200×4800 and go larger, but typically, that is starting to get into higher end output, and very low volume. Keep in mind, a 16×20 or 16×24 print, while not really huge, is not small. It’s four times the size of an 8×10. It’s big enough that you have a frame built for it and hang it on a wall.

Input resolution

This is where it starts to get a little techie and can be a bit confusing if you’re not a tech head, so let’s take it a little at a time. I saved input resolution for last, since what resolution you want or need to output tends to drive what resolution you need to acquire as your input.

When it comes to input resolution, it can be greatly affected by lots of different factors, so now is a great time to talk about the concept of “effective resolution”. For example, when you take a picture, how much visible resolution ends up in your image as fine detail is affected by things such as mirror slap (if you’re shooting an SLR or DSLR), shutter shock, how long your shutter is open (which affects how much movement happened, which shows up as blurring), your hand shaking the camera when you press the shutter button (which also shows up as blurring), how deep your depth of field is (which affects how much of your image is in really sharp focus), how much image noise the sensor is introducing into the image, how much diffraction is happening in your lens (depending on your f-stop), and how much spatial resolution the lens you’re using is capable of actually putting onto your camera sensor. All of these affect how much resolution you’re effectively putting into your image, regardless of whether you’re shooting film or digital, full frame, medium format, large format, APS-C, Micro 4/3, or smaller. And we haven’t even started talking about the raw image sensor resolution yet.

In short, that nice crispy 24 megapixel camera you just picked up? Unless you’re using a really high resolution lens (which is incredibly expensive) and practicing a pretty rigorous shooting process to keep camera movement and vibration under control, you’re not getting anywhere near 24 megapixels of resolution when you take a picture. Even then, your camera sensor is hiding a dirty little secret.

You see, a 24 megapixel camera outputs an image that is 6000×4000 pixels. It does not actually have 6000 red pixels, 6000 green pixels, and 6000 blue pixels across. The same goes vertically. Nope. What it does have is 6000×4000 light detecting sensors with a color filter array placed over them (usually in a Bayer pattern). The color filter array takes those 6000×4000 pixels and divides them up between red, green, and blue. Since human vision is most sensitive to green, a full half of the sensor resolution gets filtered to green, and the remaining half goes to red and blue, which each get a quarter of the resolution. To get to an image that you can actually see, this then goes through a demosaicing process into an image that is 6000 red, green, and blue pixels by 4000 red, green, and blue pixels.

What this means is that for a 24 megapixel image captured with a 24 megapixel camera, you are effectively seeing 12 megapixels of green and 6 megapixels each of red and blue. Even though, spatially speaking, you have 24 million light sensing elements on your sensor, you are not getting 24 megapixels of full color information. It’s actually more like 6 to 9 megapixels of full color information, which interestingly enough is right in the ballpark of making a really nice 8×10 print.
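To make the split concrete, here’s a little illustrative sketch that builds an RGGB Bayer mosaic at 24 megapixel sensor dimensions and counts what each color channel really gets:

```python
import numpy as np

# Lay out an RGGB Bayer pattern over a 6000x4000 grid of photosites.
h, w = 4000, 6000
bayer = np.empty((h, w), dtype="<U1")
bayer[0::2, 0::2] = "R"
bayer[0::2, 1::2] = "G"
bayer[1::2, 0::2] = "G"
bayer[1::2, 1::2] = "B"

for color in "RGB":
    count = np.count_nonzero(bayer == color)
    print(f"{color}: {count / 1e6:.0f} MP ({count / bayer.size:.0%} of sensor)")
# R: 6 MP (25%), G: 12 MP (50%), B: 6 MP (25%)
```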

This is why medium format cameras have been 40+ megapixels for a while now. It’s less about getting the raw spatial resolution, and more about effectively getting more full color spatial resolution. This is why an 8×10 print from a 50 megapixel Canon 5Ds looks so much better than the same picture taken with a ten year old Canon Digital Rebel XTi that’s 10 megapixels. It’s not about the raw spatial resolution, since we can’t really see more than 300 dpi on the page anyway, it’s about getting 300 dpi of full color information.

What about large prints?

But wait a minute: photographers have been making large prints with cameras that don’t have anywhere near that resolution for a while now, and they look great. What gives? Well, as it turns out, the larger you print, the less spatial resolution you actually need. It sounds counterintuitive, but once you get into 16×20 or larger print sizes, you stop looking at the print up close and personal like you would a smaller print, and instead stand back to take it in. The further away from the print you stand, the less dpi your eyes can actually resolve on it. This is why a 65″ 1080p HDTV, which is only 2 megapixels, still looks good: you sit further away from it than you would a smaller TV. Combine that with the fact that our brains are very good at filling in missing information, and all the photographer has to do is make sure that the image is scaled up in a way that pixelation isn’t obvious if inspected up close. Our brains will do the rest.
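You can put rough numbers on that with the standard one-arcminute visual acuity rule of thumb (my addition here, not a claim about any particular print):

```python
import math

def resolvable_ppi(viewing_distance_in):
    """Highest pixel density a 20/20 eye resolves at ~1 arcminute per pixel."""
    return 1 / (viewing_distance_in * math.tan(math.radians(1 / 60)))

for d in (12, 24, 60, 120):  # viewing distance in inches
    print(f"{d:>4} in -> {resolvable_ppi(d):4.0f} ppi")
# 12 in -> ~286 ppi, 24 in -> ~143 ppi, 60 in -> ~57 ppi, 120 in -> ~29 ppi
```

At reading distance you’re near the 300 dpi figure from earlier; stand ten feet back, and even 29 ppi is enough.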

With that being said, for larger print sizes, more camera resolution will generally result in a better looking output image, simply because we’re putting more raw spatial and full color resolution into the image and have a lot more real estate to fill with that resolution, right up until we effectively have more than 300 dpi on the print surface.

Conclusion

So how much image resolution do we really need? For the average person sharing online and making 8×10 or smaller prints, a camera that is at least 6 or 7 megapixels will provide totally usable images. The larger you print, the more resolution you’ll want. Digital cameras have just recently gotten to the point where we can actually capture and put all the full color resolution that we can see into an 8×10 print, which makes for super fabulous prints, so this is a great time to be taking pictures.

Image Sharpening Explained, Simply

What is image sharpening? We’ve all heard about it and have undoubtedly heard about various image sharpening tools like unsharp mask or smart sharpen, but I’ve found that very few of us actually understand what image sharpening is.

So what is image sharpening?

No matter what image sharpening tool or algorithm you use, they all have the same end result, which is to increase the contrast of the lines and edges of objects in the image. That’s all image sharpening is. The primary differences between the various algorithms or methods of sharpening aren’t in the end result, but rather in how the lines and edges of objects in the image are detected. Likewise, the various sliders or controls you get for each method of sharpening are there to control the amount of sharpening and to fine tune the underlying line and edge detection for that sharpening method.

So there you have it. Image sharpening explained in 4 simple sentences. It’s not that difficult. The human visual system is extremely good at detecting lines and edges, so when we sharpen an image, all we’re doing is making what we’re visually sensitive to more pronounced. It’s a very effective visual perceptual trick that we’re playing on our brains when we sharpen an image.
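For the tinkerers, the classic unsharp mask fits in a few lines. This is the textbook version in Python with Pillow and NumPy, not any particular product’s algorithm:

```python
import numpy as np
from PIL import Image, ImageFilter

def unsharp_mask(img, radius=2.0, amount=0.5):
    """Textbook unsharp mask: add back the difference between the image
    and a blurred copy, which raises contrast right at lines and edges."""
    base = np.asarray(img, dtype=np.float32)
    low = np.asarray(img.filter(ImageFilter.GaussianBlur(radius)),
                     dtype=np.float32)
    out = base + amount * (base - low)  # flat areas cancel; edges amplify
    return Image.fromarray(np.clip(out, 0, 255).astype(np.uint8))

sharpened = unsharp_mask(Image.open("photo.jpg").convert("RGB"))
```

The radius tunes the edge detection (the blur), and the amount scales how hard the edges get pushed, which is exactly the division of labor described above.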

The soapbox

I’ve noticed some image sharpening trends over the last 5-6 years that really bother me, and that make me think all these people on the internet dispensing photography information and advice, who are supposed to be photography experts, don’t really know what they’re talking about or doing. I can’t help myself. I have to say something about it.

The image sharpening aesthetic

This is a huge pet peeve of mine. All too often, people think that a sharp picture has lots of detail. As a result, they sharpen their images way, way, way too much. The Internet is riddled with posts on how and when to sharpen. You have input sharpening, creative sharpening, and output sharpening. You have tons of sharpening algorithms, plugins, and tools to increase the clarity of your images. There are companies out there whose entire business model is literally based on selling you something that will help you sharpen your images. On the camera hardware front, as of late, it seems that if a newly released camera doesn’t output a ridiculously over-sharpened image, the Internet declares it a piece of garbage. Ugh.

On top of that, the internet is flooded with images that are painfully over-sharpened (usually as a result of said companies that sell image sharpening tools), all in the name of having a nice sharp image that is “crispy”. You know what else makes a nice sharp image? A nice, moderately high resolution image with a depth of field large enough that the whole subject is entirely in focus, assuming the person taking the picture actually nailed the focus.

Image focus and resolution to the rescue

I can’t believe how many people get a super fast prime lens, proceed to shoot with it wide open, then sharpen the ever-living daylights out of the resulting images in an effort to get the subject sharp. It’s almost as if they don’t realize that when you’re shooting an 85mm+ lens on a full frame camera at f/1.4 or f/1.2, the depth of field is so small that the only thing in the image that’s going to be in nice sharp focus is one eye, or the tip of the subject’s nose, or one of the subject’s cheek bones, or their lips, or whatever the camera actually happened to lock focus on. Having a really small depth of field definitely has its uses, but if you want a nice sharp image, try stopping your aperture down to something like f/8. You’d be amazed at how much more resolution and fine detail is there, and how much sharper your photos are as a result, once they’re scaled to whatever the final output resolution is. Again, assuming that you actually nailed the focus.
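A quick sanity check with the standard thin-lens depth of field formulas backs this up (0.03 mm is the usual full frame circle of confusion; the specific numbers here are my own worked example):

```python
def dof_mm(f_mm, n, subject_mm, coc_mm=0.03):
    """Total depth of field from the standard hyperfocal approximation."""
    h = f_mm ** 2 / (n * coc_mm) + f_mm  # hyperfocal distance
    near = subject_mm * (h - f_mm) / (h + subject_mm - 2 * f_mm)
    far = subject_mm * (h - f_mm) / (h - subject_mm)
    return far - near

# An 85mm lens focused at 2 meters on full frame:
for n in (1.4, 8.0):
    print(f"f/{n}: ~{dof_mm(85, n, 2000):.0f} mm in focus")
# f/1.4: ~45 mm (one eye, maybe); f/8: ~255 mm (a whole face and then some)
```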

Have you ever seen a picture that actually had as much resolution and fine detail as what could be natively represented by the medium displaying the image? Probably not, but you’ll know it when you see it.

I’ll give an example: have you ever watched an HD movie on your iPhone? You should try it some time. It looks incredible. The reason isn’t image sharpening, but rather that you’re displaying the maximum amount of resolution and fine detail that your iPhone screen can natively represent.

Another example: ever seen Christopher Nolan’s “The Dark Knight” on Blu-Ray? He shot parts of the movie on very high resolution IMAX cameras and cut those scenes in with the rest of the film, which was shot on standard super-35 film. Even at Blu-Ray resolution (a whopping 2 megapixel image size), the difference in image resolution and fine detail between super-35 film and IMAX’s 65mm film is stunning. The IMAX sequences just look a lot sharper, not because of image sharpening, but because they contain as much resolution and fine detail as can be packed into a 1920×1080 pixel image, which results in a picture that looks very sharp with very little actual image sharpening applied or needed.

Huh. We just came full circle back to image sharpening. Imagine that.

OK. Sooo… When do you do image sharpening?

Ideally, you sharpen at the very end, when your image is at its final output size. If you have a good image with good resolution and fine detail, you’ll discover that sharpening is like salt and pepper on a great meal: a little bit goes a long way, and used just right, it greatly enhances things, but more is rarely better.

There are other places in your workflow where you can sharpen, like input sharpening, and creative sharpening, and those instances do have their uses, however, they tend to be really over-used and abused, so for the sake of simplicity, we’ll leave them off the discussion table for now and maybe visit them in separate posts.

You may have noticed that I brought up image resolution and fine detail a number of times in this post while talking about image sharpening. How much resolution you actually need is a subject for a different post, so we’ll get into that later. Suffice it to say, you don’t need nearly as much resolution as you think you do; the trick is actually acquiring that resolution in a way that makes for sharp images that need very little sharpening to look good.

Till next time.

Enlarge Images Without Pixelation

Why?

Usually, we want to reduce the size of images, not enlarge them. However, there are times where we have a reasonably small source image that we need to make bigger for one reason or another.

I actually routinely enlarge images because I have a variety of cameras that I create images with and they’re all different resolutions and bit depths, so I normalize the keepers to one larger master resolution and bit depth.

It should be noted that my method described below is not a miracle worker. You won’t be making 10x or 20x enlargements that look good, but you can easily make 3-4x enlargements that will look totally passable.

The best part? You don’t have to pay for any software beyond just having Adobe Photoshop, so no plug-ins or extra software to buy (Perfect Resize, I’m looking right at you).

How?

The how is actually pretty simple, and before I explain the specifics of what I do in my version of Photoshop, I’ll explain the generic version so that you can convert it to your software, which may not be the same thing as what I’m running. Ok? Let’s get started.

Before doing this, start with your source image and save it as a 16 bit TIFF file at its original resolution.

  1. Figure out what your final output resolution is to be and multiply each edge by four. For example, I generally normalize to 7200×4800 pixels, so I’d end up with 28,800×19,200 pixels. Resize your source image to that new, really huge resolution.

  2. Add some noise to the newly resized, really huge image. The best way to do this is with monochromatic noise so it looks more organic, but if your software can’t do that, any noise is better than no noise. I generally add between 5% and 10%. It should be Gaussian noise.

  3. Add a Gaussian blur to the really huge image. Its radius should be 2 pixels.

  4. Resize the really huge image down to your final image size. Save it as a 16 bit TIFF file and do the rest of whatever post processing you’re going to do.

That’s it! This method also works like a charm for upgrading 8 bit images to nice smooth 16 bit images that you can really push around in Lightroom/Photoshop without any ugly banding or posterization popping out at you.
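Here’s the whole recipe as a minimal sketch in Python with Pillow and NumPy (my tool choices; it works in 8 bit for brevity where the real workflow stays in 16 bit):

```python
import numpy as np
from PIL import Image, ImageFilter

def enlarge(src_path, dst_path, target_size, noise_pct=7.5):
    """Steps 1-4: upscale to 4x the target, add monochromatic Gaussian
    noise, blur by 2 px, then downscale to the final size."""
    img = Image.open(src_path).convert("RGB")

    # Step 1: resize to 4x the final output resolution.
    huge = img.resize((target_size[0] * 4, target_size[1] * 4), Image.LANCZOS)

    # Step 2: monochromatic Gaussian noise (one value shared by R, G, B),
    # treating the percentage as a fraction of full scale.
    arr = np.asarray(huge, dtype=np.float32)
    noise = np.random.normal(0.0, 255 * noise_pct / 100, arr.shape[:2])
    huge = Image.fromarray(
        np.clip(arr + noise[..., np.newaxis], 0, 255).astype(np.uint8))

    # Step 3: 2 pixel Gaussian blur at the huge size.
    huge = huge.filter(ImageFilter.GaussianBlur(2))

    # Step 4: downscale to the final size; the noise and blur now read
    # as smooth fill instead of pixelation.
    huge.resize(target_size, Image.LANCZOS).save(dst_path)

enlarge("source.tif", "master.tif", (7200, 4800))
```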

Why it works

What?!?! You’re adding noise and blur to the image! Doesn’t that destroy image quality and detail and make the image blurry?

If we were to do that to an image at its native size, then yes, all we would be doing is adding noise and making it blurry. The thing to keep in mind is that we’re doing this to an image that is 4 times the linear resolution (16 times the pixel count) of our final output resolution.

When we scale the image back down, that noise and blur has the effect of filling in and smoothing out what would otherwise be pixelation in the image.

The key to this working so well is the bit depth and the ratios we use. The small amount of noise that gets added while we’re at 4x resolution gets reduced down and visually provides a smoothing effect at the new image size without making it look soft or blurry.

Likewise, the Gaussian blur we added was two pixels on an image that is 4x the linear resolution of what it ultimately will be, meaning that when we scale the image back down, a 16×16 block of pixels gets turned into a 4×4 block of pixels; we’re scaling down more than we blurred. When we scale down more than we blur, the blur that was applied starts to do interesting things for us: it visually fills in and smooths out what would otherwise be pixelation in the image.
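To put rough numbers on it (assuming, for simplicity, that the downscale behaves like a box average):

```python
import math

scale = 28800 / 7200                 # 4x linear oversampling
pixels_averaged = scale ** 2         # each output pixel blends ~16 inputs
effective_blur = 2 / scale           # the 2 px blur lands as ~0.5 px
noise_reduction = math.sqrt(pixels_averaged)  # noise std drops by ~4x
print(effective_blur, noise_reduction)        # 0.5 4.0
```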

Combined with the added noise in step 2, it’s a very dramatic one-two punch to an image that would otherwise look pixelated and awful.

How I do it

We all use different software. I happen to use the latest versions of Adobe Lightroom CC and Adobe Photoshop CC. This isn’t a tutorial on how to use Lightroom or Photoshop; it’s just a basic walk-through of what I do. You can and should modify it to suit your needs.

All of my images start off in Lightroom at their original resolution and bit depth. I have a Lightroom catalog that I use for staging these images for processing, which is where all of my keepers make their first stop. In this catalog, I add all my metadata to each image (it makes tracking it later easier), and the only image adjustment I make here is to remove image noise that was introduced by the camera in the form of color and/or luminance noise. I’m very conservative with this, and look at the image at a 1:1 or 2:1 ratio in the area where noise is most prominent. I do just barely enough noise removal to tone down the noise, since the more noise removal you do, the more the fine detail gets muddled. This is done on an image by image basis, and how much noise reduction is applied varies greatly with which camera took the image and what ISO it was shot at.

From there, I export the image as a 16 bit TIFF file at the “super-sized” resolution (28,800×19,200 pixels or thereabouts, depending on the image aspect ratio).

I then open that TIFF file in Photoshop and add a new layer over the background layer (the image). I change the new layer’s blending mode to “overlay” and fill it with 50% gray. From there, I convert the layer to a smart object.

With the smart object selected, I go to the ‘Filter’ menu and select ‘Noise’->’Add Noise’. In the dialog box that pops up, select ‘Gaussian’ and check the ‘Monochromatic’ check-box. Change the amount to a value between 5% and 10%. I’ve found that less than 5% tends to not be enough, and more than 10% is too much. I generally set it to 7.5% as a start, then tweak it up or down as needed for best results. You should experiment to find what values work best for you based on the resolution you’re working at.

From there, I go to the ‘Filter’ menu again and select ‘Blur’->’Gaussian Blur’. In the pop-up dialog box, I select a radius of 2 pixels.

From there, tweak the amount of noise up or down for best results (you can do this because it’s a smart object).

When you’re happy with the image, go to the ‘Layer’ menu and select ‘Flatten Image’.

Now resize the image to your final enlarged size (in my case 7200×4800 pixels) and save it as a 16 bit TIFF that you’ll pull into your real Lightroom catalog.

Import the new enlarged tiff file into your Lightroom catalog that you use for managing your media, convert it to a DNG file, then finish the rest of your post processing on the file.

Isn’t this a lot of work?

Yes and no. We only do this on our keeper images, which for most photographers are only a fraction of the images they take. And the only thing we’re doing that is any different is the Photoshop bit. You should still be doing noise reduction, adding metadata, and post processing. The only real difference is a quick middle step in Photoshop that literally takes only a couple of minutes per image, if that.

Everybody should do what works for them, and what works for me isn’t necessarily for everybody; however, the process outlined above allows me to shoot everything from an iPhone jpeg, to a DSLR raw or jpeg image, to frame grabs from video, and end up with a reasonably sized standard image resolution that is actually very usable.

The proof is in the pudding: almost all of the images I’ve shot digitally recently have had this treatment, and if I hadn’t told you, you’d be none the wiser while looking at them. It makes differentiating between jpeg and raw, or between lower iPhone/video resolution and DSLR resolution, extremely difficult, which is the point.

Caveat Emptor

This works best with an image that already has reasonably good resolution content to begin with. This does not magically add detail where there is none, nor does it really add resolution or rescue images that already look terrible.

What it does do is add very high frequency, broadband information to an image in a way that our brains find very pleasant, which allows our brains to do the heavy lifting of ‘seeing’ the detail that is there in the enlarged image without seeing the unpleasant visual effects of scaling it up. In short, we’re playing a very effective visual trick on our brains, in much the same way that adding dither to digital audio allows us to hear further down into the sound.

Our brains are very good at filtering out high frequency broadband noise to get to the detail in the noise, as long as that noise isn’t overwhelming to the point of being distracting. The trick is riding that balance between helping the perceived image quality and hurting it.

Till next time.