Coming in 2017: Simple Film Lab

Photographic film has taken quite a beating in the last decade or so, and film labs have been closing left and right for quite some time now. This is unfortunate, and something I’ve struggled with myself, being as I’m primarily a film photographer.

This led me onto a path of processing and digitizing my own film, and of developing tools to do so that also give me my images in a way that is complementary to film.

I’ve finally reached a point where I can offer my services to other film photographers.

A Few Things To Note

You shoot film because of the color and look that you get with it, not because it gives you a lot of resolution or is inexpensive. So with that being said, what do I bring to the table with Simple Film Lab that is better than the other film labs out there? If you look at what other labs charge and what I will be charging, I’m certainly not less expensive from a purely monetary stance. I also won’t really be delivering the highest of resolutions either.

In order to really take advantage of what film has to offer, one must beef up the entire imaging chain. Almost every lab I’ve looked at and tried out typically scans with a Noritsu or Frontier scanner and delivers jpegs. You hear a lot about how a Frontier scanner delivers color like this or that, and how some film scanner is beloved by x type of photographers. OK. I mean no disrespect to other film labs, however, having a process where you deliver jpegs of film scans to customers is not doing the customers or film any favors.

It’s all about the color. While I do have a dedicated 35mm film scanner that is very recent and can scan 35mm film at really high resolutions, and I do have a very high resolution flatbed scanner that can scan 120 film at crazy high resolutions, I also have a way to digitize film using a very controlled light source, very good optics, and a reasonably high resolution imaging sensor. The setup I prefer could be called a DSLR film scanner, but it’s actually more complicated than that. Photographic film by definition has very high dynamic range, with a lot of color. When you digitize film, what you are essentially doing is taking a picture of the film emulsion. You can take that picture with a dedicated film scanner, a flatbed scanner, or with a digital or film camera. It’s what you do with the digitized image afterwards that makes all the difference.

Typically, the color negative is inverted by either the film scanner itself, the scanning software, or manually in Adobe Photoshop. While one can get good results with that, I’ve brought my skills as a computer programmer to bear and developed code that significantly beefs up the entire imaging and color chain after digitizing, to full 64 bit floating point in linear color space. What does that even mean? It means the process of turning the color negative into a color positive, along with the subsequent color modifications to get a usable image, happens in very high resolution 64 bit floating point linear color space. I’d love to be able to deliver 64 bit floating point linear light images to customers; however, virtually no software my customers would have access to supports that, so the next best thing is 16 bits per sample (or 48 bit) TIFF files in the ProPhoto color space.
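To make that a little more concrete, here’s a minimal sketch of what a linear-space negative inversion can look like. This is illustrative only, not my actual pipeline; the film base values and the normalization step are simplified assumptions:

```python
import numpy as np

def invert_negative(scan, film_base):
    """Invert a linear-light scan of a color negative (illustrative sketch).

    scan: float64 array of shape (H, W, 3), linear light, 0.0-1.0
    film_base: per-channel linear value of the unexposed film base (orange mask)
    """
    scan = np.clip(scan, 1e-6, None)   # avoid division by zero
    positive = film_base / scan        # denser negative -> brighter positive
    return np.clip(positive / positive.max(), 0.0, 1.0)

# Toy 1x2 "scan": first pixel is pure film base, second is an exposed area.
# These numbers are made up for illustration.
base = np.array([0.8, 0.5, 0.3])
scan = np.array([[[0.8, 0.5, 0.3],
                  [0.2, 0.25, 0.15]]], dtype=np.float64)
pos = invert_negative(scan, base)      # film-base pixel becomes the darkest tone
```

The key point is that every operation stays in floating point linear light; nothing gets quantized to 8 or 16 bit integers until the final delivery file is written.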

Because the high precision digitization workflow requires a calibrated film profile for every film we support, Simple Film Lab will not accept just any film for processing and digitizing. While we can process pretty much any C-41 film (we use standard Kodak C-41 chemicals), the service we offer is coupled together: when you send film in, it is to be both processed and digitized. The cost, therefore, might seem high per roll, but when you factor in that you’re getting processing plus a very high quality film scan, delivered as 48 bit TIFF files in the ProPhoto color space with enough resolution to make 16×24 prints, we think it’s worth it, and that there’s a market for it.

The Plan

The plan is to start accepting processing orders for Kodak Ektar 100 film in 35mm and 120 roll film in the first quarter of 2017. Kodak Portra 160, Portra 400, and Portra 800 in 120 roll film will follow in the second or third quarter, with 35mm Portra 160, 400, and 800 later in the year if there is demand, along with the 4×5 sheet versions at some point in the second half of 2017. We’re also going to keep things simple in terms of the resolutions we offer: there will only be two options, standard resolution and custom scan. Standard 2:3 resolution will be 7200×4800 pixels, with other aspect ratios having 4800 pixels on the short side, and a custom scan is exactly what it sounds like: a custom scan with output to your specifications. The standard processing/scan target price will be $20 per roll, not including shipping, and custom scans will be priced according to how much time and effort Simple Film Lab has to put in. At the end of the day, it all boils down to image processing time and who is spending that time.

All film will always be processed with fresh chemicals, and the target turnaround time will be 5-7 business days. As things pick up, we’ll be adding additional films to the catalog. There are a couple of emulsions that are pretty popular with wedding photographers (Fuji 400H, looking right at you), and we do plan to support them. That comes with some challenges, though: most labs that cater to processing/scanning that film also use Fuji Frontier scanners and already deliver pretty good results, so the biggest issue is going to be getting customers to move away from those labs and start using Simple Film Lab instead.

Additionally, you can expect very good customer service. As my own customer, I have very high standards, and I’m a firm believer in holding myself to those same standards for my customers. Because Simple Film Lab is a small operation, you’ll be dealing directly with me as a customer, and it will be my eyeballs that look at every single one of your images before they’re sent to you.

In short, Simple Film Lab is the film lab that I would want as a customer. Keep watching this space; good things are on the way.

Film Review: Kodak T-MAX 100 B&W Film Profile

Ahh… Kodak. For a while there it looked like they were in trouble, but they emerged from bankruptcy as Kodak Alaris and seem to have stabilized, and today they still produce some of the best professional film around, though with a much more limited selection than before.

But what a selection!

Today, we’re going to take a look at Kodak Professional T-MAX 100 Film, or 100TMX, which is its film code (for brevity, we’ll refer to it by its film code from here on out). This is a medium speed B&W film that comes in 35mm roll, 120 roll, and 4×5 sheet form. In 2002 Kodak released a new version of this film with much improved resolution, finer grain, and a shorter recommended development time. As a long time user of 100TMX film, I can say that it is indeed right up there in terms of resolution and fine grain. Since it’s a black and white film with no standardized development formula or development time, it can sometimes be difficult to pin down the best way to shoot and develop 100TMX, but once you have it, it’s downright gorgeous.

I primarily use 100TMX for those times when I really need the maximum resolution possible and light is not an issue.

I’m a hybrid shooter, meaning that I shoot film, but it gets digitized and stays digital all the way to the end product, whether that’s a print on paper or a digital delivery. Because of this, my process may differ from someone who also does analog prints in a wet darkroom.

Before we get too far into things, I just want to reiterate that this post is not meant to be a “pro-film” piece, a “why you should shoot film” piece, or a “film vs. digital” debate. It’s meant to take a look at a particular film and document its various characteristics, so that if you want to shoot with this film, you’ll have an idea of what to expect.


Resolution

Oh baby. 100TMX has resolution and lots of it. Looking at Kodak’s tech pub for 100TMX (document F-4016, published Feb. 2016, Google it for the latest link), Kodak states that 100TMX resolves 200 line pairs per millimeter at a 1000:1 contrast ratio. That’s a lot. If you do the math, that’s over 10,000 dpi of resolution on the film. My film scanner is 9600 dpi (though it probably doesn’t actually go that high, but that’s another story), and the film grain is fine enough that I can’t see it when I scan at max resolution, even with a wet gate scan.

35mm (135) Film (Small Format)

In full frame 35mm (135 film) terms, 200 line pairs per mm works out to a solid 14,400×9600 pixels, or about 138 megapixels for a 35mm frame. Granted, very few if any 35mm lenses can actually resolve that onto the film, so in practice, the amount of resolvable detail you’ll actually be able to see in the frame has more to do with your lens resolution and shooting prowess than the film. I typically scan at 9600 dpi, then scale down to my working resolution (or house size) of 7200×4800 pixels, which is more than enough resolution for everything up to a 24×16 inch print at 300 dpi. For that purpose, 100TMX film is effectively grain free.
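The arithmetic behind those numbers is simple enough to sketch (remember that one line pair is one dark line plus one light line, so two pixels):

```python
def film_pixels(width_mm, height_mm, lp_per_mm):
    px_per_mm = lp_per_mm * 2              # 1 line pair = 2 pixels
    w = width_mm * px_per_mm
    h = height_mm * px_per_mm
    return w, h, w * h / 1e6               # width px, height px, megapixels

# Full-frame 35mm (36x24 mm) at Kodak's stated 200 lp/mm:
w, h, mp = film_pixels(36, 24, 200)        # 14400 x 9600 pixels, ~138 MP
```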

120 Roll Film (Medium Format)

For 120 roll film, 100TMX can deliver some pretty stunning resolution. If you shoot 6×6 and scan in at 9600 dpi, you can easily have a 400+ megapixel image, and it goes up from there for 6×7, 6×8, and 6×9 frame sizes. Again, in practice, most medium format lenses don’t put anywhere near 200 line pairs per millimeter on the film (and 9600 dpi is about 189 lp/mm, way more than most lenses will do), so your effective total image resolution is limited by your lens resolution multiplied by your film frame size. I mostly shoot 6×9 and have standardized my medium format working resolution at 4x the resolution of the small format house size, which comes out to 14400×9600 pixels. At that resolution, I can do 32×48 inch prints at 300 dpi that are awesomely sharp with no visible film grain. Again, 100TMX is effectively grain free at medium format resolutions.

4×5 Sheet Film (Large Format)

For sheet film, 100TMX is not the limiting factor in terms of resolution, but it can deliver some awesomely huge resolution by virtue of the sheer size of the image frame. In large format sheet film, the resolution of your camera lens and the resolution you can scan at are your limiting factors. Unless you have a super expensive large format lens, your average large format lens can put 50-60 line pairs per mm onto the film. Some are a little less, some are a little more, but that’s about where it is, which comes out to about 2400-3000 dpi of resolvable detail on the film.

Scanning-wise, to see that resolution, you should scan at at least that resolution. I consider 4800 dpi to pretty much be the minimum resolution to scan large format film at. If you scan the whole 4×5 frame (which has an exposed area of 4.75×3.8 inches) you come out at about 400 megapixels. This doesn’t sound like more resolution than medium format, but in practice it is. Medium format lenses are not twice the resolution of large format lenses, so even though you’re scanning large format film at a lower resolution, because you’re scanning over 4x the exposed film area, your total resolution and fine detail is a lot better.

For me, I almost never scan the whole frame. I generally go for a 1.5:1 or 2:1 aspect ratio. My film scanner maxes out at 2.75×9 inches for 9600 dpi scans, so I generally either scan 2.75×4.125 inches (for 1.5:1 aspect) or 2.375×4.75 inches (for 2:1 aspect) at 9600 dpi then scale it down to 4800 dpi, which gives me 19,800×13,200 pixels for 1.5:1 aspect, and 22,800×11,400 pixels for 2:1 aspect. I do the 2:1 aspect for landscapes, and the 1.5:1 aspect for everything else.
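Those pixel counts fall straight out of the crop dimensions and the final dpi:

```python
def scan_pixels(width_in, height_in, dpi):
    # pixels on each edge for a given crop size (inches) at a given dpi
    return round(width_in * dpi), round(height_in * dpi)

# Scanned at 9600 dpi, then scaled down to a final 4800 dpi:
print(scan_pixels(4.125, 2.75, 4800))    # 1.5:1 crop -> (19800, 13200)
print(scan_pixels(4.75, 2.375, 4800))    # 2:1 crop   -> (22800, 11400)
```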

With respect to 100TMX, in large format film land, it’s very much grain free, even for the most massive enlargements.

Sample Images

Kodak TMAX 100 (100TMX) 9600 dpi film grain inspection 01
The image above was shot with a Hasselblad 500C/M, scanned in at 9600 dpi and cropped to 1.5:1. You can click through to Flickr and look at the larger 9600 dpi version. There’s no film grain.

Kodak TMAX 100 (100TMX) 9600 dpi film grain inspection 02
The image above was shot with a Hasselblad 500C/M, scanned in at 9600 dpi and cropped to 1.5:1. You can click through to Flickr and look at the larger 9600 dpi version. There’s no film grain.

Exposure Latitude and Dynamic Range

When doing research for this post, I ran across a lot of opinions about how much exposure latitude and dynamic range this film has. In my own experience, in a hybrid workflow, 100TMX film has a massive amount of exposure latitude and plenty of dynamic range. I’ve seen a lot of people say things on the internet where it’s clear that they don’t really shoot 100TMX, or if they do, they don’t develop it and scan it themselves. Kodak officially rates it at one stop under and 3 stops over. In my experience, for most subject brightness ranges seen in daylight type shooting, I routinely shoot plus or minus 2 stops without worry and no change to how I develop the film. To put it in simpler terms, 9 times out of 10, I generally expose using the sunny 16 rule and 99% of the time, once I scan it, I have an image that I can totally use. 1/125 and f/16 (or equivalent exposure) is always my starting point. If there are clouds in the sky, I don’t change anything. If it goes to full overcast, I don’t change anything unless something that I want to have good detail is in full or open shade, then I’ll open it up by a stop or two. As it starts to go to late afternoon/early evening, I’ll open it up by a stop, then by another in mid-late evening. Much longer than that, and I start getting down into f/4 and 1/60 or 1/30 of a second territory, so at that point, if I’m still shooting, I’ll switch to a faster film (Kodak T-MAX 400, in case you were wondering).

As a hybrid shooter, as long as you get an exposure that has enough density in the darker areas of the frame that are important, once you scan it (raw with no gamma correction), you’ll have an image that is totally usable and can tweak to your heart’s content. As you get closer to the extremes of over and under exposure, your unique tones start to get squashed closer and closer together until at some point they’re close enough together that your scanner can’t tell them apart. It’s at that point that you don’t have a usable image. There’s a zone leading up to that point (particularly on denser negatives) where your tones are close enough together that even though your scanner can see them, you don’t have enough unique tones to prevent banding in your image as you push and pull it around in your digital darkroom. Those images aren’t completely unusable, but you will be limited by how much you can push them around. Once you get out of that zone and into good tone-density territory, pretty much anything is usable once scanned.

So with that being said, just how much exposure latitude is there? Time for some sample images!

Sample Images

Below are some sample images that I’ve taken of the same scene during noonday sun. I’ve exposed the scene over 8 images, with each image exposed 1 stop brighter than the last. Following the sunny 16 rule, the image that is nominally exposed will be considered EV 0. The first of the 8 images was exposed at 1/500 of a second and f/32, a whopping 4 stops underexposed, or EV -4. The last of the 8 images was exposed at 1/30 and f/11, a whopping 3 stops overexposed, or EV +3, for a total of 8 stops of exposure latitude.

The scene consists of a black foam core board (for reflected black), a white foam core board (for reflected white), a grey exposure card, a white balance card, and an X-Rite ColorChecker chart. All the scans were completed with the same scanner gain setting. After scanning, each exposure was gamma corrected so that the exposure card sat at a 46.6% luminance level. The unified scan level and gamma correction were done purely as a baseline to bring all the exposures to roughly the same exposure level, making it easier to see the differences between exposures in terms of contrast and unique tones.
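For the curious, here’s one way such a grey-card gamma normalization can be computed. This is a simplified sketch, not my exact processing code:

```python
import math

TARGET = 0.466   # grey-card target luminance from the test above

def gamma_for_card(card_value):
    """Find the gamma that maps the measured grey-card level to TARGET,
    assuming the usual correction form: out = in ** (1 / gamma)."""
    return math.log(card_value) / math.log(TARGET)

def apply_gamma(value, gamma):
    return value ** (1.0 / gamma)

# e.g. an underexposed frame where the card scanned in at 25%:
g = gamma_for_card(0.25)
corrected = apply_gamma(0.25, g)   # lands on 0.466 by construction
```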

In practice, each image would be scanned with a scanner gain setting at the maximum level while not clipping the lightest part of the negative to allow for the maximum tonal resolution possible, and then post processed with whatever would be the best combination of gamma and exposure/curves/contrast to get the best looking image.

So with that, the images!

EV +3

EV +3

EV +2

EV +2

EV +1

EV +1

EV 0

EV 0

EV -1

EV -1

EV -2

EV -2

EV -3

EV -3

EV -4

EV -4


Looking at the images above, not a single one of them is really an issue. That’s 8 stops of exposure latitude (on the same roll, developed the same way), and none of the above images would present a problem when it came to tuning them up into usable images. Some would take a few more tweaks than others, but as a whole, not one of them is even remotely close to being a problem image. In fact, for the image that was exposed a whopping 4 stops under (EV -4), all I did was scan it in, invert it, and change the gamma from 1.0 to 1.01. I could have left the gamma alone entirely and simply done some contrast and curves tweaks and still ended up with a nice image.


The more contrast and dynamic range contained in the subject you’re photographing, the faster you will run into banding and unusable images as you over or under expose. It’s a fact of life and a fact of how film works. Not just this film, but all film. As I’ve said before, in practice, with 100TMX, I’ve not had a single issue with ±2 stops for most things you’ll encounter during normal daylight, and there are a lot of scenarios with even less contrast and dynamic range in the image where you have lots more exposure leeway. But that’s what it is: exposure leeway. It lets you shoot into darker and darker light, and if indoors, lets you set one exposure setting with flash and shoot away without having to worry about it. As long as what you’re shooting is within the range of your flash, you can get a usable image. The latitude is so huge that unless you just completely blow the exposure and are way off in the weeds, you’ll get a usable image.


Development

Here’s where it gets a little tricky. As with all black and white films, there are development guidelines. Any film photographer who shoots black and white and is worth their salt develops their own film. We all have our preferred methods, and every single film has a multitude of ways it can be developed, none of which are really standardized.

For 100TMX, I prefer to use plain old Kodak D76 developer and Kodak Fixer. D76 comes in powder form, and so it’s really easy to measure and mix. I’ve had really good results with it and frankly, it’s reasonably cheap, and keeps in powder form for a really long time. I dilute the D76 to 1:1 as a one shot mix that gets discarded after use. Doing that, a single bag of D76 nets me about 15 rolls of either 35mm or 120 film, or about 60 sheets of 4×5 sheet film if I develop 4 sheets at a time.

My development process is as follows:

  1. I get a roll of 100TMX loaded into my Paterson daylight processing tank using my film changing bag, and place the Paterson tank on the bathroom sink.
  2. I measure out 600 milliliters of room temperature filtered bottled water and pour it into the tank to let the film soak.
  3. I measure out 27 grams of D76 powder (using a table top food scale) and mix it with 550 milliliters of room temperature filtered bottled water (in a reused and clean Gatorade bottle).
  4. I shake the Gatorade bottle until it looks like all the D76 has dissolved, then use a food thermometer to measure the temperature. If it’s warmer than 75 degrees Fahrenheit (almost never) I’ll throw it in the freezer or fridge until I’m ready to use it a few minutes later.
  5. I go out into the garage and get the gallon of Kodak Fixer that I have mixed up and stage it next to the Paterson processing tank in the bathroom, I’ll generally make sure I can quickly open it before starting, as sometimes it can get a bit stuck if it’s been a while since I’ve used it.
  6. I measure out 600 milliliters of room temperature filtered bottled water and stage it next to the Paterson tank. This is the stop bath.
  7. I get the D76 mix from the fridge (if I put it there) and measure the temp again. I look up the development time for whatever the temp is on Kodak’s D76 tech pub for 100TMX (it’s the shorter time between the two 100TMX films listed unless you have the old pre-2002 version of the film). I round up to the next temp time if the measured temp is between two of the temps on the tech pub. This is not a big deal. Just don’t be over 75 degrees or under 65 degrees.
  8. I turn on the sink water and pour out the soak bath while the sink water is running. The soak bath will be purple. This is OK! Turn the sink water off after the soak bath is rinsed down the drain.
  9. I start my phone’s lap timer and at the same time pour the D76 into the Paterson tank.
  10. I get the lid to the tank on as quickly as possible (this usually takes 15-20 seconds to do), then slowly perform tank inversions until the timer hits 60 seconds.
  11. Set the tank down and tap it on the sink counter a couple of times to dislodge air bubbles.
  12. At the 90 second mark (30 seconds after the last inversion sequence), do 3 tank inversions over a 5-10 second period for all three combined. Each inversion should be about 2-3 seconds.
  13. Repeat steps 11 and 12 for the remainder of the development time. Basically, you want to do inversions every 30 seconds and tap to dislodge bubbles after the sequence.
  14. Remove the lid to the tank after the last inversion sequence before the time is up, this should be 15-30 seconds before the time is up.
  15. 5-10 seconds before the time is up turn on the sink tap and start pouring out the D76 mix into the sink.
  16. Immediately pour the 600 milliliters of stop bath into the tank and use the mixer stick that came with the tank to agitate the stop bath. Do this for 60 seconds.
  17. Pour the stop bath into the sink.
  18. Pour the Kodak Fixer into the tank. You need to put at least 500 ml in (and can measure it out if you want to), but I generally pour it in until it’s in the top funnel part of the Paterson tank, which is enough.
  19. Put the lid back on the tank (after rinsing it) and restart the lap timer.
  20. Do 3 inversions every 30 seconds for the first 5 minutes, then 5 inversions every 60 seconds for the next 5 minutes. At this point, if your fixer is new and fresh, you’re good to go, if it’s getting exhausted, you’ll need to keep going. You can’t over fix. I generally fix for 10-15 minutes. New fixer is 10 minutes, older fixer is 15 minutes.
  21. Pour the fixer back into the fix solution bottle. This is the one thing you do re-use. The more exhausted it is, the yellower the mix will be.
  22. You can now open the Paterson tank.
  23. With the film still inside the Paterson tank (open or not), put the tank under the sink faucet (cold, not hot) and rinse the film until it is no longer pink. This takes anywhere from 15 to 20 minutes. Sometimes longer. What I do is turn the faucet on just fast enough to fill the tank in about a minute and leave it running. Once every 5 minutes, I dump the tank and put it back under the faucet. I do that until the film isn’t pink anymore.
  24. Measure out 600 milliliters of filtered bottled water and put 1 drop of baby shampoo in it.
  25. Turn off the sink faucet, dump the water out, and pour in the 600ml that you just put the baby shampoo in.
  26. Let it sit for a minute, then pull out the film, get it off the development reel, and hang it to dry.
  27. You’re done! Clean up your mess. Put the Fixer back in the garage. Wash all the tank parts and set them out to dry.
  28. A couple of hours later, your film will be dry. Take it off the hanger, cut it to fit your film sleeves, and scan in the ones you want to digitize. Then sleeve your film.
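If it helps to visualize the agitation timing in steps 9 through 16, here’s a simple sketch that prints the schedule for a given development time. It’s purely illustrative; use whatever timer workflow works for you:

```python
def agitation_schedule(dev_time_s):
    """Sketch of the inversion schedule above: continuous inversions for the
    first 60 seconds, then 3 inversions every 30 seconds, ending shortly
    before the developer gets poured out."""
    events = [(0, "pour in developer, start timer, slow continuous inversions to 60s")]
    t = 90
    while t < dev_time_s - 15:   # last sequence ends 15-30s before time is up
        events.append((t, "3 inversions over 5-10s, then tap to dislodge bubbles"))
        t += 30
    events.append((dev_time_s, "pour out developer, pour in stop bath"))
    return events

# Example: a hypothetical 9.5 minute development time.
for t, action in agitation_schedule(9 * 60 + 30):
    print(f"{t // 60:02d}:{t % 60:02d}  {action}")
```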

100TMX Film Look

So, if you do a reasonably good job shooting and developing it, what does it look like?

It looks like that!

It looks like that! Pretty awesome right?

How Much Image Resolution Do You Need?

It depends. Don’t you hate those types of answers? Unfortunately, there is no simple answer, because you have input resolution and output resolution, with sub-types inside each.

Cut to the chase

If all you do is share your pictures online or make 4×6, 5×7, or 8×10 prints, then pretty much any camera made within the last 5-10 years will give you more image resolution than you need, and you shouldn’t even need to think about it. Carry on snapping away. If you want to learn about why, then read on.

Keeping it simple

In the interest of distilling information down to useful bite size chunks, let’s start with output resolution.

Output resolution

The vast majority of us don’t actually print our pictures, however, print resolutions are typically a lot higher than screen resolutions for a given amount of surface area, so we’ll talk in terms of print resolution.

For this discussion, we’ll equate dots and pixels as the same thing, so terms like dpi (dots per inch) and ppi (pixels per inch) mean the same thing. Likewise, we’ll say a pixel and a dot are a single discrete full color entity that we can see. It may be made up of one or more smaller sub-parts, like a red element, a green element, and a blue element in a computer display, or multiple ink droplets on a piece of paper, but for this discussion it’s one discrete visible full color unit.

It turns out that the human eye actually does have a finite amount of resolution, in terms of how we would describe it if it were a digital sensor. If we were to print a resolution test onto a piece of paper and look at it up close and personal, the magic number where we stop being able to discern fine detail sits right around 300 dots per inch, or 150 line pairs per inch on the paper (or screen). That means if we take 150 black lines that are 1 pixel wide and 300 pixels tall, and 150 white lines that are also 1 pixel wide and 300 pixels tall, alternate between them, and print them so that all 300 lines fit into a 1×1 inch square, it would actually look like a grey square to most of us instead of alternating black and white lines. This is why most magazines print at 300 dpi, and your iPhone’s Retina display is also about 300 dpi. Spatially speaking, our vision starts to poop out, and adding more image resolution than that generally does not make the picture have more detail or look sharper to our eyes.

So, with this 300 dpi number, it’s pretty easy to do some simple math and extrapolate out how much output resolution we need for the various ways we look at our pictures: for a 4×6 inch print, 300 dpi times 4 inches is 1200 pixels on the short edge, and 300 dpi times 6 inches is 1800 pixels on the long edge, or an image that is 1800×1200 pixels. That’s a measly 2.1 megapixels.

The same math for an 8×12 (or 8×10) print comes out to 3600×2400 pixels, or a very modest 8.6 megapixels. A 16×24 (or 16×20) print is 7200×4800 pixels, or 34.5 megapixels. Now we’re starting to get into some serious resolution.
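All of those numbers come from the same bit of arithmetic:

```python
def print_pixels(width_in, height_in, dpi=300):
    # pixels needed for a print of the given size at the given dpi
    w = round(width_in * dpi)
    h = round(height_in * dpi)
    return w, h, round(w * h / 1e6, 2)     # width px, height px, megapixels

print(print_pixels(6, 4))     # 4x6 print   -> (1800, 1200, 2.16)
print(print_pixels(12, 8))    # 8x12 print  -> (3600, 2400, 8.64)
print(print_pixels(24, 16))   # 16x24 print -> (7200, 4800, 34.56)
```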

The 16×24 print size notwithstanding, if we take display sizes into account, we soon discover some correlations. The average computer display or HD TV can comfortably display the 1800×1200 resolution image with little to no scaling and look quite good. A newer 4K display can display the 3600×2400 resolution image with little to no scaling and look quite good. It should be noted that the aspect ratios between print and screen aren’t the same, so if you never intend to print, you can crop your images to 16:9 aspect to match your display and simply size your output to either HD (1920×1080) or UHD (3840×2160) resolution and call it a day.

What I do

I’ve standardized my “house format”, if you will, to that of a 16×24 (or 16×20) print size, meaning all of my keeper images, regardless of their input resolution, get scaled up or down to 7200×4800 pixels. That’s my working/master resolution (as of 2016).

For paying clients, for standard uses, the deliverable is 3600×2400 (or 3840×2160) pixels, scaled down from the 7200×4800 master, unless they’re going to print larger than 8×10. In that case, the conversation shifts to commissioning me to get the image and do the print for them, since I’ll typically want to capture more resolution than normal and work with a print service that specializes in larger print sizes, and that involves renting gear that is appropriate for what the client wants in terms of output size. Depending on the size or aspect ratio of the print, I may break from the 7200×4800 and go larger, but that is starting to get into higher end output, and very low volume. Keep in mind that a 16×20 or 16×24 print, while not really huge, is not small. It’s 4 times the size of an 8×10. It’s big enough that you have a frame built for it and hang it on a wall.

Input resolution

This is where it starts to get a little techie, and it can be a bit confusing if you’re not a tech head, so let’s take it a little at a time. I saved input resolution for last, since the resolution you want or need to output tends to drive the resolution you need to acquire as your input.

When it comes to input resolution, it can be greatly affected by lots of different factors, so now is a great time to talk about the concept of “effective resolution”. For example, when you take a picture, how much visible resolution ends up in your image that you can see as fine detail is affected by things such as mirror slap (if you’re shooting an SLR or DSLR), shutter shock, how long your shutter is open (which affects how much movement happened, which shows up as blurring), your hand shaking the camera when you press the shutter button (which also shows up as blurring), how deep your depth of field is (which affects how much of your image is in really sharp focus), how much image noise the sensor is introducing into the image, how much diffraction is happening in your lens (depending on your lens f-stop), and how much spatial resolution the lens you’re using is capable of actually putting onto your camera sensor. These are all things that will affect how much resolution you’re effectively putting into your image, regardless of whether you’re shooting film or digital, full frame, medium format, large format, APS-C, Micro 4/3, or smaller. We haven’t even started talking about the raw image sensor resolution yet.

In short, that nice crispy 24 megapixel camera you just picked up? Unless you’re using a really high resolution lens (which is incredibly expensive), and practicing some pretty rigorous shooting process to keep your camera movement and vibration under control, you’re not getting anywhere near 24 megapixels of resolution when you take a picture. Even then, your camera sensor is hiding a dirty little secret.

You see, a 24 megapixel camera outputs an image that is 6000×4000 pixels. It does not actually have 6000 red pixels, 6000 green pixels, and 6000 blue pixels across, and the same goes vertically. Nope. What it does have is 6000×4000 light-detecting photosites with a color filter array placed over them (usually in a Bayer pattern). The color filter array divides those 6000×4000 photosites up between red, green, and blue. Since human vision is most sensitive to green, a full half of the sensor resolution gets filtered to green, and the remaining half goes to red and blue, which each get a quarter of the resolution. To get to an image that you can actually see, this then goes through a demosaicing process into an image that is 6000 red, green, and blue pixels by 4000 red, green, and blue pixels.

What this means is that for a 24 megapixel image captured with a 24 megapixel camera, you are effectively seeing 12 megapixels of green, and 6 megapixels each of red and blue. Even though, spatially speaking, you have 24 million light-sensing elements on your sensor, you are not getting 24 megapixels of full color information. It’s actually more like 6 to 9 megapixels of full color information, which interestingly enough is right in the ballpark of making a really nice 8×10 print.
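As a rough illustration of how a color filter array divides up the sensor, here’s a small sketch. It assumes an RGGB layout (one common Bayer arrangement); the specific pattern varies by camera:

```python
import numpy as np

h, w = 4000, 6000                      # a 24 megapixel sensor
# RGGB Bayer pattern: each 2x2 block of photosites has 1 red, 2 green, 1 blue
R, G, B = 0, 1, 2
pattern = np.array([[R, G],
                    [G, B]], dtype=np.uint8)
cfa = np.tile(pattern, (h // 2, w // 2))   # tile the 2x2 block across the sensor

print((cfa == R).sum() / 1e6)  # 6.0  million red photosites
print((cfa == G).sum() / 1e6)  # 12.0 million green photosites
print((cfa == B).sum() / 1e6)  # 6.0  million blue photosites
```

The counts fall straight out of the 2×2 tiling: half the photosites see green, a quarter each see red and blue.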

This is why medium format cameras have been 40+ megapixels for a while now. It’s less about getting the raw spatial resolution, and more about effectively getting more full color spatial resolution. This is why an 8×10 print from a 50 megapixel Canon 5Ds looks so much better than the same picture taken with a ten-year-old, 10 megapixel Canon Digital Rebel XTi. It’s not about the raw spatial resolution, since we can’t really see more than 300 dpi on the page anyway; it’s about getting 300 dpi of full color information.
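The 300 dpi arithmetic is easy to sanity-check with a back-of-the-envelope sketch:

```python
def print_megapixels(width_in, height_in, dpi=300):
    """Megapixels needed to fill a print of the given size at a given dpi."""
    return (width_in * dpi) * (height_in * dpi) / 1e6

print(print_megapixels(8, 10))   # 7.2  -- an 8x10 needs ~7 MP of real detail
print(print_megapixels(16, 20))  # 28.8 -- at face value, before viewing distance
```

An 8×10 at 300 dpi works out to 2400×3000 pixels, or 7.2 megapixels of real detail, which lines up with the 6 to 9 megapixels of full color information mentioned above.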

What about large prints?

But wait a minute, photographers have been making large prints with cameras that don’t have anywhere near that resolution for a while now, and they look great. What gives? Well, as it turns out, the larger you print, the less spatial resolution you actually need. It sounds counterintuitive, but once you get into 16×20 or larger print sizes, you stop looking at the print up close and personal like you would a smaller one, and instead stand back to take it in. The further away from the print you stand, the fewer dpi your eyes can actually resolve on it. This is why a 65″ 1080p HDTV, which is only 2 megapixels, still looks good: you sit further away from it than you would a smaller TV. Combine that with the fact that our brains are very good at filling in missing information, and all the photographer has to do is make sure that the image is scaled up in a way that pixelation isn’t obvious if inspected up close. Our brains will do the rest.
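The viewing distance effect can be roughly quantified. Assuming 20/20 vision resolves features that subtend about one arcminute (a common rule of thumb, not a figure from this post), a sketch might look like:

```python
import math

def resolvable_dpi(distance_inches, acuity_arcmin=1.0):
    """Approximate dpi the eye can resolve at a given viewing distance,
    assuming the smallest separable feature subtends ~1 arcminute."""
    theta = math.radians(acuity_arcmin / 60.0)   # arcminutes -> radians
    return 1.0 / (distance_inches * theta)       # features per inch

print(round(resolvable_dpi(12)))   # ~286 dpi at reading distance
print(round(resolvable_dpi(96)))   # ~36 dpi at eight feet, big print or TV distance
```

At eight feet, the eye only resolves a few dozen dpi, which is why a 2 megapixel 65″ HDTV holds up just fine.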

With that being said, for larger print sizes more camera resolution will generally result in a better looking output image, simply because we’re putting more raw spatial and full color resolution into the image and have a lot more real estate to fill with it, up to the point where we’re effectively putting more than 300 dpi on the print surface.


So how much image resolution do we really need? For the average person sharing online and making 8×10 or smaller prints, a camera that is at least 6 or 7 megapixels will provide totally usable images. The larger you print, the more resolution you’ll want. Digital cameras have just recently gotten to the point where we can actually capture and put all the full color resolution that we can see into an 8×10 print, which makes for super fabulous prints, so this is a great time to be taking pictures.

Image Sharpening Explained, Simply

What is image sharpening? We’ve all heard about it and have undoubtedly heard about various image sharpening tools like unsharp mask or smart sharpen, but I’ve found that very few of us actually understand what image sharpening is.

So what is image sharpening?

No matter what image sharpening tool or algorithm you use, they all have the same end result, which is to increase the contrast of the lines and edges of objects in the image. That’s all image sharpening is. The primary differences between the various algorithms or methods of sharpening isn’t the end result, but rather how the lines and edges of objects in the image are detected. Likewise the various sliders or controls you get for each method of sharpening are to control the amount of sharpening and to fine tune the underlying line and edge detection for that sharpening method.

So there you have it. Image sharpening explained in 4 simple sentences. It’s not that difficult. The human visual system is extremely good at detecting lines and edges, so when we sharpen an image, all we’re doing is making what we’re visually sensitive to more pronounced. It’s a very effective visual perceptual trick that we’re playing on our brains when we sharpen an image.
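To make the “increase the contrast of lines and edges” idea concrete, here’s a minimal unsharp mask sketch on a one-dimensional step edge. The box-blur kernel is my own simplification; real tools typically use a Gaussian:

```python
import numpy as np

def unsharp_mask(signal, amount=1.0, radius=1):
    """Sharpen by adding back the difference between the signal and a blurred
    copy of it -- the difference is large only near edges, so edges gain contrast."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)  # simple box blur
    blurred = np.convolve(signal, kernel, mode="same")
    return signal + amount * (signal - blurred)

edge = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)  # a step edge
sharp = unsharp_mask(edge)
print(sharp[2:6])  # [ 0.     -0.3333  1.3333  1.    ]
```

The undershoot and overshoot on either side of the step is exactly the edge contrast boost that our visual system reads as “sharpness”.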

The soapbox

I’ve noticed some image sharpening trends over the last 5-6 years that really bother me, and they make me think that a lot of the people on the internet dispensing photography information and advice as supposed experts don’t really know what they’re talking about or doing. I can’t help myself. I have to say something about it.

The image sharpening aesthetic

This is a huge pet peeve of mine. All too often, people think that a sharp picture has lots of detail. As a result, they sharpen their images way, way, way too much. The Internet is riddled with posts on how and when to sharpen. You have input sharpening, creative sharpening, and output sharpening. You have tons of sharpening algorithms, plugins, and tools to increase the clarity of your images. There are companies out there whose entire business model is literally based on selling you something that will help you sharpen your images. On the camera hardware front, as of late, it seems that if a newly released camera doesn’t output a ridiculously over-sharpened image, the Internet declares it a piece of garbage. Ugh.

On top of that, the internet is flooded with images that are painfully over-sharpened (usually as a result of said company that sells image sharpening tools), all in the name of having a nice sharp image that is “crispy”. You know what else makes a nice sharp image? A nice moderately high resolution image that has a depth of field that is large enough so that the whole subject of the image is entirely in focus, assuming the person taking the picture actually nailed the focus.

Image focus and resolution to the rescue

I can’t believe how many people get a super fast prime lens, and then proceed to shoot with it wide open, then sharpen the ever-living daylights out of the resulting images in an effort to get the subject sharp. It’s almost as if they don’t realize that when you’re shooting an 85mm+ lens on a full frame camera at f/1.4 or f/1.2, the depth of field is so small that the only thing in the image that’s going to be in nice sharp focus is one eye, or the tip of the subject’s nose, or one of the subject’s cheek bones, or their lips, or whatever the camera actually happened to lock focus on. Having a really small depth of field definitely has its uses, but if you want a nice sharp image, try stopping your aperture down to something like f/8. You’d be amazed at how much more resolution and fine detail is there, and how much sharper your photos are as a result once they’re scaled to the final output resolution. Again, assuming that you actually nailed the focus.

Have you ever seen a picture that actually had as much resolution and fine detail as what could be natively represented by the medium displaying the image? Probably not, but you’ll know it when you see it.

I’ll give an example: have you ever watched an HD movie on your iPhone? You should try it some time. It looks incredible. The reason why isn’t image sharpening, but rather that you’re displaying the maximum amount of resolution and fine detail that your iPhone screen can natively represent.

Another example: ever seen Christopher Nolan’s “The Dark Knight” movie on Blu-Ray? He shot parts of the movie on very high resolution IMAX cameras and cut those scenes in with the rest of the film, which was shot on standard 35mm film. Even at Blu-Ray resolution (a whopping 2 megapixel image size), the difference in resolution and fine detail between 35mm film and IMAX’s 65mm film is stunning. The IMAX sequences just look a lot sharper, not because of image sharpening, but because they contain as much resolution and fine detail as can be packed into a 1920×1080 pixel image, which results in a picture that looks very sharp with very little actual image sharpening applied or needed.

Huh. We just came full circle back to image sharpening. Imagine that.

OK. Sooo… When do you do image sharpening?

Ideally, you sharpen at the very end, when your image is at its final output size. If you have a good image with good resolution and fine detail, you’ll discover that sharpening is like salt and pepper in a great meal. A little bit goes a long way, and used just right, it greatly enhances things; more, however, is rarely better.

There are other places in your workflow where you can sharpen, like input sharpening, and creative sharpening, and those instances do have their uses, however, they tend to be really over-used and abused, so for the sake of simplicity, we’ll leave them off the discussion table for now and maybe visit them in separate posts.

You may have noticed that I brought up image resolution and fine detail a number of times in this post while talking about image sharpening. How much resolution you actually need is a subject for a different post, so we’ll get into that later. Suffice it to say, you don’t need nearly as much resolution as you think you do; the trick is acquiring that resolution in a way that makes for sharp images that need very little sharpening to look good.

Till next time.

Enlarge Images Without Pixelation


Usually, we want to reduce the size of images, not enlarge them. However, there are times where we have a reasonably small source image that we need to make bigger for one reason or another.

I actually routinely enlarge images because I have a variety of cameras that I create images with and they’re all different resolutions and bit depths, so I normalize the keepers to one larger master resolution and bit depth.

It should be noted that my method described below is not a miracle worker. You won’t be making 10x or 20x enlargements that look good, but you can easily make 3-4x enlargements that will look totally passable.

The best part? You don’t have to pay for any software beyond just having Adobe Photoshop, so no plug-ins or extra software to buy (Perfect Resize, I’m looking right at you).


The how is actually pretty simple, and before I explain the specifics of what I do in my version of Photoshop, I’ll explain the generic version so that you can convert it to your software, which may not be the same thing as what I’m running. Ok? Let’s get started.

Before doing this, start with your source image and save it as a 16 bit tiff file at its original resolution.

  1. Figure out what your final output resolution is to be and multiply each edge by four. For example, I generally normalize to 7200×4800 pixels, so I’d end up with 28,800×19,200 pixels. Resize your source image to that new, really huge resolution.

  2. Add some Gaussian noise to the newly resized really huge image. The best way to do this is with monochromatic noise so it looks more organic, but if your software can’t do that, any noise is better than no noise. I generally add between 5% and 10%.

  3. Add a Gaussian blur to the really huge image. Its radius should be 2 pixels.

  4. Resize the really huge image down to your resulting image size. Save it as a 16 bit tiff file and do the rest of whatever post processing you’re going to do.

That’s it! This method also works like a charm for upgrading 8 bit images to nice smooth 16 bit images that you can really push around in Lightroom/Photoshop without any ugly banding or posterization popping out at you.
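For anyone who’d rather script the steps above than click through them, here’s a NumPy-only sketch on a single-channel grayscale image. The block-replication upscale, box blur, and block-average downscale are my stand-ins for Photoshop’s resampling and Gaussian blur; the noise and blur parameters come from the steps above:

```python
import numpy as np

def enlarge(img, scale=4, work=4, noise_pct=7.5, blur_radius=2):
    """Sketch of the recipe: upscale well past the target, add noise and a
    mild blur at the oversized resolution, then average back down to target."""
    # Step 1: upscale to (scale * work)x each edge by pixel replication
    huge = np.kron(img, np.ones((scale * work, scale * work)))

    # Step 2: Gaussian noise at the oversized resolution (0-255 value range)
    huge += np.random.normal(0.0, 255 * noise_pct / 100, huge.shape)

    # Step 3: a crude separable blur -- average each pixel with its neighbors
    k = 2 * blur_radius + 1
    kernel = np.ones(k) / k
    huge = np.apply_along_axis(lambda r: np.convolve(r, kernel, "same"), 1, huge)
    huge = np.apply_along_axis(lambda c: np.convolve(c, kernel, "same"), 0, huge)

    # Step 4: downscale to the target size by averaging work-sized blocks
    h, w = huge.shape
    out = huge.reshape(h // work, work, w // work, work).mean(axis=(1, 3))
    return np.clip(out, 0, 255)

small = np.full((100, 150), 128.0)   # a tiny stand-in source image
big = enlarge(small, scale=4)        # a 4x enlargement
print(big.shape)  # (400, 600)
```

This is only a toy to show the shape of the process; in practice, Photoshop’s Lanczos-style resampling and true Gaussian blur will produce noticeably better results than pixel replication and a box blur.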

Why it works

What?!?! You’re adding noise and blur to the image! Doesn’t that destroy image quality and detail and make the image blurry?

If we were to do that to an image at its native size, then yes, all we would be doing is adding noise and making it blurry. The thing to keep in mind is that we’re doing this to an image that is 4 times the linear resolution (16 times the pixel count) of our final output resolution.

When we scale the image back down, that noise and blur has the effect of filling in and smoothing out what would otherwise be pixelation in the image.

The key to this working so well is the bit depth and ratios that we use. The small amount of noise that gets added while we’re at 4x the linear resolution gets reduced down and visually provides a smoothing effect to the new image size without making it look soft or blurry.

Likewise, the Gaussian blur we added was two pixels on an image that is 4x the linear resolution of what it will ultimately be, meaning that when we scale the image back down, a 16×16 block of pixels gets turned into a 4×4 block. In other words, we’re scaling down more than we blurred, and when we do that, the blur that was applied starts to do interesting things for us: it visually fills in and smooths out what would otherwise be pixelation in the image.
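The “scaling down more than we blurred” point is just a ratio; a one-liner makes it explicit:

```python
work_scale = 4      # we work at 4x the final linear resolution
blur_radius = 2.0   # pixels of Gaussian blur applied at the oversized size
print(blur_radius / work_scale)  # 0.5 -- effective blur radius at the final size
```

Half a pixel of effective blur is below what reads as softness, which is why the final image doesn’t look blurry.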

Combined with the added noise in step 2, it’s a very dramatic one-two punch to an image that would otherwise look pixelated and awful.

How I do it

We all use different software. I happen to use the latest versions of Adobe Lightroom CC and Adobe Photoshop CC. This isn’t a tutorial for how to use Lightroom or Photoshop, it’s just a basic walk-through of what I do. You can and should modify it to suit your needs.

All of my images start off in Lightroom at their original resolution and bit depth. I have a Lightroom catalog that I use for staging these images for processing, where all of my keepers make their first stop. In this catalog, I add all my metadata to each image (it makes tracking it later easier), and the only image adjustment I make here is to remove color and/or luminance noise that was introduced by the camera. I’m very conservative with this and look at the image at a 1:1 or 2:1 ratio in the area where noise is most prominent. I only do just barely enough noise removal to tone the noise down, since the more noise removal you do, the more the fine detail gets muddled. This is done on an image-by-image basis, and how much noise reduction is applied varies greatly depending on which camera took the image and what ISO it was shot at.

From there, I export the image out as a 16 bit tiff file at the “super-sized” resolution (28,800×19,200 pixels, or thereabouts, depending on the image aspect ratio).

I then open that tiff file in Photoshop and add a new layer over the background layer (which is the image). I change the new layer’s blending mode to “overlay” and fill it with 50% gray. From there I convert the layer to a smart object.

With the smart object selected, I go to the ‘Filter’ menu and select ‘Noise’->’Add Noise’. In the dialog box that pops up, select ‘Gaussian’, and check the ‘Monochromatic’ check-box. Change the amount to a value between 5% and 10%. I’ve found that less than 5% tends not to be enough, and more than 10% is too much. I generally set it to 7.5% as a start, then tweak it up or down as needed for best results. You should experiment with what values work best for you based on what resolution you are working at.

From there, I go to the ‘Filter’ menu again and select ‘Blur’->’Gaussian Blur’. In the pop up dialog box I select a radius of 2 pixels.

From there, tweak the amount of noise up or down for best results (you can do this because it’s a smart object).

When you’re happy with the image, go to the ‘Layer’ menu and select ‘Flatten Image’.

Now resize the image to your final enlarged size (in my case 7200×4800 pixels) and save it as a 16 bit tiff that you’ll pull into your real Lightroom catalog.

Import the new enlarged tiff file into your Lightroom catalog that you use for managing your media, convert it to a DNG file, then finish the rest of your post processing on the file.

Isn’t this a lot of work?

Yes and no. We only do this on our keeper images, which for most photographers is only a fraction of the images that they take. And the only thing we’re doing that is any different is the Photoshop bit. You should still be doing noise reduction, adding metadata, and post processing. The only real difference is a quick middle step in Photoshop that literally only takes a couple minutes per image, if that.

Everybody should do what works for them, and what works for me isn’t necessarily for everybody. However, the process I outlined above allows me to shoot everything from an iPhone jpeg, to a DSLR raw or jpeg image, to frame grabs from video as jpegs, and end up with a reasonably sized standard image resolution that is actually very usable.

The proof is in the pudding: almost all of the images I’ve recently shot digitally have had this treatment, and if I hadn’t told you, you’d be none the wiser looking at them. It makes differentiating between jpeg and raw, and between lower iPhone/video resolution and DSLR resolution, extremely difficult, which is the point.

Caveat Emptor

This works best with an image that already has reasonably good resolution content to begin with. This does not magically add detail where there is none, nor does it really add resolution or rescue images that already look terrible.

What it does do is add very high frequency or broadband information to an image in a way that our brains find very pleasant, which allows our brains to do the heavy lifting of ‘seeing’ what detail is there in the enlarged image without seeing the unpleasant visual effects of scaling up that image. In short, we’re playing a very effective visual trick on our brains in very much the same way that adding dithering to digital audio allows us to hear further down into the sound.

Our brains are very good at filtering out high frequency broadband noise to get to the detail in the noise, as long as that noise isn’t overwhelming to the point of being distracting. The trick is riding that balance between helping the perceived image quality and hurting it.

Till next time.