How Much Image Resolution Do You Need?

It depends. Don’t you hate those types of answers? Unfortunately, there is no simple answer, because you have input resolution and output resolution, and sub-types within those two high-level categories.

Cut to the chase

If all you do is share your pictures online or make 4×6, 5×7, or 8×10 prints, then pretty much any camera made within the last 5-10 years will give you more image resolution than you need, and you don’t have anything to worry about. Carry on snapping away. If you want to learn why, then read on.

Keeping it simple

In the interest of distilling information down to useful bite size chunks, let’s start with output resolution.

Output resolution

The vast majority of us don’t actually print our pictures; however, print resolutions are typically a lot higher than screen resolutions for a given amount of surface area, so we’ll talk in terms of print resolution.

For this discussion, we’ll equate dots and pixels as the same thing, so terms like dpi (dots per inch) and ppi (pixels per inch) mean the same thing. Likewise, we’ll say a pixel and a dot are each a single discrete full color entity that we can see. It may be made up of one or more smaller sub-parts, like a red element, a green element, and a blue element in a computer display, or multiple ink droplets on a piece of paper, but for this discussion it’s one discrete visible full color unit.

It turns out that the human eye actually does have a finite amount of resolution, in terms of how we would describe it if it were a digital sensor. If we were to print fine detail onto a piece of paper and look at it up close and personal, the magic number where we stop being able to discern that detail sits right around 300 dots per inch, or 150 line pairs per inch, on the paper (or screen). That means if we take 150 black lines that are 1 pixel wide and 300 pixels tall, 150 white lines that are also 1 pixel wide and 300 pixels tall, alternate between them, and print them so that all 300 lines fit into a 1×1 inch square, it would actually look like a grey square to most of us instead of alternating black and white lines. This is why most magazines print at 300 dpi, and why your iPhone’s Retina display is also about 300 dpi. Spatially speaking, our vision starts to poop out around there, and adding more image resolution generally does not make the picture have more detail or look sharper to our eyes.

So, with this 300 dpi number, it’s pretty easy to do some simple math and extrapolate out how much output resolution we need for the various ways we look at our pictures: for a 4×6 inch print, 300 dpi times 4 inches is 1200 pixels on the short edge, and 300 dpi times 6 inches is 1800 pixels on the long edge, or an image that is 1800×1200 pixels. That’s a measly 2.1 megapixels.

The same math for an 8×12 (or 8×10) print comes out to 3600×2400 pixels, or a very modest 8.6 megapixels. A 16×24 (or 16×20) print is 7200×4800 pixels, or 34.5 megapixels. Now we’re starting to get into some serious resolution.
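The dpi-to-pixels arithmetic above is easy to sketch; here’s a quick Python check of those numbers (function names are mine):

```python
def print_pixels(width_in, height_in, dpi=300):
    """Pixel dimensions needed to print at a given size and density."""
    return (width_in * dpi, height_in * dpi)

def megapixels(size):
    """Total pixel count in megapixels."""
    w, h = size
    return w * h / 1_000_000

# The print sizes discussed above:
for w, h in [(6, 4), (12, 8), (24, 16)]:
    px = print_pixels(w, h)
    print(f"{w}x{h} in -> {px[0]}x{px[1]} px ({megapixels(px):.1f} MP)")
```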

The 16×24 print size notwithstanding, if we take display sizes into account, we soon discover some correlations. The average computer display or HD TV can comfortably display the 1800×1200 image with little to no scaling and look quite good. A newer 4K display can do the same with the 3600×2400 image. It should be noted that the aspect ratios between print and screen aren’t the same, so if you never intend to print, you can crop your images to a 16:9 aspect ratio to match your display, size your output to either HD (1920×1080) or UHD (3840×2160) resolution, and call it a day.
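If you do go the crop-to-16:9 route, the crop math looks like this (a hypothetical helper, using a 3:2 camera image as the example):

```python
def crop_to_aspect(width, height, target_w=16, target_h=9):
    """Largest centered crop of (width, height) with a target_w:target_h
    aspect ratio. Compare cross products to see which edge to trim."""
    if width * target_h > height * target_w:   # too wide for target: trim width
        width = height * target_w // target_h
    else:                                      # too tall for target: trim height
        height = width * target_h // target_w
    return width, height

# A 3:2 camera image cropped for a 16:9 display loses some height:
crop_to_aspect(6000, 4000)   # -> (6000, 3375)
```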

What I do

I’ve standardized my “house format”, if you will, on the 16×24 (or 16×20) print size, meaning all of my keeper images, regardless of their input resolution, get scaled up or down to 7200×4800 pixels. That’s my working/master resolution (as of 2016).

For paying clients and standard uses, the deliverable is 3600×2400 (or 3840×2160) pixels, scaled down from the 7200×4800 master, unless they’re going to print larger than 8×10. In that case, the conversation shifts to commissioning me to capture the image and do the print for them, since I’ll typically want to capture more resolution than normal and work with a print service that specializes in larger print sizes, which may involve renting gear that is appropriate for the client’s desired output size. Depending on the size or aspect ratio of the print, I may break from 7200×4800 and go larger, but that is starting to get into higher end, very low volume output. Keep in mind that a 16×20 or 16×24 print, while not really huge, is not small. It’s 4 times the size of an 8×10. It’s big enough that you have a frame built for it and hang it on a wall.

Input resolution

This is where it starts to get a little techie and can be a bit confusing if you’re not a tech head, so let’s take it a little at a time. I saved input resolution for last, since what resolution you want/need to output tends to drive what resolution you need to acquire as your input.

When it comes to input resolution, it can be greatly affected by lots of different factors, so now is a great time to talk about the concept of “effective resolution”. When you take a picture, how much visible resolution ends up in your image as fine detail is affected by things such as:

- mirror slap (if you’re shooting an SLR or DSLR) and shutter shock
- how long your shutter is open (which affects how much movement happened, which shows up as blurring)
- your hand shaking the camera when you press the shutter button (which also shows up as blurring)
- how deep your depth of field is (which affects how much of your image is in really sharp focus)
- how much image noise the sensor is introducing into the image
- how much diffraction is happening in your lens (depending on your f-stop)
- how much spatial resolution the lens you’re using is capable of actually putting onto your camera sensor

These are all things that will affect how much resolution you’re effectively putting into your image, regardless of whether you’re shooting film or digital, full frame, medium format, large format, APS-C, Micro 4/3, or smaller. We haven’t even started talking about the raw image sensor resolution yet.

In short, that nice crispy 24 megapixel camera you just picked up? Unless you’re using a really high resolution lens (which is incredibly expensive) and practicing a pretty rigorous shooting process to keep camera movement and vibration under control, you’re not getting anywhere near 24 megapixels of resolution when you take a picture. Even then, your camera sensor is hiding a dirty little secret.

You see, a 24 megapixel camera outputs an image that is 6000×4000 pixels. It does not actually have 6000 red pixels, 6000 green pixels, and 6000 blue pixels across, and the same goes vertically. Nope. What it does have is 6000×4000 light detecting sensors with a color filter array placed over them (usually in a Bayer pattern). The color filter array takes those 6000×4000 pixels and divides them up between red, green, and blue. Since human vision is most sensitive to green, a full half of the sensor resolution gets filtered to green, and the remaining half goes to red and blue, which each get a quarter of the resolution. To get an image that you can actually see, this then goes through a demosaicing process that interpolates it into an image that is 6000 full color pixels by 4000 full color pixels.

What this means is that for a 24 megapixel image captured with a 24 megapixel camera, you are effectively seeing 12 megapixels of green, and 6 megapixels each of red and blue. Even though, spatially speaking, you have 24 million light sensing elements on your sensor, you are not getting 24 megapixels of full color information. It’s actually more like 6 to 9 megapixels of full color information, which interestingly enough is right in the ballpark of making a really nice 8×10 print.
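That channel split is simple enough to compute directly (assuming the usual RGGB Bayer layout):

```python
def bayer_channel_megapixels(width, height):
    """Per-channel sensor sites under an RGGB Bayer filter: half the
    sites are green, a quarter each are red and blue."""
    total = width * height
    return {"green": total / 2 / 1e6,
            "red": total / 4 / 1e6,
            "blue": total / 4 / 1e6}

bayer_channel_megapixels(6000, 4000)
# -> {'green': 12.0, 'red': 6.0, 'blue': 6.0}
```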

This is why medium format cameras have been 40+ megapixels for a while now. It’s less about getting the raw spatial resolution, and more about effectively getting more full color spatial resolution. This is why an 8×10 print from a 50 megapixel Canon 5Ds looks so much better than the same picture taken with a ten year old Canon Digital Rebel XTi that’s 10 megapixels. It’s not about the raw spatial resolution, since we can’t really see more than 300 dpi on the page anyway, it’s about getting 300 dpi of full color information.

What about large prints?

But wait a minute: photographers have been making large prints with cameras that don’t have anywhere near that resolution for a while now, and they look great. What gives? Well, as it turns out, the larger you print, the less spatial resolution you actually need. It sounds counterintuitive, but once you get into 16×20 or larger print sizes, you stop looking at the print up close and personal like you would a smaller one, and instead stand back to take it in. The further away from the print you stand, the fewer dpi your eyes can actually resolve on it. This is why a 65″ 1080p HDTV, which is only 2 megapixels, still looks good: you sit further away from it than you would a smaller TV. Combine that with the fact that our brains are very good at filling in missing information, and all the photographer has to do is make sure that the image is scaled up in a way that pixelation isn’t obvious if inspected up close. Our brains will do the rest.
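The viewing distance effect can be put to rough numbers using the commonly cited one-arcminute visual acuity figure (an approximation, not a hard limit; the function is mine):

```python
import math

def resolvable_ppi(distance_inches):
    """Approximate finest pixel density the eye can resolve at a given
    viewing distance, assuming ~1 arcminute of angular resolution."""
    one_arcmin = math.radians(1 / 60)
    return 1 / (distance_inches * math.tan(one_arcmin))

# At a ~12 inch reading distance this lands near the 300 dpi figure;
# from across the room at 8 feet, only a few dozen ppi are resolvable.
close = resolvable_ppi(12)   # ~286 ppi
far = resolvable_ppi(96)     # ~36 ppi
```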

With that being said, for larger print sizes more camera resolution will generally result in a better looking output image, simply because we’re putting more raw spatial and full color resolution into the image and have a lot more real estate to fill with it, up until the point where we effectively have more than 300 dpi on the print surface.


So how much image resolution do we really need? For the average person sharing online and making 8×10 or smaller prints, a camera that is at least 6 or 7 megapixels will provide totally usable images. The larger you print, the more resolution you’ll want. Digital cameras have just recently gotten to the point where we can actually capture and put all the full color resolution that we can see into an 8×10 print, which makes for super fabulous prints, so this is a great time to be taking pictures.

Image Sharpening Explained, Simply

What is image sharpening? We’ve all heard about it and have undoubtedly heard about various image sharpening tools like unsharp mask or smart sharpen, but I’ve found that very few of us actually understand what image sharpening is.

So what is image sharpening?

No matter what image sharpening tool or algorithm you use, they all have the same end result, which is to increase the contrast of the lines and edges of objects in the image. That’s all image sharpening is. The primary differences between the various algorithms or methods of sharpening isn’t the end result, but rather how the lines and edges of objects in the image are detected. Likewise the various sliders or controls you get for each method of sharpening are to control the amount of sharpening and to fine tune the underlying line and edge detection for that sharpening method.

So there you have it. Image sharpening explained in 4 simple sentences. It’s not that difficult. The human visual system is extremely good at detecting lines and edges, so when we sharpen an image, all we’re doing is making what we’re visually sensitive to more pronounced. It’s a very effective visual perceptual trick that we’re playing on our brains when we sharpen an image.
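As a tiny numeric illustration of “increase the contrast of lines and edges”, here’s unsharp masking in one dimension: add back the difference between the signal and a blurred copy (a box blur stands in here for the Gaussian that real tools use):

```python
import numpy as np

def unsharp_mask(signal, radius=1, amount=1.0):
    """out = signal + amount * (signal - blurred): values on the dark
    side of an edge get darker, values on the bright side get brighter."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    blurred = np.convolve(signal, kernel, mode="same")
    return signal + amount * (signal - blurred)

# A soft edge from mid-gray up to white:
edge = np.array([0.5, 0.5, 0.5, 0.6, 0.8, 1.0, 1.0, 1.0])
sharp = unsharp_mask(edge)
# sharp dips below 0.5 just before the edge and overshoots 1.0 just
# after it -- that increased local contrast is what reads as "sharper".
```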

The soapbox

I’ve noticed some image sharpening trends over the last 5-6 years that really bother me, and they make me think that all these people on the internet who dispense photography information and advice, and are supposed to be photography experts, don’t really know what they’re talking about or doing. I can’t help myself. I have to say something about it.

The image sharpening aesthetic

This is a huge pet peeve of mine. All too often, people think that a sharp picture has lots of detail. As a result, they sharpen their images way, way, way too much. The Internet is riddled with posts on how and when to sharpen. You have input sharpening, creative sharpening, and output sharpening. You have tons of sharpening algorithms, plugins, and tools to increase the clarity of your images. There are companies out there whose entire business model is literally based on selling you something that will help you sharpen your images. On the camera hardware front, as of late, it seems that if a newly released camera doesn’t output a ridiculously over-sharpened image, the Internet declares that camera a piece of garbage. Ugh.

On top of that, the internet is flooded with images that are painfully over-sharpened (usually as a result of said company that sells image sharpening tools), all in the name of having a nice sharp image that is “crispy”. You know what else makes a nice sharp image? A nice moderately high resolution image that has a depth of field that is large enough so that the whole subject of the image is entirely in focus, assuming the person taking the picture actually nailed the focus.

Image focus and resolution to the rescue

I can’t believe how many people get a super fast prime lens, proceed to shoot with it wide open, and then sharpen the ever-living daylights out of the resulting images in an effort to get the subject sharp. It’s almost as if they don’t realize that when you’re shooting an 85mm+ lens on a full frame camera at f/1.4 or f/1.2, the depth of field is so small that the only thing in the image that’s going to be in nice sharp focus is one eye, or the tip of the subject’s nose, or one of the subject’s cheek bones, or their lips, or whatever the camera actually happened to lock focus on. Having a really small depth of field definitely has its uses, but if you want a nice sharp image, try stopping your aperture down to something like f/8. You’d be amazed at how much more resolution and fine detail is there, and how much sharper your photos are as a result once they’re scaled to whatever the final output resolution is. Again, assuming that you actually nailed the focus.

Have you ever seen a picture that actually had as much resolution and fine detail as what could be natively represented by the medium displaying the image? Probably not, but you’ll know it when you see it.

I’ll give an example: have you ever watched a movie on your iPhone in HD? You should try it some time. It looks incredible. The reason why isn’t image sharpening, but rather that you’re displaying the maximum amount of resolution and fine detail that your iPhone screen can natively represent.

Another example: ever seen Christopher Nolan’s “The Dark Knight” on Blu-ray? He shot parts of the movie on very high resolution IMAX cameras and cut those scenes in with the rest of the film, which was shot on standard Super 35 film. Even at Blu-ray resolution (which is a whopping 2 megapixel image size), the difference in image resolution and fine detail between Super 35 film and IMAX’s 65mm film is stunning. The IMAX sequences just look a lot sharper, not because of image sharpening, but because they contain as much resolution and fine detail as can be packed into a 1920×1080 pixel image, which results in a picture that looks very sharp with very little actual image sharpening applied or needed.

Huh. We just came full circle back to image sharpening. Imagine that.

OK. Sooo… When do you do image sharpening?

Ideally, you sharpen at the very end, when your image is at its final output size. If you have a good image with good resolution and fine detail, you’ll discover that sharpening is like salt and pepper to a great meal: a little bit goes a long way, and used just right, it greatly enhances things; however, more is rarely better.

There are other places in your workflow where you can sharpen, like input sharpening, and creative sharpening, and those instances do have their uses, however, they tend to be really over-used and abused, so for the sake of simplicity, we’ll leave them off the discussion table for now and maybe visit them in separate posts.

You may have noticed that I brought up image resolution and fine detail a number of times in this post while talking about image sharpening. How much resolution you actually need is a subject for a different post, so we’ll get into that later. Suffice it to say, you don’t need nearly as much resolution as you think you do; the trick is actually acquiring that resolution in a way that makes for sharp images that need very little image sharpening to look good.

Till next time.

Enlarge Images Without Pixelation


Usually, we want to reduce the size of images, not enlarge them. However, there are times where we have a reasonably small source image that we need to make bigger for one reason or another.

I actually routinely enlarge images because I have a variety of cameras that I create images with and they’re all different resolutions and bit depths, so I normalize the keepers to one larger master resolution and bit depth.

It should be noted that my method described below is not a miracle worker. You won’t be making 10x or 20x enlargements that look good, but you can easily make 3-4x enlargements that will look totally passable.

The best part? You don’t have to pay for any software beyond just having Adobe Photoshop, so no plug-ins or extra software to buy (Perfect Resize, I’m looking right at you).


The how is actually pretty simple, and before I explain the specifics of what I do in my version of Photoshop, I’ll explain the generic version so that you can convert it to your software, which may not be the same thing as what I’m running. Ok? Let’s get started.

Before doing this, start with your source image and save it as a 16 bit tiff file at its original resolution.

  1. Figure out what your final output resolution is to be and multiply both edges by four. For example, I generally normalize to 7200×4800 pixels, so I’d end up with 28,800×19,200 pixels. Resize your source image up to that new, really huge resolution.

  2. Add some noise to the newly resized really huge image. It should be Gaussian noise, and the best way is monochromatic noise so it looks more organic, but if your software can’t do that, any noise is better than no noise. I generally add between 5% and 10%.

  3. Add a Gaussian blur to the really huge image. Its radius should be 2 pixels.

  4. Resize the really huge image down to your resulting image size. Save it as a 16 bit tiff file and do the rest of whatever post processing you’re going to do.

That’s it! This method also works like a charm for upgrading 8 bit images to nice smooth 16 bit images that you can really push around in Lightroom/Photoshop without any ugly banding or posterization popping out at you.

Why it works

What?!?! You’re adding noise and blur to the image! Doesn’t that destroy image quality and detail and make the image blurry?

If we were to do that to an image at its native size, then yes, all we would be doing is adding noise and making it blurry. The thing to keep in mind is that we’re doing this to an image that is 4 times the linear resolution (16 times the pixel count) of our final output resolution.

When we scale the image back down, that noise and blur has the effect of filling in and smoothing out what would otherwise be pixelation in the image.

The key to this working so well is the bit depth and the ratios that we use. The small amount of noise that gets added while we’re at 4x resolution gets reduced on the way down and visually provides a smoothing effect at the new image size without making it look soft or blurry.

Likewise, the Gaussian blur we added was 2 pixels on an image that is 4x the linear resolution of what it ultimately will be, meaning that when we scale the image back down, a 16×16 block of pixels gets turned into a 4×4 block, so we’re scaling down more than we blurred. When we scale down more than we blur, the blur that was applied starts to do interesting things for us: it visually provides the effect of filling in and smoothing out what would otherwise be pixelation in the image.
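Put to numbers, the reason the blur doesn’t soften the final result:

```python
upscale = 4           # step 1 enlarges to 4x the final size (per edge)
blur_radius = 2       # step 3 blurs 2 pixels at that huge size

# After scaling back down, the blur's footprint shrinks by the same 4x,
# so it lands below a single output pixel: smoothing, not visible blur.
effective_blur = blur_radius / upscale   # 0.5 pixel at the final size
```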

Combined with the added noise in step 2, it’s a very dramatic one-two punch to an image that would otherwise look pixelated and awful.

How I do it

We all use different software. I happen to use the latest versions of Adobe Lightroom CC and Adobe Photoshop CC. This isn’t a tutorial for how to use Lightroom or Photoshop; it’s just a basic walk-through of what I do. You can and should modify it to suit your needs.

All of my images start off in Lightroom at their original resolution and bit depth. I have a Lightroom catalog that I use for staging these images for processing, where all of my keepers make their first stop. In this catalog, I add all my metadata to each image (it makes tracking it later easier), and the only image adjustment I make here is to remove image noise that was introduced by the camera in the form of color and/or luminance noise. I’m very conservative with this and look at the image at a 1:1 or 2:1 ratio in the area where noise is most prominent. I do just barely enough noise removal to tone down the noise, since the more noise removal you do, the more the image’s fine detail gets muddled. This is done on an image-by-image basis, and how much noise reduction is applied varies greatly depending on which camera took the image and what ISO it was shot at.

From there, I export the image out as a 16 bit tiff file at the “super-sized” resolution (28,800×19,200 pixels or thereabouts, depending on the image aspect ratio).

I then open that tiff file in Photoshop and add a new layer over the background layer (which is the image). I change the layer’s blending mode to “overlay” and fill it with 50% gray. From there I convert the layer to a smart object.

With the smart object selected, I go to the ‘Filter’ menu, and select ‘Noise’->’Add Noise’. In the dialog box that pops up, select ‘Gaussian’, and check the ‘Monochromatic’ check-box. Change the amount to a value between 5% and 10%. I’ve found that less than 5% tends to be not enough, and more than 10% is too much. I generally set it to 7.5% as a start then tweak it up or down as needed for best results. You should experiment around for what values work best for you based on what resolution you are working at.
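If you’re curious what the 50% gray overlay layer is doing mathematically, here’s a sketch of the standard overlay blend formula (an approximation of the behavior, not Adobe’s exact implementation): 50% gray is the neutral point, so noise centered on it adds texture without shifting overall exposure.

```python
import numpy as np

def overlay_blend(base, blend):
    """Overlay blend on values in [0, 1]: darkens where the blend layer
    is below 0.5, brightens where it is above; 0.5 leaves base untouched."""
    return np.where(base < 0.5,
                    2 * base * blend,
                    1 - 2 * (1 - base) * (1 - blend))

rng = np.random.default_rng(0)
base = np.full((100, 100), 0.4)                            # flat mid-tone
gray_noise = np.clip(rng.normal(0.5, 0.075, base.shape), 0, 1)
out = overlay_blend(base, gray_noise)
# out gains texture, but its mean stays ~0.4 because 0.5 is neutral.
```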

From there, I go to the ‘Filter’ menu again and select ‘Blur’->’Gaussian Blur’. In the pop up dialog box I select a radius of 2 pixels.

From there, tweak the amount of noise up or down for best results (you can do this because it’s a smart object).

When you’re happy with the image, go to the ‘Layer’ menu and select ‘Flatten Image’.

Now resize the image to your final enlarged size (in my case 7200×4800 pixels) and save it as a 16 bit tiff that you’ll pull into your real Lightroom catalog.

Import the new enlarged tiff file into your Lightroom catalog that you use for managing your media, convert it to a DNG file, then finish the rest of your post processing on the file.

Isn’t this a lot of work?

Yes and no. We only do this on our keeper images, which for most photographers is only a fraction of the images that they take. Besides that, the only thing we’re doing differently is the Photoshop bit. You should still be doing noise reduction, adding metadata, and post processing anyway. The only real difference is a quick middle step in Photoshop that literally only takes a couple minutes per image, if that.

Everybody should do what works for them, and what works for me isn’t necessarily for everybody, however, the process I outlined above allows me to shoot everything from an iPhone jpeg, to a DSLR raw or jpeg image, to frame grabs from video as jpegs, and end up with a reasonably sized standard image resolution that is actually very usable.

As for the proof in the pudding: almost all of the images I’ve recently shot digitally have had this treatment, and if I hadn’t told you, you’d be none the wiser while looking at them. It makes differentiating between jpeg and raw, and between lower iPhone/video resolution and DSLR resolution, extremely difficult, which is the point.

Caveat Emptor

This works best with an image that already has reasonably good resolution content to begin with. This does not magically add detail where there is none, nor does it really add resolution or rescue images that already look terrible.

What it does do is add very high frequency or broadband information to an image in a way that our brains find very pleasant, which allows our brains to do the heavy lifting of ‘seeing’ what detail is there in the enlarged image without seeing the unpleasant visual effects of scaling up that image. In short, we’re playing a very effective visual trick on our brains in very much the same way that adding dithering to digital audio allows us to hear further down into the sound.

Our brains are very good at filtering out high frequency broadband noise to get to the detail in the noise, as long as that noise isn’t overwhelming to the point of being distracting. The trick is riding that balance between helping the perceived image quality and hurting it.

Till next time.