At some point most photographers wonder how many pixels they need in order to print an image file at a given size. This is actually an incredibly complex question with no precise answer, even though the mathematics required to figure it out are simple. On the most basic level, a print that is perfectly acceptable to one person may fall short of someone else's standards. Beyond that the combined factors of camera and subject movement, lens quality, the aperture used to capture the image, and image noise can make huge differences in the quality of the original RAW file, and lower quality means smaller maximum print sizes. Through bad technique and other factors like shoddy lenses a person using a 40 MP camera can produce files that can barely make good 8x10 inch prints. Conversely, using excellent equipment and technique a person using an 18 MP camera can sometimes produce files capable of making good 20x30 inch prints. Clearly the maximum print size obtainable from a file has as much to do with image quality as pixel count.
Movement of the camera and the subject while the shutter is open blurs the image. As more pixels are packed into image sensors of the same dimensions, resolution and pixel density increase while the individual pixels get smaller. The smaller the pixels, the less subject and/or camera movement it takes to blur an image across more than one of them. With 50 MP sensors, cars on nearby roads and people walking near your tripod can cause ground vibrations large enough to create visible blurring. By "visible" I mean at 100% (actual pixels) in Photoshop on a typical 110 ppi monitor, which is like looking at a print that is nearly 7 feet wide. Blurring from any cause translates directly into wasted resolution.
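To see why dense sensors punish vibration so harshly, it helps to look at pixel pitch. Here is a minimal sketch, assuming a full-frame 24x36 mm sensor; the megapixel counts are illustrative:

```python
# Approximate pixel pitch (center-to-center pixel spacing) for a
# full-frame 24x36 mm sensor at several resolutions. Assumes square
# pixels and a 3:2 aspect ratio.

def pixel_pitch_um(megapixels, width_mm=36.0, height_mm=24.0):
    """Return the approximate pixel pitch in microns."""
    aspect = width_mm / height_mm
    pixels_high = (megapixels * 1e6 / aspect) ** 0.5
    return height_mm * 1000.0 / pixels_high

for mp in (18, 24, 50):
    print(f"{mp} MP full frame: ~{pixel_pitch_um(mp):.1f} um pixel pitch")
```

The same movement that blurs across a single ~6.9 µm pixel on an 18 MP sensor spans roughly one and a half of the ~4.2 µm pixels on a 50 MP sensor, which is why the blur becomes visible sooner.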
The relationship of lens quality to image quality is obvious. Unfortunately, as of this writing, the Zeiss Otus lenses (55mm for about $4000 and 85mm for about $4500) are the only lenses made that come anywhere close to resolving 40-50 MP of useful data, and they can do that only at wide apertures that give very shallow depth of field. Less obvious is the relationship of the aperture setting to both lens quality and image quality:
1. Smaller apertures produce more diffraction, which lowers the maximum resolution in the image projected onto the image sensor. Also note that maximum resolution occurs only in the image center. You will not get much more than 21 MP of useful data from any lens stopped down to f/11; anything more is lost to diffraction. Deconvolution sharpening can recover some of the data lost to diffraction, but it cannot work miracles.
2. All lenses produce less resolution as the distance from the center of the lens increases, so corners of an image often have considerably less resolution than the center of the same image. This is due to lens aberrations that are minimal at the center of the image and more pronounced as the distance from the center increases. Higher quality lenses control these aberrations better, so they can obtain relatively even resolution across the image at wider apertures. That in turn provides an increase in resolution at every position within the image.
3. As the aperture gets smaller, lens aberrations are reduced.
The combination of the three items above gives us lenses that are sharpest in the center at the widest apertures and sharpest in the corners at much smaller apertures. Getting relatively even resolution across the entire frame means stopping down, which decreases resolution at the image center while increasing it at the edges and corners. The amount of stopping down required varies from lens to lens, so it pays to test the lenses you use. Many lenses, even good ones, require an aperture of around f/11 before center and corner resolutions stop being drastically different. When corners contain important detail, stopping down is the obvious choice even though it wastes loads of resolution if you are using an image sensor of considerably more than 21 MP. When subjects are centered with no important information near the edges and corners of the frame, wider apertures are best. When possible, a trick some photographers use is capturing one image at a relatively wide aperture to maximize center resolution, capturing another at a narrow aperture to maximize corner and edge resolution, and then blending the two. This should not be confused with focus stacking, which achieves large depth of field by blending wide aperture images focused at different points. Focus stacking can produce high resolution with nearly unlimited depth of field, but it is a tedious process that often requires automation and special software.
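The diffraction ceiling from item 1 can be estimated with the Airy disk formula. This is a rough model, not exact optics: conventions for "diffraction limited" differ, and this strict two-pixel criterion lands somewhat below the ~21 MP figure above, while looser criteria land above it. Treat the numbers as ballpark only:

```python
# Back-of-envelope diffraction limit for a full-frame (24x36 mm) sensor.
# Airy disk first-null diameter: d = 2.44 * wavelength * f_number.
# Rough criterion (conventions differ): diffraction dominates once the
# Airy disk spans about two pixels, i.e. pixel pitch ~= d / 2.

def diffraction_limited_mp(f_number, wavelength_nm=550.0,
                           width_mm=36.0, height_mm=24.0):
    """Approximate useful megapixels before diffraction dominates."""
    airy_um = 2.44 * (wavelength_nm / 1000.0) * f_number  # in microns
    pitch_um = airy_um / 2.0
    px_w = width_mm * 1000.0 / pitch_um
    px_h = height_mm * 1000.0 / pitch_um
    return px_w * px_h / 1e6

for n in (5.6, 8, 11, 16):
    print(f"f/{n}: ~{diffraction_limited_mp(n):.0f} MP useful on full frame")
```

At f/11 this criterion gives roughly 16 MP; a looser three-pixel criterion (often used for Bayer sensors) gives numbers in the mid-30s. The two bracket the ~21 MP figure, and either way the point stands: stopping well down throws away most of a 50 MP sensor's resolution.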
This brings us to depth of field. There are three ways I can think of to obtain sharpness in both near foregrounds and distant backgrounds. A tilt/shift lens and focus stacking can both accomplish this without loss of resolution, and only a tilt/shift lens can do it in a single exposure. Using narrower apertures, with the corresponding loss of resolution, is the third way, and it is obviously the most common. I want to add here that if you need to stop down to get adequate depth of field, don't hesitate to do it. The lower resolution photograph you will get is far better than a higher resolution one with an out of focus foreground, background, or both.
Adding noise to an image is technically different from decreasing its resolution, but the visual effect is quite similar. Noise looks bad, and the larger you print an image the more visible it becomes. Underexposing an image and then boosting exposure during processing is one source of noise; so is using a higher ISO than required. "Exposing to the right" will minimize noise from any camera. This means giving the image the most exposure possible without clipping the highlights. The resulting captures look too bright, but that is compensated for in processing. After processing, the image will have less noise than one that was normally exposed from the start.
The image sensor format plays a role in how usable the available sensor resolution is. Making something like a 24x36 inch print from a 24x36 millimeter sensor stretches the limits of every part of the image making system to a ridiculous degree. The tiny sensor area must be magnified over 645 times in area (more than 25 times linearly) to make a print that large. That magnification makes the smallest lens aberrations and every other image defect plainly visible. The more pixels we pack into small sensors and the larger we print, the higher the percentage of the added resolution that goes to lens aberrations, motion blur, diffraction blur, and other problems. A higher resolution sensor will never make the resulting print look worse than a lower resolution sensor would, but the additional resolution is often wasted, adding little or nothing to the image.
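The magnification figure is easy to verify with a couple of lines of arithmetic:

```python
# Linear and area magnification from a 24x36 mm sensor to a
# 24x36 inch print.
MM_PER_INCH = 25.4

sensor_w_mm = 36.0
print_w_mm = 36 * MM_PER_INCH

linear_mag = print_w_mm / sensor_w_mm   # inches to millimeters: 25.4x
area_mag = linear_mag ** 2              # magnification of the area

print(f"linear magnification: {linear_mag:.1f}x")
print(f"area magnification:   {area_mag:.0f}x")
```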
The issues I have mentioned here affect medium format cameras to a much smaller degree because the image and its flaws are magnified only about a third as many times to make a comparably sized print. With all else being equal, a medium format camera with a given number of megapixels will always produce better image quality than a 35mm DSLR with the same number of megapixels. On top of the advantage of less magnification, the larger pixels allowed by the larger format will also produce less noise. Even with great technique and the best available prime lenses set at optimal apertures and used under ideal conditions, it is nearly impossible to extract 40 or 50 megapixels of useful information from a 35mm format sensor. Working outdoors with zoom lenses, low light, moving subjects, wind, haze, fog, rain, and even variations in air density across the image field, I doubt that anyone gets much more than 24 MP of useful data on a regular basis regardless of the camera used. But users of these super-high resolution cameras need not despair, because there is a little magic involved here. Well, it's not really magic, but it's complicated. The simple end result is that the resolution of the lens plus camera *system* increases with an increase in sensor resolution. That means that when you move the same lens you have been using for years from a typical 20-something megapixel camera to a 50 megapixel camera, you will get higher resolution on the 50 megapixel camera. The resolution you end up with will not be anything like 50 MP, but if your 22 MP camera and a given excellent lens together have a system resolution of 18 MP, the same lens on a 50 MP camera might be able to resolve something like 30 MP. Again, this assumes very wide apertures and optimal conditions, so you will not actually achieve it in most shots, but it certainly does not hurt anything.
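The system-resolution behavior can be approximated with a common rule of thumb: blur contributions combine roughly in quadrature, which works out to the reciprocals of the megapixel-equivalent resolutions adding. This is a simplification, and the lens figure below is inferred from the 22 MP / 18 MP example above rather than measured:

```python
# Rule of thumb: blur diameters add in quadrature, so reciprocal
# megapixel-equivalents add:
#   1/MP_system = 1/MP_lens + 1/MP_sensor

def system_mp(lens_mp, sensor_mp):
    """Combined resolution of lens and sensor, in megapixels."""
    return 1.0 / (1.0 / lens_mp + 1.0 / sensor_mp)

def implied_lens_mp(sensor_mp, observed_system_mp):
    """Back out the lens's MP-equivalent from an observed system result."""
    return 1.0 / (1.0 / observed_system_mp - 1.0 / sensor_mp)

# A 22 MP camera yielding an 18 MP system implies the lens itself
# resolves roughly 99 MP-equivalent under those conditions.
lens = implied_lens_mp(22, 18)
print(f"implied lens resolution: ~{lens:.0f} MP-equivalent")
print(f"same lens on a 50 MP body: ~{system_mp(lens, 50):.0f} MP system")
```

Under this model the same lens on a 50 MP body yields a system resolution in the low 30s of megapixels, consistent with the "something like 30 MP" figure above.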
Now that you have a feeling for what is involved in maximizing the quality of your image files, and you realize that different images from the same camera can have different "maximum" sizes, you probably still want to know roughly how large you can print them. For a starting point, take the pixel dimensions of your image and divide each by 240 pixels/inch, widely considered the minimum resolution for a good print. For example, if your files are 6000x4000 pixels you get 25x16.7 inches. If you start with a great capture and use a competent interpolation program to increase the size, you can certainly go larger. If the capture is iffy to start with, you may have already gone too large. The numbers you get by dividing will come out to odd sizes; just round them up or down a little and you will still be in the right ballpark.
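The starting-point arithmetic is trivial to script, using the 240 ppi rule of thumb from above:

```python
# Rough maximum "good print" size at a given minimum print resolution.

def max_print_inches(px_wide, px_high, min_ppi=240):
    """Return (width, height) in inches for the pixel dimensions given."""
    return px_wide / min_ppi, px_high / min_ppi

w, h = max_print_inches(6000, 4000)
print(f"6000x4000 px at 240 ppi: about {w:.0f} x {h:.1f} inches")
```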
In reality 240 ppi is not the best resolution to use for printing. The issue is that no matter what resolution you send to the printer, the printer driver will up-res the image to the printer's native resolution anyway. It turns out that Photoshop and some other programs can do a better job of that than the printer drivers. Canon and HP printers have a native resolution of 600 ppi, and the native resolution for Epson is 720 ppi. Because print files at such high resolutions are enormous, and the printer drivers do a decent job of doubling the resolution, half of the native resolution is typically used. As an example, suppose you want to make a 30x20 inch print on an Epson printer. Using a program like Photoshop you can resize the image to 30x20 inches at 360 ppi. If you convert that back to pixels (multiply 30x360 and 20x360) you'll notice that you have far more pixels than your camera originally produced. Photoshop has made up pixels to go between the ones you started with. There's a limit to how much of that can be done without noticeably degrading the image, but starting with a great capture you will be surprised at what's possible.
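To see how much interpolation a big print asks for, compare the pixels the printer file wants with the pixels the camera delivered. A sketch using the Epson half-native figure of 360 ppi; the 6000x4000 source file is illustrative:

```python
# Pixels needed for a 30x20 inch print at 360 ppi, and the upsampling
# factor required from a 6000x4000 pixel capture.

target_w_px = 30 * 360    # 10800
target_h_px = 20 * 360    # 7200
src_w_px, src_h_px = 6000, 4000

total_mp = target_w_px * target_h_px / 1e6
print(f"print file: {target_w_px} x {target_h_px} px ({total_mp:.1f} MP)")
print(f"linear upsampling factor: {target_w_px / src_w_px:.2f}x")
```

A 24 MP capture has to be stretched to a 77.8 MP print file here, which is why the quality of the starting capture and of the interpolation both matter so much.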
You can get a rough approximation of the final print quality in Photoshop and other programs by viewing the image at 50% on a typical 110 ppi monitor. The actual print will typically look a bit better than what you see. If you are using one of the very high resolution cameras and not making an enormous print, 33% can be more realistic. To assess the print quality very accurately, crop a piece with a lot of detail out of the final print file. An 8x10 inch piece (give or take) works well. Print that, see how it looks, and go from there based on your judgment.
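The 50% trick works because of simple ppi arithmetic: at reduced zoom, each screen pixel stands in for several image pixels, so the on-screen image approximates a print at a correspondingly higher resolution. A sketch, assuming a 110 ppi monitor:

```python
# Viewing an image at a given zoom fraction on a monitor simulates a
# print whose resolution is monitor_ppi / zoom_fraction.

def simulated_print_ppi(monitor_ppi, zoom_fraction):
    """Approximate print resolution simulated by on-screen viewing."""
    return monitor_ppi / zoom_fraction

print(f"50% view on 110 ppi display ~ "
      f"{simulated_print_ppi(110, 0.50):.0f} ppi print")
print(f"33% view on 110 ppi display ~ "
      f"{simulated_print_ppi(110, 0.33):.0f} ppi print")
```

A 50% view approximates a ~220 ppi print, while 33% approximates a ~330 ppi print, much closer to the 360 ppi print files discussed above.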
Happy pixel counting... and printing!
P.S.: This article is not a class in optics. It is meant to convey general concepts, and to that end all of the resolutions I mention are in megapixels. In actuality, lens resolutions are measured in line pairs per millimeter, not megapixels. In fact, without a specifically stated pixel density, among other things, stating a lens resolution in megapixels is meaningless. This means I am making some assumptions that I think are realistic, and I am fudging things a little in the interest of clarity. I believe the numbers I have presented are all in the right "ballpark", even though I have forgotten much of what I learned during six years studying physics and mathematics.