Let’s talk about pixels. Specifically, iPhone 14 pixels. More precisely, the pixels of the iPhone 14 Pro. Because while the headlines are all about the latest Pro models offering a 48MP sensor instead of a 12MP one, that’s not really the biggest improvement Apple has made to the camera this year.
Indeed, of the four biggest camera changes this year, the 48MP sensor is the least important to me. But bear with me, as there’s a lot to unpack before I can explain why I think the 48MP sensor matters much less than:
- Sensor size
- Pixel binning
- Photonic Engine
One 48MP sensor, two 12MP sensors
Colloquially, we refer to the iPhone’s camera in the singular, and then to three different lenses: the main lens, the wide-angle lens, and the telephoto lens. We do this because it’s familiar – that’s how DSLRs and mirrorless cameras work, with one sensor and multiple (interchangeable) lenses – and because it’s an illusion that Apple creates in the camera app for simplicity.
The reality, of course, is different. The iPhone actually has three camera modules. Each module is separate, and each has its own sensor. When you tap, say, the 3x button, you don’t just select a telephoto lens; you switch to a different sensor. As you pinch to zoom, the camera app automatically and seamlessly selects the appropriate camera module and then does any necessary cropping.
Only the main camera module has a 48MP sensor; the other two modules still have 12MP.
Apple made this clear when it introduced the new models, but it’s an important detail that some may have overlooked (emphasis mine):
For the first time ever in the Pro lineup, a new 48MP Main camera with a quad-pixel sensor adapts to the photo being captured, and features second-generation sensor-shift optical image stabilization.
48MP sensor works part-time
Even when you use the main camera with its 48MP sensor, you still take 12MP photos by default. Again from Apple:
For most photos, the quad-pixel sensor combines every four pixels into one large quad pixel.
The only time you shoot at 48 megapixels is when:
- You are using the main camera (not the telephoto or wide-angle)
- You are shooting in ProRAW format (it’s off by default)
- You are shooting in good light
If you want to do it, here’s how. But mostly you won’t…
Apple’s approach makes sense
Why give us a 48MP sensor and then basically not use it, you might ask?
Apple’s approach makes sense because, in truth, there are very few occasions when shooting at 48MP is better than 12MP. And since 48MP creates much larger files that devour your storage, there’s no point in making it the default.
There are only two scenarios I can think of where shooting a 48MP image is useful:
- You are going to print the photo in large size
- You need to crop the image heavily
This second reason is also a little questionable, because if you need to crop a lot, you might be better off using the 3x telephoto camera.
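A quick back-of-the-envelope calculation shows why. This is rough math only; it ignores differences in lens and sensor quality between the two modules:

```python
# Crop a 48 MP main-camera shot down to the telephoto's 3x field of view
# and see how many megapixels remain.
main_mp = 48
crop_factor = 3                        # 3x linear crop
cropped_mp = main_mp / crop_factor**2  # area shrinks by crop_factor squared
telephoto_mp = 12

print(round(cropped_mp, 1))  # 5.3 -> fewer pixels than the 12 MP telephoto
```

So a 48MP shot cropped to the 3x field of view leaves only about 5.3MP, less than half of what the telephoto captures natively.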
Now let’s talk about the size of the sensor.
When comparing any smartphone camera to a DSLR or a high-end mirrorless camera, there are two big differences.
One of them is the quality of the lenses. Standalone cameras can have much better lenses due to both physical size and cost. It is not uncommon for a professional photographer or a keen amateur photographer to spend four figures on a single lens. Smartphone cameras certainly can’t compete with this.
The second is the size of the sensor. Other things being equal, the larger the sensor, the better the image quality. Smartphones, by the very nature of their size and all the other technology they need to fit in, have much smaller sensors than standalone cameras. (They also have limited depth, which puts another significant constraint on sensor size, but we don’t need to go into details.)
A smartphone-sized sensor limits image quality and also makes it difficult to achieve shallow depth of field—which is why the iPhone does it artificially, with portrait mode and cinematic video.
Apple’s big-sensor, limited-megapixel approach
While there are obvious and less obvious limits to the size of sensor you can use in a smartphone, Apple has historically used larger sensors than other smartphone brands, which is one of the reasons the iPhone has long been considered to have the best quality phone camera. (Samsung later took the same approach.)
But there is a second reason. If you want the best quality images from your smartphone, you also need the pixels to be as big as possible.
That’s why Apple sticks to 12MP, while brands like Samsung have crammed a whopping 108MP into a sensor of the same size. Compressing a large number of pixels into a tiny sensor increases noise significantly, which is especially noticeable in low-light photos.
Okay, it took me a while to get there, but now I can finally say why I think the bigger sensor, pixel binning, and the Photonic Engine are much more important than the 48MP sensor…
#1: iPhone 14 Pro/Max sensor is 65% larger
The main camera sensor on the iPhone 14 Pro/Max is 65% larger this year than last year’s model. Obviously this is still nothing compared to a standalone camera, but for a smartphone camera this is (pun intended) huge!
But, as noted above, if Apple squeezed four times as many pixels into a sensor that is only 65% larger, it would actually deliver worse quality! That’s why you’ll still mostly shoot 12-megapixel images. And that’s thanks to…
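The arithmetic here is simple. Taking the 65% area increase as given, here is a quick sketch of what each native 48MP pixel would get compared with last year’s 12MP pixels:

```python
# If four times as many pixels are packed into a sensor only 65% larger
# in area, how big is each pixel relative to last year's?
sensor_area_ratio = 1.65  # new main sensor area / last year's sensor area
pixel_count_ratio = 4     # 48 MP / 12 MP
per_pixel_area_ratio = sensor_area_ratio / pixel_count_ratio

print(round(per_pixel_area_ratio, 2))  # 0.41 -> each native pixel has ~41% of the old pixel's area
```

In other words, used at full 48MP resolution, each pixel collects well under half the light of last year’s pixels.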
#2: Pixel binning
To capture 12-megapixel images with the main camera, Apple uses pixel binning. This means that data from four pixels is combined into one virtual pixel (by averaging their values), so the 48MP sensor is mostly used as a larger 12MP sensor.
This illustration is simplified but gives the basic idea:
What does this mean in practice? Pixel size is measured in microns (one millionth of a meter). The pixels of most premium Android smartphones are between 1.1 and 1.8 microns. The iPhone 14 Pro/Max, when using the sensor in 12MP mode, effectively has 2.44-micron pixels. That’s a genuinely significant improvement.
Without pixel binning, the 48MP sensor would be a downgrade in most cases.
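In code, 2x2 binning amounts to averaging each 2x2 block of sensor values into one output value. A minimal sketch, using toy numbers rather than real sensor data:

```python
def bin_2x2(raw):
    """Average each 2x2 block of sensor values into one 'quad pixel'."""
    h, w = len(raw), len(raw[0])
    assert h % 2 == 0 and w % 2 == 0, "dimensions must be even"
    return [
        [
            (raw[2 * r][2 * c] + raw[2 * r][2 * c + 1] +
             raw[2 * r + 1][2 * c] + raw[2 * r + 1][2 * c + 1]) / 4
            for c in range(w // 2)
        ]
        for r in range(h // 2)
    ]

# Toy 4x4 "48 MP" readout becomes a 2x2 "12 MP" binned output
raw = [[0, 1, 2, 3],
       [4, 5, 6, 7],
       [8, 9, 10, 11],
       [12, 13, 14, 15]]
binned = bin_2x2(raw)
print(binned)  # [[2.5, 4.5], [10.5, 12.5]]
```

Each output value pools the light gathered by four neighboring pixels, which is why the binned image behaves like one captured with much larger pixels. (Real sensors bin like-colored pixels within the quad-Bayer color filter, which this toy example glosses over.)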
#3: Photonic Engine
We know that smartphone cameras certainly can’t compete with standalone cameras in terms of optics and physics, but where they can compete is in computational photography.
Computational photography has been used in SLRs for literally decades. For example, switching metering modes tells the computer inside your DSLR to interpret the raw data from the sensor in a different way. Similarly, in consumer DSLRs and all mirrorless cameras, you can choose from a variety of photo modes that again tell the microprocessor how to adjust the data from the sensor to achieve the desired result.
Thus, computational photography already plays a much larger role in standalone cameras than many people think. And Apple is very, very good at computational photography. (OK, it’s not great at cinematic video yet, but give it a few years…)
The Photonic Engine is Apple’s enhanced Deep Fusion pipeline for computational photography, and I’m already seeing a huge difference in the dynamic range of photos. (Examples in next week’s iPhone 14 diary.) It’s not just the range itself, but the smart decisions being made about which shadows to lift and which highlights to tame.
The result is significantly higher quality photographs, and that owes more to software than to hardware.
A significantly larger sensor (by smartphone standards) makes a big difference when it comes to image quality.
Pixel binning means that Apple has effectively created a much larger 12-megapixel sensor for most photos, allowing the benefits of the larger sensor to be realised.
The Photonic Engine means smarter image processing, and I can already see the real benefits of this.
Read more in the iPhone 14 diary as I put the camera through more extensive testing over the next few days.
FTC: We use automated affiliate links that generate income.