Sony's new Venice camera has recently pushed full frame sensors for filmmaking into the limelight. But while larger sensors do offer some compelling advantages in the right circumstances, just how big should they become, and can we manage their requirements?
In photography, everything is related: aperture, focal length, sensitivity, shutter speed, the amount of light, the size of the sensor and the resolution of the imager. All of these things interact, constantly. But there are underlying things that don't change, or don't change much, or at least won't keep changing forever.
We probably haven't reached the point quite yet where the physics of imaging sensors has reached any sort of theoretical limit, though there certainly is a limit. We've talked about shot noise before, which happens when the number of photons bouncing off an object and entering the camera lens is very low in dim light. Even if we can count each and every one of them, there'll be a visible flicker in the light level we record, simply because the number of photons that actually reach the sensor varies randomly from one shutter interval to the next. Photon arrival follows Poisson statistics, so the signal-to-noise ratio only improves with the square root of the photon count. The very best sensor designs are starting to run into that limit, but not all of them are there yet.
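To make the statistics concrete, here's a minimal sketch in Python. The photon counts are purely illustrative assumptions, not measurements from any real sensor; it just simulates what one pixel might record over repeated exposures:

```python
import numpy as np

# Back-of-the-envelope shot noise simulation. The photon counts are
# illustrative assumptions, not measurements from any real sensor.
rng = np.random.default_rng(seed=1)

for mean_photons in (10, 100, 10_000):
    # Photon arrival is a Poisson process: each exposure collects a
    # random count whose long-run average is the true light level.
    frames = rng.poisson(lam=mean_photons, size=1_000)
    snr = frames.mean() / frames.std()  # approaches sqrt(mean_photons)
    print(f"mean {mean_photons:>6} photons -> SNR ~{snr:7.1f} "
          f"(theory {np.sqrt(mean_photons):7.1f})")
```

At ten photons per exposure, the frame-to-frame variation is around 30% of the signal, which reads as flicker; at ten thousand, it's around 1%, which doesn't.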
Also, sensor and camera manufacturers usually have access to broadly the same level of performance. Every so often, someone will pull ahead slightly, but usually, each generation is within a stop of its peers in terms of noise, sensitivity and dynamic range. So, while we haven't hit the absolute maximum performance of a sensor quite yet, the playing field is usually fairly even, with most cameras of comparable cost operating within a reasonably narrow performance bracket.
So, a pixel of a given cost will have a given sensitivity and noise level, give or take a stop. This implies that increasing the pixel count on the same sized sensor, beyond the point at which the pixels are visible, may become almost a zero-sum game. The picture will (assuming perfect lenses) become more detailed because the pixels are more numerous. It will also simultaneously become noisier because the pixels are smaller. Statistically, fewer photons will land on each smaller pixel, so the signal is weaker and requires more amplification.
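To put rough numbers on that, here's a hedged sketch that assumes photon capture scales with pixel area and that noise is purely shot-limited (real sensors add read noise on top, which punishes small pixels even harder). The pitches and photon counts are made-up but plausible figures:

```python
import math

# Hypothetical pixel pitches for the same sensor size; figures are
# illustrative, not taken from any particular camera.
base_pitch_um = 8.0       # pitch of the lower-resolution design
base_photons = 40_000     # assumed photon count per pixel at that pitch

for pitch_um in (8.0, 5.6, 4.0):
    # Photon capture scales with pixel area (pitch squared), so halving
    # the pitch quarters the light each pixel sees.
    photons = base_photons * (pitch_um / base_pitch_um) ** 2
    snr_db = 20 * math.log10(math.sqrt(photons))  # shot-limited SNR
    print(f"{pitch_um:4.1f} um pitch: ~{photons:8.0f} photons, "
          f"shot-limited SNR ~{snr_db:4.1f} dB")
```

Halving the pitch costs each pixel two stops of light and 6 dB of shot-limited SNR. That's the noise bill the extra resolution has to pay.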
At some point, then, this becomes a zero-sum game. Adding more pixels to record more detail simply provokes noise which obscures detail. At resolutions beyond 4K, where the pixels will not generally be visible in a conventional viewing environment, this is already a problem. So far, the workaround that really seems to be catching on is to make the whole sensor bigger. Larger sensors can contain larger pixels for the same overall resolution, reducing noise and increasing dynamic range, which sounds like a good deal. The catch is that because we want the pixels to be bigger so that they can fill up with more photons, the overall amount of light required to properly expose the sensor goes up too.
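Holding the resolution fixed and varying the format shows how much pitch the extra silicon buys. The sensor widths below are ballpark figures for each format rather than any specific camera's spec sheet:

```python
import math

H_PIXELS = 4096  # hold horizontal resolution fixed at 4K

# Approximate active sensor widths in millimetres; treat these as
# ballpark format figures, not any specific camera's spec sheet.
formats = {
    "Super 16": 12.5,
    "Super 35": 24.9,
    "Full frame": 36.0,
    "65mm": 52.5,
}

base_pitch_um = formats["Super 35"] * 1000 / H_PIXELS

for name, width_mm in formats.items():
    pitch_um = width_mm * 1000 / H_PIXELS
    # Per-pixel light gathering scales with pitch squared; express the
    # difference from Super 35 in photographic stops.
    stops = 2 * math.log2(pitch_um / base_pitch_um)
    print(f"{name:10s}: {pitch_um:5.2f} um pitch, {stops:+5.2f} stops per pixel")
```

Full frame at the same 4K count is worth roughly a stop per pixel over Super 35, and a 65mm-sized sensor roughly two. That gain is what everything below has to pay for.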
OK, so to achieve the same field of view on a larger sensor we'll be using a lens with a longer focal length, and a longer focal length needs a physically larger aperture to achieve the same f-number. As a practical matter, making lenses that project a big enough image circle to cover a large sensor with a wide angle of view creates extra engineering challenges, so lenses for larger-format sensors tend to end up bigger and more expensive just to maintain the same f-numbers and sharpness as their small-sensor equivalents. Even then, we might actually want to shoot at a higher stop to maintain a similar depth of field anyway.
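The standard way to compare formats is the crop factor: multiply both the focal length and the f-number by it to find the lens that gives the same field of view and the same depth of field on the larger sensor. A quick sketch, with the sensor widths again as rough assumptions:

```python
# Equivalent-lens arithmetic between two formats. The widths are
# ballpark figures; it's the scaling that matters, not the millimetres.
S35_WIDTH_MM = 24.9
FF_WIDTH_MM = 36.0

crop = FF_WIDTH_MM / S35_WIDTH_MM  # roughly 1.45

for focal_mm, f_number in ((25, 2.0), (50, 2.8)):
    # The same field of view on the bigger sensor needs a longer lens...
    eq_focal = focal_mm * crop
    # ...and matching depth of field means a proportionally higher
    # f-number, which costs light.
    eq_fnum = f_number * crop
    print(f"Super 35 {focal_mm}mm f/{f_number} ~= full frame "
          f"{eq_focal:.0f}mm f/{eq_fnum:.1f}")
```

Notice that the physical aperture diameter (focal length divided by f-number) comes out identical on both sides, which is the catch: stop down to match depth of field and you hand back exactly the light-gathering advantage the bigger pixels bought.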
The upshot is that while bigger sensors can host more pixels with similar dynamic range and noise performance (or fewer pixels with increased performance, etc), they can end up sacrificing pretty much everything else in order to achieve it. Pick a few of the items you like least from the following list: bigger, heavier, costlier lenses, more light, less depth of field, fuzzier corners and even a bit more gain, which brings back the noise again. Now, on some productions, these concerns are things which can be washed away with the money hose, hiring the best equipment and people in order to work around them. It's hard to deny, though, that the practicalities will come down rather harder on people who aren't shooting major motion pictures and can't afford a camera department of 10 with five figures a week in rental. And that's most of us.
Ultimately, there are sacrifices to larger and larger slabs of silicon in our cameras. Some of us are willing and able to work around them (and still others are willing but less able, financially). There are good reasons to want big pixel counts, for very-large-screen presentation, ride films, 360-degree work, VFX elements and so on. The thing is, conventional cinema, 24-frame-per-second, two-dimensional, widescreen cinema, seems quite able to beat off pretenders like high frame rate and 3D. Whether or not there's something else alongside, conventional cinema seems set to last, and it might not be the most obvious place to deploy a sensor the size of a postcard.