David Shapton makes the case for 17K and why higher resolutions are always going to result in superior images.
It’s debatable whether there is even a slim case for 17K video, unless you’re the owner of a certain spherical concert venue in Las Vegas. But, apart from that somewhat niche use case, it is difficult at first sight to make a convincing argument for such an abundance of pixels in a video format, not least because you have to ask whether each of those pixels is sufficiently different from its neighbors to even be meaningful.
How many pixels in a 17K video? It depends on the aspect ratio, but let’s say it’s around a hundred and fifty million. That’s per frame, and, as you can imagine, it adds up to a lot of data over the course of a feature film.
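To put rough numbers on that, here’s a quick back-of-the-envelope sketch. The frame dimensions, bit depth and frame rate below are illustrative assumptions (a hypothetical 2:1 frame), not a published camera specification:

```python
# Back-of-the-envelope arithmetic for a hypothetical 17K frame.
# The dimensions, bit depth and frame rate are illustrative assumptions,
# not a published camera specification.

width, height = 17_280, 8_640     # hypothetical 17K frame at a 2:1 aspect ratio
bits_per_pixel = 12               # a typical raw sensor bit depth
fps = 24                          # standard cinema frame rate

pixels_per_frame = width * height
raw_bytes_per_frame = pixels_per_frame * bits_per_pixel / 8
raw_bytes_per_second = raw_bytes_per_frame * fps

print(f"Pixels per frame:   {pixels_per_frame / 1e6:.0f} million")
print(f"Raw data per frame: {raw_bytes_per_frame / 1e6:.0f} MB")
print(f"Uncompressed rate:  {raw_bytes_per_second / 1e9:.1f} GB/s")
```

With those assumptions, a single uncompressed frame is well over 200 MB, and a 24 fps stream runs to several gigabytes per second.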
But that’s no issue for modern technology, where super-fast storage and multi-lane interconnects provide comfortably enough bandwidth. In any case, the newly announced Blackmagic URSA Cine 17K cinema camera (I should probably say that it is “mentioned” rather than “announced”) will presumably use Blackmagic RAW compression, which will make the workflow more manageable.
But while it’s important to talk about the workflow implications of such a fiercely fine-grained format, it really does matter to know whether or not that torrential supply of pixels is actually going to make a difference to the overall quality of the video.
The answer is a nuanced ‘yes’.
It’s nuanced for many reasons. The first is that if you can get a decent picture with 4K/8MP resolution (and that’s enough for most people viewing on a large-screen TV in their living rooms), why would you need 150MP per frame?
And can you even see the difference? Assuming you had a lens with sufficient resolving power, and even if you could actually see the difference between, say, 12K and 17K (we’ll come to that…), is it a practical proposition, and are the benefits worth the pain?
The answer has more to do with the analog world than the strictly digital one. Remember that digital video starts with an analog phenomenon (the world, reality, etc.) and finishes with an analog image. It has to be analog because we’re not equipped with HDMI sockets in our heads or the necessary electronics to decode the signal. An ideal digital recording system - video or audio - would be able to record and reproduce the real world as accurately as possible, which means it would have to avoid any type of digital degradation or artifacts.
One way to look at this is to think of a high-resolution video as analogous to an audio recording with a high sample rate. With audio, the sample rate is how often the system takes tiny snapshots of the audio waveform. The faster you do this - the higher the sample rate - the closer you’ll get to the original waveform when you play it back. At first sight, that might sound a bit haphazard, but we can take comfort from the Nyquist-Shannon sampling theorem, which states that if the sample rate is at least twice the bandwidth of the wanted signal, then the original can be reproduced without distortion.
In practice, it’s a little bit harder than that because of a phenomenon called aliasing, where any sounds above the Nyquist limit - supposedly out of our hearing range - are “reflected” back into the audible spectrum and sound bad because they’re not harmonically related to the original audio. To avoid this, you have to have a “brick wall” filter at the Nyquist frequency to block these potentially troublesome frequencies, but such harsh filters can introduce issues all of their own. It gets complicated and expensive to fix these problems, which is why one popular way to avoid them in the first place is to double or quadruple the sampling frequency.
Using a higher sample rate doesn’t stop the artifacts, but it moves them further away from the audible spectrum, so we’re less likely to be bothered by them. It also means that instead of a “brick wall” filter, we can use gentler filter slopes, and when you take all this into account, it is clearly better to use higher sample rates, even though we can’t actually hear the extra frequencies they are able to reproduce.
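To make the fold-back idea concrete, here’s a minimal numerical sketch (it assumes NumPy, and the tone and sample-rate values are just illustrative). It samples a 30 kHz tone - above the 24 kHz Nyquist limit of a 48 kHz system - and reports where its energy actually ends up:

```python
import numpy as np

def apparent_frequency(tone_hz, sample_rate, duration=1.0):
    """Sample a pure tone, then report where its energy actually lands
    in the spectrum of the sampled signal."""
    n = int(sample_rate * duration)
    t = np.arange(n) / sample_rate
    x = np.sin(2 * np.pi * tone_hz * t)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(n, d=1 / sample_rate)
    return freqs[np.argmax(spectrum)]

# A 30 kHz tone is above the 24 kHz Nyquist limit of a 48 kHz system,
# so its energy folds back to 48 - 30 = 18 kHz, squarely in the audible band.
print(apparent_frequency(30_000, 48_000))   # -> 18000.0 (an audible alias)

# At a 96 kHz sample rate the same tone is captured where it belongs,
# far above hearing, where a gentle filter can remove it harmlessly.
print(apparent_frequency(30_000, 96_000))   # -> 30000.0 (no fold-back)
```

At 48 kHz, the 30 kHz tone shows up as an 18 kHz alias inside the audible band; at 96 kHz, it stays at 30 kHz, well out of harm’s way.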
It’s similar with video. Instead of a sampling frequency, which exists in the so-called “time domain,” video sampling is spatial. The higher the video sampling rate, the finer the detail that can be recorded and reproduced. But you still get aliasing. You see it as “staircase” type artifacts on diagonal lines. Aliasing is always there - it’s just that it’s often masked by complexity in the image - foliage, for example. But, just like with audio, the higher the spatial sampling rate (i.e., the more pixels in the pipeline), the less annoying, and ultimately the less visible, any aliasing will be.
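The same trick works spatially. The sketch below (again purely illustrative, and again assuming NumPy) renders hard-edged diagonal stripes on a 64 x 64 pixel grid twice: once by naively point-sampling each pixel, and once by sampling on a grid four times finer and averaging down - the spatial equivalent of a higher sample rate followed by a gentle filter. Measured against a much more finely sampled reference, the naive version strays further from the “true” image because of its staircase edges:

```python
import numpy as np

def diagonal_stripes(size, period=9.0, angle_deg=30.0, supersample=1):
    """Render hard-edged diagonal stripes on a size x size pixel grid.

    supersample=1 point-samples the pattern once per output pixel (prone to
    staircase aliasing); supersample=4 samples on a 4x finer grid and
    averages down - the spatial analogue of a higher sample rate followed
    by a gentle filter.
    """
    n = size * supersample
    y, x = np.mgrid[0:n, 0:n] / supersample   # coordinates in output-pixel units
    a = np.deg2rad(angle_deg)
    phase = (x * np.cos(a) + y * np.sin(a)) / period
    fine = (np.sin(2 * np.pi * phase) > 0).astype(float)  # hard 0/1 stripe edges
    # Average each supersample x supersample block down to one output pixel.
    return fine.reshape(size, supersample, size, supersample).mean(axis=(1, 3))

reference = diagonal_stripes(64, supersample=16)  # heavily oversampled "ground truth"
naive     = diagonal_stripes(64, supersample=1)   # one sample per pixel: jagged edges
smoother  = diagonal_stripes(64, supersample=4)   # more samples per pixel: softer edges

rms = lambda img: np.sqrt(np.mean((img - reference) ** 2))
print(f"RMS error, naive sampling:   {rms(naive):.3f}")     # larger: staircase aliasing
print(f"RMS error, 4x supersampling: {rms(smoother):.3f}")  # smaller: closer to reference
```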
So, from the point of view of reducing digital artifacts alone, more resolution is a good idea.
But that still leaves the question of whether extreme resolutions like 17K are worthless or redundant because lenses can’t resolve that level of detail. And that is a valid question. Very few lenses could be that sharp, and even if they were, they probably wouldn’t be at the edges or corners, and certainly not at a wide range of apertures. I’m happy for lens manufacturers to contradict me on this - I’d love to think that you *could* buy lenses that good.
But I think that is almost irrelevant. Apart from avoiding aliasing, I think the main, real benefit of such high resolutions is that while you may not be able to capture much detail at that level, you will capture the essence of the lens’s image (you could say its “character”) in as much detail as you would ever want. I guess it’s a little bit like recording a great guitarist through their favorite amplifier. In some ways, it seems ludicrous to point an expensive studio microphone at a knackered old guitar amp that sounds so good *because* it is knackered. Again, it’s ‘character’.
And given the resurgence in vinyl record sales, that, seemingly, is something we can’t get enough of.