RedShark Replay: The current fixation on resolution is misplaced and we’re in danger of missing the point completely when we make films and videos
Do you remember those grainy video pictures of Neil Armstrong’s first steps on the moon? I do - an admission which, at the very least, reveals that I’m no longer a teenager.
And what was your reaction? (Or your parents’ or your grandparents’ reaction?) Was it “Those pictures are really fuzzy. I can’t watch them”? Or was it “Black and white? I’m turning over to watch a film”, or was it even “Those NASA guys could have graded that a bit better!”?
Of course not. My reaction was more like, to paraphrase: “OMG! OMG! OMG! You can see them on the Moon!”.
From today’s perspective, yes, those pictures were terrible, but you probably shouldn’t apply today’s standards to what is, effectively, pure history. What if we had video of the asteroid that wiped out the dinosaurs? Would we refuse to watch it unless it was in HDR 4K? No, of course not. Given that it would probably be the most important historical artifact in the history of anything, we’d probably be more than OK with it if it moved at one frame per second and looked like it was shot through ten feet of pondwater.
The moon pictures were sent back to Earth from a mission with less computing power than a gerbil and were, of course, analogue in every way. No wonder they lacked detail and were smeary. But in their own way, they were truly awesome.
Today, if Armstrong and Aldrin had taken the cheapest smartphone to the moon, they could have got pictures immeasurably better than the ones they took in 1969 (remember, I’m talking about video here - not stills, which were very adequately catered for by the Hasselblad that NASA supplied to the astronauts). And if they’d taken a good camera - say, a Sony FS7 - then we’d have seen the most extraordinary video ever.
But what exactly is it that’s better about cameras these days? I bet most of you would say “resolution”. And it’s true, that aspect of camera technology is the headline grabber. There’s no doubt that HD has transformed expectations and deliverables where video is concerned. It’s not surprising: HD has around five times the pixel count of SD, which had been around for approximately sixty years.
So now, only around ten years after the widespread adoption of HD, we’re eyeing up 4K, and you can buy 4K cameras from just about every manufacturer. 4K is four times HD’s pixel count, but - remember - it’s only twice the linear resolution.
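The arithmetic is worth making concrete. Taking consumer 4K to mean UHD (3840 × 2160) and HD to mean 1920 × 1080 - DCI 4K at 4096 × 2160 is slightly wider, but the ratios are much the same - a couple of lines of Python show both numbers at once:

```python
# Consumer "4K" is UHD (3840 x 2160); HD here is 1920 x 1080.
hd_w, hd_h = 1920, 1080
uhd_w, uhd_h = 3840, 2160

pixel_ratio = (uhd_w * uhd_h) / (hd_w * hd_h)
linear_ratio = uhd_w / hd_w  # same as uhd_h / hd_h

print(pixel_ratio)   # 4.0 -- four times the pixels
print(linear_ratio)  # 2.0 -- but only twice the linear resolution
```

Four times the data, but each dimension only doubles - which is why the jump doesn’t look as dramatic as the pixel count suggests.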
“Only”? That’s not exactly an incremental improvement. It’s a massive one. But the numbers don’t equate to the viewing experience. Please don’t get me wrong here. I’m a great fan and an advocate of 4K. But it is what it is and it isn’t what it isn’t. What it isn’t is four times the viewing experience, for several reasons.
Firstly, I don’t know how you’d measure “four times the viewing experience”. I don’t know what units you’d use, or whether the statement is even meaningful. All you can do is design subjective tests that ask viewers how much they prefer exhibit A, HD, over exhibit B, 4K.
And to really appreciate 4K - a minimum requirement of which is that you can see it - you either need to sit very close to the screen or have a display so big that you can see the extra detail wherever you sit.
Remember that if you blur 4K material by as little as half a pixel, you might as well have HD. This, perhaps, is the biggest issue, certainly for filmmakers. It gets worse. Cinematic filming calls for a fashionably shallow depth of field, which means that your chances of everything being in focus are even smaller. Again, most of the time you’re going to be watching HD or lower, even for material sourced in 4K.
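To see why such a small blur is so destructive, here’s a minimal sketch. It models a half-pixel blur as a half-pixel shift by linear interpolation - a crude stand-in for real optics, not a precise model - and applies it to the finest detail a sensor can record, a pattern that alternates between dark and light on every pixel:

```python
# One line of the finest detail a sensor can hold: light/dark
# alternating on every single pixel (in 1-D for simplicity).
fine_detail = [0, 255] * 8

# A half-pixel blur, modelled crudely as a half-pixel shift via linear
# interpolation: each output sample is the average of two neighbours.
blurred = [(a + b) / 2 for a, b in zip(fine_detail, fine_detail[1:])]

print(max(fine_detail) - min(fine_detail))  # 255: full contrast
print(max(blurred) - min(blurred))          # 0.0: the finest detail is gone
```

Coarser detail survives a half-pixel blur far better - and it’s precisely the extra detail that 4K adds over HD, the detail closest to the pixel pitch, that vanishes first.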
But that’s OK, because having that level of detail in the spatial domain means that when your pictures look sharp, they’re going to look very sharp. Highly detailed pictures also downscale gracefully, because they contain more information to base the smaller resolution versions on.
I’ve seen 8K TV several times and, while it is obviously very impressive as a technical tour de force, it certainly isn’t the best video picture I’ve ever seen.
There’s a word in the last but one paragraph that is actually the most important concept for the rest of this article. It’s “information”. You see, I think the perceived quality of a moving image depends largely on the amount, and the quality, of information in it. And by “information” I don’t just mean the number of pixels: I mean dynamic range and colour gamut too, and to capture those you also need good lighting. Frame rate also comes into it. A few months ago, I saw a comparison between two 8K video clips. Both were surgically sharp, but one was at 60fps and the other was at 120fps. As soon as there was any movement (the video was of a football game), the players in the 60fps clip turned into ghosts. Even at 120fps they turned into semi-ghosts. Honestly, HD would have looked better, because at least the players wouldn’t have vanished as soon as they started running. (Maybe I’m exaggerating here, but my disappointment was tangible.)
Let’s look at the idea that information is important to a picture.
In a way you can think of information as detail. Resolution is part of this, but only part. Dynamic range is detail. The more information you can include between whites and blacks, the better the picture will be - especially if you ever have to “push” it. A high dynamic range means that whites will be truly white and blacks can be really black. If you describe this high dynamic range with enough bits then there will genuinely be more information in the picture.
Not everyone realises that every time you process a picture - still or moving - you reduce its quality. Even though it might look better superficially, something has to give way to make room for the changes (there is a specific exception to this which I’ll mention in a minute).
It’s probably easiest to understand this if we think about audio. Imagine we’ve recorded some music and we want to apply some equalisation - maybe to give a voice more “presence” or to make a bass guitar more throbbing. Equalisation is frequency-selective gain - essentially raising the level at a specific range of frequencies. As a technique it’s very effective - not to say essential - but it can cause issues with the recording. First of all, if you boost too many frequencies on too many tracks, when you add them all together in the mix, they can cause distortion - because you’ve raised the overall level of the recording. You also have to remember that if you’re boosting something, it has to be there for you to boost in the first place. Once a recording - audio or video - has been made, you can’t add to what’s there. Boosting something in the mix or in colour correction doesn’t change the overall amount of information. If anything, it reduces it.
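The distortion point is easy to demonstrate. The sketch below builds two hypothetical tracks, each recorded at a safe peak level, boosts both by 6 dB in the “mix”, and counts how many samples of the sum exceed digital full scale (the levels and frequencies here are invented purely for illustration):

```python
import math

FULL_SCALE = 1.0  # digital full scale; anything beyond this clips

def db_to_gain(db):
    """Convert a decibel boost to a linear gain factor."""
    return 10 ** (db / 20)

# Two tracks, each recorded with a safe peak of 0.6 of full scale.
n, sr = 1000, 48000
track_a = [0.6 * math.sin(2 * math.pi * 440 * t / sr) for t in range(n)]
track_b = [0.6 * math.sin(2 * math.pi * 554 * t / sr) for t in range(n)]

# Boost both tracks by 6 dB (roughly doubling their level) and sum them.
gain = db_to_gain(6.0)
mix = [gain * (a + b) for a, b in zip(track_a, track_b)]

clipped = sum(1 for s in mix if abs(s) > FULL_SCALE)
print(clipped > 0)  # True: the boosted mix exceeds full scale and distorts
```

Each track was fine on its own; it’s the accumulated boosts, summed in the mix, that push the result past what the recording can hold.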
That may sound like a strange concept - that you can boost something and cause a reduction in the overall information content - but that’s what happens. Remember that you can’t add information that isn’t there. If you boost a hi-hat in a music recording, you’ll bring up everything in the neighbouring frequencies too, including air conditioning noise and the vacuum cleaner in the next room.
When you apply gain to a digital recording, you’re stretching the distance between sampled levels, and if you do it enough, you’ll see (or hear) the result. The problem is at its worst when you manipulate an image at the same bit depth as the source. With digital, you only have a certain number of levels to describe a colour. These “steps” are always there, even though it might look as if you can change a level continuously. If you tweak a colour even a small amount, the exact values you started with have to be mapped onto discrete values again, rounding up or down to make them fit. So a signal that was quantized to eight bits per channel will have somewhat less than a full eight bits per channel of meaningful information once it’s been manipulated. The only way to avoid this is to convert to a greater bit depth, which adds extra levels. Working in, say, ten-bit or higher precision helps to preserve the appearance of a high quality picture, although the same principles apply: there are limits to what you can do - it’s just that the limits are a little less constraining. The ultimate is working in 32-bit floating point, at which point there are so many levels that you’re unlikely to introduce visible quantization distortion - but even this doesn’t mean you can add information.
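Here’s a minimal sketch of that quantization loss, using a classic worst case: pull an 8-bit signal down by one stop (gain 0.5), store the result back on the 8-bit grid, then push it up again (gain 2.0). The gain values are chosen purely for illustration; any round trip through the 8-bit grid behaves similarly:

```python
def quantize8(v):
    """Round to the nearest code on the 8-bit grid and clamp to [0, 255]."""
    return max(0, min(255, int(v + 0.5)))

levels_in = range(256)                            # all 256 possible 8-bit levels
pulled = [quantize8(x * 0.5) for x in levels_in]  # one stop down, re-quantized
pushed = [quantize8(h * 2.0) for h in pulled]     # one stop back up

print(len(set(levels_in)))  # 256 distinct levels going in
print(len(set(pushed)))     # 129 distinct levels coming out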
Given that nothing you do will actually add information to a picture, there are things you can do to make sure you capture the maximum information in the first place, mainly by not wasting your “bit budget”.
In practice this means that you have to get the picture as good as you possibly can before it’s digitised. Don’t rely on changing things afterwards - because you won’t actually be adding information. This is not to say that all post production is useless - far from it. It’s just that if you optimise the picture from the start - i.e. before you record it - then you’ll have the best possible material to work on in post.
But what does this mean in practice?
It means that you have to get the scene to look right, and mostly that’s to do with two things: set design and lighting. We’ve covered set design in detail here, but, briefly, what you should aim to do is make things look as much as possible the way you want them to look in your production. If you want a green and orange look, then furnish the set with green and orange things. Shoot in a sunny woodland in the autumn. Or dress the set with items that will emphasise the colour look you’re aiming for.
And at the same time as you’re doing that, watch out for lighting. Obviously how you light a scene depends on what sort of mood or atmosphere your script calls for, and you have to go with that. But don’t confuse “moodily lit” with “badly lit”. Even if you’re going for a particular lighting feel, you can still try to maximise the information that’s available for the camera to capture.
One very important thing to look for with modern equipment is the colour spectrum from your lights. This is an extremely technical subject, but, broadly, not all lights are equal, and this goes way beyond mere colour temperature. LED lights are an attractive prospect and some are extremely good, but some are pretty poor, and it’s not always obvious which is which. Our eyes are good - sometimes far too good - at adjusting to different lighting conditions. That’s why a white piece of paper lit by a setting sun still looks white to us. But to a camera, it looks orange. It can’t help but look orange to a camera, because a setting sun is orange. Our eyes perceive colour by looking at differences, not absolutes, so they’re not objectively reliable when measuring colour.
Simple white LEDs are not white. They’re actually blue and yellow, because they use natively blue LEDs to illuminate yellow phosphor. To our eyes, this combination approximates to white, but, as you can imagine, it leaves out large, important sections of the spectrum. Modern high quality LED manufacturers know about this and compensate using all sorts of techniques, all the way up to using Quantum Dot nanoparticles. Good LED lights are now capable of delivering a really great spectrum, but make sure, using a colour chart, that the ones you’re using have a decent colour performance. Imagine if a certain shade of green were missing from the spectrum and you were trying to illuminate a green screen for later compositing. Similarly, if there are inadequacies in the spectrum needed for accurate skin tones, then nobody in the production is going to look right.
Your whole philosophy when filming is to capture what’s there optimally. If your lighting spectrum isn’t right, you’re going to miss bucketloads of information that would have made your pictures look better.
There are some wonderful cameras around today. For the price, it’s simply amazing what you can get compared to even ten years ago. Most of today’s cameras boast 4K resolution or higher, and that’s good, because having a higher resolution is never a bad thing. But it’s not an essential thing. There are some cameras on sale today that are resolutely less than 4K resolution, and their manufacturers are proud of the fact: Arri and Digital Bolex. Arri’s sensor simply makes gorgeous pictures. No-one ever complained that the likes of Skyfall looked rubbish. What it lacks in ultimate resolution it more than makes up for in dynamic range and the sheer quality of the image. Coupled with a good lens, the Arri Alexa and its closely related siblings take lovely pictures. And then there’s the Digital Bolex. It has a smaller sensor, uses smaller lenses, and is something of a bargain considering the quality of the pictures it makes. People buy it because of its image quality. Most of the time, higher resolution doesn’t even come into the conversation.