RedShark Replay: Each still frame in an 8K video is a 32 megapixel still image. It may just be that 8K video is the best way to capture still images in the future.
With some people wondering why the new Hasselblad camera, with its 50 Mpixel sensor, can’t shoot video at even 4K (hint - it’s a still camera), you would be right to question the sanity of this article's title. But I very strongly believe that the ability to shoot at video frame rates at super-high resolutions is an important step for still photography to take.
There is simply no better way to shoot a still photo than to browse through successive frames of video so that you can pick the ideal frame. This is especially so with large sensor cameras that have an almost impossibly shallow depth of field, which can lead to stunning photos, but, more often than not, to shots that are out of focus, or which are in focus on the wrong thing.
I’ve only tried this with 4K and I’ve been very pleased with the results. I’ve used this picture before in RedShark but it does illustrate the point very well. It was shot with a Canon 1D C in 4K, and I remember browsing through about thirty seconds’ worth of video to find the right frame. There were lots that were just rubbish, some that were nearly right but not quite, and one - yes, just a single frame - from the entire thirty second clip that was good enough to use.
When you’re shooting a moving object (and that includes portrait photography), or if you’re shooting handheld with a large sensor, depth of field can be as small as a few millimetres. This might not be obvious in a small viewfinder, and it’s very easy not to notice that your chosen shot has missed that ideal point - which in all probability is captured in only a single frame as the subject moves through the perfect focus plane.
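The frame-picking step could even be partly automated: score every frame for sharpness and surface the best candidates, rather than scrubbing through by eye. Here is a minimal sketch of the idea, assuming frames arrive as greyscale NumPy arrays; the variance-of-Laplacian focus measure and the `box_blur` helper are illustrative choices of mine, not anything a particular camera implements:

```python
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Focus measure: variance of a discrete Laplacian.

    In-focus frames keep their high-frequency detail, so the
    Laplacian response varies more; defocused frames score lower.
    """
    g = gray.astype(np.float64)
    lap = (-4.0 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return float(lap.var())

def sharpest_frame(frames):
    """Return (index, score) of the sharpest frame in a clip."""
    scores = [laplacian_variance(f) for f in frames]
    best = int(np.argmax(scores))
    return best, scores[best]

def box_blur(img: np.ndarray, k: int = 5) -> np.ndarray:
    """Separable box blur, used here only to fake a defocused frame."""
    kernel = np.ones(k) / k
    out = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, out)

# Demo on synthetic frames: one sharp, two blurred to different degrees.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
frames = [box_blur(sharp, 5), sharp, box_blur(sharp, 9)]
best, score = sharpest_frame(frames)
print(f"sharpest frame: {best}")   # the unblurred frame wins
```

In practice you would rank the top handful of frames and show them to the photographer, since "sharpest" is not always the same as "best".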
Having used this technique, it’s something I would definitely like to see built into more still cameras.
What might hold this back is that building 8K cameras isn’t easy. That’s an awful lot of data to deal with, and you need at least a 32 Mpixel sensor in the first place. Then there’s the issue that if you have a sensor larger than 32 Mpixels, you need enough processing power to downsample the image to 8K.
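For a sense of just how much data that is, the raw numbers work out roughly as follows. The figures below are my own back-of-envelope assumptions - UHD 8K at 7680 × 4320, 12-bit raw, 30 fps - and real cameras will differ:

```python
width, height = 7680, 4320        # UHD "8K" frame (assumption)
pixels = width * height           # ≈ 33.2 megapixels per frame
bits_per_pixel = 12               # typical raw bit depth (assumption)
fps = 30                          # assumed frame rate

bytes_per_frame = pixels * bits_per_pixel / 8
bytes_per_second = bytes_per_frame * fps

print(f"{pixels / 1e6:.1f} MP per frame")           # 33.2 MP
print(f"{bytes_per_frame / 1e6:.1f} MB per frame")  # ~49.8 MB
print(f"{bytes_per_second / 1e9:.2f} GB/s raw")     # ~1.49 GB/s
```

Around one and a half gigabytes of raw data every second is why this remains a serious engineering problem.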
Except that you don’t, necessarily, for still images.
That’s because if you need to downsample, you don’t necessarily have to do it in real time. Quickly would be good, but even if you can only manage one fifth or one tenth of real-time speed, you’re only going to have to wait a small fraction of a second to view each image. Or, chances are, the whole clip can be processed while you’re getting ready to preview the shots.
And if you don’t downsample - and why would you, necessarily? - then you can shoot at the maximum resolution of the sensor. 12K video, anyone? (The alternative, shooting from a cropped 32 Mpixel section of the sensor, would work but would change the field of view.)
But the point is that this technique is not there to shoot video. It’s there to shoot stills. So while storing 8K or 12K raw video needs a lot of expensive, high speed storage, the reality is that you’re rarely going to need to shoot more than about five seconds of video in order to extract a suitable frame.
This means that it’s OK to use expensive storage, because you don’t need much of it. Nor do you need all the other machinery that goes into modern video cameras, beyond the ability to capture a few seconds of super-high-resolution footage.
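Putting rough numbers on such a burst bears this out. Again assuming UHD 8K at 7680 × 4320, 12-bit raw and 30 fps (my illustrative figures, not any particular camera's spec):

```python
pixels = 7680 * 4320                  # UHD 8K frame, ≈33.2 MP (assumption)
bytes_per_frame = pixels * 12 / 8     # 12-bit raw (assumption)
burst_frames = 5 * 30                 # five seconds at 30 fps = 150 frames

total_gb = burst_frames * bytes_per_frame / 1e9
print(f"{total_gb:.1f} GB for a five-second burst")   # ≈7.5 GB
```

Seven or eight gigabytes per burst is well within reach of a modest amount of fast on-camera memory, which is a very different proposition from recording minutes of 8K raw.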
Let’s hope that this genuinely useful technique takes off, and starts to be offered by manufacturers. I think it will be incredibly popular.