Computational techniques have already transformed what we can do with cameras. Here are some of the most prevalent technologies and techniques that could revolutionise what we will be able to do in the future.
Some people are hailing a revolution in the most fundamental aspects of photography. Others describe the same changes as an interesting set of new ideas that supplement what's already available, which is probably a bit more realistic. Either way, film and TV work has long relied on the same basic concept as photography: cameras record a 2D projection of a 3D scene, and then show it to people. This hasn't changed since the dawn of photography; in fact, it predates photography, going back to the camera obscura throwing a 2D representation of a 3D scene onto a wall.
Now, more and more applications are emerging which take image data from one or more cameras and then apply a huge amount of image processing, often creating entirely new capabilities. We're talking about a lot more than simple Bayer colour recovery or grading; modern devices and software may synthesise entirely new images which differ massively from the data that comes off the sensor. Today, I'm going to look at some things which have already quietly leaked out into general use, and what they hint at for the future.
HDR phones
Let's start simple. The idea of taking multiple shots of a scene at different exposures is not a new one. Stills photographers have been playing around with it for years, but the limitation was naturally that the scene couldn't move between exposures. Red implemented it in its HDRx mode, with careful timing to minimise the problems caused by motion.
It's attractive to cellphone manufacturers because even the best cellphone cameras are still tiny. The Nexus 5X, for instance, uses a Sony Exmor IMX377, a 1/2.3” 12-megapixel sensor. That's good for a smartphone, but compares poorly to a DSLR. Using HDR techniques is tricky, though: cellphones are invariably used without any consideration of subject or camera motion.
In HDR+ mode, Google's camera app shoots several stills in quick succession and combines them. That part is easy. What's less easy is correcting for motion so that ghosting is mitigated or avoided, presumably achieved by analysing the differences between frames and rejecting fast-moving objects which are likely to cause problems. There must be some limitations around the combination of high dynamic range and a moving subject, but it's pretty convincing; it works even in the face of deliberate attempts to make it fail by waving things in front of the camera very quickly.
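Google hasn't published every detail of HDR+, but the broad shape of a burst merge with motion rejection can be sketched in a few lines. The following is a minimal sketch, not Google's actual pipeline: it assumes the frames have already been aligned and exposure-normalised, and the function name and threshold are illustrative.

```
import numpy as np

def merge_burst(frames, motion_threshold=0.1):
    """Merge a burst of aligned frames, rejecting moving pixels.

    frames: list of float32 arrays in [0, 1], all the same shape,
    already aligned and exposure-normalised. The first frame is
    the reference.
    """
    reference = frames[0]
    accumulator = reference.copy()
    weight = np.ones_like(reference)

    for frame in frames[1:]:
        # Per-pixel difference from the reference frame.
        difference = np.abs(frame - reference)
        # Pixels that barely changed are safe to average in.
        static = (difference < motion_threshold).astype(np.float32)
        accumulator += frame * static
        weight += static

    # Static pixels get averaged, which cuts noise and extends
    # usable dynamic range; moving pixels fall back to the
    # reference frame alone, which avoids ghosting.
    return accumulator / weight
```

The design choice worth noticing is the fallback: rather than trying to repair a moving region, it simply trusts one frame there, trading a little noise for the absence of ghosts.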
There are practical limits to how good tiny cameras can become, but better HDR processing could allow us to evade some of those limits.
Depth of field simulation
Sticking with cellphones, most people have now played with the depth of field simulation that's widely available. It's not strictly a true depth of field simulation, but it approximates one by blurring background objects.
A really convincing camera lens blur is difficult, even with GPU techniques; fast approaches to rendering blur don't really simulate lens behaviour. The key issue, though, is the depth sensing, not the blur. It's often achieved with multiple cameras, or by requiring the user to move the phone sideways to create some parallax shift, much as our two separated eyes do.
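As a rough illustration of how parallax becomes depth, the snippet below runs OpenCV's off-the-shelf block matcher over a rectified stereo pair. The filenames and parameter values are placeholders, and real phone pipelines are considerably more sophisticated, but the principle is the same.

```
import cv2

# A rectified stereo pair (hypothetical filenames): two views of the
# same scene, offset sideways, exactly the parallax a phone gets from
# two lenses or from being moved.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching: for each patch in the left image, search along the
# same row of the right image for the best match. numDisparities must
# be a multiple of 16; blockSize must be odd.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype("float32") / 16.0

# Disparity is inversely proportional to depth: given a focal length
# f in pixels and a camera baseline b in metres, depth = f * b / disparity.
```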
Techniques for depth sensing are likely to improve, and the next generation of phones is expected to offer dedicated camera hardware for the job. Simply adding more cameras helps: the more different views of a scene we have, the more accurate our idea of its depth becomes, and the better it works on a moving subject (read about lightfields below).
If this becomes good enough, it could expand into any circumstance where a depth-based matte is useful, much as greenscreen mattes are now.
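Once a depth map exists, using it as a matte is straightforward. Here's a deliberately crude sketch, not how any particular phone does it; the function and its parameters are invented for illustration, and a real lens blurs progressively with distance rather than in a single pass.

```
import cv2
import numpy as np

def depth_blur(image, depth, focus_depth, tolerance=0.1, blur_size=31):
    """Fake shallow depth of field from an image plus a depth map.

    image: uint8 BGR frame. depth: float32 map normalised to [0, 1].
    Pixels whose depth is within `tolerance` of `focus_depth` stay
    sharp; everything else comes from a blurred copy.
    """
    blurred = cv2.GaussianBlur(image, (blur_size, blur_size), 0)
    # The matte: 1.0 where the subject is in focus, 0.0 elsewhere.
    # This same mask could feed a compositor directly, much as a
    # greenscreen key would.
    matte = (np.abs(depth - focus_depth) < tolerance).astype(np.float32)
    matte = matte[..., None]  # broadcast across the colour channels
    return (image * matte + blurred * (1.0 - matte)).astype(np.uint8)
```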
360-degree cameras
It is unclear whether headset-based 360-degree video is going to be any sort of thing, let alone the next big thing. Regardless, recording an entire sphere of video creates interesting possibilities, from animated lighting reference for VFX to the creation of arbitrary camera moves in postproduction, with framing that can be trimmed and tweaked.
360-degree cameras are probably the least mature of the technologies discussed here, not because they haven't been worked on, but because maturity is so difficult to achieve. Devices targeting the surround-video market include Nokia's famous Ozo camera and, more affordably, things like the Ricoh Theta series. Ricoh has since brought 4K capability to the Theta, but even 4K isn't enough to encode a full sphere of video that might later need to be windowed down to HD, or used to create reflections on CG objects.
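Some rough arithmetic shows why (the figures are illustrative). An equirectangular 4K frame spreads its width across the full 360 degrees, so a window with a normal field of view is left with far fewer source pixels than an HD frame needs.

```
# Rough, illustrative arithmetic on equirectangular resolution.
sphere_width = 3840                        # 4K frame covers 360 degrees
pixels_per_degree = sphere_width / 360.0   # about 10.7

window_fov = 60                            # a fairly normal horizontal FOV
window_source_pixels = pixels_per_degree * window_fov  # about 640

# An HD window is 1920 pixels wide, so this view would need
# roughly 3x upscaling.
print(pixels_per_degree, window_source_pixels, 1920 / window_source_pixels)
```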
It's an interesting field because it's a genuinely useful application for enormously high-resolution cameras and video streams.
Lightfield
Most of the cameras currently called “lightfield” are better described as capturing a “sparse lightfield”, because they're built out of arrays of cameras spaced apart. A complete lightfield would measure the angle of incoming light rays at every pixel, something that isn't currently available. Current approaches are still very interesting, though, because they give us a shopping list of features: virtual camera motion, post-focussing, depth sensing and more.
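Post-focussing from a camera array is classically done by shift-and-add: shift each camera's image in proportion to that camera's position in the array, then average the results. Objects at one depth line up and stay sharp; everything else smears into blur. Here's a minimal sketch under those assumptions, with invented names and crude integer shifts.

```
import numpy as np

def refocus(images, positions, shift_per_unit):
    """Shift-and-add refocusing over a sparse lightfield.

    images: list of float32 frames from a camera array, all the same
    shape. positions: matching list of (x, y) camera offsets within
    the array. shift_per_unit: pixels of shift applied per unit of
    camera offset; varying it moves the synthetic focal plane.
    """
    stack = np.zeros_like(images[0], dtype=np.float32)
    for image, (x, y) in zip(images, positions):
        dx = int(round(x * shift_per_unit))
        dy = int(round(y * shift_per_unit))
        # np.roll is a crude integer shift; real implementations
        # interpolate for sub-pixel accuracy.
        stack += np.roll(np.roll(image, dy, axis=0), dx, axis=1)
    return stack / len(images)
```

Sweeping shift_per_unit after the fact is exactly the post-focussing feature: the focal plane becomes a parameter of the processing, not a property of the recording.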
Probably the best theoretical work is being done by the German research organisation Fraunhofer, which we've talked about before on RedShark. Practical embodiments such as Lytro's cameras are a hint of things to come. The need to use a collection of small cameras in arrays like Fraunhofer's currently makes for some compromises in overall image quality, though those could perhaps be offset using the sort of HDR techniques that work so well in cellphone cameras.
Regardless, this technology suggests more than any other that one day, movies will not simply be made out of a 2D image that fell on a sensor.
What does all this mean?
Right now, most of this means better cellphones. Competition between Apple and Google is fierce enough to provoke both to work very hard, and the generalisation of the resulting techniques into other industries is something to hope for. While it's unlikely to be an overnight revolution, the future looks good, though it may not look like a Betacam slung on someone's shoulder.