Computational photography, using computers to generate images that were never "seen", is allowing "miracles" to happen. Phil Rhodes examines one of the most recent methods, developed at the University of California, Santa Barbara.
As it becomes more practical to throw enormous computational resources at the problem, computers have become more capable of understanding the content of an image in the same way that people do. Probably the most basic examples of this, at least by 2017 standards, are chroma key and motion tracking, although more recent tricks such as content-aware resizing in graphics programs allow a machine to demonstrate rather more understanding of what a picture is and what fun can be had with it.
One of the more recent examples of this is described in a paper by researchers at the Santa Barbara campus of the University of California, who have demonstrated image processing tools capable of altering the perspective of an image after it's been shot: in short, making background objects larger or smaller than they originally were, as if shot with a longer lens, without altering the size of a foreground subject. The term chosen to describe this is “computational zoom,” although it doesn't necessarily have anything to do with a zoom lens.
There are ways in which this sounds like an interesting idea. The demonstration in the accompanying video (linked below) uses an example which will be familiar to anyone who's spent time behind almost any sort of camera. There's a foreground subject we'd like to include (the roller-coaster, in this case) and an interesting background subject (the rail bridge). Shooting the scene with a longer lens, to emphasise the interesting area of the background, excludes the foreground roller-coaster. Shooting with a wider lens includes the roller-coaster, but makes the background object far too small. Computationally recomposing the scene allows us to make the rail bridge bigger without altering the size or perspective of the foreground.
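A quick pinhole-camera calculation makes the trade-off concrete. This is only a sketch, with invented object sizes, distances and focal lengths, but it shows why the long lens loses the foreground and the wide lens shrinks the background:

```python
# Pinhole projection: an object of height H at distance d appears on the
# sensor with height h = f * H / d, where f is the focal length.
def projected_height_mm(focal_mm, object_height_m, distance_m):
    return focal_mm * object_height_m / distance_m

# Invented scene: a 10m-tall roller-coaster 15m away and a 30m-tall rail
# bridge 300m away, shot on a full-frame sensor (24mm tall).
for focal in (24, 100):  # wide lens vs long lens
    coaster = projected_height_mm(focal, 10, 15)
    bridge = projected_height_mm(focal, 30, 300)
    print(f"{focal}mm lens: coaster {coaster:.1f}mm, bridge {bridge:.1f}mm")

# 24mm lens: coaster 16.0mm (fits the 24mm-tall sensor), bridge 2.4mm (tiny).
# 100mm lens: bridge 10.0mm (usefully big), but the coaster projects to
# 66.7mm, far taller than the sensor, i.e. it no longer fits in frame.
```

Computational zoom, in effect, lets the two regions of the image behave as if they had been shot at different focal lengths.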
On any sort of reasonably-funded dramatic production, these concerns are a matter of location selection, but since not all photography takes place on reasonably-funded dramatic productions, it's impossible to discount circumstances in which this sort of thing might be useful. The demonstration material doesn't show it working on moving images, though, which is where the potential issues start to stack up.
Firstly, this technique fundamentally creates a scene in which light doesn't travel in a straight line, as is interestingly and accurately noted in the video. As such, it can only work, even on stills, where the areas of the scene that are treated separately aren't joined together by straight lines, other than in some fairly constrained circumstances. It works with the demonstration image because the parallel railway lines converge at a single vanishing point, and because that vanishing point is the point around which the background section of the image is scaled. Any other approach would create a very obvious discontinuity, and camera motion is likely to create more complex problems still, since the vanishing point will move across the scene.
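To see why the scaling point matters, it helps to sketch the operation itself. The following is a minimal illustration, not the researchers' actual pipeline, assuming OpenCV, a pre-matted background layer, and placeholder values for the file name, vanishing point and scale factor:

```python
import cv2
import numpy as np

# Load a (hypothetical) pre-matted background layer.
bg = cv2.imread("background_layer.png")

# Vanishing point of the railway lines, in pixels (placeholder values),
# and how much larger we want the background to appear.
vx, vy = 960.0, 400.0
scale = 1.6

# Scaling about (vx, vy) maps each point p to vp + scale * (p - vp),
# i.e. as an affine transform: p' = scale * p + (1 - scale) * vp.
M = np.float32([[scale, 0, (1 - scale) * vx],
                [0, scale, (1 - scale) * vy]])
bg_zoomed = cv2.warpAffine(bg, M, (bg.shape[1], bg.shape[0]))

# Any straight line passing through (vx, vy), such as the converging rails,
# maps onto itself under this transform, so the rails stay continuous where
# the foreground and background layers meet. A straight line that does NOT
# pass through the vanishing point gets shifted, producing exactly the
# obvious discontinuity described above.
```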
For this reason, it's not something of immediate interest to moving image photographers. There are other problems too: the technique relies on producing mattes for various parts of the frame, and on recovering the three-dimensional layout of the scene, which is done from a stack of images shot at different distances from the subject. Without going into detail, there is some discussion of deriving that scene layout automatically, and the matte generation alone might have application in the improvement of current techniques. Shooting image stacks is clearly not a practical technique for moving picture material, nor frankly for most photography, although again, someone will probably come up with special circumstances where it's very useful.
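On the matting point, the final composite is conventional alpha arithmetic: the unaltered foreground laid over the re-scaled background. Continuing the sketch above, with placeholder file names and the assumption that all the layers share the same dimensions:

```python
# Alpha-over composite: out = alpha * fg + (1 - alpha) * bg.
fg = cv2.imread("foreground_layer.png").astype(np.float32)
alpha = cv2.imread("foreground_matte.png", cv2.IMREAD_GRAYSCALE)
alpha = (alpha.astype(np.float32) / 255.0)[..., None]  # HxWx1, broadcasts over BGR

composite = alpha * fg + (1.0 - alpha) * bg_zoomed.astype(np.float32)
cv2.imwrite("computational_zoom.png", np.clip(composite, 0, 255).astype(np.uint8))

# Any softness or error in the matte is directly visible along the
# foreground edges of the composite.
```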
The matting, if anything, is probably the single biggest problem: on occasion, foreground subjects (generally people) can look very much like the result of a poorly-done chroma key. Still, depending on the degree of automation that's being proposed, this process, or at least some of the underlying techniques, might well add something interesting to the visual effects toolbox.