This week, the release of the iPhone X with its Face ID system has garnered a lot of attention, not least because of its possible future applications. But can new approaches that achieve useful 3D scans from 2D images offer us even more possibilities?
The popular YouTube channel Tested, run by ex-Mythbusters maestro Adam Savage, recently ran an interesting piece on the 3D-scanning facility run by FBFX in London. The company is well-known for its work in the field where special effects transition into costumes: it has produced the space suits for The Martian and the hero costumes for Wonder Woman, among many other high-visibility projects. This is interesting not so much because of the projects themselves, but because of the use of 3D-scanning technology in them. Curve-hugging hero outfits and space suits need to fit, or they look ridiculous.
Historically, the approach would have been to produce a full-body plaster cast of the victi... er, wearer. But actors destined for curve-hugging hero outfits are often prominent people with very limited availability, and being dressed in an elasticated bodysuit, liberally greased and set in plaster, is an experience enjoyed by comparatively few. FBFX has therefore put together a rig of many dozens of DSLRs which can grab a 3D scan of someone in the time it takes to take a photograph. It's not quite suitable for the microscopic precision demanded by prosthetic makeup, but it's more than adequate for turning out a CNC-milled body form of someone, and it's generally clever and useful.
At least, it seemed particularly clever until Sony's recent announcement of its 3D-scanning cell phone app on the Xperia XZ1. OK, we're engaging in a little hyperbole here: FBFX's setup is clearly more precise and more capable than Sony's development. But even a fairly close squint at the cell phone approach suggests that the wide availability of 3D scanning at reasonable quality, for some value of reasonable, is probably not that far away. There's been much comment on the fact that the phone doesn't use a dual-camera setup to do its depth mapping, but that's explainable: the app requires the phone to be moved around the subject, and multiple views from one moving camera have much the same effect as two fixed cameras once the images have been through some clever software.
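To illustrate that idea, here's a minimal sketch, using OpenCV in Python, of how two photographs from a single moving camera can be turned into a sparse cloud of 3D points. This is not Sony's actual pipeline: the image filenames and the intrinsic matrix K are placeholder assumptions, and a real photogrammetry system would refine many more views with bundle adjustment and dense reconstruction.

```python
# Two-view reconstruction sketch: two photos from slightly different
# positions stand in for a stereo pair. Filenames and the camera
# intrinsics K are hypothetical placeholders.
import cv2
import numpy as np

img1 = cv2.imread("view_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_b.jpg", cv2.IMREAD_GRAYSCALE)

# Assumed intrinsics: focal length and principal point, in pixels.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])

# Detect and match features between the two views.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Estimate the relative camera motion from the matched points...
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# ...then triangulate each match into a 3D point (up to overall scale).
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
pts3d = (pts4d[:3] / pts4d[3]).T
print(f"Recovered {len(pts3d)} 3D points from two views")
```

Note that a single moving camera can only recover the scene up to an unknown overall scale, which is one reason a phone app like Sony's guides the user through a controlled orbit of the subject.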
This whole situation makes an interesting point about the general state of digital signal processing and the computer interpretation of images. First, computers gained the ability to track a single image feature. Then we moved on to multi-point tracking of known features in order to extract camera position information. Then we stopped needing to give the tracker features of known dimensions. After Effects can now perform 3D camera tracking without the user even having to nominate trackable marker points (though it helps). Now, with the sort of 3D-scanning software we see here, we're starting to be able to recognise and record arbitrarily complicated three-dimensional shapes.
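To make the first step in that progression concrete, here's a minimal sketch of frame-to-frame feature tracking using Lucas-Kanade optical flow in OpenCV. The input filename is a placeholder, and this covers only the point-tracking stage; camera solving and full scene reconstruction are built on top of output like this.

```python
# Feature-tracking sketch: pick corners in the first frame, then follow
# them through the clip with Lucas-Kanade optical flow.
# "footage.mp4" is a hypothetical input clip.
import cv2

cap = cv2.VideoCapture("footage.mp4")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Pick strong corner features to follow; no known dimensions required.
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                              qualityLevel=0.01, minDistance=10)

while True:
    ok, frame = cap.read()
    if not ok or pts is None or len(pts) == 0:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Estimate where each feature moved between the two frames.
    new_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    # Keep only the features that were successfully tracked.
    pts = new_pts[status.flatten() == 1]
    prev_gray = gray

cap.release()
```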
To be fair, Sony's development isn't the first time this has been done on a cell phone, although crucially, previous approaches required the images to be sent off to a remote server, where the greater resources of a server farm could be brought to bear on the extraction of a three-dimensional model of the object in view. What's new is that the XZ1 can do it all on the phone, using the Qualcomm Snapdragon 835 processor, an ARM-derived design.
Certainly, Sony hasn't done this with the idea that it will become part of a film and television effects workflow; they've done it so that people can make 3D selfies, because that will sell a lot of phones and make the R&D worthwhile. Even so, it all has implications for 3D, VR and complex hybrids such as augmented reality, and perhaps even for special effects people as well.
More information about Sony's scanning app can be seen on their website.