Here's a quick look at the Image+ system by Qinematiq, which aims to revolutionize how we pull focus. But is it really better than doing things the old-fashioned way?
To get the wounded pride out of the way as early as possible, let's be clear that holding things in focus is genuinely difficult, and even the best people occasionally buzz a shot. Certain circumstances make this more likely, such as slow, creeping motion of the camera or subject. The current keenness for ultra-shallow depth of field, the common belief that it's a good idea to shoot in very little light and the tendency to increase sensor sizes to improve the resolution-to-noise ratio don't help, either. Oh, and we haven't even talked about the variable ability of directors to create and control blocking.
In short, the time has never been better for some sort of technological assistance with focus (I use that phrase only to avoid calling it auto-focus, which is a terribly loaded term). One such system was shown at the recent BSC Expo in London and comes from Austrian inventors Qinematiq, hosted at the show by Cintek. The system, called Image+, uses a stereoscopic camera with over-and-under lenses spaced six or eight inches apart. This is a rather different approach from something like the widely used Cinetape, which relies on timing echoes of ultrasonic sound to determine distance (and which is not generally used to directly control the lens). With two cameras, the system can use the difference in apparent position of the same object as viewed from two slightly separated positions to determine its range, in much the same way that a WWII tank's rangefinder might be based on binoculars with very widely separated lenses.
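The principle behind any such two-camera rangefinder can be sketched in a few lines. This is a minimal illustration of stereo triangulation, not Qinematiq's implementation; the focal length, baseline and disparity figures below are invented for the example.

```python
# Depth from stereo disparity: distance = focal length x baseline / disparity.
# All numbers here are illustrative assumptions, not Image+ specifications.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Distance to an object from its apparent shift between two views.

    focal_px:      lens focal length expressed in pixels
    baseline_m:    separation between the two cameras, in metres
    disparity_px:  apparent shift of the object between the two images
    """
    if disparity_px <= 0:
        raise ValueError("zero disparity: object at infinity or beyond measurable range")
    return focal_px * baseline_m / disparity_px

# An object shifted 50 pixels between views, seen through a lens with a
# 1000-pixel focal length on cameras 0.2 m apart, works out at 4 m away.
print(depth_from_disparity(1000.0, 0.2, 50.0))  # 4.0
```

Because every pixel with a measurable disparity yields a distance, running this over the whole frame is what produces the complete depth image described next.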
The key advantage of this is that it produces a complete depth image, not just a point reading. The people at Qinematiq have used this to provide a touchscreen interface in which any object in the field of view can be brought into focus simply by touching it. The conventional, visible-light image is overlaid on the depth map. As with any stereoscopic depth camera, the depth image is noisy and overlaying the visible-light image makes things easier to recognise.
The other use of the visible-light image is in feature tracking, wherein the system will do what it can to keep an object in focus by tracking its image around the field of view, focussing to the distance indicated by the depth map in that location. Doing this naturally requires that the lenses are properly mapped – that is, the system must know precisely which focus distance is represented by which rotational angle.
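The lens mapping the article mentions amounts to a lookup table from focus distance to ring angle, interpolated between measured calibration points. The table below is entirely hypothetical (real lens maps are measured per lens and are far from linear); this sketch just shows the kind of translation involved.

```python
# Hedged sketch of a lens map: translating a desired focus distance
# into a focus-ring angle. Calibration points are invented for
# illustration; a real map is measured for each individual lens.

import bisect

# (focus distance in metres, ring angle in degrees) - hypothetical values
LENS_MAP = [(0.5, 0.0), (1.0, 40.0), (2.0, 95.0), (4.0, 160.0), (10.0, 250.0)]

def angle_for_distance(distance_m: float) -> float:
    """Linearly interpolate the ring angle for a requested focus distance."""
    dists = [d for d, _ in LENS_MAP]
    if distance_m <= dists[0]:
        return LENS_MAP[0][1]   # clamp at close focus
    if distance_m >= dists[-1]:
        return LENS_MAP[-1][1]  # clamp at the far calibration point
    i = bisect.bisect_right(dists, distance_m)
    (d0, a0), (d1, a1) = LENS_MAP[i - 1], LENS_MAP[i]
    t = (distance_m - d0) / (d1 - d0)
    return a0 + t * (a1 - a0)

# 3 m sits halfway between the 2 m and 4 m calibration points.
print(angle_for_distance(3.0))  # 127.5
```

The feature tracker then feeds distances from the depth map into a function like this continuously, driving the lens motor to the interpolated angle.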
The accuracy of the system is claimed to be 20mm at 4m. This will accommodate most common situations, although the maximum range of the system is given as 15m. It's easy to imagine that long-lens photography, where focus pulling is particularly difficult, might exceed this maximum range. The design compromise that creates this situation is size: placing the stereoscopic cameras further apart would improve precision at long range, by increasing the apparent difference in position of a single object as observed by the two cameras, but would also make the unit larger.
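The trade-off can be made concrete: for a stereo pair, depth error grows with the square of distance and shrinks as the baseline widens. The figures below (focal length in pixels, disparity uncertainty, baselines) are illustrative assumptions, not the Image+ specification.

```python
# Why a wider baseline helps at long range: approximate stereo depth
# error is (distance^2 x disparity error) / (focal length x baseline).
# All parameter values are illustrative assumptions.

def depth_error_m(distance_m: float, focal_px: float, baseline_m: float,
                  disparity_err_px: float = 1.0) -> float:
    """Approximate depth uncertainty for a given disparity uncertainty."""
    return (distance_m ** 2) * disparity_err_px / (focal_px * baseline_m)

for baseline in (0.15, 0.6):  # roughly six inches vs. two feet
    for distance in (4.0, 15.0):
        err = depth_error_m(distance, focal_px=2000.0, baseline_m=baseline)
        print(f"baseline {baseline} m, range {distance} m: about {err * 1000:.0f} mm error")
```

Whatever the real numbers, the quadratic growth with distance is why a compact unit that is accurate at 4m degrades quickly towards its 15m limit.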
About the only technological alternative to this is a laser rangefinder, which could be steered on a servo mount using cueing from a reference camera image on a similar touchscreen. Small laser rangefinding modules exist, although achieving range up to tens of metres on reasonably unreflective objects can require fairly high laser power. One type with a 40m range emits a full watt of laser power. Compared to laser pointers in the milliwatt range, this could represent an eye-safety hazard and might therefore not be suitable for use on a film set, at least without extra precautions. The stereoscopic camera arrangement avoids this issue.
So, in general, it's a well-made thing and at prices around £8000 (it wasn't clear whether this included the lens control motors, which are third-party), it ought to be. The thing is, variations on this sort of system have been possible for a while. The idea has faced a lot of opposition, whether that's from people concerned over how nicely a machine can pull focus or from focus pullers anxious to keep their jobs. Concerns over employment seem a little premature. Most of the systems that have ever existed still need someone concentrating solely on the task of operating them and perhaps the most obvious issue is setup time.
On a multi-week production, setup time becomes trivial, but configuring a remote lens control system, as would be done for Steadicam or crane work, for a one- or two-day shoot can already seem like a time sink. Building it all up and mapping all the lenses can take more time than seems reasonable when there's someone standing by who can easily do the job by hand. So, even if a technology like this worked incredibly well, there's likely to be continuing resistance to it, for good reasons or bad.
Still, this particular example behaves exactly as it should and, if anything were to break through the Luddism, something like this is likely to be it.