While each successive generation of cameras offers more resolution, more dynamic range, more frame rate options and more features, the viewfinder technology inside them is struggling to keep pace.
With the release of cameras such as the Sony FS7, Blackmagic Ursa Mini and Canon's C700, it's now pretty clear that the world's camera manufacturers have finally decided that a camera should be able to sit on the operator's shoulder without leaving a gaping wound. There are also the extremely encouraging lens releases from, again, Canon, with its 18-80mm, and the enticing 21-100mm lightweight zoom from Zeiss. Given all this, it would seem that we're finally within reach of a long-pursued ideal: being able to shoot big-chip pictures with the same convenience as we always shot news.
People from single-camera drama backgrounds might complain, but really there's no intrinsic disadvantage to a camera that can, if need be, come off the tripod and be slung on someone's shoulder. Certainly, the combination of an Ursa with the 18-80 sits very nicely on the shoulder and, although 80mm isn't really all that long, represents an extremely workable documentary outfit for an amount of money that's at least reasonably small. Great!
Only, there's still a problem and it's growing. Unlike the insatiable demand for more dynamic range, however, it's very fixable without needing fundamental advances in technology. What we're talking about here is lag: the time elapsed between photons going in the lens and photons coming out of the viewfinder. As that time gets longer and longer, the camera becomes progressively more difficult to operate properly. Any perceptible delay is bad, but many modern cameras – and I expressly point no fingers here – have anything up to a few frames of it, which is frankly much, much too long; at 25fps, three frames is more than a tenth of a second. At that point, we don't have to be shooting high-energy action or sports; someone walking across a room can become tough to follow properly on a long lens.
Part of the reason it's become a serious problem is that it's absolutely invisible in demonstration footage and it isn't often featured on a specification sheet. Experience indicates that many manufacturers are simply unable to answer questions about it, even if they want to. The other reason the issue has become more severe of late is that we're starting to demand an enormous number of features from cameras. Taking the data from a Bayer-masked image sensor and turning it into a viewable colour picture is a fairly big job in itself; that's before we've even considered the business of applying three-dimensional lookup tables, which invariably involves enough long division to sink a battleship full of maths teachers.
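To put a rough number on that, here's a minimal sketch in C of a single trilinearly-interpolated 3D LUT lookup. The 17-point grid, the identity table contents and the structure of the code are illustrative assumptions rather than anything taken from a real camera, but the shape of the work is representative: eight table reads and a few dozen multiply-adds for every pixel.

```c
/*
 * Illustrative sketch only: one 3D LUT lookup with trilinear interpolation.
 * The 17x17x17 grid and identity contents are assumptions for demonstration.
 */
#include <stdio.h>
#include <stdlib.h>

#define LUT_SIZE 17  /* a common grid size for creative LUTs */

typedef struct { float r, g, b; } rgb_t;

/* index into a flattened LUT_SIZE^3 table */
static rgb_t lut_at(const rgb_t *lut, int r, int g, int b)
{
    return lut[(r * LUT_SIZE + g) * LUT_SIZE + b];
}

/* trilinear interpolation: 8 table reads and a few dozen multiply-adds */
static rgb_t apply_lut(const rgb_t *lut, rgb_t in)
{
    float scale = (float)(LUT_SIZE - 1);
    float rf = in.r * scale, gf = in.g * scale, bf = in.b * scale;
    int r0 = (int)rf, g0 = (int)gf, b0 = (int)bf;
    int r1 = r0 < LUT_SIZE - 1 ? r0 + 1 : r0;
    int g1 = g0 < LUT_SIZE - 1 ? g0 + 1 : g0;
    int b1 = b0 < LUT_SIZE - 1 ? b0 + 1 : b0;
    float fr = rf - r0, fg = gf - g0, fb = bf - b0;

    rgb_t out = {0.0f, 0.0f, 0.0f};
    for (int i = 0; i < 8; i++) {
        /* weight each of the 8 surrounding grid points */
        float w = ((i & 4) ? fr : 1.0f - fr) *
                  ((i & 2) ? fg : 1.0f - fg) *
                  ((i & 1) ? fb : 1.0f - fb);
        rgb_t c = lut_at(lut, (i & 4) ? r1 : r0,
                              (i & 2) ? g1 : g0,
                              (i & 1) ? b1 : b0);
        out.r += w * c.r;
        out.g += w * c.g;
        out.b += w * c.b;
    }
    return out;
}

int main(void)
{
    /* build an identity LUT purely for demonstration */
    rgb_t *lut = malloc(sizeof(rgb_t) * LUT_SIZE * LUT_SIZE * LUT_SIZE);
    for (int r = 0; r < LUT_SIZE; r++)
        for (int g = 0; g < LUT_SIZE; g++)
            for (int b = 0; b < LUT_SIZE; b++) {
                rgb_t *p = &lut[(r * LUT_SIZE + g) * LUT_SIZE + b];
                p->r = r / (float)(LUT_SIZE - 1);
                p->g = g / (float)(LUT_SIZE - 1);
                p->b = b / (float)(LUT_SIZE - 1);
            }

    rgb_t px = {0.25f, 0.50f, 0.75f};
    rgb_t graded = apply_lut(lut, px);
    printf("in  %.3f %.3f %.3f\nout %.3f %.3f %.3f\n",
           px.r, px.g, px.b, graded.r, graded.g, graded.b);

    /* a UHD frame is 3840 x 2160 = roughly 8.3 million of these per frame */
    free(lut);
    return 0;
}
```

None of that arithmetic is exotic, but multiply it by eight million pixels and then by the frame rate and it all has to happen before a single photon reaches the operator's eye.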
Back in the day, when things were done in analogue electronics, lag was, to all practical purposes, zero. The amount of time taken for a signal to cross a transistor or a vacuum tube (or even dozens of those things) was so vanishingly minute that even the most complex cameras felt as if the picture in the viewfinder was rigidly locked to the motion of the camera itself. As we've demanded more and more from cameras, we've slowly become accustomed to this not being the case and that's a problem which needs to be recognised. It's one thing for there to be delay on a wireless video link, where the resulting picture is being watched by someone other than the camera operator. It's quite another for there to be a perceptible delay or even a really long, multi-frame delay between what's going on in front of the camera and what's coming out of the viewfinder.
Solving this problem probably requires that manufacturers do something they're commonly rather reluctant to do: compromise perceived image quality, or remove features, so that the picture can get from sensor to viewfinder faster. In many circumstances, it would be completely reasonable for the viewfinder image (or the image available on a monitoring output) to be derived coarsely and, therefore, quickly from the raw sensor data. This might mean forgoing the high-quality processing required for, say, the monitor in front of the agency people at the back of the room, in favour of getting the picture to the operator's eye quickly. On a UHD camera with a Bayer-filtered image sensor, if the viewfinder or monitoring were in HD, it might be reasonable to take only the green channel and make the viewfinder image out of that.
It would be monochrome and there would be some aliasing, but it would be fast because nothing would be required of the camera other than picking the green pixels out and perhaps applying rough luminance processing to them. There's no usability concern; after all, we operated news cameras with monochrome CRT viewfinders for years.
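As a rough illustration of how little work that would involve, here's a minimal sketch assuming an RGGB mosaic, 8-bit samples and UHD dimensions – all assumptions for the sake of example, since real sensors and raw formats vary. Each 2x2 cell of the mosaic contains two green photosites; averaging them gives one preview pixel per cell, so a 3840x2160 mosaic yields a 1920x1080 monochrome image with no demosaicing at all.

```c
/*
 * Illustrative sketch only: a quick-and-dirty monochrome viewfinder feed
 * built from the green photosites of an RGGB Bayer mosaic.
 * Layout, bit depth and dimensions are assumptions for demonstration.
 */
#include <stdint.h>
#include <stdlib.h>
#include <stdio.h>

/* For an RGGB mosaic, each 2x2 cell holds two green samples:
 *   R G
 *   G B
 * Averaging them gives one preview pixel per cell, halving the resolution
 * in each dimension without any interpolation. */
static void green_preview(const uint8_t *raw, int width, int height,
                          uint8_t *preview)
{
    for (int y = 0; y < height; y += 2) {
        for (int x = 0; x < width; x += 2) {
            int g1 = raw[y * width + (x + 1)];      /* G on the R row */
            int g2 = raw[(y + 1) * width + x];      /* G on the B row */
            preview[(y / 2) * (width / 2) + (x / 2)] =
                (uint8_t)((g1 + g2 + 1) / 2);       /* rounded average */
        }
    }
}

int main(void)
{
    const int w = 3840, h = 2160;   /* UHD mosaic -> 1920x1080 preview */
    uint8_t *raw = calloc((size_t)w * h, 1);
    uint8_t *preview = malloc((size_t)(w / 2) * (h / 2));

    /* fill the mosaic with a dummy gradient so there's something to see */
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            raw[y * w + x] = (uint8_t)(x * 255 / (w - 1));

    green_preview(raw, w, h, preview);
    printf("preview sample values: %d %d %d\n",
           preview[0], preview[100],
           preview[(h / 2 - 1) * (w / 2) + (w / 2 - 1)]);

    free(raw);
    free(preview);
    return 0;
}
```

The rough luminance processing mentioned above – a gamma or log curve of some sort – could plausibly be folded in as a simple 256-entry lookup per pixel without adding meaningful delay.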
The appearance of the image in a film camera's (or Alexa Studio's) optical viewfinder has never had much to do with what the picture might finally look like; at this point, we're concerned with framing. Features such as a 1:1 zoom mode for critical focussing might need to retain full quality, and such a high-speed mode would, of course, be an optional alternative to the current full-quality approach – but it would be a great thing to have.
If the issue of laggy viewfinders were given more prominence, much of the problem could probably be solved simply by putting more time and effort into minimising it, even at the firmware level. Some cameras suffer worse than others, of course, and the number of clever monitoring features a camera offers doesn't seem to bear much relation to the degree of lag. The dedicated fast path described above is, admittedly, a bigger ask from an engineering point of view, because a lower-quality, higher-speed viewfinder mode would mean an entirely separate video pathway through the camera, with implications for both hardware and firmware.
Ultimately, though, this is something that certainly can't be allowed to get worse and urgently needs to get better. After all, there's not much point in having cameras of high resolution and wide dynamic range, shooting vibrant colours at staggering frame rates, if they're perpetually just behind the action.