
All about sensors, lenses and depth of field

Written by Phil Rhodes | Feb 11, 2018

Replay: You can't go far without understanding the relationship between sensors, lenses and depth of field, only it's slightly more complicated than you might have thought! Phil Rhodes makes it understandable.

Part two of this series on sensors and lenses is available here

Since the earliest camera obscura, humans have used technologies which project images of a real-world scene onto a screen. Only quite recently have we gained the technology to automatically record that image, although we should probably take a moment to think about the potential use of camerae obscurae by old masters such as Johannes Vermeer. The documentary feature Tim's Vermeer details attempts by NewTek founder Tim Jenison (yes, he of Tricaster and Lightwave fame) to work out how this might have been done, including an ingenious approach to colour matching which could almost be thought of as assisted manual photography. It's well worth a watch, although if some of the suppositions around the subject are correct, the effective sensor size applicable to Vermeer's paintings – in terms of a modern camera – would have been the same size as the finished canvas, over 15 by 17 inches. That's positively gigantic by any standards, and anyone with any knowledge of the attendant issues will already be frowning about light levels, depth of field, and other parameters.

Back in the world of modern cameras, we currently enjoy (or perhaps we suffer) a huge number of sensor size options. The physics, though, is exactly the same, whether we're talking about Vermeer's canvases or the sub-fingernail-sized slivers of silicon in the average modern webcam.

Landing the image

Most people understand the idea of a lens projecting an image, usually circular. That part of the image which falls on the sensor becomes the picture we see. A mattebox with inserts to suit a particular aspect ratio might crop the projected image closer to the intended final frame, with the idea of limiting flares caused by extraneous light bouncing around inside the camera or lens. Overdoing this can cause problems, because a lens focusses light reflected from the subject over a range of angles, not just the rays which happen to pass directly through the centre of the lens. A matte cut too closely to the shape of the frame might not itself be visible in shot, yet still darken the edges of the image and create vignetting.

But the principal real-world concern of landing an image correctly on a sensor is the flange focal distance: the distance between the lens mount's flange and the sensor. Some lenses offer adjustable back focus, but many rely solely on the mechanical alignment between the mount and the sensor. Because the lens-to-sensor distance is tiny compared with the lens-to-subject distance, small errors at the mount translate into large errors in focus, so the alignment must be very precise. Many lens mounts have the option of inserting shims (very thin sheets of metal) between the mount and the camera body, to allow fine adjustments to be made. If this is not done correctly, all may initially seem well, but issues such as inaccurate focus distance marks and an inability to reach infinity focus may occur.
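To get a feel for how unforgiving this is, the thin-lens equation gives a rough sense of the numbers. The sketch below is a simplification of my own rather than anything from a lens or camera manufacturer; the 50mm focal length and 50-micron shim error are assumed purely for illustration.

```python
# A minimal sketch, assuming a simple thin lens: 1/f = 1/u + 1/v, where u is
# the subject distance and v is the lens-to-image distance.  The focal length
# and shim error below are illustrative assumptions, not measured values.

def in_focus_distance_mm(focal_length_mm: float, flange_error_mm: float) -> float:
    """Subject distance actually in focus when the lens sits at its infinity
    stop but the sensor is flange_error_mm too far from the mount."""
    f = focal_length_mm
    v = f + flange_error_mm           # the image now has to form slightly behind f
    return (f * v) / flange_error_mm  # solve 1/u = 1/f - 1/v for u

f = 50.0      # assumed focal length, mm
error = 0.05  # assumed shim error, mm (a 50-micron mistake)

print(f"With a {error} mm flange error, 'infinity' on the barrel actually "
      f"focuses at about {in_focus_distance_mm(f, error) / 1000:.1f} m")
# -> roughly 50 m: the distance marks no longer agree with reality, and true
#    infinity focus is out of reach, exactly the symptoms described above
```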


Coverage issues

In general, then, it's sufficient that a lens is mounted the correct distance from the sensor and projects a large enough image that the entire sensor is covered. There are subtleties, however. It's mechanically quite possible to mount Canon EF lenses, designed to cover the large 36 by 24mm full stills frame, on a micro four-thirds body with a 17.3 by 13mm sensor. This is done all the time without issue, although it does allow light into the camera body that won't be used by the sensor, and that may cause more than usual flaring. Also, using a much smaller sensor than that for which the lens was designed will tend to exaggerate refractive errors – softness, distortion, etc. Suppose two sensors both produce a 1920x1080 image, but one is half the physical size of the other. Apart from changes in field of view, any imperfection in the lens will be twice as many pixels across on the smaller sensor, and therefore more visible in the output image. In effect, the central portion of the image is greatly magnified to fill the frame, which is why smaller sensors create a more telephoto (that is, zoomed-in) view for a given lens.
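The geometry behind that magnification is easy to check. The sketch below is my own illustration, using an assumed 50mm lens to compare the horizontal angle of view on the full-frame and micro four-thirds widths quoted above.

```python
# A quick sketch (not from the article) of how sensor width changes the
# horizontal angle of view for the same lens.  Sensor widths are the
# full-frame and micro four-thirds figures quoted above; the 50 mm focal
# length is an illustrative assumption.
import math

def horizontal_fov_deg(sensor_width_mm: float, focal_length_mm: float) -> float:
    """Horizontal angle of view for a simple rectilinear lens focused at infinity."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

focal = 50.0  # mm, assumed
for name, width in [("Full frame (36 mm)", 36.0), ("Micro four-thirds (17.3 mm)", 17.3)]:
    print(f"{name}: {horizontal_fov_deg(width, focal):.1f} degrees")
# -> roughly 39.6 vs 19.6 degrees: the smaller sensor sees a narrower,
#    more 'zoomed-in' slice of the same projected image
```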

In the opposite case, the image projected by a lens may be smaller than the sensor, such as when using a B4-mount ENG zoom (with which there may be other problems) on a micro four-thirds camera such as a Panasonic GH4. In extreme cases, the edges of the projected image may be visible, as if looking through a porthole, but even when the sensor is nominally covered there may be shading errors – darkening – at the edges or in the corners, especially at higher f-stops, where the hole through the middle of the iris is physically smaller. More problematically, refractive errors tend to become more pronounced towards the edge of a lens's coverage.
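A rough way to predict the porthole effect is to compare the lens's image circle with the sensor's diagonal. The sketch below is an assumption-laden illustration of my own: the 11mm image circle is the nominal figure for a 2/3-inch B4 zoom, not a specification of any particular lens.

```python
# A simple coverage check, assuming the projected image circle must be at
# least as large as the sensor diagonal.  The 11 mm circle is an assumed
# nominal value for a 2/3-inch broadcast zoom.
import math

def sensor_diagonal_mm(width_mm: float, height_mm: float) -> float:
    return math.hypot(width_mm, height_mm)

def covers(image_circle_mm: float, width_mm: float, height_mm: float) -> bool:
    """True if the projected circle fully contains the sensor rectangle."""
    return image_circle_mm >= sensor_diagonal_mm(width_mm, height_mm)

b4_image_circle = 11.0      # mm, assumed for a 2/3-inch B4 lens
mft_w, mft_h = 17.3, 13.0   # micro four-thirds sensor, mm

print(f"MFT diagonal: {sensor_diagonal_mm(mft_w, mft_h):.1f} mm, "
      f"covered by B4 lens: {covers(b4_image_circle, mft_w, mft_h)}")
# -> a diagonal of about 21.6 mm against an 11 mm circle: expect the porthole
#    effect unless the camera crops to a much smaller portion of the sensor
```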

These concerns may not be – in fact, usually aren't – overwhelming. Applying a Canon EF series lens, with its large full-frame 36 by 24mm coverage, to the Blackmagic Cinema Camera, with its comparatively small 15.6 by 8.8mm (just larger than Super-16) sensor, gives good results assuming a quality lens. In many cases there are also ranges of sensor sizes – such as Super-35 (nearly 25mm wide) and APS-C (based on the 25.1 by 16.7mm APS film format) – which are of broadly similar dimensions, and where lens performance and field of view are therefore reasonably consistent.

Design concerns

Lenses, then, are designed for particular sensors. This is particularly true in the unusual case of 3-chip colour broadcast cameras, wherein each of the red, green and blue primaries is imaged by a separate sensor, each of which may be a slightly different distance from the lens once the optical path created by other optical components is taken into account. Lenses for three-chip cameras are designed with this in mind, and can cause various sharpness problems when used on single-chip cameras, because the red, green and blue channels no longer come to focus at the same point.

One important general issue is that it becomes increasingly difficult to make very wide-angle lenses on small sensors. Because the absolute focal length of the lens must decrease to maintain field of view as the sensor gets smaller, very small sensors can require extremely small focal lengths. It is quite normal for B4-mount broadcast zoom lenses to offer minimum focal lengths below 5mm, which is required to achieve a wide field of view on 2/3” (8.8 by 6.6mm) sensors. In the normal course of events, this would require a lens element to be placed only a few millimetres from the sensor, which is impossible because that space is occupied by the optical RGB splitter block. To overcome this, many 2/3” lenses – and many lenses in general – use clever retrofocal designs to land an effectively very wide-angle, short focal length image on a somewhat distant sensor. This is a lot easier to achieve if the lens isn't required to project a large image circle, although to some extent this is a zero-sum game. For instance, notice that the Fuji Cabrio 19-90, a PL-mount lens similar in ergonomic style to an ENG zoom, has a far longer short end than a typical lens of that type and must project a larger image circle to cover Super 35-sized sensors. However, that larger sensor will cause the 19mm short end to create a much wider field of view than the 2/3” sensors that ENG lenses usually see, more or less normalising the situation overall.
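Running the numbers makes the point about tiny focal lengths. The sketch below is my own, and the 80-degree horizontal field of view is an assumed target rather than a figure taken from any particular lens.

```python
# A small worked sketch, assuming an 80-degree horizontal field of view and
# inverting the angle-of-view relation: f = w / (2 * tan(fov / 2)).
import math

def focal_for_fov_mm(sensor_width_mm: float, fov_deg: float) -> float:
    """Focal length needed for fov_deg of horizontal coverage (rectilinear lens)."""
    return sensor_width_mm / (2 * math.tan(math.radians(fov_deg) / 2))

target_fov = 80.0  # degrees, an illustrative 'wide' shot
for name, width in [("2/3-inch (8.8 mm wide)", 8.8), ("Super 35 (approx. 24.9 mm wide)", 24.9)]:
    print(f"{name}: needs about {focal_for_fov_mm(width, target_fov):.1f} mm")
# -> roughly 5.2 mm on 2/3-inch versus 14.8 mm on Super 35, which is why B4
#    broadcast zooms start below 5 mm and lean on retrofocal designs to keep
#    the glass clear of the splitter block
```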


Telecentricity

Electronic image sensors frequently have a limited ability to see light which does not fall directly onto the face of the device, at right angles. In many designs, the sensitive part of each pixel is effectively at the bottom of a shallow depression, meaning that light falling at an angle may not make it all the way down to strike the active area. Lenses not designed to accommodate this may provoke shading errors, particularly in the corners of frame, where the deviation is likely to be largest. Lenses can be specifically designed to emit a parallel beam of light toward the sensor, so that all parts of the image strike it at right angles; these are referred to as image-space telecentric, and avoid the problem. Photochemical film is far less sensitive to the angle of incoming light, which is why attempts to use older lenses, designed for film, on modern digital cameras may reveal the issue.
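One way to put a number on this is the chief ray angle in the extreme corner of the sensor, which depends on how far the lens's exit pupil sits from the image plane. The exit pupil distances in the sketch below are assumed figures of my own, standing in for a short legacy design and a more telecentric one.

```python
# A hedged sketch of the chief ray angle idea behind telecentricity: how
# obliquely light from the exit pupil lands in the corner of the sensor.
# The 40 mm and 100 mm exit pupil distances are assumed, illustrative values.
import math

def corner_chief_ray_angle_deg(half_diagonal_mm: float, exit_pupil_dist_mm: float) -> float:
    """Angle from the sensor normal at which the chief ray hits the extreme corner."""
    return math.degrees(math.atan(half_diagonal_mm / exit_pupil_dist_mm))

half_diag = 21.6  # mm, half the diagonal of a 36 x 24 mm full-frame sensor
for label, pupil in [("Exit pupil 40 mm from sensor", 40.0),
                     ("Exit pupil 100 mm from sensor", 100.0)]:
    print(f"{label}: corner rays arrive about "
          f"{corner_chief_ray_angle_deg(half_diag, pupil):.1f} degrees off-normal")
# -> roughly 28 vs 12 degrees: the closer the exit pupil, the more oblique the
#    corner rays and the worse the shading on sensors that dislike off-axis light
```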

So we've mounted a lens on our camera, and projected an image onto the sensor that satisfies requirements for sharpness and telecentricity. In the next part of this series, we'll look at the effects that various sensor sizes have on the resulting photography, and explore why larger pixels and larger sensors tend to produce more cinematic images, even without considering depth of field.
