Think you know what anamorphic means? Phil Rhodes takes us through the mysterious world of non-square pixels and clichéd lens flares.
Most people understand the word 'anamorphic' as meaning 'horizontal lens flares.' Well, okay, it doesn't, but for at least some people that's the operative definition. It has also meant a wide variety of other things over the years, and this is therefore both something of a historical retrospective and a guide to current use.
Going by at least some common uses of the word, all standard-definition video was anamorphic. The word originates from anamorphosis, a term dating at least from the early 20th century which was used to describe all sorts of distortion effects, particularly those created in shadows cast by oblique light, as at sunset. Doing this sort of thing to video is nothing new: if we take a conventional computer image comprising a grid of square pixels as our norm, absolutely all standard-definition video qualifies.
We're more used to applying the term to widescreen, 16:9 standard-definition video, which was sent using exactly the same 720 pixels per line as preexisting 4:3 video (a 1024-pixel-per-line format was specified for some post production environments but rarely used). However, not even 4:3 video used square pixels. The effect was more obvious in NTSC video, usually stored at a resolution near 720 by 480 pixels. Displayed in an application expecting square pixels, this sort of video would show short, squat people at an aspect ratio near 1.5:1, visibly much wider than the 1.33:1 of 4:3 television. The difference in PAL isn't quite so marked (720 divided by 576 works out to about 1.25:1), but a practised eye can still spot that people appear taller and thinner than they should.
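The sums are easy to check. Here's a minimal sketch in Python, using the simplified figures above; real broadcast standards differ slightly, since not all 720 pixels per line carry active picture:

# Given a stored raster and the aspect ratio it's meant to display at,
# work out the pixel aspect ratio (PAR) its pixels must have, plus the
# distorted shape you see if software wrongly assumes square pixels.

def pixel_aspect(width, height, display_aspect):
    # PAR needed for a width x height raster to fill display_aspect.
    return display_aspect * height / width

for name, w, h in [("NTSC", 720, 480), ("PAL", 720, 576)]:
    par = pixel_aspect(w, h, 4 / 3)  # 4:3 television
    naive = w / h                    # what square-pixel software shows
    print(f"{name}: pixels should be {par:.3f}:1; "
          f"shown square, the frame looks {naive:.2f}:1")

Run it and NTSC comes out needing pixels of about 0.889:1 (taller than they are wide), while PAL needs roughly 1.067:1, which is why the two systems distort in opposite directions.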
So when 16:9 video became popular in the late 1990s, it wasn't so much an introduction of anamorphic signalling as a recapitulation of it, with much wider pixels to fit more picture down the same pipeline. It wasn't until 1920x1080 digital HD (which followed 1035-line analogue systems, among many, many other things) that we finally achieved a digital video format with square pixels.
There are a couple of edge cases. A few pieces of equipment and software support the idea of using the entire 1080-line raster to encode an image of 2.37:1, near the 2.35:1 aspect ratio commonly called CinemaScope, by using 1.33:1 pixels in a 1080p image: 1920 by 1080 is 1.78:1, and stretching each pixel by a factor of 1.33 yields roughly 2.37:1. Naturally, there are advantages here in terms of compression efficiency and resolution. Grass Valley's Viper camera was one of very few cameras, and possibly the only one, to support shooting the format directly, but modern versions of Premiere still have presets for it.
Similar things can be achieved using 1.33:1 anamorphic lenses on cameras with 16:9 sensors, although the approach of a 2:1 anamorphic on a Super 35mm, 4-perf-sized (and thus roughly 1.18:1 overall) sensor is most traditional. This, the sort of anamorphic we generally talk about so longingly, is the only real survivor of the 1950s dash for formats which would be wider, bigger and generally more splendid than the upstart television. Modern digital cameras are often more than capable of shooting images of high enough resolution that the frame can be freely cropped to taste, so the creative point of actually shooting anamorphic is often the pursuit of those lens artefacts. 1.33:1 lenses also produce things like horizontal flares, at least to some extent, but as Les Zellan of Cooke described it, they can be “neither one thing nor the other,” and they tend to be expensive anyway.
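As a back-of-the-envelope check, all of these routes land in more or less the same place. The figures below are the approximate ones quoted above rather than measured aperture sizes:

# Delivered aspect ratio is simply the squeeze factor multiplied by the
# captured aspect ratio, whether the stretch lives in the pixels or the
# glass. Note the first two routes are mathematically identical.
routes = [
    ("1080p raster with 1.33:1 pixels", 4 / 3, 16 / 9),
    ("1.33x anamorphic on a 16:9 sensor", 4 / 3, 16 / 9),
    ("2x anamorphic on a 1.18:1 4-perf area", 2.0, 1.18),
]
for name, squeeze, captured in routes:
    print(f"{name}: {squeeze * captured:.2f}:1")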
So, the word “anamorphic” has meant more than a few things over the years. A book could be written on the subject, and probably has been, but with any luck, this will be a useful précis.