New technologies are making our film and video more immersive than ever, filling our field of vision and pushing the edge of the frame out to the periphery, even making it disappear entirely. But then, argues Roland Denning, we haven’t been sure where the edges are for some time.
For those of us who are concerned with composing a shot where the edges of the frame can be seen, it would be nice to know just where those edges are. Sadly, when it comes down to it we are seldom really sure.
When I started shooting film for standard definition TV it was fairly straightforward; inscribed on the ground glass of the viewfinder was a frame regarded as the safe action area. Because of the way cathode ray tube screens were made and adjusted, you knew the very edges of the picture were never going to be seen, but precisely where the visible picture ended was uncertain - it all depended on how the viewer’s TV was set up.
In analogue days you had to accept a certain vagueness about where the frame ended. With digital formats this really does not need to be the case - there is no reason why every pixel you shoot cannot be transmitted and viewed; any notion of a ‘safe action area’ is really a legacy of the past.
The standard cinema formats are 1.85:1 (widescreen), 2.39:1 (scope) and the now rare 1.375:1 (Academy). It’s a sad fact that no standard cinema aspect ratios conform to the TV standards of 16:9 and 4:3, which in cinema terms are 1.77:1 and 1.33:1. In continental Europe 1.66:1 is also a common cinema format. Super 16mm, which was the standard film format for UK TV for more than a decade, is 1.67:1. Super 16mm, of course, was never designed to be projected in its raw state, but either blown up to 35mm or telecined for TV — both cropping the picture top and bottom (the cinema crop being slightly more severe than the TV crop).
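If you want to put figures on the mismatch, here is a rough back-of-envelope sketch in Python (the centre-crop assumption is mine, purely for illustration) showing how much of the picture survives when one ratio is cropped to fill another:

# A rough, illustrative calculation: how much of a frame survives a centre crop
# from one aspect ratio (width/height) to another.
FORMATS = {
    "Academy (1.375:1)": 1.375,
    "European widescreen (1.66:1)": 1.66,
    "Widescreen (1.85:1)": 1.85,
    "Scope (2.39:1)": 2.39,
    "TV 4:3": 4 / 3,
    "TV 16:9": 16 / 9,
}

def retained(source: float, target: float) -> float:
    """Fraction of the source picture area left after a centre crop to the target ratio."""
    # Cropping to a narrower ratio keeps the full height and loses width, and vice versa.
    return target / source if target < source else source / target

for src in ("Scope (2.39:1)", "Widescreen (1.85:1)"):
    for dst in ("TV 4:3", "TV 16:9"):
        kept = retained(FORMATS[src], FORMATS[dst])
        print(f"{src} cropped to {dst}: {kept:.0%} kept, {1 - kept:.0%} lost")

Run it and you will see that cropping 1.85:1 to 16:9 costs you very little, while cropping scope to 4:3 throws away well over 40% of the frame.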
The upshot of this is that you cannot compose accurately for both cinema and TV. There was a trend to shoot 35mm full frame for 4:3 TV screenings, cropping to the standard 1.85:1 for cinema. Films shot this way have a lot of dead space at the top and bottom of the screen, and it was not uncommon to see a microphone at the top of the shot if the operator’s priority was the frame that would be shown in the cinema. Shooting this way had the additional undesirable consequence that the frame for the big screen was actually tighter than the frame for the small screen.
Things got even more complicated when widescreen came in. Widescreen (16:9) arrived in the UK before digital transmission and before HD, but TV sets in the home were still mostly 4:3. Those who had 16:9 sets could choose how the TV dealt with 4:3 pictures - by default they stretched or cropped them to fit the space. During this phase you were asked to ‘shoot and protect’, which meant you tried to contain your action within a 4:3 area inside the 16:9 frame, knowing there would be extra space to the left and right which might or might not be seen - a sort of limbo space which had to look okay but contain nothing crucial to the film.
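To put a number on that limbo space, here is a minimal sketch (my own arithmetic, not a broadcast spec) of how the protected 4:3 zone sits inside a 16:9 frame:

# The 4:3 'shoot and protect' zone inside a 16:9 frame, worked out as proportions
# so it applies to SD and HD frames alike.
wide = 16 / 9      # the full transmitted frame
protect = 4 / 3    # the area that had to carry everything essential

protected_share = protect / wide            # fraction of the frame width that is 'safe'
limbo_each_side = (1 - protected_share) / 2

print(f"Protected 4:3 zone: {protected_share:.0%} of the frame width")
print(f"Limbo space: {limbo_each_side:.1%} of the width on each side")
# On a 1920x1080 frame, for example, that works out at a 1440-pixel-wide
# protected zone with 240 pixels of limbo on the left and on the right.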
For a brief period the BBC adopted 14:9 - a format which, as far as I know, did not exist anywhere else in the world. 14:9 was basically material originated in 16:9 with the left and right edges cropped a little. It would be screened in a mild letterbox format on a 4:3 TV (a thin black band top and bottom of frame). Letterboxing - showing a widescreen film on TV with black space top and bottom - preserves the correct aspect ratio but tends to be unpopular with the audience. There were reports that viewers demanded (unsuccessfully) that part of their TV licence should be refunded when the BBC screened letterboxed movies, since they were not getting the full picture they had paid for. On the other hand, cropping a scope picture to fill a 4:3 screen meant that around half of the picture could be lost to the audience.
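The ‘thin black band’ is easy to quantify too - a quick sketch, again my own arithmetic using the nominal ratios:

# How tall the letterbox bars are when a wider picture is shown at full width
# on a narrower screen (all ratios are width/height).
def bar_height(picture: float, screen: float) -> float:
    """Each black bar's height as a fraction of the screen height."""
    image_share = screen / picture   # the picture is scaled to the full screen width
    return (1 - image_share) / 2

for name, ratio in (("14:9", 14 / 9), ("16:9", 16 / 9), ("Scope 2.39:1", 2.39)):
    print(f"{name} letterboxed on a 4:3 set: bars of {bar_height(ratio, 4 / 3):.1%} each")

Running it shows that 14:9 costs bars of only about 7% of the screen height each, 16:9 needs 12.5%, and letterboxed scope surrenders more than 40% of the screen to black.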
In the era of 4:3 TV, ‘pan-and-scan’ was the norm for widescreen and scope movies. The 4:3 crop was moved by an operator across the wide frame to try to follow the crucial parts of the action - sometimes this meant creating extra cuts if, say, a conversation was taking place between two people at opposite edges of the frame. Done badly this could be extremely disruptive; the classic example is the 1959 ‘Ben-Hur’, in which Ben-Hur’s chariot is drawn by four horses in the cinema version and two in the pan-and-scan TV version. But at least pan-and-scan is preferable to leaving the crop fixed in the centre of the frame.
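In digital terms, pan-and-scan is just a full-height 4:3 window sliding left and right across the wide frame - a minimal sketch of that geometry (the 2K scope frame size and the pan parameter are illustrative assumptions of mine):

# Pan-and-scan as a sliding crop window: full frame height, 4:3 width,
# with 'pan' running from 0.0 (hard left) through 0.5 (centred) to 1.0 (hard right).
def pan_and_scan_window(frame_w: int, frame_h: int, pan: float = 0.5,
                        target_ratio: float = 4 / 3):
    crop_w = round(frame_h * target_ratio)    # the 4:3 window keeps the full height
    max_offset = frame_w - crop_w
    left = round(max_offset * pan)
    return left, 0, crop_w, frame_h           # left, top, width, height of the crop

# A 2048x858 scope frame (2.39:1), panned hard right to follow the action:
print(pan_and_scan_window(2048, 858, pan=1.0))   # -> (904, 0, 1144, 858)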
Don’t think that your frame will always be preserved on the cinema screen either; before digital became the norm, cinema projectionists would file the projector aperture plate so the picture would fit the screen masking. It was not unknown for cinemas to show both scope (2.39:1) and widescreen (1.85:1) on the same screen at an identical aspect ratio.
So how do most cinematographers cope with this? I think mostly we concentrate on the end product we regard as most important - a perfect frame in 1.85:1 might look a bit loose on TV, but that has to be accepted. Ideally, you attend the telecine or transfer and make adjustments as best you can. There are also some compositions you may just not want to attempt if you can’t be sure where the frame line is: a diagonal that nearly meets the top corner might look wrong if the frame is cropped horizontally.
Does it make a difference? I’m sure most readers of this site have noticed when a composition on TV just looks wrong because there is dead space at the top and bottom of the frame, faces are cropped, the frame looks unbalanced, or you get the sense there’s something going on at the edges that you can’t see. Do the general public notice? You certainly hope so - even if they are not aware of quite what it is that is wrong, they sense the picture is not quite right.
But there is something else to be taken into account. Not only is VR on the horizon, but home screens are getting bigger and bigger. To justify 4K in the home you really need a very large screen or to sit very close - and for 8K, if it happens, even more so. Will this affect the way we shoot? If you have an immersive screen, do you really want to fill it with a close-up, or do you shoot so that the main action is concentrated in the centre of the screen, allowing the edges to disappear into our peripheral vision? If so, do we really need different versions for large screen and small screen presentation? Would it be feasible to encode two different versions within a 4K or 8K file, one for immersive viewing, one for small screens?
VR obviously fits gaming and interactivity where we can decide what to concentrate on and where to travel within the visual field, but if VR is going to be used for more conventional narrative, how do we direct the viewers’ attention to what we want them to see? If the viewer can’t take in everything on the screen at once, do we need to lead them to the area we want them to concentrate on? Will it necessitate a whole new approach to composition and framing? Or is audio going to be key?
We were never quite sure where the edges of the frame were; losing the edges entirely is, to say the least, an interesting challenge.
Image: Shutterstock.com