Replay: Guest author John Clark of Berlin Picture Company explores why there's so little consistency of colour across broadcast platforms and what the industry may need to remedy the problem.
Filmmakers seem to have greater potential to control colour than at any time since the early development of colour photography, yet more and more productions are being made in a kind of semi-monochrome style, with a general colour cast per scene and occasional patches of distinctive hues. Of course, there are exceptions, but Daniel Craig's Bond is a man in a grey suit, and thrillers inhabit a muted universe of beigey greys and gloomy beiges.
Is this a matter of fashion and taste, or a defensive, perhaps even unconscious, reaction to the 'honky tonk' of colour reproduction across the hundreds of different kinds of screens and monitors where movies are eventually seen? Would audio be tolerated if it were pitch-shifted unpredictably by half a tone here and a quarter tone there? Squeezing colour data to the point of no return has become a primary tool in compression, one that perhaps needs questioning as 4K becomes a de facto standard in production. Are we throwing out the baby of creative colour with the bathwater of technology?
Deep in the early history of electronic imaging, documentation defining the NTSC system used an unfortunate phrase, which persists in technical literature today. It reappeared in Apple's April 2014 White Paper on ProRes, "Because the eye is less sensitive to fine chroma detail, it is possible to average together and encode fewer CB and CR samples (than Luminance) with little visible quality loss for casual viewing." They might have encouraged different technical outcomes by rephrasing it "Because human perception is extraordinarily flexible in its tolerance of colour aberrations, it is possible...etc."
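To make the quoted claim concrete, here is a minimal sketch of that averaging in Python (the function name and the use of NumPy are my own illustration, not anything from the ProRes white paper): 4:2:0 sampling keeps a single Cb and Cr value for every 2x2 block of luminance samples.

```python
import numpy as np

def subsample_chroma_420(cb: np.ndarray, cr: np.ndarray):
    """Reduce full-resolution Cb/Cr planes to 4:2:0 by box-averaging.

    Each 2x2 block of chroma samples becomes a single value, so three
    quarters of the original chroma information is discarded before
    encoding even begins. Plane dimensions are assumed to be even.
    """
    def average_2x2(plane: np.ndarray) -> np.ndarray:
        h, w = plane.shape
        return plane.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    return average_2x2(cb), average_2x2(cr)
```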
We are all so familiar with the notion of cameras loosely approximating the structure of the human eye (a focusing lens, luminance response and colour sampling mimicking the distinction between rod and cone cells in the retina) that it is easy to lose sight of the enormous differences between human perception and movies as a visioning system. No one really has a clue how information about colour is transmitted from the eye to the rest of the brain, or how colour impressions arise as conscious perception. This might be a reason for technologies to exploit our knowledge of the physics of light; instead, they have been framed within the trichromatic notion of colour vision first articulated by Thomas Young at the start of the nineteenth century, which has been extremely successful for many purposes.
People are able to identify objects by colour under enormously different lighting conditions, and this rich environment of colour distinctions and associations is complex, subtle and difficult to categorise. However, the quality of research into culturally defined colour systems is disappointing, especially given that almost all the objects around us are coloured by carefully defined paints or dyes, while artificial lighting is everywhere. We no longer inhabit a natural landscape, if indeed that has ever really been the case since the evolution of modern humans.
A useful way to think about colour is to distinguish between 'something coloured', such as an even area of colour defined by a graphic designer, and something 'in colour', where the system is more or less reproducing a scene before the camera. Games and CGI occupy a space somewhere between the two.
Pixel-based screens make dramatic reinterpretations of the camera data, even when a colour profile has been set in post. A camera pixel very rarely maps to the same pixel on a screen, though it would seem logical to imagine that a 1920x1080 (or 2K, 4K, 5K, 8K, etc.) image is coherent between the two. The system is based on a series of approximations. Depending on the pixel structure of the screen, the amended camera data will be sampled, mapped and re-presented in widely varying arrangements, including Bayered micropixels, the dots of traditional CRTs, or the overlapping bands of LEDs. If that data originated as 4K with 4:4:4 sampling after finishing, the transcoding to HD 4:2:0 at even the highest quality levels involves a radical transformation. Bring in the issue of compression (intra-frame, short or long GOP), the signal minimisation involved in multiplexing for cable and satellite, or even the fairly mundane task of cramming data onto a Blu-ray disc or DVD, and the issues compound rapidly.
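A toy round trip puts a number on how radical that transformation is. The sketch below uses random data as a stand-in for fine chroma detail, a plain box average for the downsample and nearest-neighbour for the upsample; real encoders filter more carefully, but no filter can bring the discarded three quarters of the samples back.

```python
import numpy as np

# Hypothetical full-resolution Cb plane for one HD frame (4:4:4).
rng = np.random.default_rng(0)
cb_full = rng.integers(16, 240, size=(1080, 1920)).astype(np.float64)

# Encode to 4:2:0: one chroma sample per 2x2 block (box average).
cb_420 = cb_full.reshape(540, 2, 960, 2).mean(axis=(1, 3))

# Decode: nearest-neighbour upsample back to full resolution.
cb_restored = np.repeat(np.repeat(cb_420, 2, axis=0), 2, axis=1)

# Only a quarter of the chroma samples survive; the rest is guesswork.
print(f"chroma samples kept: {cb_420.size / cb_full.size:.0%}")
print(f"mean absolute error: {np.abs(cb_full - cb_restored).mean():.1f} code values")
```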
There are at least three stages to the perception of screen pixels with respect to colour, the first of which begins with the individual micro-pixel structure of the screen. The PenTile AMOLED RGBW (red, green, blue, white) layout gives screen images an increase in contrast from a darkish screen similar to the one that four-colour printing, with three primaries and black on white paper, brought to the page. For screen presentations like text, which are 'something coloured' rather than 'in colour', there are huge rewards. Graphic designers seek clearly defined boundaries and edges between one area of colour and another. Movies seek to control detailed variations of shading and form to enhance veracity and evade the 'uncanny valley', when unsettling grain, compression artifacts and other perceptually undesirable elements disturb the illusion of scenes being 'in colour'.
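For the curious, the simplest scheme for driving a fourth, white sub-pixel is sketched below. It is an illustration of the principle only: real panel drivers use far more sophisticated gamut mapping, and the function is my own invention rather than any manufacturer's algorithm.

```python
def rgb_to_rgbw(r: float, g: float, b: float):
    """Split a normalised (0.0-1.0) RGB value into RGBW by routing the
    achromatic component the three channels share through the white
    sub-pixel, which can be driven brighter and more efficiently."""
    w = min(r, g, b)
    return r - w, g - w, b - w, w

# A muted beige: most of its energy moves to the white sub-pixel,
# leaving only a small chromatic remainder on red and green.
print(rgb_to_rgbw(0.8, 0.7, 0.55))  # approx. (0.25, 0.15, 0.0, 0.55)
```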
For something 'in colour', even uncompressed data brings in a second level of colour mixing between individual pixels to create small patches of colour, which is then open to the kind of colour mixing we associate with 'Pointillism', the post-impressionist approach to painting associated with Georges Seurat in the nineteenth century. Impressionism was an approach to painting based on identifying the colour of a particular element in the field of vision and reproducing it as accurately as possible on the canvas, rather than merely creating an impression of the scene. Seurat's innovation was to recognise the possibility of building an image from small, mosaic-like spots of colour, rather than the elegant brushwork of his predecessors.
How 'pointillism' behaves for the viewer depends a great deal on their distance from the screen. For this final stage of 'pointillist' interaction, the viewer is responding to the interaction of perceived pixels, which are in turn based on the averaged character of groups of pixels, which are themselves the product of sub-sampled micro-pixels based on data transformed for the monitor electronics. Apple have a point when they categorise 'Retina' screens according to the typical viewing distance and pixels per degree of the field of vision, rather than a simple pixel count across the screen.
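The arithmetic behind that categorisation is straightforward: one degree of the visual field at viewing distance d spans 2 x d x tan(0.5°) of screen, and multiplying that span by the pixel density gives pixels per degree. The densities and distances below are illustrative assumptions, not Apple's published figures.

```python
import math

def pixels_per_degree(ppi: float, viewing_distance_in: float) -> float:
    """Pixels subtended by one degree of the visual field.

    ppi: panel density in pixels per inch.
    viewing_distance_in: eye-to-screen distance in inches.
    """
    return 2 * viewing_distance_in * math.tan(math.radians(0.5)) * ppi

# An assumed 110 ppi television at 8 feet versus a 326 ppi phone at
# 12 inches: the coarser panel still delivers far more pixels per degree.
print(f"TV:    {pixels_per_degree(110, 96):.0f} px/deg")   # ~184
print(f"phone: {pixels_per_degree(326, 12):.0f} px/deg")   # ~68
```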
Huge efforts go into establishing the visual style of 'high end' productions via design and cinematography, followed by detailed amendments in post and finishing. Filmmaking is partly about sustaining a consistent and coherent experience for the viewer by balancing a bundle of creative compromises. Anyone who expects that to change is being naively optimistic or unnecessarily purist. In most situations, amendments involve a reduction in the diversity of colour impressions from shot to shot and scene to scene. However, there are two cultural issues I think are worth recalling, if improvements to handling colour are not to be limited to minimising unwanted artifacts or systemic aberrations.
There are relatively few examples of 'high end' digital productions tackling the same dramatic material. One of these has been the 'Wallander' TV series, produced both by the BBC and in Scandinavia. Anyone lucky enough to see both versions will notice a similarity of dramatic style, costume, performance and pacing, but there is one big difference – the landscape, grass, sea and sky are all consistently more intensely coloured in the BBC version. Should we be thinking about general patterns of colour variation from country to country, people to people and producer to producer that are worth valuing, just as we recognise the artist's palette?
For decades, editors have added colour bars to the head of programmes, enabling transmission engineers to make a reasonably easy match between incoming material and their station standards. For obvious reasons, VOD services like Vimeo or YouTube want the movie to stand alone. Services like Netflix create dozens of versions of a programme for different bandwidths, frame rates or resolutions, as well as regional versions in NTSC or PAL. Is there a case for incorporating a simple metadata set at the head of a programme to do the same job that colour bars have served, in addition to the general colour profile of the system, while ensuring that the multitude of devices people use can read that data?
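As a thought experiment, such a metadata set might look something like the sketch below. Every field name and patch value here is hypothetical (though BT.709, BT.1886 and D65 are real broadcast references); the point is only how little data would be needed to do the job colour bars once did.

```python
import json

# Hypothetical 'digital colour bars': a head-of-programme reference a
# receiving device could use to verify its rendering of the mastered
# colours, alongside the container's general colour profile.
colour_header = {
    "colour_primaries": "BT.709",      # mastering primaries
    "transfer_function": "BT.1886",    # mastering EOTF
    "white_point": "D65",
    "reference_patches": [
        # Name plus the 8-bit (R, G, B) code values each patch was
        # mastered at; the values are illustrative, not from any standard.
        {"name": "75_percent_white", "rgb": [180, 180, 180]},
        {"name": "skin_tone",        "rgb": [194, 150, 130]},
        {"name": "neutral_grey",     "rgb": [118, 118, 118]},
    ],
}

print(json.dumps(colour_header, indent=2))
```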
Since chips are expensive to design and bring into production, it seems unlikely that the move away from specifically designed high-end video chips towards mass-produced 'photographic' chips (adapted for the purposes of video) will lead to PenTiling, Bayering or similar graphics-prioritised technology being superseded. For movies, might it be worth manipulating 'photo' chips to provide 4:4:4 at lower display resolution, rather than concentrating on the notion of enhancing the progression from HD to UltraHD and beyond? If increased bandwidth will be available, how might it best be exploited? Could one direction be the creation of Bayer-type pixels based on more than three primaries, which might enable finer colour separations for the process of 'pointillist' interaction? Would additional primaries also make sense when amending colour in post, localising regions of the image for correction? Which regions of colour space might be prioritised for development and the identification of additional primaries? One approach might be based on the density of named colours from different cultures which have neighbouring colour values but distinctive identities, enhancing the chance of their being successfully reproduced.
There are numerous situations which call for quite specific colour characteristics. Think of the distinctive colour of a football club's shirt, corporate style books and branding, or colour symbolism in religion. Ideally, filmmakers should be able to refer to them precisely, but can they really?
In the nineteenth century, an enormous effort was made to investigate and understand language by philologists, ethnographers, phoneticians and many other academic specialists, creating a huge pool of knowledge about the history, development and use of language, including oral and written literature around the world.
The same level of attention has never been given to colour and its place in human experience or cultural identity, although the branding and advertising industries now pay close attention to colour, and medicine has applied fine colour distinctions in illustrations to diagnose and follow the progress of diseases for centuries. But colour science is a success based on precise analysis and technical innovation, rather than a cultural field of enquiry. Systems defining colour space have a fairly short history, from Munsell to the CIE. Their predecessors were attempts to order colour as pigments and their mixtures for artists (Ostwald, Winsor & Newton) and printers (eventually Pantone), or highly personalised colour explorations by people like Goethe around the turn of the nineteenth century.
This fragmented history of colour scholarship means there is still no comprehensive, well-systemised atlas of colour terms across the world's cultures, one that refers to specific constructs of colour presentation and the relationships between colours in comparable terms of reference. This may seem surprising when we take into account the importance of colour for heraldry, flags and uniforms, which originated at the dawn of civilisation. For many cultures, there is no real record at all. Linguists can identify thousands of specific colour names across the world's cultures, some of which seem odd to modern viewers. Many colour names are based on variations in proportional mixes of colour pairs, say red and blue, leading to a mass of purpley associations. For a modern European, a pairing based on black and yellow seems difficult to imagine, but it arises in several languages, including Ancient Greek.
At a time when people's hunger for artisan produce is making headway against mass-produced food, is it worth thinking about artisan colour in the same light? After all, movie-making is a 'motion picture art', not just mass media. High-end perfectionism is expensive and not always possible. If the technology is embedding limitations in the reproduction of colour, perhaps minimising colour aberrations during production could mean a comeback for artisan tools like light meters, tape measures and pots of paint in low-end production.