A world of colour. The process of colourising photos has been around almost as long as photography itself. Done skillfully, the results can be amazing, but the work is fraught with difficulties, and sometimes the results are a mixed bag. Could artificial intelligence eventually automate the process?
It's becoming clear that appending the words “in color” to the title of a piece of motion picture footage has reached a level of popularity not seen since the 1950s, when it was necessary to remind monochromatic holdouts that they were missing something by failing to upgrade their equipment. That was understandable, but the phrase has become popular on YouTube mainly because of a surge of interest in early colour photography – although what's billed as early colour might, in many cases, actually be a rather more recent application of colourisation.
Before we jump into a discussion of colourisation, one thing this does make clear is that there's a very real public demand for seeing things in colour. Don't worry, this isn't going to turn into a manifesto on the subject of besmirching Citizen Kane with a bumper pack of magic markers. The public's love of colour largely seems to apply to historical material, the earlier the better, and often of very everyday scenes. The strange implication is that it should be surprising to discover that people strolled along the streets of the early twentieth century wearing clothes that weren't grey, like a sort of pre-Pleasantville monochromatic world suddenly turned to colour.
Part of the interest in old colour is the widely-held understanding that colourising photos represents a staggering amount of human effort (though more on that below). There's also presumably some interest in the technical history of photography, although video titles such as “40 must see historic photos in color – this will change how you feel about the past” put the lie to the idea that all this is really aimed at photo nerds. To indulge in a moment of complete subjectivity, there's something about seeing the past in colour which makes it more immediate, the humanity of the subjects less filtered by the obscuring screen of black and white photography. People like it.
Much of it, though, is being promoted as original colour photography, which it probably is not. By the time we get to World War 2 In Color! we're probably looking at Kodachrome, which was launched in 1935 and (staggeringly) was manufactured until 2009. Although much black-and-white motion picture film was shot until the 1960s, colour photography had been quite everyday (and quite expensive) for decades prior. Before Kodachrome, though, it was pretty rare; the first thing called Kodachrome was actually a two-colour process in 1915. There were colour processes before then, but they were vanishingly rare.
It's usually quite clear whether a piece of black-and-white cinematography has been colourised. Modern approaches to colourising stills can be highly convincing, but generally rely on a degree of human intervention to impose varying hues on the picture. Someone's face, for instance, isn't generally a consistent hue on the brown-to-beige spectrum, but has redder or browner areas depending on the epidermal physiology of the person involved. It's possible to touch that sort of thing in by hand on a still, but tremendously difficult on a motion picture, where it's hard enough work to rotoscope enough different parts of the frame to create a convincing combination of colours in the first place. The giveaway on colourised film, usually, is that too many things have exactly the same hue.
As with so many things, all of this may be soon to change. Richard Zhang, Phillip Isola and Alexei A. Efros, from UC Berkeley, published a paper in October 2016 called “Colorful Image Colorization” in which they describe the use of a neural net to automatically add colour to images. It's a fundamentally impossible problem to solve if we demand accuracy, but Zhang et al. targeted plausibility rather than the impossible task of reconstructing the original colour perfectly. In tests it fooled human observers a little more than a third of the time, which doesn't sound great but is considerably better than the prior art. They don't talk about moving pictures, but it's probably only a matter of time.
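To give a flavour of how the paper chases plausibility rather than accuracy: it treats colourisation as classification, predicting a probability distribution over a grid of quantized colour bins (in the ab plane of Lab colour space) for each pixel, then decodes that distribution with an "annealed mean" whose temperature trades vividness against caution. The sketch below, in plain NumPy, shows only that decoding step with a handful of made-up bin centres and scores; the real system uses a large learned grid of bins and a convolutional network to produce the scores.

```python
import numpy as np

def annealed_mean(logits, bin_centers, T=0.38):
    """Decode predicted scores over quantized ab-colour bins into a
    single (a, b) value. Re-weight the softmax with temperature T,
    then take the expectation over bin centres. As T -> 0 this
    approaches the most likely bin (vivid colours); T = 1 gives the
    plain mean (safer, more desaturated colours)."""
    z = logits / T
    z = z - z.max()              # subtract max for numerical stability
    p = np.exp(z)
    p = p / p.sum()              # softmax over bins
    return p @ bin_centers       # expected (a, b) pair

# Toy example: four hypothetical ab-space bin centres and scores.
bins = np.array([[-40.0, 60.0],   # blue-ish/yellow-ish corners, made up
                 [0.0, 0.0],
                 [30.0, -20.0],
                 [80.0, 10.0]])
logits = np.array([2.0, 0.1, 1.5, -1.0])
ab = annealed_mean(logits, bins)  # a single plausible (a, b) colour
```

The design point is the temperature: picking the single most likely bin gives saturated but blotchy results, while the plain mean washes everything out, so the annealed mean sits deliberately between the two.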
You can try their software here.
Title image by kind permission of Glenn Moore.