Replay: We take video imaging for granted. But the pioneers didn't have things so easy. How on earth were images created before the CCD? Phil Rhodes tells all.
The way modern video cameras work is fairly easy to understand. Most people are aware of solar panels, which demonstrate the fact that silicon reacts to light. In modern cameras, we use sensors made up of millions of – well – not quite solar panels, but devices using roughly the same effect.
That approach, though, wasn't developed until the middle of the 20th Century. The very earliest ideas for electronic television date back to the 1900s, when the fantastically-named Alan Archibald Campbell-Swinton wrote to Nature describing his ideas for long-distance image transmission using cathode ray tubes (CRTs). The CRT had been known since the late 1890s, when English physicist J. J. Thomson had demonstrated the deflection of cathode rays (an electron beam) using an electric field.
Most people are familiar with the idea of a CRT display. An electron beam sweeps the inside of the tube, where it illuminates phosphors, which glow. Capturing pictures using a related technology, on the other hand, isn't quite so easy. The vacuum tubes used in cameras up until the early 1990s were also cathode ray tubes, scanning a picture area with an electron beam, but detecting an image took a bit more work.
The famous Philo Farnsworth was probably the very first person to figure this out. The image dissector tubes of the 1920s experimenters worked by focussing the image onto a substance that emits electrons when light falls on it, often caesium oxide. The electrons flying off that detector plate are then pushed around with magnets so that the whole image scans over a little hole in a plate, behind which is a detector.
This is simple, but wasteful: most of the electrons emitted from the imaging plate simply land on the apertured plate and are lost, and only those coming from the one point of the image currently aligned with the hole are actually detected. Farnsworth was able to demonstrate television to journalists on September 3, 1928, but the pictures required an absolute blast of light. Later demos showed pictures of Farnsworth's wife with her eyes closed, perhaps in desperation.
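To get a feel for just how wasteful that is, here's a rough back-of-the-envelope sketch in Python. The raster size is a made-up assumption purely for illustration, not a figure from Farnsworth's experiments.

```python
# A rough, illustrative sketch (not a physical model) of why the image
# dissector was so insensitive. The raster size below is an assumption.
lines = 200
points_per_line = 200
picture_points = lines * points_per_line

# At any instant, only the electrons emitted from the single point of the
# image currently aligned with the aperture reach the detector; the rest
# are thrown away against the apertured plate.
fraction_detected = 1 / picture_points
print(f"Useful fraction of photoelectrons: {fraction_detected:.6%}")
# With these numbers, only about 0.0025% of the emitted electrons ever do
# any useful work, hence the need for a blinding amount of light.
```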
Better pictures came from tubes with storage, where charge from the incoming light built up on a plate and was then read out using an electron beam. This was the iconoscope, the first widely-applied video imager. The idea here was to coat one side of an insulating plate with tiny particles of photoelectric material and the other side with a layer of something conductive. Two conductors separated by an insulator create a capacitor, a device capable of storing charge. Focus an image on the plate and the charge on all of these tiny capacitors (think of them as sort-of pixels) represents the image that falls on the plate.
It isn't simple to read the contents of a load of capacitors with a scanning electron beam, so the process worked backwards. The electron beam would first scan over the particles, charging up the whole plate. The fall of light on the plate during the exposure would then release electrons, so that the brightest parts of the image ended up with the lowest charge. Then, when the beam scanned over the plate again, the parts which still held their charge caused the electron beam to bounce off and be collected by a big electrode surrounding the plate. The current on that electrode was a representation of the picture brightness as the beam scanned the plate. This crucial development was based on work done by the Hungarian physicist Kálmán Tihanyi, who, unfairly, rarely gets the coverage Farnsworth enjoys.
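As a thought experiment, that charge-storage cycle can be sketched in a few lines of Python. This is a toy model with made-up numbers, not a simulation of the real tube physics: the plate is treated as a grid of tiny capacitors, the scene as a grid of brightness values.

```python
import numpy as np

# Toy model of the iconoscope's charge-storage readout (all values assumed).
rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 1.0, size=(4, 6))   # scene brightness, 0 = black, 1 = white

full_charge = 1.0
plate = np.full(scene.shape, full_charge)    # beam pass 1: charge every "pixel"

k = 0.8                                      # assumed photoemission factor
plate -= k * scene                           # exposure: bright areas lose the most charge

# Beam pass 2: the beam "bounces off" the parts that still hold charge, so the
# current collected by the surrounding electrode follows the remaining charge.
electrode_current = plate

# The raw signal is therefore inverted (bright scene, low current) and has to
# be flipped and scaled to recover picture brightness.
recovered = (full_charge - electrode_current) / k
print(np.allclose(recovered, scene))         # True: the readout tracks the image
```

The point of the exercise is that the charge builds up for the whole frame time instead of being thrown away instant by instant, which is where the sensitivity advantage over the image dissector comes from.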
At this point – the mid-1930s – television imaging devices become complicated enough that we'll skim over a lot of the details. In essence, the image orthicon was a wartime development which transferred the picture, as a cloud of electrons, from a light-sensitive plate to a separate storage plate. This allows more sensitive materials to be chosen for the light-sensitive plate, and the storage plate can be bigger. A high voltage attracts the electrons from the imaging plate toward the storage plate, so that they impact hard enough to cause secondary emission, knocking a whole bunch of electrons off the surface for each electron which hits it and creating a bigger, easier-to-read charge difference. The storage plate is then scanned out using an electron beam, with some added hardware borrowing the principles of the photomultipliers the military used to see in the dark. The result was the best imaging device available all the way up to the early 1960s.
The final stage of tube development, displaced by the CCD in the 1980s, was the vidicon. It uses an imaging screen which reacts sufficiently well to light that it can be directly scanned with an electron beam, resulting in a system that's much simpler than an image orthicon. The camera depicted alongside the Ursa Mini is a Marconi Mk. IX colour studio camera which debuted at NAB 1978. As a colour camera, the Mk. IX has three 30mm vidicon-type tubes. Two of the tubes, blue and green, can be seen sticking out the top with the cover removed.
It was the last television camera made by Marconi and part of the last generation of tube cameras. While early CCDs couldn't match tube performance, they developed rapidly, and the bulk, fragility, power supply and maintenance requirements of tubes could never hope to keep up. Even in factory-fresh condition, tube cameras struggled to hit seven stops of dynamic range (a contrast ratio of roughly 128:1), which is why standards such as Rec. 601 are so unambitious in that regard. Even specialist applications such as infra-red imaging have largely moved over to solid-state detectors, although it's still hard to beat the performance of a photomultiplier for viewing things in the dark.
The Mk. IX camera shown here is part of the collection at the Sandford Mill Museum, and appears courtesy of the volunteer staff responsible for the upkeep of the museum's Marconi collection.
Ursa Mini 4.6K courtesy Blackmagic.