Replay: About a year ago, we covered the ISO and ASA systems as they apply differently to film and video cameras. There's been a lot of interest in that article, especially when it was republished more recently, with plenty of discussion about the details of how modern cameras implement their sensitivity settings. Phil Rhodes delves even further.
Let's start at the very beginning. At the most fundamental level, light detected by a sensor is represented by a number of electrons sitting in each photosite. To put this in perspective, Fairchild's LTN4625A sensor has a “full well capacity,” a maximum number of electrons per photosite, of over 40,000. It's easy to assume that if a sensor has (say) 16-bit raw output, the 40,000-electron capacity would read as full scale, or 65,535, and no electrons would read as zero. At this stage, the sensor is pretty linear; double the light and you get double the electrons.
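To make that assumption concrete (before we pick it apart), here's a minimal sketch in Python of a purely linear mapping from electrons to a 16-bit code value. The 40,000-electron full well is the figure quoted for the LTN4625A; the rest is just illustration.

```python
# Naive linear mapping: a full well reads as 65,535, an empty photosite reads as zero.
FULL_WELL_ELECTRONS = 40_000   # quoted full-well capacity
MAX_CODE_16BIT = 65_535        # full scale of a 16-bit output

def electrons_to_code(electrons: int) -> int:
    """Scale an electron count linearly onto a 16-bit output code."""
    electrons = max(0, min(electrons, FULL_WELL_ELECTRONS))
    return round(electrons * MAX_CODE_16BIT / FULL_WELL_ELECTRONS)

print(electrons_to_code(0))       # 0      - no light, no electrons
print(electrons_to_code(20_000))  # 32768  - half the light, half the code value
print(electrons_to_code(40_000))  # 65535  - a full well reads as white
```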
There are a couple of problems with that explanation. The first issue is that all sensors have a “dark current,” representing an electron count that might be read from a photosite regardless of whether any light fell on it. They also have “read noise,” which is the sum total of all the sources of inaccuracy throughout the entire device. The LTN4625A has an average dark current of 15 electrons, and an average read noise, in rolling-shutter mode, of 2 electrons.
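To get a feel for what those figures mean, here's a rough model of reading a single photosite, using the quoted averages of 15 electrons of dark current and 2 electrons of read noise. The Poisson-ish and Gaussian noise shapes are assumptions made for illustration, not something taken from the sensor's documentation.

```python
import random

DARK_CURRENT_E = 15   # average electrons present even with no light at all
READ_NOISE_E = 2      # RMS electrons of error from the readout chain

def read_photosite(photoelectrons: float) -> float:
    """Return the electron count the readout chain actually sees."""
    shot_noise = random.gauss(0, photoelectrons ** 0.5)          # photon shot noise
    dark = random.gauss(DARK_CURRENT_E, DARK_CURRENT_E ** 0.5)   # dark current, itself noisy
    read = random.gauss(0, READ_NOISE_E)                         # readout inaccuracy
    return max(0.0, photoelectrons + shot_noise + dark + read)

print(read_photosite(0))      # roughly 15 - even a black frame isn't quite zero
print(read_photosite(1_000))  # roughly 1,015, give or take the noise
```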
The bigger problem is how we actually turn those 40,000 (or fewer) electrons into a digital number. The LTN4625A has a resolution of 4608 by 2592 photosites, so at its maximum speed of 60 frames per second, it needs to read 4608 × 2592 × 60 = 716,636,160 photosites per second. Often this is done by allowing the electrons from each photosite to flow into a (very, very tiny) capacitor; the charge deposited there becomes a voltage, which can be read by an analogue-to-digital converter.
The problem is, a single analogue-to-digital converter would have to sample at around 717 megahertz in order to read the value of every photosite on the sensor. That's very fast, and it's very difficult to make an analogue-to-digital converter that can go that fast without adding lots of noise. Many sensors make this easier by breaking the photosites down into smaller groups and using separate A/D converters for each of them, but modern sensors are very high resolution, and demand high frame rates and wide dynamic range.
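The arithmetic is easy to sketch. The per-column figure below assumes one converter per column, which is a common way of splitting the job but not something being claimed about the LTN4625A specifically.

```python
WIDTH, HEIGHT, FPS = 4608, 2592, 60

samples_per_second = WIDTH * HEIGHT * FPS
print(f"{samples_per_second:,} photosites/s")      # 716,636,160

# One converter doing everything:
print(f"{samples_per_second / 1e6:.0f} MHz")       # ~717 MHz

# One converter per column only reads its own column, one row at a time:
print(f"{HEIGHT * FPS / 1e3:.1f} kHz per column")  # 155.5 kHz
```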
This means that the analogue-to-digital converter needs to handle that high dynamic range – a large range of voltages – at very high frequency. One approach is to use two capacitors: a small one, which will be charged to a higher voltage, and a larger one, which will be charged to a lower voltage by the same current. It's easier to make two A/D converters that can each handle a smaller dynamic range than one A/D converter capable of doing it all at once. This is where (at least some kinds of) “dual gain” sensors come from. The LTN4625A is probably doing something like this, since it has two 11-bit outputs for a total of 22 bits of data.
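Here's a sketch of that idea: the same packet of electrons converted with two different gains and digitised by two 11-bit converters. The gain values are invented for illustration; only the two-11-bit-outputs detail comes from the sensor's specification.

```python
MAX_11BIT = 2**11 - 1   # 2047

HIGH_GAIN = 0.25        # codes per electron: sees into the shadows, clips early
LOW_GAIN = 0.05         # codes per electron: survives the highlights

def dual_gain_read(electrons: float) -> tuple[int, int]:
    """Digitise one photosite's charge through both gain paths."""
    high = min(MAX_11BIT, round(electrons * HIGH_GAIN))
    low = min(MAX_11BIT, round(electrons * LOW_GAIN))
    return high, low

print(dual_gain_read(100))     # (25, 5)      shadows: only the high-gain path has useful detail
print(dual_gain_read(10_000))  # (2047, 500)  highlights: the high-gain path has clipped
print(dual_gain_read(40_000))  # (2047, 2000) near full well: only the low-gain path is usable
```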
It is up to the camera designer to fuse those together. Because of small, inevitable manufacturing variances, some manipulation is required to ensure a smooth transition between the “bright” parts of the image and the “dark” parts of the image. Get this wrong, and highlights will have bands of mismatched colour and brightness around them. Usually this would be set up as a factory calibration procedure, but either way, all of this demonstrates that it isn’t quite as simple as assuming the full capacity of the photosite is “white.”
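One plausible way to do that fusion, purely as a sketch: scale the low-gain stream up by a calibrated gain ratio, then cross-fade between the two as the high-gain stream approaches clipping. The ratio and thresholds here are hypothetical stand-ins for the numbers a factory calibration would actually measure.

```python
MAX_11BIT = 2047
GAIN_RATIO = 5.0       # measured ratio between the high- and low-gain paths (per camera)
BLEND_START = 1700     # high-gain code value where the cross-fade begins
BLEND_END = 2000       # high-gain code value beyond which only the low-gain path is trusted

def fuse(high: int, low: int) -> float:
    """Combine the two gain paths into one linear value without a visible seam."""
    low_scaled = low * GAIN_RATIO
    if high <= BLEND_START:
        return float(high)
    if high >= BLEND_END:
        return low_scaled
    t = (high - BLEND_START) / (BLEND_END - BLEND_START)
    return (1 - t) * high + t * low_scaled   # a smooth hand-off avoids bands around highlights
```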
The absolute sensitivity of a sensor is controlled by its quantum efficiency – that is, how many electrons end up in each photosite, compared to how many photons hit it. That's fixed at manufacture. However, the effective sensitivity can change depending on other, associated electronics. Some sensors have the ability to switch in several different sizes of capacitor. We'll call this gain, rather inaccurately. A sensor engineer might call it conversion factor, but it explains why some cameras have “native” sensitivity settings which are then augmented by software processing to create a larger range of ISO options.
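As a purely illustrative sketch of how that shows up in a camera's menu, imagine two native conversion gains, with every other ISO made by digitally scaling the nearest one. All of the numbers below are invented.

```python
# Hypothetical camera: two native ISOs, each corresponding to an analogue conversion gain.
NATIVE_GAINS = {800: 0.25, 3200: 1.0}   # ISO -> conversion gain (codes per electron)

def pick_path(requested_iso: int) -> tuple[float, float]:
    """Choose the nearest native analogue gain, then make up the difference digitally."""
    native_iso = min(NATIVE_GAINS, key=lambda n: abs(requested_iso - n))
    return NATIVE_GAINS[native_iso], requested_iso / native_iso

print(pick_path(800))    # (0.25, 1.0)  a purely analogue path
print(pick_path(1600))   # (0.25, 2.0)  ISO 800's gain, pushed one stop in software
print(pick_path(3200))   # (1.0, 1.0)   the other native setting
```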
This (or something having the same effect) was found by the Magic Lantern people when they started poking around with the internal settings on the Canon 5D MkII. There were, it was found, two settings for sensor gain, each of which affected an alternate row of pixels. Set the gain higher, and shadow detail improves (but highlights blow out). Set the gain lower, and highlight detail improves (but shadows may be lost). Set the alternate rows to different gains, and the same sensor views the same scene in two different ways. The photosites do what they do, but the analogue-to-digital converters behave differently. Whether that's a “variable sensitivity sensor” is largely a semantic argument, but it's all happening in the analogue world.
There were some attempts to interpolate the alternate rows together, so that the 5D MkII's raw video mode as realised by Magic Lantern could have some of the characteristics of a dual-gain sensor, but the already sub-HD results ended up looking too coarse. More recent designs make it possible to do high and low-gain readouts simultaneously.
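A toy version of that arrangement, with made-up gains and a crude two-row average standing in for whatever interpolation was actually attempted, shows both the benefit and the cost: half the vertical detail.

```python
import numpy as np

HIGH_GAIN, LOW_GAIN = 4.0, 1.0   # illustrative per-row gains, not Magic Lantern's values

def dual_iso_capture(scene: np.ndarray) -> np.ndarray:
    """Apply alternating per-row gains to a linear 'scene' (even number of rows)."""
    frame = scene.astype(float)
    frame[0::2] *= HIGH_GAIN   # even rows: cleaner shadows, clip sooner
    frame[1::2] *= LOW_GAIN    # odd rows: hold on to highlights
    return frame

def reconstruct(frame: np.ndarray) -> np.ndarray:
    """Undo the gains, then average each pair of rows: half the vertical resolution."""
    norm = frame.copy()
    norm[0::2] /= HIGH_GAIN
    norm[1::2] /= LOW_GAIN
    return (norm[0::2] + norm[1::2]) / 2.0   # output has half as many rows as the input
```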
Finally, let's be clear that none of this has anything to do with ISO, ASA or sensitivity in the context of conventional photography. The truly raw data that comes off a sensor might be two rather mismatched streams of highlight and shadow detail; that's the untouched, virgin raw data that people get so excited about. It needs processing so that the two streams match, then it needs to be squeezed into fewer than (say) 22 bits. Very, very often, “raw” data has at least some basic brightness processing done to it so that it makes better use of the 12- or 16-bit raw recording, and that's to say nothing of compressed raw, which can be even more heavily processed.
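As one generic example of that kind of processing (and not any particular manufacturer's curve), here's how fused linear data might be squeezed into a 12-bit raw file with a log-style brightness mapping that spends more code values on the shadows.

```python
import math

IN_MAX = 2**22 - 1    # fused linear data, up to 22 bits
OUT_MAX = 2**12 - 1   # what a 12-bit raw file can actually store

def encode_12bit(linear: int) -> int:
    """Compress linear sensor data into 12 bits with a log-ish curve."""
    x = max(0, min(linear, IN_MAX)) / IN_MAX
    return round(OUT_MAX * math.log1p(1023 * x) / math.log(1024))

print(encode_12bit(0))             # 0
print(encode_12bit(IN_MAX))        # 4095
print(encode_12bit(IN_MAX // 2))   # about 3686 - half the linear range gets most of the codes
```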
It's as well not to become too puritanical about this stuff. Uncompressed is good. Raw is good. But neither is a panacea, and no matter how raw a raw file is, there will, in almost all cases, have been some work done to the data that someone might consider destructive.