Each time a new top-of-the-range smartphone is released, it seems as though mobile video and photography take another step forward. The Huawei P20 Pro looks to be no exception. Phil Rhodes looks further.
It's impossible to compare cinema cameras and cell phone cameras without being laughed at, but if we can keep the snickering down at the back, it's worth considering for a moment. The sheer scale of cell phone sales means that development funding is vast. The situation in 2018 recalls what happened to things like hardware colour correctors in the mid-2000s, when home computers started to outperform them. It's not exactly the same situation with phones, since we aren't anywhere near expecting phones to directly replace traditional cameras, but Huawei's new P20 Pro is likely to turn a few heads in the worlds of both still and moving pictures.
The most discussed feature is low-light performance. The phone records a six-second sequence of images and compiles it using motion tracking, so that the long effective exposure can be shot handheld. In principle, this doesn't require artificial intelligence, though AI is being talked up a lot. It's perhaps best thought of as a tripod simulation. The really interesting technology is the way the P20 combines its several cameras to improve its images, and the result is some genuinely impressive performance, including the best-ever marks for cell phone still image acquisition on DxOMark.
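To get a feel for what that stacking involves, the sketch below aligns a handheld burst to its first frame and averages the result. It's a toy version in Python with OpenCV, using a simple translation estimate where the phone presumably does something far more sophisticated; the frame count and filenames are assumptions.

```python
# Toy version of handheld "long exposure" stacking: align each frame of a
# burst to the first one, then average. Filenames, frame count and the
# translation-only alignment are assumptions, not Huawei's actual pipeline.
import cv2
import numpy as np

def stack_burst(paths):
    frames = [cv2.imread(p).astype(np.float32) for p in paths]
    reference = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    accumulator = frames[0].copy()

    for frame in frames[1:]:
        grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Estimate how far the camera drifted since the first frame...
        (dx, dy), _ = cv2.phaseCorrelate(reference, grey)
        # ...and shift the frame back into line before adding it in.
        warp = np.float32([[1, 0, -dx], [0, 1, -dy]])
        accumulator += cv2.warpAffine(frame, warp, (frame.shape[1], frame.shape[0]))

    # Averaging N aligned frames cuts random noise by roughly the square root of N.
    return (accumulator / len(frames)).clip(0, 255).astype(np.uint8)

cv2.imwrite("stacked.jpg", stack_burst([f"burst_{i:02d}.jpg" for i in range(24)]))
```

The square-root-of-N noise improvement is the whole point: a few seconds of aligned handheld frames stands in for the same few seconds on a tripod.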
It does this using no fewer than three cameras on the back of the phone. Two is common enough, but three is rarer, and it doesn't seem to have been done simply in pursuit of a number on a spec sheet: there's some fairly deliberate engineering going on. The main camera is listed as having a 40-megapixel sensor, although various sources suggest that it reads out two-by-two pixel combinations, so it's probably better to view it as a ten-megapixel camera with better noise performance. The sensor is unusually large for a phone, stated at 1/1.78” in that rather strange notation that refers to the diameter of an equivalent camera tube. Still, that is pretty large for a cell phone, with attendant benefits to noise and dynamic range.
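If the two-by-two readout really is a straightforward average of each group of four photosites (an assumption; Huawei hasn't published the details), the arithmetic is easy to sketch:

```python
# Hypothetical two-by-two binning: averaging each group of four photosites
# quarters the pixel count and roughly halves the random noise (sqrt(4) = 2).
import numpy as np

def bin_2x2(raw):
    h, w = raw.shape
    raw = raw[:h - h % 2, :w - w % 2]                     # trim to even dimensions
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Stand-in for a raw readout: a flat grey scene plus random noise. The real
# sensor would be around 7296 x 5472 photosites; a small array shows the idea.
raw = np.random.normal(loc=100.0, scale=10.0, size=(1024, 1024))
binned = bin_2x2(raw)
print(raw.std(), binned.std())   # noise falls from about 10 to about 5
```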
The other colour camera, of 8-megapixel resolution, is there to provide the longer-lens option that most cell phones conspicuously lack. It's specified as having an 80mm “equivalent focal length,” which presumably refers to the performance of a notional full-frame 35mm camera with an 80mm lens. As such, it's not a tremendously long lens in general terms, but it provides a much, much tighter shot than most cell phones, which is very welcome. Equally welcome is the fact that it has optical image stabilisation, which is increasingly useful on longer lenses.
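The "equivalent" figure is just a crop-factor conversion: the physical lens is far shorter, but it frames the scene the way an 80mm lens would on full-frame 35mm. A rough version of the arithmetic, with an assumed sensor size for the telephoto module:

```python
# Back-of-envelope "equivalent focal length" arithmetic. The telephoto sensor
# dimensions here are assumptions for illustration, not published figures.
import math

full_frame_diagonal = math.hypot(36.0, 24.0)   # 35mm full frame, about 43.3 mm
phone_sensor_diagonal = math.hypot(4.7, 3.5)   # assumed telephoto sensor, mm

crop_factor = full_frame_diagonal / phone_sensor_diagonal
physical_focal_length = 80.0 / crop_factor     # the real lens behind "80mm equivalent"
print(f"crop factor ~{crop_factor:.1f}x, physical lens ~{physical_focal_length:.1f} mm")
```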
The third camera is a 20-megapixel monochrome type. Monochrome cameras enjoy better sensitivity, better resolution and lower noise than colour types, since their photosites don't labour behind filters which must absorb up to two-thirds of the light (practical designs absorb less) in order to create a colour image. The monochrome camera is used in various ways, including improving the sharpness of magnified images when the other cameras are zoomed and creating high-precision depth data. This approach presumably uses the physical offset between the monochrome and colour cameras to do the best possible job of post-processing techniques such as depth-of-field simulation. The creation of depth maps is very sensitive to noise, so the fact that the monochrome camera is essentially a stop and a half faster, implying a stop and a half less noise, is very welcome.
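As a very rough sketch of that depth step, and emphatically not Huawei's own method, classical stereo matching between two cameras with a known offset looks something like this, with OpenCV's basic block matcher standing in for whatever the phone actually does:

```python
# Sketch of depth from two offset cameras via stereo matching. Filenames,
# baseline and focal length are made-up values; StereoBM is a stand-in, not
# the phone's own algorithm. A real pipeline would also rectify the images first.
import cv2
import numpy as np

left = cv2.imread("mono_frame.png", cv2.IMREAD_GRAYSCALE)      # hypothetical captures
right = cv2.imread("colour_frame.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0   # output is fixed-point

# Depth is inversely proportional to disparity: depth = focal_length * baseline / disparity,
# which is why noisy matches (from a noisy camera) turn straight into noisy depth.
focal_px, baseline_mm = 2800.0, 10.0
depth_mm = np.zeros_like(disparity)
valid = disparity > 0
depth_mm[valid] = focal_px * baseline_mm / disparity[valid]

visual = cv2.normalize(depth_mm, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("depth_map.png", visual)
```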
On a modern phone, though, the hardware is only part of the equation. The trend is currently very much towards creating an image through sensor fusion, where various sources of information about the world are analysed to produce the final picture. This approach is not without caveats. Taking a lot of pictures of a scene, then averaging them together to reduce noise, is straightforward enough, but it won't work on moving subjects, especially with six-second effective exposures. Simulated depth of field is still subject to some minor unpleasantness at the edges of the foreground subject and generally won't create attractive effects with point light sources, for instance. That's not to decry the approach entirely, but it is not yet a complete replacement for more conventional cameras.
But might some of these techniques make their way into DSLRs and cinema cameras? People scowl at the idea, but the sensors at the highest end are beginning to approach fundamental physical limits which can, to some extent, be worked around with sensor fusion. There are still caveats, but it's not impossible to think that, one day, cell phones and other small consumer cameras will start to enjoy performance that the high-end world wants. At that point, look for techniques like this to start appearing on film sets as well as on Instagram.
Watch Huawei's video about the phone below.