The intriguing L16 camera and whether computational photography will usher in a new age of third party software development for cameras.
We're quite used to the idea of a plugin. It's a piece of software that's bolted on to an application to give it additional capabilities. Plugins are so much a part of standard practice that it's hard to imagine some applications - Photoshop, for example - existing without them. It's not that the host applications can't function on their own - far from it - but with plugins, you don't have to be constrained by the set of functions that the host application's designer gives you (or sells to you). Plugins let third party developers bring their specialist and often unique talents to your personal toolkit.
It's a well-established principle, but one that's pretty much unique to desktop and laptop computers (and, increasingly, mobile devices too). With other types of device, such as cameras, plugins tend to be literal: hardware, in other words. For a removable-lens camera system like Micro Four Thirds or Canon EF, third party lenses literally plug in to the camera, as do filters and all kinds of rigging, all of which can dramatically modify and enhance the original equipment.
What we think of as hardware devices often run software these days, especially cameras, which have multiple layers of code to power the user interface and, crucially, most of the deeply embedded functions that we don't see at all. For example, when the raw pictures come off the sensor, they're processed to remove the patterning from the Bayer filter, and all sorts of "colour science" is applied - all digitally and completely hidden from us, except via arcane menus if the camera provides such access.
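To give a flavour of the kind of processing involved, here's a deliberately simplified sketch of demosaicing - reconstructing full colour from the Bayer mosaic - assuming an RGGB pattern and plain bilinear averaging. Real camera pipelines use far more sophisticated, and usually proprietary, algorithms than this.

```python
import numpy as np

def demosaic_bilinear(raw):
    """Crude bilinear demosaic of an RGGB Bayer mosaic (illustration only).

    `raw` is a 2D array of sensor values. Edge handling (np.roll wraps around)
    is ignored for simplicity.
    """
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))

    # Where each colour actually exists in an RGGB mosaic.
    r_mask = np.zeros((h, w), dtype=bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    for ch, mask in enumerate((r_mask, g_mask, b_mask)):
        plane = np.where(mask, raw, 0.0)
        total = np.zeros_like(plane)
        count = np.zeros_like(plane)
        # Sum the samples of this colour in each pixel's 3x3 neighbourhood...
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                total += np.roll(np.roll(plane, dy, axis=0), dx, axis=1)
                count += np.roll(np.roll(mask.astype(float), dy, axis=0), dx, axis=1)
        # ...and average them to fill in the missing values.
        rgb[..., ch] = total / np.maximum(count, 1.0)

    return rgb
```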
Cameras with built-in lenses often include lens correction software. This is a big and sometimes surprising advantage of fixed lens cameras: they're able to use an intimate knowledge of the only lens they're ever going to use to correct defects and enhance performance. For example, if there's a known amount of optical distortion, it can be largely corrected by applying an equal and opposite amount of distortion in software.
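As an illustration of the principle, here's a minimal sketch of radial distortion correction: for each pixel of the corrected image, we work out where it sits in the distorted original and sample it from there. The coefficients k1 and k2 stand in for the kind of lens data a manufacturer would measure and store in the camera; the values and the simple model here are purely illustrative.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def correct_radial_distortion(img, k1, k2=0.0):
    """Undo simple radial (barrel/pincushion) distortion on a single-channel image."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    y, x = np.mgrid[0:h, 0:w].astype(float)

    # Normalised coordinates relative to the optical centre.
    xn, yn = (x - cx) / w, (y - cy) / w
    r2 = xn ** 2 + yn ** 2
    factor = 1.0 + k1 * r2 + k2 * r2 ** 2   # the lens's known distortion

    # For each corrected output pixel, sample from where the lens actually put it.
    xs, ys = xn * factor * w + cx, yn * factor * w + cy
    return map_coordinates(img, [ys, xs], order=1, mode='nearest')
```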
There's no reason why cameras shouldn't allow software plugins from third parties. The camera makers would need to open up the architecture of the camera and provide APIs (easy, well documented "hooks" into the camera's system), and there would need to be something useful for the plugins to do.
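Nobody currently exposes such an interface, but purely as a thought experiment, a camera plugin API might look something like this - every name and type here is invented for illustration:

```python
from dataclasses import dataclass
from typing import Protocol
import numpy as np

@dataclass
class FrameContext:
    """Metadata a camera could hand to a plugin alongside the raw frame."""
    iso: int
    exposure_time_s: float
    lens_id: str

class CameraPlugin(Protocol):
    """The 'hook' a third party plugin would implement."""
    name: str

    def process_raw(self, raw: np.ndarray, ctx: FrameContext) -> np.ndarray:
        """Receive raw sensor data, return modified raw data."""
        ...

class SoftFocusPlugin:
    """A trivial example plugin: soften the image slightly."""
    name = "soft-focus"

    def process_raw(self, raw, ctx):
        from scipy.ndimage import gaussian_filter
        return gaussian_filter(raw, sigma=1.5)
```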
This mostly doesn't happen, for the very good reason that this is what post production is for. And with raw video, you wouldn't want to impose fixed characteristics on your footage in the camera, because that would reduce your flexibility later.
Now, just possibly, there's a very good reason why you might want to allow plugins in cameras: to change the optical characteristics completely.
The revolutionary Light L16 camera, announced recently, throws the camera optics rulebook away. When you look at the front of this thing, it seems insane: a seemingly random array of optical devices that looks even stranger when you peer inside. With a total of sixteen lenses and sensors, it's hard to make sense of it at all.
Luckily, it comes with software that makes complete sense. The camera is able to use a selection of the optical elements and sensors to construct high resolution pictures at multiple focal lengths. Like the earlier Lytro cameras, it's possible to change the characteristics of the photographed scene radically in software. So, focusing becomes something you can alter in post production.
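Light's own algorithms are proprietary, but the general principle behind refocusing from multiple views can be sketched very simply: shift each view according to its position in the array and a chosen depth, then average. Objects at that depth line up and stay sharp; everything else smears into blur.

```python
import numpy as np
from scipy.ndimage import shift as subpixel_shift

def refocus(views, baselines, disparity):
    """Synthetically refocus a set of images taken from slightly different positions.

    views      : list of 2D arrays, one per camera module
    baselines  : list of (bx, by) positions of each module relative to a reference
    disparity  : pixels of shift per unit of baseline; choosing it selects the
                 depth plane that ends up in focus
    """
    acc = np.zeros_like(views[0], dtype=float)
    for view, (bx, by) in zip(views, baselines):
        # Points on the chosen plane line up after this shift and reinforce
        # each other; everything else lands in different places and blurs.
        acc += subpixel_shift(view, (disparity * by, disparity * bx), order=1)
    return acc / len(views)
```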
We're going to write a lot more about so-called "computational photography" because it's a complete and radical paradigm change. It essentially means that you can use multiple, cheap lens devices and, as long as you (the camera, that is) have a detailed knowledge of their optical characteristics, you can combine the resolving power of some or all of them to create pictures that are much, much better than with a single element.
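As a toy illustration of the idea - and nothing like a production pipeline - here's a crude "shift and add" fusion that drops several offset frames onto a finer grid, using the camera's prior knowledge of how its modules sit relative to each other:

```python
import numpy as np

def fuse_shift_and_add(frames, offsets, scale=2):
    """Toy 'shift and add' fusion of several offset frames onto a finer grid.

    frames  : list of 2D arrays from different (cheap) camera modules
    offsets : known sub-pixel (dy, dx) offset of each frame relative to the first,
              i.e. calibration knowledge the camera holds about itself
    scale   : how much finer the output grid is than the inputs
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    hits = np.zeros_like(acc)
    ys, xs = np.mgrid[0:h, 0:w]

    for frame, (dy, dx) in zip(frames, offsets):
        # Drop every input pixel onto the nearest cell of the fine grid.
        ty = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
        tx = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(acc, (ty, tx), frame)
        np.add.at(hits, (ty, tx), 1.0)

    # Cells seen by more frames accumulate more information.
    return acc / np.maximum(hits, 1.0)
```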
In fact, as long as you have a way to measure the characteristics of an optical device, you could use virtually anything - theoretically even the bottom of a beer bottle - to add information to the scene. Again theoretically, you would only have to point the optical device at a known test pattern, under controlled lighting, and you could extract useful information from it. You wouldn't, perhaps, be able to turn it into a Zeiss prime, but that isn't the point. For this to work, each component part of the 'compound' lens merely has to add some information.
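Here's a hedged sketch of that principle: photograph a known test pattern, compare it with what the "lens" actually recorded to estimate its transfer function, then use that estimate (Wiener deconvolution here) to pull detail back out of subsequent images. This is textbook frequency-domain restoration, not anyone's actual product.

```python
import numpy as np

def estimate_otf(known_pattern, photographed, eps=1e-3):
    """Estimate the transfer function of an arbitrary 'lens' by comparing what it
    recorded with the test pattern we know it was pointed at (same-sized arrays)."""
    P = np.fft.fft2(known_pattern)
    B = np.fft.fft2(photographed)
    return B * np.conj(P) / (np.abs(P) ** 2 + eps)

def wiener_restore(blurred, otf, noise_to_signal=1e-2):
    """Wiener deconvolution: recover an estimate of the scene from a blurred image,
    given the transfer function measured above."""
    B = np.fft.fft2(blurred)
    restored = np.conj(otf) * B / (np.abs(otf) ** 2 + noise_to_signal)
    return np.real(np.fft.ifft2(restored))
```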
All of this means that we will have immense flexibility in what we can use to make pictures. It takes a lot of processing power and, at this stage in the history of computational photography, a lot of prior research too.
We are entering an era in which arrays of relatively cheap optical devices can be used to create extremely fine pictures. And this is a great opportunity for third party software developers.
That's because a basic camera using a multi-lens array will be set up to give a clear, "normal" picture. It will be tuned to deliver a sharp, high resolution image. But there's no reason I can think of why third party developers shouldn't build their own software to modify the output of the optical array.
It's important to realise that this is not the same as taking an image from a single lens and changing it as you would in Photoshop. With computational photography and multi-lens arrays, you can change the apparent optical behaviour of the lens itself. You might, for example, want to add some vintage-style distortion, or just make the image softer - or sharper, for that matter.
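For example, a plugin that imposes a "vintage" character on the computed image might look conceptually like this - a touch of softening plus corner darkening, with entirely made-up parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vintage_look(img, softness=1.2, vignette_strength=0.35):
    """Give a single-channel image a simple 'vintage lens' character:
    gentle overall softening plus darkened corners. Parameters are invented."""
    h, w = img.shape
    soft = gaussian_filter(img, sigma=softness)             # overall softening
    y, x = np.mgrid[0:h, 0:w]
    r = np.hypot((y - h / 2) / h, (x - w / 2) / w)          # distance from centre
    vignette = 1.0 - vignette_strength * (r / r.max()) ** 2
    return soft * vignette
```

In a genuinely computational system, of course, this kind of character could be baked into how the views are combined in the first place, rather than painted on afterwards.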
These would, literally, be software plugins that change the way your lenses behave. And I think it's a very exciting prospect.
Whether or not this is as exciting for lens manufacturers is another matter. After all, if you can use some old bottle glass and some software to make high resolution pictures, then there is a sense in which people might think you don't need 'proper' lenses any more.
They may indeed think that. And it may be true at the lower end of the market. It might also mean that, in a few years, there will be a huge variety of cameras in specific niches, able to do things that cameras can't do today, using the power of computational photography.
I think what's more likely is that single and multiple lens photography will become separate schools within photography overall, just as we have black and white and colour. There will always be a role for single lens photography in the same way that virtual reality will never completely replace the idea that films are shown on a (two dimensional) screen.
In fact, computational photography opens up new ways for lens manufacturers to use their knowledge and expertise.
There are certain things about light that typically only lens manufacturers know. There is now an opportunity for them to package this knowledge up in software and apply optical 'looks' to multi-lens arrays. This would be a better way to apply looks than doing it in post production, because it happens at the earliest possible stage in the workflow. In fact, with sufficient precision in the calculations (remember that this is taking place essentially on raw images), the results could be as good as - or even better than - if a conventional lens had been used.
Who knows how all this will turn out or how long it will take to become commonplace. But with the Light L16 camera, the genie is out of the bottle.