Predicting the future is notoriously hard; some would say impossible. But we can look at the technological trends that will be at the forefront of imaging technology. RedShark Editor in Chief David Shapton gives his view on where we are headed.
I always feel slightly guilty when I write pieces like this, because I don't like giving the impression that I have all the secrets, or that I have some magical way of predicting what's going to happen in the next year.
But what I do have is the privilege of being the Editor of RedShark, which means that I speak to a lot of people, and get shown a huge number of products.
On top of this, I do my own research, from the point of view of someone who isn't tied to any particular manufacturer or platform. My opinions are my own.
I can see a few trends emerging which I think are of seismic importance, and a common characteristic of all of them is that you won't necessarily know they're there.
Sorry if that all sounds obscure. I'll get down to some facts.
First, the space between the sensor and whatever comes out of the camera (either live or as a file) is going to become more powerful and more important.
I'm going to write more about this separately, but in a nutshell, this space is where the camera's image processing goes. And you can do almost anything with enough processing and the right algorithms.
What you can certainly do is make the best use of the information that's gathered by the lens, the sensor and the Bayer filter if there is one (the Bayer filter is what allows us to extract colour pictures from what is essentially a black and white sensor).
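To make the Bayer filter idea concrete, here is a deliberately simple sketch of demosaicing - the step that rebuilds a full-colour image from a sensor where each photosite records only one of red, green or blue. This bilinear averaging is a toy illustration (the function names and the RGGB layout are my assumptions); real in-camera processing is far more sophisticated:

```python
import numpy as np

def box_sum(a):
    """Sum of each pixel's 3x3 neighbourhood (zero-padded at the edges)."""
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def demosaic_bilinear(raw):
    """Turn an RGGB Bayer mosaic into an RGB image by averaging, for each
    colour, the nearby photosites that actually sampled that colour.
    `raw` is a 2-D array of sensor values; returns an H x W x 3 image."""
    h, w = raw.shape
    # Which colour each photosite recorded, for an RGGB layout
    r_mask = np.zeros((h, w), dtype=bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)
    rgb = np.zeros((h, w, 3))
    for c, mask in enumerate((r_mask, g_mask, b_mask)):
        samples = np.where(mask, raw, 0.0)
        counts = box_sum(mask.astype(float))
        # Average only the photosites that measured this colour
        rgb[..., c] = box_sum(samples) / np.maximum(counts, 1e-9)
    return rgb
```

The point is that two thirds of the colour information in every pixel is interpolated, not measured - which is exactly why better algorithms in that space between sensor and output can make such a visible difference.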
In a way, improving the processing that goes on here is almost inevitable, given that: 1) processing power - especially in the mobile space - continues to improve; 2) software techniques improve too; and 3) cameras are in many ways "good enough" now. I suspect this will give R&D space to look into new ways to process images. We may find that cameras become even more upgradeable through having bigger processors, with more capacity for upgrades during a product's lifetime.
Of course this trend will be amplified by the fact that if you take the raw footage from a camera, you'll be able to apply even more finessed processing to it externally. You only have to look at the way that Blackmagic's Resolve is bundled with their cameras to see how this might work.
The two remaining trends I'm going to mention are really subsets of the first one, but no less important for that.
Artificial Intelligence is a very wide field and a gigantic subject. It's enough here merely to say that we are currently in the foothills of an AI revolution that has no visible or predictable end in sight. Already we're seeing smartphones with AI chips built-in. What they can do already is amazing and would have seemed like science fiction five years ago. Just taking an iPhone as an example: it's able to take pictures and then improve them not by manipulating the data that's been captured, but by imagining what a great picture would look like based on the information it's been given. (With apologies for a gross oversimplification here.)
That's a huge step. Essentially, if you want to turn a mundane and averagely lit portrait into one that looks like it was taken in a studio, just tell the camera to make it so. The software will look at the subject of the portrait and "imagine" what it would look like if it were superbly lit in a studio. This is not supposed to be possible. You're not supposed to be able to add information after a picture is taken. Until now, all image processing has been "subtractive".
Computational imaging isn't new, but it's becoming more powerful and cleverer. Essentially, if a camera knows the characteristics of a transparent object in front of its sensor, it will be able to calculate an image. It might not be very good, or it might be surprisingly good. Either way, expect to see cameras and imaging devices with all sorts of unconventional lenses - all the way from smartphones to lightfield cameras with vast resolutions.
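As a toy illustration of that principle - compensating for a known optic in software - here is a minimal Wiener-style deconvolution sketch, assuming the lens's blur (its point-spread function) has been measured. All the names and parameters here are illustrative; real computational-imaging pipelines are vastly more elaborate:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, snr=100.0):
    """Recover a sharper image from a blurred capture when the optic's
    point-spread function is known. `psf` must be the same shape as the
    image, centred in the array (pad with zeros if necessary)."""
    # Move the PSF's centre to the origin so the FFT treats it as a
    # convolution kernel, then go to the frequency domain
    H = np.fft.fft2(np.fft.ifftshift(psf))
    B = np.fft.fft2(blurred)
    # Wiener filter: invert the blur, but damp the frequencies the
    # optic crushed so noise there doesn't explode
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(B * W))
```

Blur a test image with the same PSF and this filter recovers something close to the original; the `snr` term is what keeps frequencies the lens destroyed from amplifying noise. The surprise of the last few years is how good such reconstructions can get even from very unconventional optics.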
It's all going to be a bit confusing because this amazing technology won't be visible on the surface. Some manufacturers are remarkably coy about their use of these new techniques. Sometimes you'll only notice them when you buy a product and it's inexplicably better than you were expecting.
Ironically, you may even find that things seem to be slowing down, as conventional performance reaches the point of "easily good enough", while more resources are poured into AI and Computational Imaging. But this is a good thing.
Prepare to be amazed, yet again!