RedShark Replay: First published in 2013, this piece saw RedShark's Editor in Chief David Shapton fire up his crystal ball, and it rings as true today as it did back then. It dawned on me just the other day: we are living in the future. There is so much innovation and new technology around us that it no longer feels like we're merely living in the present. It's as if we're in a science fiction movie where we've been transported forward by ten or twenty years.
The industry and the activity that RedShark is all about - video, film, and the technology and craft of moving images - are so dependent on technology that the changes (and the rate of change) don't just affect us glancingly: they are right at the centre of what we do.
I'm not going to try to make predictions in this article - in fact one of the big conclusions is that it's rapidly getting harder to predict anything. What I am going to do is look at the evidence that the future is here and that the more distant future is approaching over the horizon like a hypersonic aircraft.
The Past Year
Let's just look at what's happened in the past year. The first year of RedShark, in fact.
Just under a year ago, Sony announced the F55 and F5 cameras. These 4K devices were somewhat cheaper than people had expected and represented a major commitment from Sony to 4K.
We've seen not just one but three new cameras from Blackmagic Design. That's right: a company more famous for building I/O boards and converters now makes cinema-quality cameras. Where did that come from? Actually, we do know. It was part of a careful strategy that involved the purchase of DaVinci on the software side and (we suspect) the "trojan horse" development of the HyperDeck Shuttle field recorder, which, more than a few people said at the time it was announced, was the back end of a camera, missing only a lens mount and a sensor (and, it has to be said, an awful lot of precise and meticulous development and integration!).
And we've seen raw HD video from Canon EOS cameras. Raw is probably the "word of the year". Just a few years ago, very few people even knew what raw video was. Even if you did, the idea that you could have access to it from your DSLR with just a software download would have seemed incredible - at least to anyone without prior knowledge of what Magic Lantern had been doing with Canon firmware, or of the similar hacks that upped the recording data rates of cameras like the GH2 (nothing to do with raw, but everything to do with how clever these developers are).
We've seen the release of films shot on an 8K sensor: the one in Sony's F65. Oblivion looked stupendous; a good job, the critics said, given the underwhelming nature of the plot.
Adobe's Creative Cloud has ushered in a new way to deliver software, and a somewhat different way to pay for it. Sony has launched Ci, its cloud media production system. It won't be long before cameras have no storage at all, because all their media will be uploaded at the moment it's acquired. (Really? This one might take some time, but only this week we read about the prospect of 100Gb/s Wi-Fi.)
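To get a feel for whether that's plausible, here's a back-of-the-envelope sketch in Python. The resolutions, bit depths and frame rates below are illustrative assumptions rather than the specs of any particular camera; the point is simply how uncompressed video data rates compare with a hypothetical 100Gb/s wireless link.

```python
# Rough sketch: could a wireless link keep up with a camera's output?
# All figures are illustrative assumptions, not specs of any real camera or radio.

def video_data_rate_gbps(width: int, height: int, bits_per_pixel: int, fps: int) -> float:
    """Uncompressed data rate in gigabits per second."""
    return width * height * bits_per_pixel * fps / 1e9

raw_4k = video_data_rate_gbps(4096, 2160, bits_per_pixel=16, fps=24)   # ~3.4 Gb/s
hd_422 = video_data_rate_gbps(1920, 1080, bits_per_pixel=20, fps=25)   # ~1.0 Gb/s

print(f"4K raw-ish stream: {raw_4k:.1f} Gb/s")
print(f"HD 4:2:2 stream:   {hd_422:.1f} Gb/s")
print("Both fit comfortably inside a hypothetical 100 Gb/s wireless link.")
```

Even allowing for real-world overheads, the raw numbers suggest the bottleneck is coverage and reliability, not headline bandwidth.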
The iPhone 5s was announced. On the outside, it's pretty much the same, give or take an option to have it in gold. But inside, it's gone 64-bit. Apple's new A7 chip is going to be underused in this phone. It's now at the core of the new iPad Air, and perhaps soon a new generation of MacBook Airs (or iPads with keyboards? Whatever shape or form they take, it's looking likely that we'll be seeing hyper-portable laptops using these feisty chips soon). It's worth noting, just to orient ourselves, that the new iPhone is not five times more powerful than the first one, nor ten times, but fifty-six times faster than the original iPhone. That's in six years. Which brings us on to our next point: the rate of change.
We've written before about our belief that the power and capability of technology is growing exponentially. What that means is that we're not seeing a steady, even slope as things progress, but an upward curve. Moore's law is an example of this, but it's a wider phenomenon. Just to remind ourselves, Moore's law (there are several versions of it, all looking at the same thing from a slightly different angle) says that roughly every eighteen months to two years, the number of components you can fit on a chip doubles. Halve the linear dimensions of each component and you fit four times as many in the same space. So, without anything else changing, this is an automatic increase.
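Just to make the exponential arithmetic concrete, here's a minimal sketch in Python. It uses the figures already quoted in this article (the roughly 56x iPhone speed-up over six years) plus the classic 18-month doubling rule of thumb; neither is an independent benchmark.

```python
# Exponential-growth arithmetic, using the figures quoted in this article.
import math

def doubling_time(growth_factor: float, years: float) -> float:
    """Years per doubling implied by a total growth factor over a period."""
    return years / math.log2(growth_factor)

def projected_growth(doubling_years: float, years: float) -> float:
    """Growth factor after 'years' if performance doubles every 'doubling_years'."""
    return 2 ** (years / doubling_years)

# Original iPhone to iPhone 5s: roughly 56x in 6 years
print(f"Implied doubling time: {doubling_time(56, 6):.2f} years")          # about 1.03 years

# Classic 18-month doublings, projected over a decade
print(f"Ten years of 18-month doublings: {projected_growth(1.5, 10):.0f}x")  # about 100x
```

In other words, the quoted iPhone figure implies performance doubling roughly every year, slightly faster than the textbook Moore's law curve.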
Corresponding Advances
Then, when you look at the stuff you need around the chips - communication, software, storage, I/O - you see corresponding advances.
It's not a smooth process; in fact it's pretty jerky, but when you zoom out and look at everything that's happening, you do get a very consistent curve that's traceable back for a few hundred years: it's just that, back then, the slope was too shallow to be noticeable. Now, it's almost vertical. We're reaching the point where we can no longer predict what's going to happen more than a year or so into the future.
Other Developments
Let's step away from this for a moment and look at some more things that have happened over the last year.
Only this month, we've seen the first curved phones. We don't really know what they're for and, given that there's no obvious demand for them (just as there wasn't an obvious demand for an iPad!), it's surprising at this stage that so much effort has been put into making them. This has happened partly because the smartphone market is so competitive that anything, anything to distinguish one device from another is probably worth the hundreds of millions it costs to develop. With curved phones, the effort must have been eye-watering. Making OLED screens bend is the easy part (they're bendy anyway), but what about making a curved battery? And the circuit boards? In fact, the first batteries aren't really curved - they're just long and narrow, so that they'll still fit in a moderately curved body.
We've seen tablets shoot straight through Full HD (the Google Nexus 7 is only £200 and has a Full HD screen - on a seven-inch tablet!), and one of Amazon's Fire tablets has a higher resolution even than Apple's Retina iPad Air. And it's not just a matter of fitting more pixels into a screen: you have to be able to feed each and every one of them with data, so if your screen resolution quadruples, so too must your graphics chip's power.
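To put that "feed every pixel" point in numbers, here's a minimal Python sketch. The resolutions and the 60Hz refresh rate are illustrative assumptions; the point is how quickly the pixel throughput a graphics chip must sustain climbs with resolution.

```python
# How many pixels per second a display pipeline has to drive at a given
# resolution and refresh rate (illustrative figures only).
RESOLUTIONS = {
    "720p":    (1280, 720),
    "Full HD": (1920, 1080),
    "4K UHD":  (3840, 2160),
}

REFRESH_HZ = 60  # assumed refresh rate

for name, (w, h) in RESOLUTIONS.items():
    pixels = w * h
    per_second = pixels * REFRESH_HZ
    print(f"{name}: {pixels:,} pixels, {per_second / 1e9:.2f} billion pixels/s at {REFRESH_HZ}Hz")
```

Going from Full HD to 4K UHD quadruples the pixel count, so the chip behind the screen has to move four times as much data every frame.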
Elsewhere, we've seen video drones that can automatically organise themselves to map large, complex 3D objects like mountains (and probably the inside of people's houses too, through the windows) and the art of real-time video stitching means that it won't be long before we have 360 degree coverage of sports matches that viewers can pan around.
And in high street shops, you can buy 4K televisions alongside the more usual HD ones.
Another sign of rapid change: viewing habits have changed almost overnight. Close to a majority of us now use streaming and catch-up services for TV. Netflix, Hulu and Apple TV, once a minority choice, have been given a massive boost not just by the availability of whole series (like Netflix's House of Cards) for "binge viewing", but also by the overall increase in broadband speeds. We are now at the point where house resale values are starting to suffer if there is no prospect of high-speed broadband in the location. Households that do have it quickly get used to every member of the family streaming their TV show or film of choice.
Future Trends
Social networking means that an idea (right or wrong) can spread across the globe in seconds. PR companies have minutes to respond where they used to have hours or days.
Of course, it's a logical contradiction to say that we're living in the future. The only real time is now; the present. The past is history and the future hasn't happened yet.
But if you define the future in terms of things that you didn't expect to see around yet, then, yes, we're definitely there already.
You can look at broader trends and make predictions based on them, but predicting the obvious isn't massively useful. For example, resolutions will get higher. Bandwidth, processing power and storage will get bigger. Screens will be huge, but unobtrusive because they're so thin.
Computer power will be almost free.
Electric cars still won't have enough range.
This is the easy stuff. Where it gets more difficult is with specifics. I can see two main reasons for this.
Firstly, the stuff we have today is already so good that it's quite often hard to see how (and sometimes why) it should be improved. Do we really need 8K smartphones? Is a 4K iPad really necessary? Perhaps what we'll see is simply that our stuff will get cheaper?
The second reason is that a very prominent side effect of exponential progress is convergence (or should that be that a very prominent side effect of convergence is exponential progress?). Convergence is where parallel fields of development complement each other. For example, the internet and improved bandwidth have converged on telecoms (think of Skype) and on TV (think of Hulu, Netflix and the BBC iPlayer). And it's all converging on TV production, with Sony and other big players looking at replacing SDI in the studio (and for outside broadcasts) with conventional network cables and routers, using IP as the video transport.
Either way, what this means in practice is that it is very difficult to predict the next big thing.
Predictions in the Past
If you rewind ten years, and look at what technology gurus were predicting then, it almost certainly wasn't Facebook. And it's very doubtful that Microsoft, developers of Microsoft Office, saw Google Docs coming. And while they might have foreseen Apple giving away their operating systems and iWork for nothing, I'm guessing they're wishing that hadn't happened.
Mobile devices - tablets and phones - are now so powerful that they're able to do almost anything that might previously have required dedicated hardware. In fact, you have to wonder why camera manufacturers still design their own user interfaces, when you could control a camera from an app, and probably do it better as well.
Modularity and Interchangeable Sensors
Perhaps the most remarkable thing we've seen this year was the modular camera concept from Apertus. It's exciting because it "gets it". Modularising all the functions in a camera means that each one can be developed at its own optimal pace.
Perhaps this is the way forward. Instead of the entire camera being made by one manufacturer, we could have greater interchangeability. It's not for everyone - there's a reason why (for example) Sony's cameras work so well, and it's "vertical integration": making everything from the sensor to the storage and everything in between.
But we've seen what can be done with lenses and lens adaptors. Why not have interchangeable sensors? And you can go as granular as you like with this. Just this week, Sony announced interchangeable optical low-pass filters for the sensor in the F55. And earlier in the year, we saw its "whole camera in a lens" system, ostensibly for use with a smartphone - itself a great illustration of modularity - but perhaps pointing the ultimate way ahead for modular cameras, where the lens and the sensor are one sealed and optimised unit that can give far better performance than a (random) lens being used with a (random) sensor. When you apply digital processing to lens correction, you can achieve results that approach perfection.
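As a rough illustration of what "digital processing applied to lens correction" means in practice, here's a minimal Python sketch of a simple radial (barrel/pincushion) distortion model. The coefficients are made up for illustration; a real matched lens-and-sensor unit would ship with measured values, and a real pipeline would invert the distortion model more carefully than this one-step approximation.

```python
# Minimal sketch of simple radial lens-distortion correction.
# k1 and k2 are illustrative coefficients, not values for any real lens.

def undistort_point(x: float, y: float, k1: float, k2: float) -> tuple:
    """Approximately map a distorted, normalised image coordinate back towards its ideal position."""
    r2 = x * x + y * y                      # squared radius from the optical centre
    scale = 1.0 + k1 * r2 + k2 * r2 * r2    # radial distortion factor at this radius
    return x / scale, y / scale             # one-step approximate inverse

# Example: a point near the corner of the frame with mild barrel distortion
print(undistort_point(0.8, 0.6, k1=0.05, k2=0.01))
```

Because a sealed lens-and-sensor module knows its own optics exactly, corrections like this can be baked in and tuned per unit, which is where the "approaching perfection" claim comes from.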
The near Future
So, what's going to happen in the near future?
It's very hard to say. All you can do is expect progress in unexpected places, and some bigger jumps than before. There will be exceptions to this: Arri's new camera, the Amira, is more consolidation than innovation, and it's none the worse for that. You can't expect a single company to reveal new and groundbreaking products all the time. But what is very likely to happen is that smaller, flexible companies will take advantage of the fact that components are available to anyone who wants to buy them. This is how companies like Atomos, AJA and Convergent Design are able to build digital video recorders that are smaller and more powerful (and run on batteries) than the tape decks of ten years ago. And they're a twentieth of the price.