Initiatives like Adobe's Camera to Cloud focus attention on where we're going and how far we can go. The answers to those questions are not obvious, but they're compelling.
The media industry tends to be on the cutting edge of technology. Shooting extremely high-resolution video at high frame rates, and then delivering it without losing too much quality, is about as demanding as it gets. Remarkably, technological progress has kept up with demand (and often leads it), but it is inevitably patchy and experimental at the coal face.
Even with Camera to Cloud, it's not the original camera files being moved; it's tiny proxies. The cleverness in the technique is holding everything together, in sync, so that the process is reliable and transparent.
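The bookkeeping that makes this work is worth dwelling on: the lightweight proxy and the full-resolution original carry matching identifying metadata (reel, clip name, timecode), so an edit made against the proxies can later be conformed to the originals. Here's a minimal sketch of that relinking step, with hypothetical fields and file naming rather than Adobe's actual scheme:

```python
# A minimal sketch of relinking proxies to camera originals via shared
# metadata. Hypothetical fields and naming; not Adobe's actual scheme.

from dataclasses import dataclass

@dataclass(frozen=True)
class Clip:
    reel: str        # camera reel / card identifier
    clip_id: str     # unique clip name, e.g. "A001C003"
    start_tc: str    # start timecode, e.g. "01:02:03:04"
    path: str        # where the file lives

def relink(proxies: list[Clip], originals: list[Clip]) -> dict[str, str]:
    """Map each proxy path to its full-resolution original by matching
    reel, clip ID and start timecode."""
    index = {(c.reel, c.clip_id, c.start_tc): c.path for c in originals}
    return {p.path: index[(p.reel, p.clip_id, p.start_tc)]
            for p in proxies
            if (p.reel, p.clip_id, p.start_tc) in index}

proxies = [Clip("A001", "A001C003", "01:02:03:04",
                "cloud://proxies/A001C003.mp4")]
originals = [Clip("A001", "A001C003", "01:02:03:04",
                  "/media/card1/A001C003.braw")]
print(relink(proxies, originals))
```

The point is that the heavy files never need to travel during the edit; only the metadata has to stay consistent end to end.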
Over time, we will likely be able to buy cameras with no internal (or locally attached external) storage. Instead, we will be able to "assume" blanket wireless coverage with bandwidth that exceeds even the most demanding digital media delivery.
To understand what lies beyond these advanced but current methods, it's necessary to envisage new architectures and new methodologies. I've long believed pixel-based video is only a transitory step towards something better, and the cloud is just infrastructure. That's not to dismiss its importance - it's been transformative and will continue to be so for a long time - but it doesn't end with the cloud. There's a lot to talk about beyond that.
There's no single destination on the horizon. It all depends on which journey you're on. If you're wondering about the future of the internet, then the answer is some sort of metaverse. What about the future of computing? Quantum computing is frustratingly close but even more frustratingly far away for real-world applications. There's a remarkable amount of ingenuity going into semiconductor research that's bending the definition of Moore's law to include dimensions way beyond the original, simple observation: that the density of components on a piece of silicon tends to double roughly every eighteen months to two years.
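That observation is just compound doubling, so its implications are easy to sketch. Here's a minimal illustration (assuming an eighteen-month doubling period, one common reading of the figure):

```python
# A rough sketch of Moore's law as compound doubling.
# Assumes an 18-month doubling period; the quoted figure
# varies between 18 months and two years.

def projected_density(initial_density: float, years: float,
                      doubling_period_years: float = 1.5) -> float:
    """Component density after `years`, doubling every `doubling_period_years`."""
    return initial_density * 2 ** (years / doubling_period_years)

# Example: roughly a tenfold increase in five years at this rate.
print(projected_density(1.0, 5.0))  # ~10.1x the starting density
```

At that rate, density grows roughly tenfold every five years, which is why even modest changes to the doubling period matter so much.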
But let's focus for now on how we move information around. Single, isolated computers are a rarity, so we must accept that whatever makes the connections between computers is at least as important as the processors themselves.
Just a quick note: by "connections", I mean not just external connections between computers, but connections inside them too: between multiple processor cores; between multiple processors of the same type; between general-purpose processors and different types of specialist processors; and between processors and other devices, whether on the same circuit board or external to it.
Right now, on a large scale, we have the internet, which is literally a network of networks. It's arguably the world's largest machine, and it works extremely well. Predictions that the internet will collapse under the weight of daily data have not turned out to be accurate, but video has always been the payload that pushes available bandwidth to the limit.
Meanwhile, on your desktop, things are much faster than they used to be. Thunderbolt and recent variants of USB have brought high-speed plug-and-play connectivity to the mass market. But while you can daisy-chain Thunderbolt devices and hang whole trees of devices off USB hubs, these protocols remain resolutely point-to-point.
And that's absolutely fine because it tends to mirror our actual usage.
But what if you could connect anything to anything via the shortest possible route? At the very least, it would be fast. But it would also be incredibly complicated and probably expensive, too. Imagine having a road system where everyone had their own private motorway from wherever they happened to be to wherever they wanted to go.
That sounds impractical, and it is. But with electronics and networks, it's not so far-fetched.
There are already chips that effectively behave like a local fabric. So instead of a traditional bus that you'd find in a PC - the one you plug expansion cards into - you'd have an architecture where everything could connect directly with everything else.
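As a toy illustration of why that matters (with entirely hypothetical devices and bandwidth figures), compare what two concurrent transfers get on a shared bus versus on a full crossbar fabric:

```python
# A toy comparison of a shared bus with a full point-to-point fabric.
# All numbers are hypothetical; real interconnects are far more complex.

from itertools import combinations

nodes = ["cpu", "gpu", "nic", "ssd"]
LINK_BANDWIDTH = 10.0  # GB/s per link (assumed figure)

# Two concurrent transfers between disjoint pairs of devices.
transfers = [("cpu", "gpu"), ("ssd", "nic")]

# Shared bus: every transfer competes for the same medium, so each
# concurrent transfer gets only a fraction of the total bandwidth.
bus_rate = LINK_BANDWIDTH / len(transfers)

# Full crossbar fabric: every pair of nodes gets a dedicated link, so
# transfers between disjoint pairs run at full speed in parallel.
fabric_links = list(combinations(nodes, 2))
fabric_rate = LINK_BANDWIDTH

print(f"bus:    {bus_rate:.1f} GB/s per transfer")
print(f"fabric: {fabric_rate:.1f} GB/s per transfer, "
      f"but it needs {len(fabric_links)} links for {len(nodes)} devices")
```

The last line also shows the catch: a literal full mesh needs n(n-1)/2 links, which is the "private motorway for everyone" problem. That's why the practical route is to virtualise the fabric instead.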
That could lead to a massive speed-up, but the effect would be very local. Where the fabric concept gets really interesting is when you virtualise it. And that's something you definitely can do with networks.
In the future, we will be able to "assume" the bandwidth we need to get anything to and from anywhere. That's not to say that speeds will be infinite, but simply that they will be fast enough to do most things, and that will probably include being able to move high-resolution video files. At some point, the combination of fibre, 5G and 6G and satellite internet will appear to us as a continuous, contiguous fabric. We won't have to think about how our data gets to its destination: it will just get there.
You might be thinking that this sounds a bit like the cloud, and it is in the sense that when you subscribe to a cloud service, you don't have to think about how it works - even though you know that the cloud is actually a bunch of servers in a data centre somewhere.
Eventually, the communications services offered by the internet and cloud services will become so fine-grained that they will all merge. To the end user, it will be a massive, worldwide fabric of interconnectivity. And it will also become the data communications layer that supports the metaverse.