David Shapton on why there are no small steps anymore when it comes to progress.
When RED Digital Cinema recently revealed that it had been bought by Nikon, it’s probably fair to say the news was unexpected. Who’d have thought a respected, traditional, measured company like Nikon would buy the maverick Californian upstart that broke through the barrier of decades-old broadcast standards with a viable 4K cinema camera?
Only those within the two companies currently know the reasoning and forces behind the acquisition; anything anyone else has written is speculation. If nothing else, the deal illustrates the very high level of uncertainty in the technology and media industries. There will almost certainly be more surprises like this.
But this is not an article about the state of the camera industry. Instead, it is a warning to expect the unexpected. Being taken by surprise is likely to become the norm, and the reasons are increasingly clear.
It won’t surprise anyone to learn that AI is behind all this. It is by far the most disruptive force in our lives, possibly ever. Almost daily, there are new breakthroughs and surprising revelations about novel and previously undreamed-of capabilities. It’s hard to keep up, not just in reading all the AI news in a single day but in digesting and assimilating it.
Popular internet sites like Engadget and Gizmodo have been around for a couple of decades, bringing news of new consumer devices. Twenty years ago, these were typically new types of MP3 players or “Netbooks”—those underpowered miniature laptops that struggled to boot up but could, over an almost geological timescale, connect to the internet and let you read your Hotmail.
But now, quite literally every day, we see developments in generative AI with existential implications. Who in the creative industries is not concerned about Sora?
In these febrile times for AI, it is quite normal to feel disengaged, bypassed and increasingly powerless. The recent video of the Figure robot - now infused with OpenAI reasoning and language - is more than impressive, but none of the people I’ve spoken to about it have spontaneously welcomed it. That might be because, if my reaction was typical, it’s too much to take in: the diffident but precise vocal responses, the physical dexterity, and the delayed but very real human-to-machine interaction. It genuinely feels like a person talking to another life form.
It's not going to slow down. The once-in-a-lifetime is about to become the everyday.
AI is not on a linear path. The way it interacts with our known world - and with itself - is, and will continue to be, unquantifiable. Not only are there too many variables, but there is something fundamentally different about AI progress compared with any other kind of progress we have experienced. There are at least two reasons. First, AI can improve itself. Second, and somewhat related to the first, it has emergent properties. As AI models increase in number and complexity, we start to see behaviours we weren’t expecting. Out of the blue, an AI model can do arithmetic, write code or translate between languages it was never taught. Sora has shown us that an understanding of how physical objects interact with each other is an emergent property. It seems likely that consciousness is an emergent property too. As such, it will probably take us by surprise, and although I feel it is currently unlikely, it is possible that it has already happened. How would we necessarily know? You can’t measure something if you don’t know what it is - and we don’t know what it is.
If machine consciousness did emerge, we would talk about it with language. Language is amazing: a formalised but flexible way to communicate. It is the best way to exchange ideas and concepts between strangers, friends and now, seemingly, robots. Now that machines have language, you can start to imagine spontaneous conversations not only between software AI models but between those models embodied inside seeing, hearing, walking robots equipped with touch and the rudiments of perception. These will have experiences. They will be able to talk not just about abstract concepts but about the physical world - the one we live in. Where will those conversations go? I don’t know how to begin to answer that. Did you notice how easily we started talking about “having experiences”? Having experiences (as opposed to merely doing things) is a necessary, and perhaps sufficient, hallmark of consciousness.
All of which barely scratches the surface of the new world we are living in. Are we on the verge of achieving utopia, or on the edge of a dystopian abyss? There’s no way to know. In almost any situation where there is nothing but uncertainty, “take small steps” is a good approach. Don’t bite off more than you can chew. Don’t move forward until you know you have a strong foothold. The problem is that, with AI on its current trajectory, there are no small steps.