Whatever your political standpoint, we can perhaps all agree on one thing regarding technology: politicians aren't taking AI seriously.
Anyone who's worked in the media business over the last three decades knows that constant change is normal. As the BBC celebrates surviving 100 years of broadcasting, the person in the street might think TV and radio haven't changed much, give or take colour and big HD screens. The studio is still all about lights, cameras and microphones. Superficially, it's the same enduring recipe.
But behind the scenes, everything - literally everything - has changed. Editing suites that used to cost half a million pounds are now the price of a laptop and some cheap software. Documentary filmmakers now enjoy virtually unlimited shooting time, freed from the crippling cost of shooting on film. And it's a good job, because instead of one or two channels, there are now thousands: all hungry for content and competing with the streaming services' gravitational pull.
There's no such thing as a steady state in this corner of the technology universe. But even though we're starting to get used to the massive and continuous rate of change, something is about to make that nascent intuition wholly inadequate. It is, of course, AI.
Nobody in their right mind would take Twitter as a single source of truth. But as an indicator of early trends, it's been unbeatable up till now. That's why you have to take certain tweets seriously.
In the media industry, we are right to focus on the aspects of AI that will affect us the most. But we shouldn't do this in isolation because AI is going to affect everything and has the potential to be the biggest threat, as well as - let's hope - the most incredible opportunity for us and the planet.
(To be clear: climate change is the biggest and most immediate threat to the mid- to long-term health of the planet; the threat from AI comes from its rapid and unpredictable growth.)
But where is AI in the political conversation? In the UK last week we had a new fiscal budget announcement, which will have dramatic consequences. Perhaps not quite as dramatic as the previous disastrous budget that the new one is having to fix, but it will likely plunge us into a new period of financial drag and raw hardship for many individuals and families.
How much time did the UK chancellor spend talking about the economic effects of AI? How much discussion did we hear about the hyper-exponential growth of AI-based technology? For that matter, what did the government say to reassure us about long-term stability and security in the face of a technology that is growing roughly a millionfold every decade - which works out at a doubling every six months or so? The answer is none. Not a single mention. Even the myriad financial and economic predictive models don't seem to consider AI.
I'm only talking about the UK because that's where I live; as far as I can tell, it's the same worldwide. So why are politicians so often the category of citizens least equipped to deal with AI?
Perhaps because it's not a vote-winner. But it could certainly become a vote influencer: there's never been more potential for a Luddite-style backlash against the prospect of AI taking all our jobs. If you're a politician, how do you deal with that gut feeling that AI is bad? Maybe with a knee-jerk reaction. Maybe by banning it.
But that would be a monumental mistake. Not least because AI is already baked into our futures; as we all know, you can't un-bake a cake.
In any case, anyone - let's say a government - that doesn't understand AI doesn't know how to ban it, either. AI is largely hidden from us right now, and it's very easy to argue that what might look like AI actually isn't, you know, intelligent. So if you wanted to pretend that AI isn't going to be that big of a deal, then go ahead: politicians will probably believe you.
But that may change very soon. Big things are coming. One thing in particular. It's called GPT-4, and it's the successor to the current state-of-the-art in Large Language Models, called, unsurprisingly, GPT-3, which was itself a massive leap ahead of GPT-2. You get the idea.
These Large Language Models (LLMs) sound a bit like autocorrect. What they do, essentially, is predict the next word. So, for example, if I were to write, "I think I need a cup of ___", it's unlikely that the missing word would be "toenails". So why is this technique so powerful? Because it works far better than anyone expected. Nobody predicted it would be able to do things like arithmetic. It's even beginning to write code. Think about that for a minute!
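You can see this next-word prediction at work for yourself. Here's a minimal sketch in Python using the freely downloadable GPT-2 model - GPT-3's smaller predecessor - via Hugging Face's transformers library (my choice of stand-in, since GPT-3 itself is only reachable through OpenAI's API):

```python
# A minimal sketch of next-word prediction with GPT-2, the freely
# downloadable predecessor of GPT-3 (standing in for it here).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "I think I need a cup of"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence, vocabulary)

# The model scores every token in its vocabulary as a possible
# continuation; print the five it considers most likely.
top = torch.topk(logits[0, -1], k=5)
for token_id in top.indices:
    print(repr(tokenizer.decode(token_id.item())))
```

Run it and the top suggestions should be words like " coffee" and " tea" - with "toenails" nowhere in sight. Everything the most advanced LLMs do is built on this one deceptively simple trick, scaled up enormously.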
GPT-3 is already so good that it can write moderately convincing essays and articles, and GPT-4 is likely to be able to pass the Turing Test.
That, again, might sound like a narrow scope of abilities. But remember that words are about concepts, and concepts are about thinking. The Austrian philosopher Wittgenstein wrote, "The limits of my language mean the limits of my world", and he meant it literally: if you can't find the words for something, then, for you, it doesn't exist.
It is this relationship between words, concepts and thinking that is going to make GPT-4 so powerful. And that's just one field in AI. We also have Generative AI, which can produce astonishingly detailed images from a sparse text input. When you combine these models, you end up with something that comes very close to looking genuinely intelligent.
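To give a flavour of how sparse that text input can be, here's a minimal sketch using the open-source Stable Diffusion model via Hugging Face's diffusers library (my choice of example - the article doesn't name a specific generative model, and the prompt is purely illustrative):

```python
# A minimal sketch of text-to-image generation with the open-source
# Stable Diffusion model (an illustrative choice; not named in the article).
import torch
from diffusers import StableDiffusionPipeline

# The model weights are several gigabytes and download on first run.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("cuda" if torch.cuda.is_available() else "cpu")

# One short sentence of text is the entire input.
image = pipe("a television studio lit for a live broadcast, photorealistic").images[0]
image.save("studio.png")
```

A dozen words in, a detailed image out: that asymmetry between input and output is a big part of what makes these models feel so uncanny.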
We are starting to see AI experts who have gone on holiday for two weeks and then come back to find radical, unexpected developments in their own field. It's making Moore's law in its heyday look glacial in comparison.
Brace yourselves. If we approach this in the right way, the benefits could be off the scale. If we align the aims and intentions of AI with our own benevolent wishes, then the world could end up immeasurably better. But in the wrong hands (or even left to its own devices), we could be in for a bumpy and unpredictable ride.
Meanwhile, you might want to write to your MP or democratic representative about this...