Is AI running out of steam?

Is AI reaching a plateau? The short answer is “no.” The long answer is also “no,” but even more definitely.

Have you ever noticed those awkward moments during live broadcasts when an interviewee stops to think before they speak? In the white-hot turmoil of live radio or television, there’s no such thing as an acceptable pause. Indeed, on certain commercial radio stations, if the station output falls silent for more than a few seconds, an emergency “tape” - we’ll still call it that - kicks in.

More often than not, the silence is merely a gap in the conversational flow. Sometimes, it’s because there’s a delay on the line. Only rarely is it a genuine technical glitch.

This is a long-winded way of saying that pauses happen and are not always a sign of something badly wrong. Very few phenomena have a regular pulse, especially scientific and technical breakthroughs. It would be surprising if there were a new scientific revelation every five days, nine hours, and fifty-seven minutes. If there were, it would be a completely new phenomenon that would itself need investigation. 

Pause for thought?

I’ve seen quite a few reports suggesting that AI is pausing. They suggest that the breakthroughs are not coming as fast as expected. I’ve seen no evidence for that whatsoever. I still expect AI to grow hyper-exponentially as models get bigger and cleverer and as the state of the art develops novel techniques. I’m not alone in this view: Jensen Huang of the multi-trillion-dollar Nvidia has recently said that AI is now in a positive feedback loop, where AI itself is improving AI technology. The result? Exponentiality squared.
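
To see why a feedback loop changes the shape of the curve, here’s a minimal Python sketch. It’s a toy model, not a forecast: every constant is invented for illustration, and the only point it makes is that when the rate of improvement itself improves, the curve leaves ordinary exponential growth behind.

```python
# Toy model: ordinary exponential growth vs. a positive feedback loop in
# which capability also raises the rate of improvement ("AI improving AI").
# All constants are invented for illustration; nothing here is a forecast.

def simulate(years: int = 10) -> None:
    plain = 1.0     # capability growing at a fixed annual rate
    feedback = 1.0  # capability whose improvement rate itself grows
    rate = 0.5      # initial annual improvement rate for both curves

    for year in range(1, years + 1):
        plain *= 1 + 0.5         # fixed-rate exponential growth
        rate *= 1.2              # feedback: better AI speeds up AI R&D
        feedback *= 1 + rate     # so the exponent itself keeps climbing
        print(f"year {year:2d}: exponential {plain:9.1f}x, "
              f"feedback {feedback:9.1f}x")

if __name__ == "__main__":
    simulate()
```

Run it and the second column pulls away from the first within a few iterations. That divergence is what “exponentiality squared” is gesturing at.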

Optimising existing models and techniques probably has a few orders of magnitude left in it. One example is the recently announced OpenAI o1, a model that uses an existing LLM as a basis but (this sounds far too anthropomorphic!) spends time “thinking” about its prompts before answering. By creating a “chain of thought”, it is capable of something that could be described as reasoning. That’s a kind of optimisation; this being the era of AI, it’s a huge one.
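
OpenAI hasn’t published how o1’s internal reasoning actually works, so the following is only a sketch of the general chain-of-thought technique, written against a hypothetical `call_llm` helper standing in for whichever model API you use. The idea: spend tokens on intermediate steps first, then extract the answer.

```python
# Sketch of chain-of-thought prompting. `call_llm` is a hypothetical
# placeholder for a real model client; o1's internal reasoning process
# is not public, so this only illustrates the general technique.

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call via your provider's SDK."""
    raise NotImplementedError("wire up your own model client here")

def answer_directly(question: str) -> str:
    # Baseline: one pass, no visible reasoning.
    return call_llm(f"Answer concisely: {question}")

def answer_with_chain_of_thought(question: str) -> str:
    # First pass: ask the model to write out its reasoning step by step
    # before committing to anything.
    reasoning = call_llm(
        "Think step by step and write out your full reasoning "
        f"before giving any answer.\nQuestion: {question}"
    )
    # Second pass: distil that chain of thought down to a final answer.
    return call_llm(
        f"Here is some step-by-step reasoning:\n{reasoning}\n"
        "Based on it, state only the final answer."
    )
```

The extra “thinking” pass costs time and tokens, which is exactly the trade-off o1 makes visible: slower answers in exchange for something closer to reasoning.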

We collectively have an extraordinary talent for becoming normalised to even the most extreme phenomena, and there’s no doubt that the progress we’ve seen in the last couple of years in generative AI qualifies as extreme. We’ve become accustomed to waking up to headlines on our phones that say, “You won’t believe what just happened in AI.” So, if there’s a gap of, say, a couple of days, it’s tempting to say, “It’s all ground to a halt. That’s it for AI. We all got caught up in a bubble”.

In addition, there are plenty of reports about existing models not being as good as they were, with some seeming to struggle to give the same quality of answers as they used to. I can’t say I’ve noticed this, and it could all be subjective - it kind of depends on your expectations.

The limits of knowledge

So, let’s assume this is a real phenomenon: foundational generative models losing their edge, their powers seemingly leaking away. What could possibly explain this?

One suggestion from the AI community is that we’re reaching the outer limits of knowledge: that there isn’t much more to learn. I don’t think that will ever happen, for reasons we’ll look at later. We might even have to resort to synthetic data to generate “new knowledge”. In which case, what is the nature and status of that knowledge? Are we in danger of merely diluting what we already have and, even worse, “inbreeding” our knowledge base so that it becomes unreliable?
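
The “inbreeding” risk - known in the research literature as model collapse - is easy to demonstrate in miniature. In the sketch below (my own toy setup, with a simple Gaussian standing in for a full generative model), each generation is trained purely on synthetic samples from the generation before, and the diversity of the data tends to drain away:

```python
# Toy demonstration of "inbreeding" a knowledge base (model collapse):
# each generation is trained only on synthetic samples drawn from the
# previous generation's model, with no fresh real-world data. A Gaussian
# stands in for a full generative model; the spread (diversity) of the
# data tends to collapse as the generations pass.
import random
import statistics

def train(data):
    """'Train' a model by fitting a mean and standard deviation."""
    return statistics.mean(data), statistics.stdev(data)

random.seed(1)
real_world = [random.gauss(0.0, 1.0) for _ in range(10_000)]
mu, sigma = train(real_world)  # generation 0 learns from genuine data

for generation in range(1, 41):
    # Every later generation sees only a small synthetic sample.
    synthetic = [random.gauss(mu, sigma) for _ in range(10)]
    mu, sigma = train(synthetic)
    if generation % 5 == 0:
        print(f"generation {generation:2d}: stdev {sigma:.3f}")
```

Nothing re-injects fresh real-world data into the loop, so diversity, once lost, is never recovered - which is the dilution worry in a nutshell.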

Are we really reaching the limits of our knowledge? That would be like saying, “Well, I’ve read the Encyclopaedia Britannica from end to end, so I know everything”. I’ve no doubt the venerable encyclopaedia is a great resource, as indeed is the internet in its totality. But it’s a mistake to think of knowledge as something that can be contained in a finite entity like a book or even a global network. For one thing, everyone reading a book will have a different interpretation of it. Perhaps someone reading a history of World War II will recall something their grandfather told them about his experiences in that very war. That, for the purposes of training AI (or any of us, for that matter), will be new knowledge. You don’t have to extrapolate much further to realise that an almost infinite amount of knowledge is being generated all the time.

AI isn’t running out of steam. It will probably not follow a smooth exponential curve, because of real-world limitations like computing power (these days simply called “compute”) or even electrical power. More likely, it will jump beyond mere exponentiality into hyper-exponentiality. It will exceed the expectations of the general public and even those of AI experts. Arguably, we’re already at the point of some sort of singularity. We haven’t even started to figure out what happens when you begin federating AI models - allowing disparate models to negotiate amongst themselves how to do things better. This will probably happen without our express consent, as increasingly powerful AI agents come into contact with each other.

And then, of course, there are robots… Running out of steam? Far from it.

tl;dr

  • AI is not reaching a plateau, and there is no evidence to suggest that the breakthroughs are slowing down.
  • Optimising existing models and techniques still has significant potential for growth.
  • There are suggestions that existing models may be losing their edge, but this could be subjective and depend on expectations.
  • AI is unlikely to follow a true exponential curve due to real-world limitations, but it is expected to exceed expectations and enter a phase of hyper-exponential growth.

Tags: Technology
