Moore's Law has been unreliable for a while, so how are processor manufacturers going to make chips faster in the future?
Computers don’t improve the way they used to. That’s been a reality for more than a decade, as it has become increasingly difficult to squeeze extra clock speed out of computer chips, and it’s fairly well understood that Moore’s Law, the guide that used to give us an idea of upcoming performance increases, has been unreliable for a while.
A number of things have made that seem less bad than it might. Most obviously, the popularisation of GPU computing has made some workloads – mainly very repetitive, highly parallel ones – run far faster. Blazingly fast solid-state storage has also made computers seem quicker in some circumstances. And more recently, in big, industrial server farms, we’ve seen the introduction of programmable logic devices – FPGAs – that can be configured to do specific tasks.
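To make “very repetitive” concrete, here’s a minimal, purely illustrative sketch of the sort of data-parallel job GPUs are good at: applying the same arithmetic to every pixel of a frame. It assumes the CuPy library and a CUDA-capable GPU are available (and falls back to plain NumPy if not); the numbers and the grading operation are made up for the example, not drawn from any particular product.

```python
import numpy as np

try:
    import cupy as xp   # GPU path, if CuPy and a CUDA device are present (an assumption)
except ImportError:
    xp = np             # otherwise fall back to the CPU so the sketch still runs

# A stand-in 4K frame: one float per pixel.
frame = xp.random.random((2160, 3840)).astype(xp.float32)

# The same gain and lift applied to every pixel; thousands of GPU threads can
# each take a slice of this, because no pixel depends on any other.
graded = frame * 1.2 + 0.05

print(float(graded.mean()))
```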
In film and TV, we mainly encounter those devices in cameras and other pieces of electronic hardware. The parts used in cameras tend to differ from those found in servers intended for, say, artificial intelligence, but the principle and the underlying technology are similar.
The question is whether all of this is likely to filter down from the rarefied world of servers and science experiments to make everyone’s Fusion compositions render faster. The short answer is that, given the end of Moore’s Law, we’d better hope it does. There’s some sign that all this might actually signal a big change in the way that computer systems are designed from the ground up, with some interesting implications for capabilities in the future. Perhaps most significantly, FPGA designers are currently wrestling with choices about what to include in their devices, from simple serial data input-output blocks all the way up to pairing them with Xeon CPU cores.
Building functions out of an FPGA’s reprogrammable fabric is expensive and power-hungry, so many of these devices include hardwired, non-reprogrammable sections too. Sometimes that might include a CPU core, probably an ARM core rather like those used in cellphones. It’s possible to make one using the FPGA itself, but it’ll be slower, hotter, hungrier and more expensive that way. Since so many FPGA applications require a CPU, it’s worth including one on the silicon. Some designs might also include an implementation of Ethernet or other connectivity, or memory. Again, it’s possible to build those things using the reprogrammable parts of the device, but it’s much better to build them in if we’re confident they’ll be needed.
Tricky construction
There are problems. Making a single piece of silicon with all these separate parts is tricky; the design process is long-winded, and some of the parts may not work on every chip we make. Conceptually, too, there are issues. The more fixed-function parts we put on an FPGA, the less reprogrammable it is, and the more specifically built it becomes for a single task. If people buy chips that include things they don’t need, they’re wasting money, and if what the world needs changes, the whole design risks becoming obsolete.
That pretty much defeats the object of an FPGA in the first place.
One solution is to change the whole architecture of a computer. Right now, we have a CPU which, as its name suggests, is central. Other parts of the system may have intelligence of their own and can access main memory, at least in small chunks, without the direct supervision of the CPU. Plug-in devices can talk to the rest of the system only via something like a PCIe bus. One idea is to improve that to the point where individual devices can talk to the CPU, and to each other, on a much deeper level, almost like multiple cores in a single CPU do now.
Enter the chiplets
This can be done, to an extent, by combining several separate pieces of silicon inside one physically packaged chip. That lets us manufacture the parts, and select the ones we need, far more easily than making a single slab of silicon. Also, because the individual elements are small and very close to one another, it’s easier to build very high-performance connections between them. Manufacturing becomes easier too, because a single fault only makes one small area of silicon unusable, and that piece can be replaced with another.
This approach, sometimes called chiplets, has already been seen in AMD’s Epyc processors, which enjoy reduced manufacturing costs as a result – if one core doesn’t work, we don’t have to throw away eight good ones along with it, just the faulty piece.
This would change the face of applied computer science, and in a way that isn’t always easy for software engineers to deal with. We’re still struggling to find ways of writing code to make best use of multiple cores, let alone a variable selection of very closely-integrated chiplets offering us highly specialised capabilities.
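As a rough illustration of what that struggle looks like in practice, here’s a hypothetical Python sketch using the standard-library multiprocessing module: even on an ordinary multi-core CPU, the programmer has to carve the job into independent chunks, hand them out, and stitch the results back together. The “tile” workload and its numbers are placeholders invented for the example, not anything from the article.

```python
from multiprocessing import Pool, cpu_count

def render_tile(tile_index: int) -> int:
    """Stand-in for one independent chunk of work (say, one tile of a frame)."""
    total = 0
    for i in range(1_000_000):
        total += (i * tile_index) % 7
    return total

if __name__ == "__main__":
    tiles = list(range(64))
    # One worker process per core; the programmer, not the hardware, decides
    # how the job is carved up and how the results are reassembled.
    with Pool(processes=cpu_count()) as pool:
        results = pool.map(render_tile, tiles)
    print(sum(results))
```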
Even before we get to that point, though, Intel isn’t the only company struggling to go faster. IBM recently announced the tenth revision of its POWER series of processors, which is more or less the only high-end CPU line left that isn’t from Intel. It has staggering specifications in terms of core count and cache memory (though, tellingly, it still doesn’t go much faster than 4GHz).
Apple’s move to ARM processors probably has much more to do with business than technology; it would be beneficial to Apple to have control over its CPU designs, and recent supply problems with Intel chips will have raised eyebrows. Still, while current ARM designs are more than capable of doing most of the work that most Apple computers mostly do, there really aren’t any big, scary, 64-bit ARMs suitable for Mac Pros around, and Apple will need to do some work to make that happen. Even if it does, though, the faltering of Moore’s Law will still apply.
So, the fundamental nature of desktop computers – inasmuch as they continue to exist at all – is likely to change over the next decade. With any luck, the way they seem to work won’t change much; they’ll just get faster.