So many numbers and options! How do you go about choosing a CPU for your self-build workstation?
Late 2019 is a slightly odd time in computing. Intel’s Xeon processors, long a mainstay of the high end, have to some extent been supplanted by the company’s rocket-propelled Core i9 series, and even that is facing intense competition from AMD. Ascendant after a period of perhaps less competitive releases, AMD’s Ryzen and Threadripper lines are snapping at Intel’s heels, with a new lineup of Threadripper – AMD’s Xeon-competing CPU intended for workstations – due next month.
Fear not. It looks complicated, but most parts will only fit in the slots they're intended for
What I'm talking about here is the high-end desktop, or HEDT. Machines of this sort are found running computer-aided design software, processing data sets in scientific research and, of course, in the media, doing publishing, visual effects and both video and audio post production. There are two ways of getting hold of one: go to one of the companies that act as systems integrators, pulling parts together from a variety of manufacturers to build a system, or plug those parts together personally on the kitchen table. The second option sometimes saves money, although at the higher end of desktop systems the gap can narrow because there’s more margin in the higher-end parts and, therefore, more discount available on a complete system. Either way, there are a lot of parts to choose from.
Modern film and TV post production owes a lot of its hardware development funding to video games. Let's all doff our hats to the PlayStation generation
Often, computer system specifications start with a CPU. In modern practice that’s sometimes a bit of a mistake, since the lion’s share of the actual computing work will happen on a graphics card or two, but let’s pick two comparable CPUs and get into the details of that comparison. There are lots to choose from, but examining a couple tells us something about many of the things we might encounter while evaluating others. Happily, there’s a pretty obvious pair to look at in October 2019: an Intel Core i9-9920X and an AMD Ryzen 9 3900X.
The X-suffix on Intel CPUs indicates what the company calls an enthusiast-grade part. That means the ability to set clock speeds over what the company recommends to see what it’ll stand, but that isn’t something that’ll be of interest to people doing post production work. What is of interest is core count: the only Core i9 CPUs with more than eight cores are X-suffix parts, and the 9920X is a twelve-core device. The usefulness of this is entirely dependent on the software in use; some software, which shall remain nameless, really could make great use of multi-core CPUs but is actually not that good at it. Other software does a solid job; the only way to figure this out is to look around for benchmarks.
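For the curious, those benchmarks are essentially measuring scaling: how much faster the same batch of work gets as more cores join in. Here’s a minimal Python sketch of that measurement, with an arbitrary sum-of-squares loop standing in for real work; it’s an illustration only, and no substitute for benchmarking the actual application on the actual hardware.

```python
# Toy scaling test: time the same batch of jobs on one worker, then on
# every core. The workload is arbitrary busywork, not real rendering.
import multiprocessing as mp
import time

def burn(n):
    # CPU-bound stand-in workload: sum of squares up to n.
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed_run(workers, jobs, work=2_000_000):
    start = time.perf_counter()
    with mp.Pool(workers) as pool:
        pool.map(burn, [work] * jobs)
    return time.perf_counter() - start

if __name__ == "__main__":
    cores = mp.cpu_count()
    serial = timed_run(1, cores)        # everything through one worker
    parallel = timed_run(cores, cores)  # one job per core
    # Well-threaded software approaches a speed-up equal to the core
    # count; badly-threaded software may barely move.
    print(f"{cores} cores, speed-up: {serial / parallel:.1f}x")
```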
The 9920X has 19.25MB of cache memory, which is used as a scratchpad to keep data available without going to (much slower) main memory. It’s clocked at 3.5GHz, boosts to 4.4GHz, and fits in Intel’s 2066-pin socket. It’s also reasonably expensive, just over 1000 units of many currencies, as befits a device with such big numbers associated with it. Those are the headline numbers, and many people look no further.
GPUs are great, but no grading application on the planet has ever needed to look like this. Except for the 8K 120fps ones
And that’s why things like the AMD Ryzen 9 3900X are sometimes surprising. It, too, has twelve cores, runs at 3.8GHz boosting to 4.6GHz for a fractional speed advantage, and enjoys a really large 64MB level-three cache. The PCIe controller, which talks to plug-in expansion boards and other peripherals, is built into modern CPUs, and the 3900X has a generation 4 PCIe controller against the Intel chip’s PCIe 3. It’s at least as good overall, better in several key areas, and half to two-thirds the price of the Intel option. What’s going on here?
Perhaps the biggest difference is in the way that PCIe controller is set up. Intel intends the Core i9 series to be an industrial-strength solution, aimed directly at the high-end desktop market. Plug-in expansion devices go in slots that each connect to the CPU via a number of communication channels called lanes: one, two, four, eight or sixteen of them. GPUs are typically 16-lane devices, while things like hard disk controllers or an HD-SDI input-output card often use four.
The Core i9-9920X has 44 PCIe lanes, enough for two GPUs and some other bits and pieces besides. The Ryzen 9 3900X has 24. Some of the things which are permanent parts of the computer’s main board might also be PCIe devices and soak up some of those lanes, but it’s clear that the Core i9 has the potential to run two high-power, 16-lane GPUs at full bandwidth, whereas the Ryzen does not.
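A quick back-of-envelope sum makes the shortfall concrete. The build list below is hypothetical, invented purely for illustration, and real motherboards route some CPU lanes to the chipset and on-board devices, so treat the totals as rough:

```python
# Hypothetical lane budget: two x16 GPUs, a x4 SDI I/O card, a x4 NVMe SSD.
build = {"GPU 1": 16, "GPU 2": 16, "SDI I/O": 4, "NVMe SSD": 4}
needed = sum(build.values())

for cpu, lanes in (("Core i9-9920X", 44), ("Ryzen 9 3900X", 24)):
    print(f"{cpu}: {needed} lanes wanted, {lanes} available, "
          f"{lanes - needed} spare")
# Core i9-9920X: 40 lanes wanted, 44 available, 4 spare
# Ryzen 9 3900X: 40 lanes wanted, 24 available, -16 spare
```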
Whether or not that matters is down to whether we actually need two high-power GPUs. Smaller, simpler, cheaper GPUs which only need a couple of lanes exist and can generate a desktop display quite happily. With GPU-heavy post production applications, it’s very normal to use a simple GPU to produce the on-screen display and a high-power one to do the heavy lifting. Furthermore, even if we wanted to use two 16-lane GPUs with the Ryzen CPU, it can do that – it’ll just use eight lanes for each. Whether that matters depends on whether the speed of the system is limited by the computational power of the GPUs or by the ability of the computer to transfer data to and from them. That is very much controlled by the way the software is written, and something that can only be ascertained from comparative benchmarks with that software on that hardware.
Not that long ago, this sort of situation could only be supported by seven figures' worth of custom circuit boards
There are some results available which suggest that applications such as grading are usually compute-limited rather than bandwidth-limited, possibly because every single lane of a PCIe 3.0 slot can transfer roughly a gigabyte of data per second in each direction, which is a lot. The problem isn’t sending the frames to the GPU or retrieving them, it’s doing the processing once they’re there, so the PCIe lane shortfall might not make much difference.
And, of course, there’s one final complication. The Ryzen CPU has PCIe 4.0, whereas the Core i9 uses PCIe 3.0. The version bump essentially doubles bandwidth, so a GPU on an eight-lane PCIe 4.0 connection actually has the same bandwidth available as one on a 16-lane PCIe 3.0 connection. That sounds great and would make the Ryzen a solid bet – apart from the fact that, at the time of writing, the only GPUs that actually support PCIe 4.0 are AMD’s own Radeon range, and sometimes we might want one from Nvidia.
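To put rough numbers on the last two points, here’s a sketch of the arithmetic. The per-lane figures are the usual approximations for PCIe 3.0 and 4.0, and the uncompressed 4K stream is an assumed example rather than a measured workload:

```python
# Approximate per-lane, per-direction PCIe bandwidth in bytes per second.
GB = 1e9
PER_LANE = {3: 0.985 * GB, 4: 1.969 * GB}  # PCIe 3.0, PCIe 4.0

def link_bandwidth(gen, lanes):
    return PER_LANE[gen] * lanes

# Assumed example: uncompressed 4K DCI, 4 bytes per pixel, 24fps.
stream = 4096 * 2160 * 4 * 24  # ~0.85 GB/s

print(f"4K stream:    {stream / GB:.2f} GB/s")
print(f"x16 PCIe 3.0: {link_bandwidth(3, 16) / GB:.1f} GB/s")
print(f"x8 PCIe 4.0:  {link_bandwidth(4, 8) / GB:.1f} GB/s")  # matches x16 3.0
```

Even an eight-lane PCIe 3.0 link, at roughly 7.9GB/s, leaves that stream plenty of headroom, which is why the compute side tends to be the bottleneck.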
There’s also the vexed question of Thunderbolt, which has to date been principally, though not entirely, an Apple thing, and which I’ll discuss next time when I talk about motherboards.