1’s and 0’s
Thanks to more powerful CPUs, we’ve jumped from barely being able to display an image on a computer screen to Netflix, video chat, streaming, and increasingly lifelike video games.
The CPU is a wonder of engineering, but, at its core, it still relies on the basic concept of interpreting binary signals (1’s and 0’s). The difference now is that, instead of reading punch cards or processing instructions with sets of vacuum tubes, modern CPUs use tiny transistors to create TikTok videos or fill out numbers on a spreadsheet.
The Basics of the CPU
CPU manufacturing is complicated. The important point is that each CPU contains one or more pieces of silicon, which house billions of microscopic transistors.
As we alluded to earlier, these transistors use a series of electrical signals (current “on” and current “off”) to represent binary machine code, made up of 1’s and 0’s. Because there are so many of these transistors, CPUs can perform increasingly complex tasks at ever greater speeds.
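If you want to see those 1’s and 0’s for yourself, here’s a minimal Python sketch that prints the bit patterns behind a number and a letter. The specific values are just examples—any number or character works the same way.

```python
# A quick look at the 1's and 0's behind everyday data.
number = 42
letter = "A"

# format(..., "08b") renders a value as 8 binary digits (one byte).
print(f"The number {number} is stored as:  {format(number, '08b')}")
print(f"The letter {letter!r} (code {ord(letter)}) is stored as: {format(ord(letter), '08b')}")

# Roughly speaking, each 1 or 0 corresponds to current flowing
# (or not flowing) through a transistor inside the chip.
```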
A higher transistor count doesn’t necessarily mean a CPU will be faster. However, it’s still a fundamental reason the phone you carry in your pocket has far more computing power than, perhaps, the entire planet did when we first went to the moon.
Before we head further up the conceptual ladder of CPUs, let’s talk about the set of machine-code instructions a CPU understands and carries out, called its “instruction set.” CPUs from different companies can have different instruction sets, but not always.
Most Windows PCs and current Macs, for example, use processors built on the x86-64 instruction set, whether the CPU comes from Intel or AMD. Macs debuting in late 2020, however, will have ARM-based CPUs, which use a different instruction set. There are also a small number of Windows 10 PCs that use ARM processors.
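Curious which instruction set your own machine uses? Here’s a small Python sketch, using only the standard library, that reports it. The exact string varies by operating system, so the checks below cover only the most common cases.

```python
import platform

# platform.machine() reports the CPU architecture the OS is running on.
# Typical results: "x86_64" or "AMD64" for Intel/AMD chips,
# "arm64" or "aarch64" for ARM-based chips (such as Apple silicon Macs).
arch = platform.machine()
print(f"This machine reports its architecture as: {arch}")

if arch.lower() in ("x86_64", "amd64"):
    print("That's the x86-64 instruction set.")
elif arch.lower() in ("arm64", "aarch64"):
    print("That's an ARM-based instruction set.")
else:
    print("Some other instruction set.")
```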
RELATED: What is Binary, and Why Do Computers Use It?
Cores, Caches, and Graphics
Now, let’s look at the silicon itself. The diagram above is from an Intel white paper published in 2014 about the company’s CPU architecture for the Core i7-4770S. This is just an example of what one processor looks like—other processors have different layouts.
We can see this is a four-core processor. There was a time when every CPU had just a single core. Now that CPUs have multiple cores, they can process far more instructions at once. Cores can also support simultaneous multi-threading (SMT), which Intel calls Hyper-Threading; this makes a single physical core appear as two logical cores to the PC. This, as you might imagine, helps speed up processing times even more.
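You can compare logical and physical core counts on your own machine with a short Python sketch. The physical count relies on the optional third-party psutil package (pip install psutil), so treat this as illustrative rather than definitive.

```python
import os

# os.cpu_count() reports *logical* processors -- with SMT/Hyper-Threading,
# each physical core typically shows up as two of these.
logical = os.cpu_count()
print(f"Logical processors: {logical}")

# psutil (third-party) can also report physical cores,
# which makes the SMT doubling easy to see.
try:
    import psutil
    physical = psutil.cpu_count(logical=False)
    print(f"Physical cores: {physical}")
    if physical and logical and logical > physical:
        print("SMT/Hyper-Threading appears to be enabled.")
except ImportError:
    print("Install psutil to compare physical vs. logical core counts.")
```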
The cores in this diagram share something called the L3 cache, a form of onboard memory inside the CPU. Each core also has its own L1 and L2 caches, as well as registers, the small, extremely fast storage the CPU works with directly. If you want to understand the differences between registers, caches, and system RAM, check out this answer on Stack Exchange.
The CPU shown above also contains the system agent, memory controller, and other parts of the silicon that manage information coming into, and going out of, the CPU.
Finally, there’s the processor’s onboard graphics, which generate all those wonderful visual elements you see on your screen. Not all CPUs contain their own graphics capabilities. AMD Zen desktop CPUs, for example, require a discrete graphics card to display anything on-screen. Some Intel Core desktop CPUs also don’t include onboard graphics.
The CPU on the Motherboard
Now that we’ve looked at what’s going on underneath the hood of a CPU, let’s look at how it integrates with the rest of your PC. The CPU sits in what’s called a socket on your PC’s motherboard.
Once it’s seated in the socket, other parts of the computer can connect to the CPU through pathways called “buses.” RAM, for example, connects to the CPU over its own dedicated bus, while many other PC components connect over a bus called PCI Express (PCIe).
Each CPU has a set of “PCIe lanes” it can use. AMD’s Zen 2 CPUs, for example, have 24 lanes that connect directly to the CPU. These lanes are then divvied up by motherboard manufacturers with guidance from AMD.
For example, 16 lanes are typically used for an x16 graphics card slot. Another four lanes go to storage, usually a single fast drive like an M.2 SSD. Alternatively, those four lanes can be split: two for the M.2 SSD and two for a slower SATA drive, like a hard drive or 2.5-inch SSD.
That’s 20 lanes, with the other four reserved for the chipset, which is the communications center and traffic controller for the motherboard. The chipset then has its own set of bus connections, enabling even more components to be added to a PC. As you might expect, the higher-performing components have a more direct connection to the CPU.
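To make the lane math concrete, here’s a tiny Python sketch that tallies the example allocation described above. The numbers match the Zen 2 example; other platforms split their lanes differently.

```python
# Example lane budget for the AMD Zen 2 layout described above.
total_cpu_lanes = 24

allocation = {
    "x16 graphics card slot": 16,
    "M.2 SSD / storage": 4,
    "chipset link": 4,
}

for device, lanes in allocation.items():
    print(f"{device}: {lanes} lanes")

used = sum(allocation.values())
print(f"Total allocated: {used} of {total_cpu_lanes} CPU lanes")
```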
As you can see, the CPU does most of the instruction processing and sometimes even the graphics work (if it’s built for that). The CPU isn’t the only component that processes instructions, however. Others, such as the graphics card, have their own onboard processors, and the GPU works alongside the CPU to run games or carry out other graphics-intensive tasks.
The big difference is that these component processors are built for specific tasks. The CPU, by contrast, is a general-purpose device that can handle whatever computing task it’s asked to do. That’s why the CPU reigns supreme inside your PC, and why the rest of the system relies on it to function.