Understanding Modern CPU Architecture (Part 1) by@mrekole


When most people hear the term "CPU," they immediately think of a computer. Some picture it as the large box that accompanies a desktop setup.

But in reality, CPUs are everywhere: in your mobile phone, in electronic scanners, in the laptops and desktops we use, in calculators, and in the electronic components powering your vehicle.

What is a CPU?

The Central Processing Unit is usually referred to as "the brain" of the computer. The CPU is at the heart of all data interpretation and processing in a computer. It sits at the center of the machine and turns inputs from memory, like an MP3 file on your hard drive, into outputs on your peripherals, like sound from your speakers.

CPUs are general-purpose, flexible processors that take in a stream of instructions from all types of workloads and process information based on those instructions. Simply put, CPUs are our servants: they do what we ask them to do.

CPUs run the web browser that you are using to read this article, all your mobile and desktop applications.

The CPU contains three major components: memory, the control unit, and the ALU (arithmetic and logic unit).


"The good news about computers is that they do what you tell them to do. The bad news is that they do what you tell them to do." (Ted Nelson)

Modern CPU architectures are implemented on ICs (integrated circuits), typically on a single metal-oxide-semiconductor chip. Chips that contain multiple CPU cores are called multi-core processors. In this article, we will be referring to the architecture of a single CPU core.

An IC that contains a CPU may also include peripheral interfaces and other components of a computer. Such ICs are usually referred to as microcontrollers or SoCs (systems on a chip).

Brief History of CPU Architecture

Let's revisit where it all started. In 1945, in the early days of digital computing, most computers were very large, slow, and fragile. ENIAC (Electronic Numerical Integrator and Computer), the first multipurpose digital computer, completed in 1946, covered approximately 168 square meters, roughly the footprint of a modern house.

ENIAC was built using vacuum tube technology, which made it huge and unreliable. It was a program-controlled computer: an operator would program it with switches and wires for each new calculation (tedious, right? Well, that was computing in 1946). Although these machines were general-purpose, programming them was complicated and error-prone.

In the mid-1940s, John von Neumann proposed the stored-program computer model. This model reimagined the general-purpose computer as three separate components:

  • Memory: A memory component for storing data and instructions

  • Central Processing Unit: for decoding and executing instructions

  • Inputs/Outputs: a set of input and output interfaces

The Von Neumann architecture also introduced the four-step instruction cycle: fetch an instruction from memory, decode it, execute it, and store the result back to memory.
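The cycle can be sketched as a toy stored-program machine. The three-instruction set below (LOAD, ADD, STORE) and the single-accumulator design are invented purely for illustration, not taken from any real ISA:

```python
# A toy stored-program machine: instructions and data share one memory,
# and the processor repeats the fetch-decode-execute-store cycle.
memory = {
    0: ("LOAD", 100),    # load the value at address 100 into the accumulator
    1: ("ADD", 101),     # add the value at address 101 to the accumulator
    2: ("STORE", 102),   # write the accumulator back to address 102
    100: 7, 101: 5, 102: 0,
}

acc = 0                  # a single accumulator register
pc = 0                   # program counter

while pc in memory and isinstance(memory[pc], tuple):
    op, addr = memory[pc]          # 1. fetch the instruction
    pc += 1
    if op == "LOAD":               # 2. decode the opcode ...
        acc = memory[addr]         # 3. ... and execute it
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc         # 4. store the result back to memory

print(memory[102])  # 12
```

Each pass through the loop performs one full cycle; a real CPU does the same thing in hardware, billions of times per second.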

The late 1940s saw the invention of the semiconductor transistor, which eventually replaced vacuum tubes in most CPU designs.


Computing Abstraction Layers

Two key concepts are essential to understanding how a CPU works: the binary system and computing abstraction layers.

Computers use a number system based on zeros and ones. All information, data, and numbers must be represented as simple ON or OFF states for computers to function. The characters and numbers we use to communicate can easily be translated into binary, as in the ASCII representation of the characters of the alphabet.
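As a quick illustration (plain Python, not tied to any particular hardware), each character of the word "CPU" maps to an ASCII code, which is just a number the hardware can hold as a pattern of eight ON/OFF bits:

```python
# Text to binary: every character has an ASCII code point,
# and that number is stored as a pattern of ON/OFF bits.
for ch in "CPU":
    code = ord(ch)               # ASCII code, e.g. 'C' -> 67
    bits = format(code, "08b")   # the same number as eight binary digits
    print(ch, code, bits)        # e.g. C 67 01000011
```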

Computing abstraction layers refer to how you can start with very simple things, like atoms and transistors, and add abstraction layer upon abstraction layer to build up to complex applications running in large data centers. At the foundation, atoms are combined into materials like silicon, from which we build transistors. These transistors act as switches that turn on or off with the application of a voltage. By connecting many switches together in precise arrangements, we form logic gates (AND, OR, NOT), the fundamental Boolean operators for performing calculations.
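To make this concrete, the sketch below treats a NAND gate as the primitive switch arrangement and derives AND, OR, and NOT from it. The function names are my own; NAND's ability to build every other gate is a standard result of Boolean logic:

```python
# NAND as the primitive gate: two transistors in series give NAND,
# and every other Boolean gate can be composed from NANDs.
def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):
    return nand(a, a)            # NAND with both inputs tied together

def and_(a, b):
    return not_(nand(a, b))      # invert NAND to recover AND

def or_(a, b):
    return nand(not_(a), not_(b))  # De Morgan: OR from inverted inputs

# truth table for AND
for a in (0, 1):
    for b in (0, 1):
        print(a, b, and_(a, b))
```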

We can now abstract ones and zeros into a language of logic that is easier to reason about than the physics of electron flow. Using transistors as switches and connecting the output of one to the input of another, we can build a variety of logic circuits or functional blocks. These functional blocks can take the form of adders, multiplexers, flip-flops, latches, registers, counters, decoders, etc. Chaining functional blocks together allows for even more complex logical functions.
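As a sketch of this chaining, the example below wires 1-bit full adders into a 4-bit ripple-carry adder. The bit-list representation (least-significant bit first) is an arbitrary choice for illustration:

```python
def xor(a, b):
    return (a + b) % 2

def full_adder(a, b, cin):
    """One functional block: adds two bits plus a carry-in."""
    s = xor(xor(a, b), cin)
    cout = (a & b) | (cin & xor(a, b))
    return s, cout

def ripple_add(a_bits, b_bits):
    """Chain full adders: each block's carry-out feeds the next carry-in.
    Bit lists are least-significant bit first."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

# 5 (0101) + 3 (0011) = 8 (1000), written LSB-first below
print(ripple_add([1, 0, 1, 0], [1, 1, 0, 0]))  # ([0, 0, 0, 1], 0)
```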

With these complex logical functions, we can build custom execution units that perform specific calculations. One of the most important execution units in a CPU is the ALU.
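A drastically simplified model of an ALU, purely for illustration, is a unit that selects one of a handful of operations based on an opcode, much as control signals select a datapath in hardware:

```python
# A toy ALU: the opcode plays the role of the control signals
# that select which datapath produces the result.
def alu(op, a, b):
    ops = {
        "ADD": lambda: a + b,
        "SUB": lambda: a - b,
        "AND": lambda: a & b,
        "OR":  lambda: a | b,
        "NOT": lambda: ~a,    # unary: ignores b
    }
    return ops[op]()

print(alu("ADD", 6, 7))            # 13
print(alu("AND", 0b1100, 0b1010))  # 8
```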

Designing a whole CPU comes down to building multiple specialized processing elements and connecting them in ways that permit complex computation. The combination of these elements into a system that can fetch instructions from memory, decode them, execute them, and store the results back into memory is referred to as a microarchitecture.


Instruction Set Architecture

The instruction set architecture (ISA) is the set of instructions that defines what operations can be performed on the hardware. It is, in effect, the language of the computer: just as English or French has a dictionary describing its words, their format, grammatical syntax, meaning, and pronunciation, the ISA describes the instructions a CPU understands. The ISA is an abstract model, sometimes referred to as the computer architecture.

The ISA defines the memory model, supported data types, registers, and the behavior of machine code, i.e., the sequences of zeros and ones that the CPU must execute. Common ISAs include x86, ARM (Advanced RISC Machines), and MIPS (Microprocessor without Interlocked Pipeline Stages). The ISA acts as a bridge between hardware and software. On the software side, a compiler transforms code written in a high-level language like C, Python, or Java into machine code instructions that the CPU can execute. On the hardware side, the ISA serves as a design specification that tells the engineer designing a CPU microarchitecture which instructions and data types the CPU must interpret and execute.
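As a concrete example of an ISA defining the behavior of machine code, the sketch below decodes a 32-bit word laid out in the MIPS R-type format (opcode, rs, rt, rd, shamt, funct). The helper name is my own, but the field widths and the example encoding follow the real MIPS layout:

```python
# MIPS R-type layout: opcode(6) | rs(5) | rt(5) | rd(5) | shamt(5) | funct(6)
def decode_rtype(word):
    return {
        "opcode": (word >> 26) & 0x3F,  # operation class (0 for R-type)
        "rs":     (word >> 21) & 0x1F,  # first source register
        "rt":     (word >> 16) & 0x1F,  # second source register
        "rd":     (word >> 11) & 0x1F,  # destination register
        "shamt":  (word >> 6)  & 0x1F,  # shift amount
        "funct":  word & 0x3F,          # exact operation (0x20 = add)
    }

# 0x012A4020 encodes `add $t0, $t1, $t2` ($t0=8, $t1=9, $t2=10)
print(decode_rtype(0x012A4020))
```

Any microarchitecture that implements this ISA must extract the same fields from the same bits, which is exactly what lets different CPU designs run the same machine code.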

The instructions in the ISA are implementation-independent: different companies can create different microarchitecture designs that all run the same code, as long as they implement the same ISA. Computer architects continue to evolve ISAs through extensions to the instruction sets. These additional instructions are often created to perform certain operations more efficiently, leveraging new processing elements in their microarchitectures.

These ISA extensions increase CPU performance by streamlining operations for a particular arrangement of processing elements. Modern CPUs support thousands of different operations, many of which fall into a few families: arithmetic operations (addition, subtraction, division, multiplication), logical operations (AND, NOT, OR), memory operations (loading, storing, moving), and flow control like branching. This is a simplified explanation; modern ISAs are far more complex than I could hope to cover in this short article. ISAs are one of the most critical parts of modern CPU design, as they are the linchpin between hardware and software that allows high-performance computation and seamless software experiences across a variety of CPU microarchitectures.

Also published here.
