We flip light switches on and off almost every day, and this humble switch is also the basic unit from which a CPU is built. In this article, we will see how to build a CPU: a world of switches where everything is either 0 or 1.
From transistors to gates
The transistor, a small but great invention, appeared in the last century.
Transistors are electrically controlled switches, and from them we can build AND, OR, and NOT gate circuits.
Any logical function can ultimately be expressed through AND, OR, and NOT. In other words, a computer can ultimately be constructed from these simple gates.
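As a concrete illustration (a minimal sketch, not from the article itself), we can model AND, OR, and NOT as Python functions and then express another logical function, XOR, purely in terms of them:

```python
# Model the three basic gates as functions on bits (0 or 1).
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def XOR(a, b):
    # a XOR b = (a AND NOT b) OR (NOT a AND b)
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

# Check against Python's built-in XOR over the whole truth table.
for a in (0, 1):
    for b in (0, 1):
        assert XOR(a, b) == (a ^ b)
```

XOR will reappear in a moment when we build an adder; any other truth table can be composed from these three gates in the same way.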
Using AND, OR, and NOT gates to build calculation and storage circuits
Calculation
Take addition as an example.
Since the CPU recognizes only 0 and 1, that is, binary, let's enumerate the possible combinations in binary addition:
0 + 0, the result is 0, and the carry is 0
0 + 1, the result is 1, and the carry is 0
1 + 0, the result is 1, and the carry is 0
1 + 1, the result is 0, and the carry is 1
Pay attention to the carry column. The carry is 1 only when the values of both inputs are 1. This is the AND gate!
Now look at the result column. When the two inputs differ, the result is 1, and when the two inputs are the same, the result is 0. This is XOR!
Binary addition of two bits can therefore be achieved using an AND gate and an XOR gate.
This circuit is a simple adder: addition can be implemented with just an AND gate and an XOR gate.
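The truth table above can be sketched directly in code (a behavioral model, with function names of my own choosing); this one-bit circuit is commonly called a half adder:

```python
def half_adder(a: int, b: int):
    s = a ^ b        # XOR gate: sum bit is 1 when the inputs differ
    carry = a & b    # AND gate: carry is 1 only when both inputs are 1
    return s, carry

# Matches the four combinations listed above:
assert half_adder(0, 0) == (0, 0)
assert half_adder(0, 1) == (1, 0)
assert half_adder(1, 0) == (1, 0)
assert half_adder(1, 1) == (0, 1)
```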
Beyond addition, we can design circuits for other arithmetic operations as needed. The circuit responsible for calculation has a general name: the arithmetic logic unit, or ALU. It is the module in the CPU dedicated to computation. In essence it is no different from the simple circuit above, just far more complex.
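To hint at how such one-bit circuits scale up (this is a standard construction, not a circuit from the article): a full adder also accepts a carry-in, and chaining full adders gives a multi-bit ripple-carry adder, one of the simplest building blocks found inside an ALU.

```python
def full_adder(a: int, b: int, cin: int):
    # Three input bits in, a sum bit and a carry-out bit out.
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def ripple_add(xs, ys):
    # xs, ys: equal-length lists of bits, least-significant bit first.
    # Each stage's carry-out "ripples" into the next stage's carry-in.
    carry, out = 0, []
    for a, b in zip(xs, ys):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

# 3 (bits [1,1,0]) + 6 (bits [0,1,1]) = 9 (bits [1,0,0] plus carry 1)
assert ripple_add([1, 1, 0], [0, 1, 1]) == ([1, 0, 0], 1)
```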
Now, through combinations of AND, OR, and NOT gates, we have computing power, and that is where computing power comes from.
But computing power alone is not enough; the circuits also need to be able to remember information.
Storage
So far, the combinational circuits we have designed, such as the adder, have no inherent way of storing information: they simply derive outputs from inputs. But those inputs and outputs have to be kept somewhere, which is why the circuits need to be able to store information.
How can a circuit store information? One day, a British physicist came up with a magical circuit:
This is a combination of two NAND gates.
What makes this circuit unique is how it is wired: the output of each NAND gate feeds back into an input of the other. This combination has an interesting property: as long as 1 is applied to both the S and R terminals, the circuit has only two possible states:
If terminal a is 1, then B = 0, A = 1, and b = 0;
If terminal a is 0, then B = 1, A = 0, and b = 1.
There is no other possibility. We take the value at terminal a as the output of the circuit.
After that, if you set the S terminal to 0 (while R remains 1), the output of the circuit, that is, terminal a, will stay at 1. At this point we can say that a 1 has been stored in the circuit. And if you set the R terminal to 0 (while S remains 1), the output at terminal a will stay at 0, and we can say that a 0 has been stored in the circuit.
In this way, the circuit has the ability to store information.
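The behavior of this cross-coupled NAND circuit can be sketched in code (a behavioral model; the settling loop stands in for signals propagating through the gates). Note that S and R are active-low here, matching the description above: driving S to 0 stores a 1, driving R to 0 stores a 0, and S = R = 1 holds the current state.

```python
def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def sr_latch(s: int, r: int, q: int, qbar: int):
    # Feed each NAND's output back into the other until it settles.
    for _ in range(4):
        q = nand(s, qbar)
        qbar = nand(r, q)
    return q, qbar

q, qbar = sr_latch(0, 1, 0, 1)     # pulse S low: store 1
q, qbar = sr_latch(1, 1, q, qbar)  # S = R = 1: output holds at 1
q, qbar = sr_latch(1, 0, q, qbar)  # pulse R low: store 0
```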
As it stands, storing a value requires driving both the S and R terminals, but what we really want is a single input D carrying the bit to store, so we modify the circuit:
In this way, when D is 0, the circuit stores 0; otherwise it stores 1.
Registers and Memory
Now your circuit can store one bit. Storing multiple bits is easy. Just copy and paste:
We call this circuit a register.
If we continue to build more complex circuits to store more information and provide addressing capabilities, memory is born.
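At a behavioral level, the relationship between latches, registers, and addressable memory can be sketched like this (class names and sizes here are illustrative choices of mine; each stored bit stands in for a latch like the one above):

```python
class Register:
    """A row of one-bit storage cells, one latch per bit."""
    def __init__(self, width: int = 16):
        self.bits = [0] * width

    def store(self, value: int):
        # Drive each bit cell's D input from one bit of the value.
        for i in range(len(self.bits)):
            self.bits[i] = (value >> i) & 1

    def load(self) -> int:
        return sum(bit << i for i, bit in enumerate(self.bits))

class Memory:
    """Many registers plus addressing: an address selects one cell."""
    def __init__(self, size: int = 256, width: int = 16):
        self.cells = [Register(width) for _ in range(size)]

    def write(self, addr: int, value: int):
        self.cells[addr].store(value)

    def read(self, addr: int) -> int:
        return self.cells[addr].load()

mem = Memory()
mem.write(3, 42)
assert mem.read(3) == 42
```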
Registers and memory are both built on the simple circuit above. As long as the power is on, the information is held in the circuit, but once the power is cut, the stored information is obviously lost. Now you should understand why memory loses its data when the power goes off.
Building the CPU
Hardware Platform
From the explanation above, we know that circuits can realize the general functions of calculation and storage. But now a question arises: must we build a dedicated circuit out of AND, OR, and NOT gates for every possible logical operation? That is clearly impractical.
There is no need to implement dedicated hardware for every computation; the hardware only needs to provide the most common, general-purpose functions.
Next, let’s look at how the hardware provides so-called general functions.
Let's think about a question: how does the CPU know that it needs to add two numbers, and which two numbers to add?
Obviously, you have to tell the CPU, but how?
The answer is machine instructions: they tell the CPU what to do next, and the instructions are carried out by the circuits we built above.
Instruction Set
The instruction set tells us what instructions the CPU can execute and what operands are required for each instruction. Different types of CPUs have different instruction sets.
The instructions in the instruction set are actually very simple, and the style is generally like this:
· Read a number from memory at address abc
· Add two numbers
· Check whether a number is greater than 6
· Store this number in memory at address abc
· etc.
It reads a lot like nagging, step-by-step orders, doesn't it? These are machine instructions. A program we write in a high-level language, such as one that sorts an array, is ultimately translated into instructions like these, which the CPU then executes one by one.
Next, let's look at a possible machine instruction:
This instruction occupies 16 bits. The first four bits tell the CPU that this is an addition instruction, which means this CPU's instruction set can contain at most 2^4 = 16 machine instructions. Those four bits tell the CPU what to do; the remaining bits tell the CPU how to do it, namely to add the values in register R6 and register R2 and write the result back to register R6.
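Such an encoding can be sketched in code. The 4-bit opcode matches the description above; the exact positions and widths of the register fields, and the opcode value itself, are my own assumptions for illustration, since the original figure is not reproduced here.

```python
ADD = 0b0010  # hypothetical opcode value for "add"

def decode(instr: int):
    opcode = (instr >> 12) & 0xF  # bits 15..12: what to do
    rd     = (instr >> 9) & 0x7   # bits 11..9: destination register (assumed field)
    rs     = (instr >> 6) & 0x7   # bits 8..6: source register (assumed field)
    return opcode, rd, rs

# Encode "add R6 and R2, write the result to R6":
instr = (ADD << 12) | (6 << 9) | (2 << 6)
assert decode(instr) == (ADD, 6, 2)
```

Real instruction sets define these field layouts precisely for every instruction; decoding circuitry in the CPU does the equivalent of this bit slicing in hardware.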
As you can see, machine instructions are cumbersome to work with, which is why modern programmers write programs in high-level languages instead.
Clock signal
Now that our circuits have computing and storage capabilities, and we can also use instructions to tell the circuits what operations to perform, there is still one problem that has not been solved.
What is used to coordinate or synchronize the various parts of the circuit so that they can work together?
The clock signal is like the baton in a conductor's hands: when the baton moves, the entire orchestra moves in unison. Similarly, every time the clock signal's voltage changes, every register in the circuit (that is, the state of the entire circuit) is updated. In this way, the whole circuit works in lockstep, with no part falling out of step with the others.
Now you should know what the CPU's main frequency means. The main frequency refers to how many times the baton is waved in one second. Obviously, the higher the main frequency, the more operations the CPU can complete in one second.
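The update-everything-on-the-tick idea can be sketched like this (a behavioral model of my own construction, here driving a 2-bit counter): combinational logic computes each register's next value, and on the clock edge all registers update at once.

```python
class ClockedRegister:
    def __init__(self):
        self.value = 0  # current, visible state
        self.next = 0   # pending input computed by combinational logic

    def tick(self):
        self.value = self.next

regs = [ClockedRegister() for _ in range(2)]  # 2-bit counter state

def step():
    # Combinational phase: compute next state from current state.
    low, high = regs[0].value, regs[1].value
    regs[0].next = low ^ 1     # low bit toggles every tick
    regs[1].next = high ^ low  # high bit toggles when low bit is 1
    # Clock edge: every register updates simultaneously.
    for r in regs:
        r.tick()

counts = []
for _ in range(4):
    step()
    counts.append(regs[1].value * 2 + regs[0].value)
assert counts == [1, 2, 3, 0]  # counts up, then wraps around
```

Because every register reads its inputs before any register changes, the order in which `tick` is called does not matter; that is exactly the coordination the clock provides.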
You're done!
Now we have an ALU that can perform various calculations, registers that can store information, and a clock signal that coordinates their work. Put together, they form the central processing unit, or CPU.
It is remarkable that humble switches can be combined into a powerful CPU. The theoretical and manufacturing breakthroughs behind this are milestones in human history, and it is fair to call the CPU a crystallization of human ingenuity.