Digital computers are logic machines. They use bits to store information. A bit is nothing but a switch with two states, on and off. In computer speak, we use 1 to represent the on state and 0 the off state. In logic speak, 1 is called true and 0 false. Combining the two, a bit is either in the state on/1/true or in the state off/0/false.
A CPU is a thumbnail-size chip containing integrated circuits (ICs) formed with transistors and capacitors. Modern CPUs are extremely complicated, using VLSI (Very Large-Scale Integration) to pack in many basic units called logic gates. A logic gate takes truth-value inputs and produces truth-value outputs. Computations on truth values follow simple rules specified by Boolean algebra.
In this article of our Computational Thinking blog (computize.org/ctblog), we shall look under the hood and reveal the heart of the CPU built with logic gates. Among the seven logic gates, NAND stands out and is, in fact, a universal building block. We shall explain it in simple, easy-to-understand terms.
As always, along the way, principles of CT will be illustrated.
Digital Electronic Circuits
A CPU processes digital signals represented by the presence (1) or absence (0) of an electric current or voltage. Digital electronic circuits process input signals and produce output signals. Each signal always has a value of either true or false. CPU registers hold input signals as well as store output signals.
Treating 1 as true and 0 as false, the most basic computations are logic operations (AND, OR, NOT, ...). Each logic operation takes one or two input signals and produces an output signal.
The complete list of basic logic operations is as follows.
Operation (A AND B) outputs true only when A and B are both true.
Operation (A OR B) outputs true if at least one of A and B is true.
Operation (NOT A) outputs true if A is false and false if A is true.
Operation (A XOR B) outputs true if only one of A and B is true.
Operation (A NAND B) outputs false only when both A and B are true.
Operation (A NOR B) outputs false if at least one of A and B is true.
Operation (A XNOR B) outputs false if only one of A and B is true.
The above are definitions of these operations on truth values.
Seven logic operators are listed above. The operator NAND means NOT AND; similarly, NOR means NOT OR, and XNOR means NOT XOR.
The complete computational behavior of each logic operator can be described by a truth table specifying the output for all possible input values.
Table 1 shows the truth tables for AND, OR, XOR. Try it yourself and write down the truth tables for the other logical operators.
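The definitions above translate directly into code. Here is a minimal Python sketch (the names and layout are our own, not from the article) that can print the truth table of any two-input operator:

```python
# The seven basic logic operations; NOT takes one input, the others take two.
NOT = lambda a: not a

OPS = {
    "AND":  lambda a, b: a and b,
    "OR":   lambda a, b: a or b,
    "XOR":  lambda a, b: a != b,
    "NAND": lambda a, b: not (a and b),
    "NOR":  lambda a, b: not (a or b),
    "XNOR": lambda a, b: a == b,
}

def truth_table(name):
    """Print the truth table of a two-input operator for all four input pairs."""
    print(f"A     B     A {name} B")
    for a in (False, True):
        for b in (False, True):
            print(f"{str(a):5} {str(b):5} {OPS[name](a, b)}")

truth_table("NAND")
```

Running `truth_table` for each operator reproduces the tables you would write out by hand.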
A gate is a basic building block in integrated circuits constructed with semiconductor technologies. A logic gate implements one of the basic logic operations as we have just described. A logic gate takes one or two inputs and produces a single output.
The seven basic gates are as follows. Each gate, except the NOT gate, takes two inputs.
• AND gate—Outputs 1 only if both inputs are 1. Otherwise outputs 0.
• OR gate—Outputs 0 only if both inputs are 0. Otherwise outputs 1.
• NOT gate—Outputs the opposite of the input.
• XOR (exclusive OR) gate—Outputs 1 only if the two inputs are different. Otherwise outputs 0.
• NAND, NOR, or XNOR gate—Outputs the opposite of the corresponding AND, OR, or XOR gate.
Standardized graphical symbols can be used to design simple digital circuits. The following diagram shows the traditional distinctive shape symbols. Note that a little circle at the tip of a gate indicates an inversion, turning a 1 to 0 and a 0 to 1.
More complicated digital circuits are built by wiring together logic gates. Pulse signals, typically produced by a crystal oscillator, provide a processing rhythm. The next processing step won’t start until a new clock pulse arrives. The pulse interval is needed for all digital signal transitions to complete and the circuit components to settle into their new states. Typical clock rates for modern CPUs are in the GHz range.
Building CPUs with Logic Gates
Computations in a CPU are done with clever combinations of logic gates. We can get a taste of this by looking at simple arithmetic operations such as adding two binary numbers. For example, adding 011 (the number three) to 001 (the number one) must result in 100 (the number four). A quick Web search will turn up more information on binary numbers.
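The example can be checked with Python's binary literals (a quick sanity check, separate from the circuit discussion):

```python
# 011 is the number three, 001 is the number one; their sum is 100, four.
a = 0b011
b = 0b001
total = a + b
print(format(total, "03b"))  # prints 100
assert total == 0b100 == 4
```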
An adder circuit provides the ability to add binary numbers, which in turn can be applied repeatedly to perform multiplication of binary numbers. But first, we will build a half adder. A half adder takes two input bits A and B and outputs a sum bit S and a carry bit C (the carry to the next bit position).
Here we used an XOR gate to produce the sum bit and an AND gate to compute the carry bit. Table 2 lists the input and output values for the half adder. The half adder shows how logic gates are used to add two bits.
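The half adder just described can be sketched in Python, with the `^` operator playing the XOR gate and `&` the AND gate:

```python
def half_adder(a, b):
    """Add two bits: the XOR gate gives the sum, the AND gate the carry."""
    s = a ^ b   # sum bit (XOR gate)
    c = a & b   # carry bit (AND gate)
    return s, c

# All four input combinations, as in Table 2.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", half_adder(a, b))
```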
The half adder can be used to build a more capable full adder, which takes three input bits A, B, and Cin (the carry-in from the previous bit position) and produces a sum bit Sout and a carry bit Cout. A full adder consists of two half adders and an extra OR gate.
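Continuing the sketch, a full adder wires two half adders and an OR gate together (the `half_adder` function repeats the earlier sketch so this block stands alone):

```python
def half_adder(a, b):
    return a ^ b, a & b   # (sum, carry)

def full_adder(a, b, c_in):
    """Two half adders plus an OR gate form a full adder."""
    s1, c1 = half_adder(a, b)         # first half adder: A plus B
    s_out, c2 = half_adder(s1, c_in)  # second half adder: add the carry-in
    c_out = c1 | c2                   # the extra OR gate merges the two carries
    return s_out, c_out

print(full_adder(1, 1, 1))  # -> (1, 1): 1+1+1 = 3 = binary 11
```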
The Bottom-Up Approach
Building simple components first and then combining them into more complicated ones is a solution method that often works well. This leads to the CT principle: Consider the bottom-up approach for problem solving. Start by solving the smallest and most basic problems. Then combine those solutions to solve bigger problems. The toy LEGO™ is a vivid illustration of the bottom-up approach—using tiny blocks to build anything you can imagine. Here, we first built a half adder. Then we combined two half adders to build a full adder. Finally, we can combine a number of full adders to solve the more complicated problem of adding two n-bit numbers.
NAND Is A Universal Gate
A logic gate is universal if it alone can be used to build all other logic gates. The NAND and NOR gates are both universal. Because NAND and NOR gates are economical and easier to fabricate, they are the basic gates used in all IC (integrated circuit) digital logic families. For many reasons, the NAND gate is used in most CPU designs.
To give you an idea, here is how NAND can be used to build an OR gate.
The input A is fed as the two inputs to the first NAND gate and similarly B to the second NAND gate. The third NAND gate produces the correct output for A OR B. It may be interesting for the reader to verify that this implementation works for all possible values of A and B.
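This three-gate construction is easy to verify exhaustively, as in this Python sketch:

```python
def nand(a, b):
    """The NAND gate on bits 0/1: outputs 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

def or_from_nand(a, b):
    """OR built from three NAND gates, as described above."""
    not_a = nand(a, a)          # first gate: A on both inputs gives NOT A
    not_b = nand(b, b)          # second gate: NOT B
    return nand(not_a, not_b)   # third gate: (NOT A) NAND (NOT B) = A OR B

# Verify against Python's built-in OR for all possible inputs.
for a in (0, 1):
    for b in (0, 1):
        assert or_from_nand(a, b) == (a | b)
print("OR from NAND verified for all inputs")
```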
The takeaway here is: modern CPUs use NAND to implement the other logic gates, which in turn can build all other functions.
One Good Turn Deserves Another
Computer hardware design is a complicated engineering activity. Software tools are used to help automate various phases of the design, testing, synthesis, and fabrication of integrated circuits. Better hardware leads to faster software which leads to even better hardware, a wonderful virtuous cycle indeed.
Take advantage of new knowledge, new tools, new circumstances and apply them everywhere possible. Don’t miss the chance to create a positive feedback loop!
A more general concept is iteration—repeated application of the same procedure. The power of iteration cannot be overstated. Let’s revisit our favorite example.
Iteration has led to the invention of the polymerase chain reaction (PCR), a technique in molecular biology to generate thousands to millions of copies of a particular DNA sequence.
Developed by Dr. Kary Mullis in 1983, PCR is now indispensable in medical and biological research and applications, including DNA testing and genetic fingerprinting. The impact of automated PCR is huge and far-reaching. Mullis was awarded the 1993 Nobel Prize in Chemistry for his part in the invention of PCR.
In recounting his invention, Kary Mullis wrote in his book Dancing Naked in the Mind Field:
I knew computer programming, and from that I understood the power of a reiterative mathematical procedure. That’s where you apply some process to a starting number to obtain a new number, and then you apply the same process to the new number, and so on. If the process is multiplication by two, then the result of many cycles is an exponential increase in the value of the original number: 2 becomes 4 becomes 8 becomes 16 becomes 32 and so on. If I could arrange for a short synthetic piece of DNA to find a particular sequence and then start a process whereby that sequence would reproduce itself over and over, then I would be close to solving my problem.
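The reiterative procedure Mullis describes takes only a few lines of code (a sketch of the doubling idea, not of PCR itself):

```python
# Repeatedly apply the same process (doubling) to the previous result.
x = 2
for _ in range(4):
    x = x * 2
    print(x)   # 4, 8, 16, 32: exponential growth from iteration
# After n cycles, the starting value has been multiplied by 2**n.
```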
At the time of the invention, the “polymerase” and other related DNA duplication techniques were already known. It was the “chain reaction” part that was missing. Well, we have Dr. Mullis and his computational thinking to thank for the invention. And what a significant invention! The New York Times described it as “highly original and significant, virtually dividing biology into the two epochs of before P.C.R. and after P.C.R.”
Duplicating DNA sequences is key to modern use of DNA technologies. For instance, COVID-19 virus tests depend on PCR.
Looking under the hood of a modern CPU, we found tens of billions of transistors forming logic gates (mostly NAND) that perform computations at lightning speed. All this is made possible by using computer software to design next-generation CPUs—a virtuous cycle that is still ongoing.
Now, imagine a computer running to perform any task such as playing a video game or navigating a map. It all comes down to a dizzying blur of numerous NAND operations in the CPU! And the task is done.
Isn’t this a concrete example of the power of the bottom-up problem solving method—combining NAND gates to do almost anything? Thus, in a real sense, NAND Rules The World. How mysteriously wonderful.
Bonus: N-Bit Adder
For motivated readers, here is a bonus: how full adders are combined to add two n-bit binary numbers (unsigned or two's complement). Basically, we chain n full adders, each taking its Cin from the Cout of the adder to its right. Here we see how this works for input X (Xn-1 ... X0) and Y (Yn-1 ... Y0), producing a result S (Sn Sn-1 ... S0), where the top bit Sn is the carry-out of the leftmost adder.
Because the carry bit may need to propagate from the rightmost adder all the way to the leftmost, a delay proportional to n is needed until the last output bit settles down. Such an adder is known as a ripple adder. More complicated adders can reduce the delay and make binary addition faster, especially if addition is performed as a subprocess of multiplication.
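A ripple adder is just a loop over full adders. Here is a Python sketch, storing each number as a list of bits with the least significant bit first (the bit order and function names are our own):

```python
def half_adder(a, b):
    return a ^ b, a & b   # (sum, carry)

def full_adder(a, b, c_in):
    s1, c1 = half_adder(a, b)
    s, c2 = half_adder(s1, c_in)
    return s, c1 | c2

def ripple_add(x_bits, y_bits):
    """Add two equal-length bit lists (least significant bit first).
    The carry ripples from the rightmost adder to the leftmost."""
    carry = 0
    out = []
    for x, y in zip(x_bits, y_bits):
        s, carry = full_adder(x, y, carry)
        out.append(s)
    out.append(carry)  # the final carry-out becomes the top sum bit
    return out

# 011 + 001 = 100 (bits listed least significant first)
print(ripple_add([1, 1, 0], [1, 0, 0]))  # -> [0, 0, 1, 0]
```

Note how the sequential carry dependency in the loop mirrors the propagation delay discussed above.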
A Ph.D. and faculty member from MIT, Paul Wang (王 士 弘) became a Computer Science professor (Kent State University) in 1981, and served as a Director at the Institute for Computational Mathematics at Kent from 1986 to 2011. He retired in 2012 and is now professor emeritus at Kent State University.
Paul is a leading expert in Symbolic and Algebraic Computation (SAC). He has conducted over forty research projects funded by government and industry, authored many well-regarded Computer Science textbooks, most also translated into foreign languages, and released many software tools. He received the Ohio Governor's Award for University Faculty Entrepreneurship (2001). Paul supervised 14 Ph.D. and over 26 Master-degree students.
His Ph.D. dissertation, advised by Joel Moses, was on Evaluation of Definite Integrals by Symbolic Manipulation. Paul's main research interests include Symbolic and Algebraic Computation (SAC), polynomial factoring and GCD algorithms, automatic code generation, Internet Accessible Mathematical Computation (IAMC), enabling technologies for and classroom delivery of Web-based Mathematics Education (WME), as well as parallel and distributed SAC. Paul has made significant contributions to many parts of the MAXIMA computer algebra system. See these online demos for an experience with MAXIMA.
Paul continues to work jointly with others nationally and internationally in computer science teaching and research, write textbooks, IT consult as sofpower.com, and manage his Web development company Webtong Inc.