Great moments in microprocessor history
The history of the micro from the vacuum tube to today's dual-core multithreaded madness
The evolution of the modern microprocessor is one of many surprising twists and turns. Who invented the first micro? Who had the first 32-bit single-chip design? You might be surprised at the answers. This article traces the defining decisions that brought the contemporary microprocessor to its present-day configuration.
At the dawn of the 19th century, Benjamin Franklin's discoveries about the principles of electricity were still fairly new, and practical applications of them were few -- the most notable being the lightning rod, which was invented independently by two different people in two different places. Independent contemporaneous (and not so contemporaneous) discovery would remain a recurring theme in electronics.
So it was with the vacuum tube -- invented by Fleming, who was investigating the effect named for and discovered by Edison; it was refined four years later by de Forest (but is now rumored to have been invented 20 years earlier by Tesla). So it was with the transistor: Shockley, Brattain, and Bardeen were awarded the Nobel Prize for turning de Forest's triode into a solid-state device -- but they were not awarded a patent, because of 20-year-prior art by Lilienfeld. So it was with the integrated circuit (or IC), for which Jack Kilby was awarded a Nobel Prize but which was contemporaneously developed by Robert Noyce of Fairchild Semiconductor (who got the patent). And so it was, indeed, with the microprocessor.
Before the flood: The 1960s
A scant few years after the first laboratory integrated circuits, Fairchild Semiconductor introduced the first commercially available integrated circuit (though Texas Instruments brought one to market at almost the same time).
Already at the start of the decade, the processes that would last into the present day were in place: commercial ICs made with the planar process were available from both Fairchild Semiconductor and Texas Instruments by 1961, and TTL (transistor-transistor logic) circuits appeared commercially in 1962. By 1968, CMOS (complementary metal oxide semiconductor) had hit the market. There is no doubt that technology, design, and process were evolving rapidly.
Noting this trend, Fairchild Semiconductor's director of Research & Development, Gordon Moore, observed in 1965 that the density of elements in ICs was doubling annually, and he predicted that the trend would continue for the next ten years. With certain amendments, this prediction came to be known as Moore's Law.
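As a back-of-the-envelope illustration of what annual doubling implies, the following small Python loop compounds a purely hypothetical 1965 starting count of 64 elements over the ten years Moore predicted; the starting figure is invented for illustration only.

elements = 64                    # hypothetical starting density in 1965
for year in range(1965, 1976):
    print(year, elements)
    elements *= 2                # Moore's 1965 observation: density doubles each year

By 1975 the made-up count has grown roughly a thousandfold, which is the scale of change the industry actually lived through.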
The first ICs contained just a few transistors per chip; by the dawn of the 1970s, production techniques allowed for thousands of transistors per chip. It was only a matter of time before someone would use this capacity to put an entire computer on a chip, and several someones, indeed, did just that.
Development explosion: The 1970s
The idea of a computer on a single chip had been described in the literature as far back as 1952 (see Resources), and more such articles began to appear as the 1970s dawned. Finally, process had caught up with thinking, and the computer on a chip became possible. The air was electric with the possibility.
Once the feat had been accomplished, the rest of the decade saw a proliferation of companies old and new getting into the semiconductor business, as well as the first personal computers, the first arcade games, and even the first home video game systems -- thus spreading consumer contact with electronics and paving the way for continued rapid growth in the 1980s.
At the beginning of the 1970s, microprocessors had not yet been introduced. By the end of the decade, a saturated market led to price wars, and many processors were already 16-bit.
The first three
At the time of this writing, three groups lay claim to having been the first to put a computer on a chip: the Central Air Data Computer (CADC), the Intel® 4004, and the Texas Instruments TMS 1000.
The CADC system was completed for the Navy's F-14 Tomcat fighter jets in 1970. It is often discounted because it was a chipset rather than a single-chip CPU. The TI TMS 1000 was first to market in calculator form, but not in stand-alone form -- that distinction goes to the Intel 4004, which is just one of the reasons it is often cited as the first (incidentally, it too was just one chip in a set of four).
In truth, it does not matter who was first. As with the lightning rod, the light bulb, radio -- and so many other innovations before and after -- it suffices to say it was in the aether, it was inevitable, its time had come.
Where are they now?
The CADC spent nearly three decades in top-secret, cold-war-era mothballs until finally being declassified in 1998. Thus, even if it was the first, it has remained under most people's radar to this day, and it had no chance to influence other early microprocessor designs.
The Intel 4004 had a short and mostly uneventful history, to be superseded by the 8008 and other early Intel chips (see below).
In 1973, Texas Instruments' Gary Boone was awarded U.S. Patent No. 3,757,306 for the single-chip microprocessor architecture. The chip was finally marketed in stand-alone form in 1974, for the low, low (bulk) price of US$2 apiece. In 1978, a special version of the TI TMS 1000 was the brains of the educational "Speak and Spell" toy which E.T. jerry-rigged to phone home.
Early Intel: 4004, 8008, and 8080
Intel released its single-chip, 4-bit, all-purpose processor, the Intel 4004, in November 1971. It had a clock speed of 108KHz and 2,300 transistors, with ports for ROM, RAM, and I/O. Originally designed for use in a calculator, the chip could only be marketed as a stand-alone processor after Intel renegotiated its contract. Its ISA had been inspired by the DEC PDP-8.
The Intel 8008 was introduced in April 1972, and didn't make much of a splash, being more or less an 8-bit 4004. Its primary claim to fame is that its ISA -- provided by Computer Terminal Corporation (CTC), who had commissioned the chip -- was to form the basis for the 8080, as well as for the later 8086 (and hence the x86) architecture. Lesser-known Intels from this time include the nearly forgotten 4040, which added logical and compare instructions to the 4004, and the ill-fated 32-bit Intel 432.
Intel put itself back on the map with the 8080, which used the same instruction set as the earlier 8008 and is generally considered to be the first truly usable microprocessor. The 8080 had a 16-bit address bus and an 8-bit data bus, a 16-bit stack pointer to memory (which replaced the 8-level internal stack of the 8008), and a 16-bit program counter. It could also address 256 I/O ports, so I/O devices could be connected without taking away from or interfering with the addressing space, and it possessed a signal pin that allowed the stack to occupy a separate bank of memory. These features are what made this a truly modern microprocessor. It was used in the Altair 8800, one of the first renowned personal computers (other claimants to that title include the 1963 MIT Lincoln Labs' 12-bit LINC (Laboratory INstrument Computer), built with DEC components, and DEC's own 1965 PDP-8).
Although the 4004 had been the company's first, it was really the 8080 that clinched its future -- this was immediately apparent, and in fact in 1974 the company changed its phone number so that the last four digits would be 8080.
Where is Intel now?
Last time we checked, Intel was still around.
RCA 1802
In 1974, RCA released the 1802, an 8-bit processor with an architecture quite different from other 8-bit processors. It had a register file of 16 registers of 16 bits each; using the SEP instruction, you could select any of the registers to be the program counter, and using the SEX instruction, you could choose any of the registers to be the index register. It did not have standard subroutine CALL and RET instructions, though they could be emulated.
A few commonly used subroutines could be called quickly by keeping their addresses in some of the 16 registers. Before a subroutine returned, it jumped to the location immediately preceding its entry point, so that after the return SEP handed control back to the caller, the subroutine's register would already be pointing at the entry point, ready for the next call. An interesting variation was to have two or more subroutines in a ring so that they were called in round-robin order.
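The mechanism is easier to see in a small simulation. The following Python sketch is a hypothetical toy model, not real 1802 code: the register numbers, addresses, and routine contents are all invented. It shows how selecting a different register as the program counter acts as a call, and why parking the return SEP just before the subroutine's entry point leaves the subroutine's register pointing at the entry for the next call.

MAIN_PC, SUB_PC = 0, 4              # register numbers used as program counters
SUB_ENTRY = 10                      # illustrative address of the subroutine's entry point

class Machine:
    def __init__(self, program):
        self.mem = program          # flat list of callables standing in for opcodes
        self.r = [0] * 16           # sixteen general-purpose 16-bit registers
        self.p = MAIN_PC            # which register is currently the program counter

    def sep(self, n):               # SEP n: register n becomes the program counter
        self.p = n

    def br(self, addr):             # unconditional branch within the current program counter
        self.r[self.p] = addr

    def run(self, steps):
        for _ in range(steps):
            op = self.mem[self.r[self.p]]
            self.r[self.p] += 1     # advance whichever register is acting as the PC
            op(self)

program = [None] * 16
program[0] = lambda m: m.sep(SUB_PC)                  # the "call": make R4 the PC
program[1] = lambda m: print("back in the caller")
program[SUB_ENTRY - 1] = lambda m: m.sep(MAIN_PC)     # return SEP, placed *before* the entry
program[SUB_ENTRY] = lambda m: print("inside the subroutine")
program[SUB_ENTRY + 1] = lambda m: m.br(SUB_ENTRY - 1)    # jump back to the return SEP

m = Machine(program)
m.r[SUB_PC] = SUB_ENTRY             # R4 initially points at the subroutine's entry point
m.run(5)
print(m.r[SUB_PC] == SUB_ENTRY)     # True: R4 is again pointing at the entry point

After the return, the subroutine's register has marched through the routine's body, bounced back to the return SEP, and ended up just past it -- exactly at the entry point -- so the next SEP to that register calls the routine again with no setup.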
The RCA 1802 is considered one of the first RISC chips, although others (notably Seymour Cray -- see the sidebar, The evolution of RISC) had used the concepts already.
Where is it now?
Sadly, the RCA chip was a spectacular market failure due to its slow clock speed. But it could be fabricated to be radiation resistant, so it was used on the Voyager 1, Viking, and Galileo space probes (where rapidly executed commands aren't a necessity).
IBM 801
In 1975, IBM® began one of the earliest efforts to build a microprocessor based on RISC design principles (although it wasn't called RISC yet -- see the sidebar, The evolution of RISC). The project was initially a research effort led by John Cocke (the father of RISC), and many say that the resulting IBM 801 was named after the address of the building where the chip was designed -- but we suspect that the IBM systems already numbered 601 and 701 had at least something to do with it also.
Where is the 801 now?
The 801 chip family never saw mainstream use, and was primarily used in other IBM hardware. Even though the 801 never went far, it did inspire further work which would converge, fifteen years later, to produce the Power Architecture™ family.
The evolution of RISC
RISC stands for "Reduced Instruction Set Computing" or, in a more humorous vein, for "Relegate the Important Stuff to the Compiler"; such designs are also known as load-store architectures.
In the 1970s, research at IBM produced the surprising result that some complex instructions were actually slower than a sequence of simpler operations doing the same thing. A famous later example is the VAX's INDEX instruction, which ran slower than a loop implementing the same function.
RISC started being adopted in a big way during the 1980s, but many projects embodied its design ethic even before that. One notable example is Seymour Cray's 1964 CDC 6600 supercomputer, which sported a load-store architecture with just two addressing modes and plenty of parallel units for arithmetic and logic tasks (more units are needed when instructions are handled concurrently rather than strictly one after another). Most RISC machines possess only about five simple addressing modes -- the fewer addressing modes, the more reduced the instruction set (the IBM System/360 had only three) -- and pipelined CPUs are easier to design when the addressing modes are simple.
Moto 6800
In 1975, Motorola introduced the 6800, a chip with 78 instructions and probably the first microprocessor with an index register.
Of particular significance here is the index register, which is a processor register (a small amount of fast computer memory used to speed the execution of programs by providing quick access to commonly used values). The index register can modify operand addresses during the run of a program, typically while doing vector or array operations. Before the invention of index registers, and without indirect addressing, array operations had to be performed either by linearly repeating program code for each array element or by using self-modifying code. Both of these methods harbor significant disadvantages for program flexibility and maintenance and, more importantly, they waste scarce computer memory.
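To make the difference concrete, here is a purely illustrative Python sketch of a toy machine -- it is not 6800 code, and the base address and array contents are invented. A single copy of the loop body walks the whole array because the effective address is computed as base plus index register, where the older techniques needed one copy of the code per element or run-time patching of the operand address.

memory = [0] * 32
memory[8:18] = range(10)         # a ten-element "array" stored at base address 8

def load_indexed(base, x):
    # one instruction's worth of work: fetch memory[base + X]
    return memory[base + x]

total = 0
x = 0                            # the index register
while x < 10:                    # a single loop body serves every element...
    total += load_indexed(8, x)
    x += 1                       # ...because stepping X changes the effective address
print(total)                     # 45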
Where is the 6800 now?
Many Motorola stand-alone processors and microcontrollers trace their lineage to the 6800, including the popular and powerful 6809 of 1979.
MOS 6502
Soon after Motorola released the 6800, the core of its design team quit en masse and joined MOS Technology. They quickly developed the MOS 6501, a completely new design that was nevertheless pin-compatible with the 6800. Motorola sued, and MOS agreed to halt production. The company then released the MOS 6502, which differed from the 6501 only in its pin-out arrangement.
The MOS 6502 was released in September 1975, and it sold for US$25 per unit. At the time, the Intel 8080 and the Motorola 6800 were selling for US$179. Many people thought this must be some sort of scam. Eventually, Intel and Motorola dropped their prices to US$79. This had the effect of legitimizing the MOS 6502, and they began selling by the hundreds. The 6502 was a staple in the Apple® II and various Commodore and Atari computers.
Where is the MOS 6502 now?
Many of the original MOS 6502s still have loving homes today, in the hands of collectors (or even the original owners) of machines like the Atari 2600 video game console, the Apple II family of computers, the first Nintendo Entertainment System, and the Commodore 64 -- all of which used the 6502 or close derivatives of it. MOS 6502 processors are still being manufactured today for use in embedded systems.
AMD clones the 8080
Advanced Micro Devices (AMD) was founded in 1969 by Jerry Sanders. Like so many of the people who were influential in the early days of the microprocessor (including the founders of Intel), Sanders came from Fairchild Semiconductor. AMD's business was not the creation of new products; it concentrated on making higher quality versions of existing products under license. For example, all of its products met MILSPEC requirements no matter what the end market was. In 1975, it began selling reverse-engineered clones of the Intel 8080 processor.
Where is AMD now?
In the 1980s, first licensing agreements -- and then legal disputes -- with Intel eventually led to court validation of clean-room reverse engineering and opened the floodgates to the many clone makers of the 1990s.
Fairchild F8
The 8-bit Fairchild F8 (also known as the 3850) microcontroller was Fairchild's first processor. It had no stack pointer, no program counter, and no address bus on the CPU chip itself; it did have 64 registers (the first 8 of which could be accessed directly) and 64 bytes of "scratchpad" RAM. The first F8s were multichip designs (usually 2-chip, with the second chip being ROM). The F8 was released in a single-chip implementation (the Mostek 3870) in 1977.
Where is it now?
The F8 was used in the company's Channel F Video Entertainment System in 1976. By the end of the decade, Fairchild played mostly in niche markets, including the "hardened" IC market for military and space applications, and in Cray supercomputers. Fairchild was acquired by National Semiconductor in the 1980s, and spun off again as an independent company in 1997.
16 bits, two contenders
The first multi-chip 16-bit microprocessor was introduced either by Digital Equipment Corporation, in its LSI-11 OEM board set and its packaged PDP-11/03 minicomputer, or by Fairchild Semiconductor, with its MicroFlame 9440; both were released in 1975. The first single-chip 16-bit microprocessor was the 1976 TI TMS 9900, which was also compatible with the TI 990 line of minicomputers and was used in the TM 990 line of OEM microcomputer boards.
Where are they now?
The DEC chipset later gave way to the 32-bit DEC VAX product line, which was replaced by the Alpha family, which was discontinued in 2004.
The aptly named Fairchild MicroFlame ran hot and was never chosen by a major computer manufacturer, so it faded out of existence.
The TI TMS 9900 had a strong beginning, but was packaged in a large (for the time) ceramic 64-pin package which pushed the cost out of range compared with the much cheaper 8-bit Intel 8080 and 8085. In March 1982, TI decided to start ramping down TMS 9900 production, and go into the DSP business instead. TI is still in the chip business today, and in 2004 it came out with a nifty TV tuner chip for cell phones.
Zilog Z-80
Probably the most popular microprocessor of all time, the Zilog Z-80 was designed by Federico Faggin after he left Intel, and it was released in July 1976. Faggin had designed or led the design teams for all of Intel's early processors: the 4004, the 8008, and, particularly, the revolutionary 8080.
Silicon Valley lineage
It is interesting to note that Federico Faggin defected from Intel to form his own company; meanwhile Intel had been founded by defectors from Fairchild Semiconductor, which had itself been founded by defectors from Shockley Semiconductor Laboratory, which had been founded by William Shockley, who had defected from AT&T Bell Labs -- where he had been one of the co-inventors of the first transistor.
As an aside, Federico Faggin had also been employed at Fairchild Semiconductor before leaving to join Intel.
This 8-bit microprocessor was binary compatible with the 8080 and, surprisingly, is still in widespread use today in many embedded applications. Faggin intended it to be an improved version of the 8080, and according to popular opinion, it was. It could execute all of the 8080 op codes as well as 80 more instructions (including 1-, 4-, 8-, and 16-bit operations, block I/O, block move, and so on). Because it contained two sets of switchable data registers, it supported fast operating-system and interrupt context switches.
The thing that really made it popular, though, was its memory interface. Because the CPU generated its own RAM refresh signals, it lowered system costs and made it easier to design a system around. When coupled with its 8080 compatibility and its support for CP/M, the first standardized microprocessor operating system, the low cost and enhanced capabilities made this the chip of choice for many designers (including Tandy; it was the brains of the TRS-80 Model I).
The Z-80 featured many undocumented instructions. In some cases they were a by-product of early designs, which did not trap invalid op codes but instead tried to interpret them as best they could; in other cases, chip area near the edge was used for added instructions, but the fabrication methods of the day made their failure rate high. Instructions that often failed were simply left undocumented so that chip yield could be kept up; later fabrication made them more reliable.
Where are they now?
In 1979, Zilog announced the 16-bit Z8000. Sporting another great design, with a stack pointer and both user and supervisor modes, this chip never really took off. The main reason: Zilog was a small company that struggled to provide support and lacked the resources to outlast the competition.
However, Zilog is not only still making microcontrollers, it is still making Z-80 microcontrollers. In all, more than one billion Z-80s have been made over the years -- a proud testament to Faggin's superb design.
Faggin is currently Chairman of the Board and Co-Founder of Synaptics, a "user interface solutions" company in Silicon Valley.
Intel 8085 and 8086
In 1976, Intel updated the 8080 design with the 8085, adding two instructions to enable and disable three added interrupt pins (and the serial I/O pins). The company also simplified the hardware so that the chip required only a +5V power supply, and it added the clock-generator and bus-controller circuits to the chip. The 8085 was binary compatible with the 8080 but required less supporting hardware, allowing simpler and less expensive microcomputer systems to be built. It was the first Intel processor produced without input from Faggin.
In 1978, Intel introduced the 8086, a 16-bit processor that gave rise to the x86 architecture. It did not contain floating-point instructions; in 1980, the company released the 8087, its first math co-processor.
Next came the 8088, the processor for the first IBM PC. Even though IBM engineers at the time wanted to use the Motorola 68000 in the PC, the company already had the rights to produce the 8086 line (by trading rights to Intel for its bubble memory), it could use modified 8085-type support components, and 68000-style components were much scarcer.
Moto 68000
In 1979, Motorola introduced the 68000. It had 32-bit internal registers and a 32-bit address space, but its external data bus was still 16 bits wide to keep hardware costs down. Originally designed for embedded applications, the chip's DEC PDP-11 and VAX-inspired design meant that it eventually found its way into the Apple Macintosh, Amiga, Atari, and even the original Sun Microsystems® and Silicon Graphics computers.
Where is the 68000 now?
As the 68000 was reaching the end of its life, Motorola entered into the Apple-IBM-Motorola "AIM" alliance which would eventually produce the first PowerPC® chips. Motorola ceased production of the 68000 in 2000.
The dawning of the age of RISC: The 1980s
Advances in process ushered in the "more is more" era of VLSI, leading to true 32-bit architectures. At the same time, the "less is more" RISC philosophy allowed for greater performance. When combined, VLSI and RISC produced chips with awesome capabilities, giving rise to the UNIX® workstation market.
The decade opened with intriguing contemporaneous, independent projects at Berkeley and Stanford -- RISC and MIPS. Even with the new RISC families, an industry shakeout commonly referred to as "the microprocessor wars" meant that the 1980s closed with fewer major microprocessor manufacturers than it opened with.
By the end of the decade, prices had dropped substantially, so that record numbers of households and schools had access to more computers than ever before.
RISC and MIPS and POWER
RISC, too, started in many places at once, and was antedated by some of the examples already cited (see the sidebar, The evolution of RISC).
Berkeley RISC
In 1980, the University of California at Berkeley started something it called the RISC Project (in fact, the professors leading the project, David Patterson and Carlo H. Sequin, are credited with coining the term "RISC").
The project emphasized pipelining and the use of register windows; by 1982, the team had delivered its first processor, the RISC-I. With only 44K transistors (compared with about 100K in most contemporary processors) and only 32 instructions, it outperformed any other single-chip design in existence.
MIPS
Meanwhile, in 1981, just across the San Francisco Bay from Berkeley, John Hennessy and a team at Stanford University started building what would become the first MIPS processor. They wanted to use deep instruction pipelines -- a difficult-to-implement practice -- to increase performance. A major obstacle was that pipelining required hard-to-design interlocks to ensure that instructions needing multiple clock cycles would stall the pipeline until their results were ready. The MIPS design settled on a relatively simple demand to eliminate interlocking: every instruction must take only one clock cycle. This was a potentially useful alteration of the RISC philosophy.
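A rough sketch can show why the one-cycle rule matters; the Python model below is purely illustrative (the instruction names and latencies are invented, and this is not how the MIPS hardware was specified). When an instruction takes several cycles to produce a result that the next instruction needs, a naive pipeline must interlock and stall; when every instruction completes in one cycle, the stall logic disappears.

def cycles_to_issue(instructions):
    # instructions: list of (name, result_latency_in_cycles, depends_on_previous)
    cycle = 0
    result_ready_at = 0              # cycle at which the previous result becomes available
    for name, latency, depends in instructions:
        if depends and cycle < result_ready_at:
            cycle = result_ready_at  # the interlock stalls the pipeline here
        cycle += 1                   # otherwise, issue one instruction per cycle
        result_ready_at = cycle + latency - 1
    return cycle

# A 3-cycle multiply followed by a dependent add forces a stall:
print(cycles_to_issue([("mul", 3, False), ("add", 1, True)]))   # prints 4
# Under the one-instruction-per-cycle rule, the same pair needs no interlock:
print(cycles_to_issue([("mul", 1, False), ("add", 1, True)]))   # prints 2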
POWER
Contemporaneously and independently, IBM continued to work on RISC as well. The 801 project turned into Project America and Project Cheetah. Project Cheetah would become the first workstation to use a RISC chip: the 1986 PC/RT, which used the 801-inspired ROMP chip.
Where are they now?
By 1983, the RISC Project at Berkeley had produced the RISC-II, which contained 39 instructions and ran more than three times as fast as the RISC-I. Sun Microsystems' SPARC (Scalable Processor ARChitecture) design is heavily influenced by the minimalist RISC Project designs of the RISC-I and -II.
Professors Patterson and Sequin are both still at Berkeley.
MIPS was used in Silicon Graphics workstations for years. Although SGI's newest offerings now use Intel processors, MIPS is very popular in embedded applications.
Professor Hennessy left Stanford in 1984 to co-found MIPS Computer Systems. The company's commercial 32-bit designs implemented the interlocks in hardware. MIPS was purchased by Silicon Graphics, Inc. in 1992 and was spun off as MIPS Technologies, Inc. in 1998. John Hennessy is currently Stanford University's tenth President.
IBM's Cheetah project, which developed into the PC/RT's ROMP, was a bit of a flop, but Project America was in prototype by 1985 and would, in 1990, become the RISC System/6000. Its processor would later be named the POWER1.
RISC was quickly adopted in the industry, and it remains the most popular architecture for processors today. During the 1980s, several additional RISC families were launched. Aside from those already mentioned above, they included:
CRISP (C Reduced Instruction Set Processor) from AT&T Bell Labs.
The Motorola 88000 family.
Digital Equipment Corporation's Alpha (the world's first single-chip 64-bit microprocessor).
HP Precision Architecture (HP PA-RISC).
32-bitness
The early 1980s also saw the first 32-bit chips arrive in droves.
BELLMAC-32A
AT&T's Computer Systems division opened its doors in 1980, and by 1981 it had introduced the world's first single-chip 32-bit microprocessor, AT&T Bell Labs' BELLMAC-32A (renamed the WE 32000 after the break-up in 1984). There were two subsequent generations, the WE 32100 and WE 32200, which were used in:
the 3B5 and 3B15 minicomputers
the 3B2, the world's first desktop supermicrocomputer
the "Companion", the world's first 32-bit laptop computer
"Alexander", the world's first book-sized supermicrocomputer
All ran the original Bell Labs UNIX.
Motorola 68010 (and friends)
Motorola had already introduced the MC 68000, which had a 32-bit internal architecture but a 16-bit external pinout. It introduced its follow-on 32-bit microprocessors -- the MC 68010, 68012, and the fully 32-bit 68020 -- by 1985 or thereabouts, and it began to work on a 32-bit family of RISC processors, the 88000.
NS 32032
In 1983, National Semiconductor introduced the NS 16032 (a 32-bit internal design with a 16-bit external pinout), the fully 32-bit NS 32032, and a line of 32-bit industrial OEM microcomputers. Sequent also introduced the first symmetric multiprocessor (SMP) server-class computer using the NS 32032.
Intel entered the 32-bit world in 1981, the same year as the AT&T BELLMAC chips, with the ill-fated 432. It was a three-chip design rather than a single-chip implementation, and it didn't go anywhere. In 1986, the 32-bit i386 became Intel's first single-chip 32-bit offering, followed by the 486 in 1989.
Where are they now?
AT&T closed its Computer Systems division in December 1995. The company shifted to MIPS and Intel chips.
Sequent's SMP machine faded away, and that company also switched to Intel microprocessors.
The Motorola 88000 design wasn't commercially available until 1990, and was cancelled soon after in favor of Motorola's deal with IBM and Apple to create the first PowerPC.
ARM is born
In 1983, Acorn Computers Ltd. was looking for a processor. Some say that Acorn was refused access to Intel's upcoming 80286; others say that Acorn rejected both the Intel 286 and the Motorola MC 68000 as not powerful enough. In any case, the company decided to develop its own processor, the Acorn RISC Machine, or ARM. The company had development samples, known as the ARM I, by 1985; production models (ARM II) were ready by the following year. The original ARM chip contained only 30,000 transistors.
Where are they now?
Acorn Computers was taken over by Olivetti in 1985, and after a few more shakeups, was purchased by Broadcom in 2000.
However, the ARM architecture today accounts for approximately 75% of all 32-bit embedded processors. The most successful implementation has been the ARM7TDMI, with hundreds of millions sold for use in cellular phones. The StrongARM, a joint DEC/ARM design, is the basis for the Intel XScale processor.
A new hope: The 1990s
The 1990s dawned just a few months after most of the Communist governments of Eastern and Central Europe had rolled over and played dead; by 1991, the Cold War was officially at an end. Those high-end UNIX workstation vendors who were left standing after the "microprocessor wars" scrambled to find new, non-military markets for their wares. Luckily, the commercialization and broad adoption of the Internet in the 1990s neatly stepped in to fill the gap. For at the beginning of that decade, you couldn't run an Internet server or even properly connect to the Internet on anything but UNIX. A side effect of this was that a large number of new people were introduced to the open-standards Free Software that ran the Internet.
The popularization of the Internet led to higher desktop sales as well, fueling growth in that sector. Throughout the 1990s, desktop chipmakers participated in a mad speed race to keep up with "Moore's Law" -- often neglecting other areas of their chips' architecture to pursue elusive clock rate milestones.
32-bitness, so coveted in the 1980s, gave way to 64-bitness. The first high-end UNIX processors blazed the 64-bit trail at the very start of the 1990s, and by the time of this writing, most desktop systems had joined them. The POWER™ architecture was introduced in 1990, and the PowerPC ISA that grew from it was defined as a 64-bit architecture from the beginning.
Power Architecture
IBM introduced the POWER architecture -- a multichip RISC design -- in early 1990. Within a few years, the first single-chip PowerPC derivatives (the product of the Apple-IBM-Motorola AIM alliance) were available as a high-volume alternative to the CISC designs that dominated the desktop.
Where is Power Architecture technology now?
Power Architecture technology is popular in all markets, from high-end UNIX eServer™ systems to embedded systems. On the desktop, it is best known as the processor behind the Apple G5. The cooperative climate of the original AIM alliance has since been expanded into an organization by the name of Power.org.
DEC Alpha
In 1992, DEC introduced the Alpha 21064 at a speed of 200MHz. The superscalar, superpipelined 64-bit design was pure RISC, and it outperformed the other chips of the day; DEC referred to it as the world's fastest processor. (When the Pentium was launched the next spring, it ran at only 66MHz.) The Alpha, too, was intended for UNIX servers and workstations as well as desktop variants.
The primary contribution of the Alpha to microprocessor history was not its architecture -- that was pure RISC. The Alpha's performance came from excellent implementation. Microchip design is normally dominated by automated logic-synthesis flows, but to cope with the extremely complex VAX architecture, Digital's designers had grown used to lavishing individually crafted, human attention on circuit design. When that same attention was applied to a simple, clean architecture like the Alpha, the combination yielded the highest possible performance.
Where is Alpha now?
Sadly, the very thing that gave the Alpha its edge -- hand-tuned circuits -- would prove to be its undoing. As DEC was going out of business, its chip division, Digital Semiconductor, was sold to Intel as part of a legal settlement. Intel used the StrongARM (a joint project of DEC and ARM) to replace its i860 and i960 lines of RISC processors.
The Clone Wars begin
In March 1991, Advanced Micro Devices (AMD) introduced its clone of Intel's i386DX, which ran at clock speeds of up to 40MHz. This set a precedent for AMD: its goal was not just cheaper chips that would run code intended for Intel-based systems, but chips that would also outperform the competition. AMD's later chips are RISC designs internally; they convert the x86 instructions into appropriate internal operations before execution.
Also in 1991, litigation between AMD and Intel was finally settled in favor of AMD, leading to a flood of clonemakers -- among them, Cyrix, NexGen, and others -- few of which would survive into the next decade.
In the desktop space, Moore's Law turned into a Sisyphean treadmill as makers chased elusive clock speed milestones.
Where are they now?
Well, of course, AMD is still standing. In fact, its latest designs are being cloned by Intel!
Cyrix was acquired by National Semiconductor in 1997, and sold to VIA in 1999. The acquisition turned VIA into a processor player, where it had mainly offered core logic chipsets before. The company today specializes in high-performance, low-power chips for the mobile market.
CISC
CISC is a retroactive term, coined and applied to processors after the fact in order to distinguish traditional CPUs from the new RISC designs. In 1993, Intel introduced the Pentium, a pipelined, in-order superscalar design. It was also backwards compatible with the older x86 architecture, making it almost a "hybrid" chip -- a blend of RISC and CISC design ideas. Later, the Pentium Pro added out-of-order execution and branch prediction logic, concepts more typically associated with RISC designs.
Where are we now? The 2000s
The 2000s have come along and it's too early yet to say what will have happened by decade's end. As Federico Faggin said, the exponential progression of Moore's law cannot continue forever. As the day nears when process will be measured in Angstroms instead of nanometers, researchers are furiously experimenting with layout, materials, concepts, and process. After all, today's microprocessors are based on the same architecture and processes that were first invented 30 years ago -- something has definitely got to give.
We are not at the end of the decade yet, but from where we sit at its mid-way point, the major players are few, and can easily be arranged on a pretty small scorecard:
In high-end UNIX, DEC has phased out Alpha, SGI uses Intel, and Sun is planning to outsource production of SPARC to Fujitsu (IBM continues to make its own chips). RISC is still king, but its MIPS and ARM variants are found mostly in embedded systems.
In 64-bit desktop computing, the DEC Alpha is being phased out, and HP has just ended its Itanium alliance with Intel. The AMD64 (and its clones) and the IBM PowerPC are the major players, while in the desktop arena as a whole, Intel, AMD, and VIA make x86-compatible processors along RISC lines.
As for 2005 and beyond, the second half of the decade is sure to bring as many surprises as the first. Maybe you have ideas as to what they might be! Take this month's chips challenge, and let us know your predictions for chips in 2005.
The history of microprocessors is a robust topic -- this article hasn't covered everything, and we apologize for any omissions. Please e-mail the Power Architecture editors with any corrections or additions to the information provided here.
Resources
Wikipedia offers a broad history of the microprocessor in its many architectures and incarnations and CPU Design.
Other Wiki entries germane to today's story include Fairchild Semiconductor and William Shockley, as well as RISC and CMOS.
Contemporaneous (and not so contemporaneous) independent discovery and invention plagues historians at every turn. Did you think Ben Franklin invented the lightning rod? He did, of course -- but he was beaten to production by Prokop Divis (whose lightning conductor was torn down by an angry Moravian mob convinced it had caused a drought). The light bulb was no more invented by Edison than radio was by Marconi or the mechanical calculator by Pascal (or even Schickard!).
Independent discovery is not the only thing that makes tracing these histories difficult: inventors also quite naturally have a habit of building on the work of others. De Forest refined Fleming's vacuum tube while investigating observations that had been made by Edison. The first transistor was made by Bardeen and Brattain in 1947 and refined by Shockley in 1950 -- and none of it would have been possible without prior work on the conductivity of germanium by AT&T Bell Labs' Russell Ohl. Although Bardeen, Brattain, and Shockley shared a Nobel Prize for their work, AT&T was never granted a patent, because of Lilienfeld's prior art (there is some evidence that project lead Shockley was familiar with Lilienfeld's work).
Tracing the invention of the integrated circuit and its evolution into the microprocessor leads you through a similarly tangled web: the idea of putting an entire circuit on a chip was proposed as early as 1952 by G. W. A. Dummer, and by 1959 Richard Feynman was urging his colleagues to go nano in his well-known talk There's Plenty of Room at the Bottom. Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor both invented integrated circuits independently in 1958. Kilby got the Nobel Prize, Noyce got the patent.
Things get even more muddied tracing the lineage of the Intel 4004. The official version usually places the credit on the shoulders of Marcian "Ted" Hoff and Stanley Mazor. Others insist that, as the designer, Federico Faggin deserves the most credit (a few will say that Masatoshi Shima deserves equal billing with Faggin). More recently, Wayne D. Pickette, who also worked with Faggin and Shima, has stepped forward to say that the 4004 also owes a great deal to the PDP-8. Of all the people named above, only Federico Faggin's initials are etched directly onto the 4004.
Things are much calmer over at Texas Instruments, where Gary Boone's 1973 patent for the single-chip microprocessor architecture has not, as far as we know, ever been disputed.
Links to Microprocessor Resources provides a comprehensive list of histories of microchip development. As well, micro history sites you will enjoy include John Bayko's Great Microprocessors of the Past and Present (V 13.4.0), Mark Smotherman's Who are the computer architects?, and The History of Computing Project's History of hardware (to name but a few). The COMPUTERS' HISTORY page has a nice timeline (alas, only to 1995) while the Computers & Microprocessors site employs shocking colors, but includes good links.
Some individual chips that are less often mentioned in those histories include the Fairchild F8, which was included in the Channel F home video system. You will also find the history of Intel's star-crossed 32-bit 432 intriguing. Texas Instruments' new TV tuner chip is described at CNN.
Read the developerWorks article, The year in microprocessors, which outlines the themes of the microprocessor industry in 2004 (developerWorks, December 2004).
For IBM's role in processor production, see "POWER to the people (A history of chipmaking at IBM)" (developerWorks, August 2004) and PowerPC Architecture: A high-performance architecture with a history, both from IBM. Also see A Look Back at the IBM PC (IT Management, 2004), 27 years of IBM RISC, and Andrew Allison's A Brief History of RISC.
Have experience you'd be willing to share with Power Architecture zone readers? Article submissions on all aspects of Power Architecture technology from authors inside and outside IBM are welcomed. Check out the Power Architecture author FAQ to learn more.
Have a question or comment on this story, or on Power Architecture technology in general? Post it in the Power Architecture technical forum or send in a letter to the editors.
Get a subscription to the Power Architecture Community Newsletter when you Join the Power Architecture community.
All things Power are chronicled in the developerWorks Power Architecture editors' blog, which is just one of many developerWorks blogs.
Find more articles and resources on Power Architecture technology and all things related in the developerWorks Power Architecture technology content area.
Download a Power Architecture Pack to demo a SoC in a simulated environment, or just to explore the fully licensed version of Power Architecture technology. This and other fine Power Architecture-related downloads are listed in the developerWorks Power Architecture technology content area's downloads section.