Power Outage

Looking back at Apple’s transition from PowerPC to Intel CPUs, and considering why Intel now finds itself in the same position PowerPC did 15 years ago.

By Ernie Smith

Today in Tedium: The highly anticipated moment that I’m going to contextualize today was, in a way, totally inevitable. For years, there’s been a rumbling that Apple would take its knowledge of the ARM processor architecture and bring it to its desktop and laptop computers. Next week, at a virtual Worldwide Developers Conference, the iPhone giant is expected to do just that. Of course, many will focus on the failed partner, the jilted lover of the business relationship that led to Apple’s move to vertically integrate: Intel. But I’m interested in the demise of the platform Intel vanquished on its way to taking over Apple—and the parallels that have emerged between PowerPC and Intel over time. Today’s Tedium dives into Apple’s long list of jilted processor partners, focusing closely on the shift from PowerPC to Intel. Keep Apple happy, or else. — Ernie @ Tedium

The Prepared is a very good newsletter that’s ostensibly about manufacturing.

Since 2013, The Prepared has been sending weekly links & analysis on engineering, logistics, and humanity-scale problems in the physical world. If you like your manufacturing videos served with a side of business strategy and your tech news with a healthy dose of realism, sign up for free here!

Today’s Tedium is sponsored by The Prepared.

It took Apple a couple of decades to make good on this general concept. (Internet Archive)

Apple’s first in-house CPU project surfaced nearly 35 years ago

In many ways, the advantages of vertical integration, and the perceived weaknesses of vendor-supplied CPUs, have long attracted Apple, a company that has always been a vertical integrator at heart.

But many people may not be aware of just how early that interest in producing its own CPUs emerged—or that it started internally. Last year, a document landed on the Internet Archive that laid out just how ambitious that interest was. Uploaded by an anonymous user with apparent ties to Apple, the “Scorpius Architectural Specification,” published in 1989, explained the general concepts of multi-core CPU architectures more than a decade before those technologies came into wide use by PC users.

“Work started in the mid ’80s, and continued until the end of the decade,” the leaker of this confidential document noted. “Now, obviously this project never saw the light of day. But some very smart technical people contributed to it, and from what I heard at the time the design was solid.”

While the document’s cover gives it the name Scorpius, Apple diehards long knew the project under another name: Aquarius.

Here’s the story: In the years after Steve Jobs was pushed out of Apple, an R&D project began in earnest to produce a multi-core CPU architecture. At the time the project began, this was extremely theoretical stuff; CPUs with more than a single processor core baked in didn’t actually appear in PCs until the early 2000s.

An illustration from the document explaining the multi-core capabilities of the Aquarius/Scorpius processor. (Internet Archive)

As Low End Mac explained in 2006, Project Aquarius was an attempt by the company—at this time led by John Sculley and deeply influenced by Macintosh development head Jean-Louis Gassée—to regain its technical prowess as the Macintosh lost its luster compared to emerging processors built around RISC (reduced instruction set computer) designs. RISC-based processors, of which the ARM chipset is one example, aim to speed things up by offering a smaller set of simpler instructions that are easier to execute quickly.
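To make the distinction concrete, here’s a minimal sketch of the idea in C. This is my own illustration, not something from Apple’s document or Low End Mac, and the instruction mnemonics in the comments are simplified for readability:

```c
#include <stdio.h>

/* The same operation -- adding 1 to a value in memory -- as the two design
   styles might encode it. (Illustrative, simplified mnemonics; real x86 and
   PowerPC assembly syntax differs in the details.)

   CISC (e.g., x86): one complex instruction does the whole read-modify-write:

       inc dword [counter]   ; read memory, add 1, write back

   RISC (e.g., PowerPC, ARM): only simple load/store and register operations
   are available, so the same work takes three instructions:

       lwz  r3, 0(r4)   ; load the word from memory
       addi r3, r3, 1   ; add 1 in a register
       stw  r3, 0(r4)   ; store the result back

   Fewer, simpler instruction formats are easier to decode, pipeline, and
   clock faster -- which is the RISC bet in a nutshell. */

int main(void) {
    int counter = 0;
    counter += 1;  /* the high-level version of the sequences above */
    printf("counter = %d\n", counter);
    return 0;
}
```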

Apple’s proposed implementation was highly technical and had the initial support of two of its highest-level executives, but it was basically doomed from the start, one of many R&D “money pits” the company poured funds into during the late 1980s. The problem, as Low End Mac author Tom Hormby put it:

Apple was not a microchip company, and it didn’t have the resources to become one. It would have to hire a staff familiar with microprocessor design, buy the equipment required to implement the designs, then manufacture the final products (or hire a firm like Fujitsu or Hitachi to do so). Companies like Intel and Motorola spent billions of dollars a year designing and manufacturing microprocessors. Apple was well off, but it didn’t have billions to spend.

To even get started on such a complex project cost millions—in the form of a $15 million Cray supercomputer that Gassée authorized, along with a staff of dozens of employees. Vintage Mac software developer (and Gopher enthusiast) Cameron Kaiser, who spotted the document last winter, noted that the primary engineer who started the project, Sam Holland, had produced something perhaps too ambitious even for Apple.

“Holland’s complex specification worried senior management further as it required solving various technical problems that even large, highly experienced chip design companies at the time would have found difficult,” he explained.

In development until 1989, the project produced little in the way of actual silicon (which would have been out of Apple’s financial reach, anyway), but it did produce an in-depth technical document explaining the potential of the Scorpius architecture, a custom chipset capable of many things we take for granted today. Beyond the multiple cores and the parallel execution they allowed, Aquarius was also very early to another key concept that is quite common in processors today: integrated graphics, something Intel didn’t work into its chipsets until the low-end Intel i810, released in 1999.

The project was unsuccessful, of course, but in 2018, Gassée noted on his website, Monday Note, that later developments showed the spirit of the project was in the right place:

Although the quad processor development work didn’t produce direct results, the Aquarius project stands as an example of Apple’s abiding desire to control the future of its hardware, a yearning that would again manifest itself, successfully this time, when Jobs bought Palo Alto Semiconductor to develop the Ax series of microprocessors that power iPhones and iPads, microprocessors now widely considered the industry’s best in their category.

Gassée is certainly correct that Aquarius served as a historical precursor to Apple’s current processor ambitions, but it likely also played an indirect role in the company’s first major processor shift—from Motorola’s 68000 series of processors used in the Macintosh of the time to the PowerPC, which eventually took off in a big way in the 1990s.

1990

The year IBM released the hardware overview for its RISC System/6000, whose processor was the first to use the POWER instruction set architecture. That instruction set became the basis of the PowerPC technology that Apple, IBM, and Motorola developed together in the “AIM Alliance,” which the three companies formed in 1991 to develop next-generation computing technologies. IBM and others still make processors based on this instruction set even today, despite Apple dropping it 14 years ago. (This wasn’t the only bet Apple made on processors at the time; Apple also invested in ARM, which would come in handy decades later.)

A 300 MHz Motorola PowerPC 750 processor, better known as the PowerPC G3. Apple, notably, tended to simplify the processor names for the PowerPC chips it used, leading to the G3, G4, and G5 nomenclature. (Wikimedia Commons)

In many ways, the PowerPC chipset was the consumer-level introduction to our 64-bit multi-core processor world

Taking a step back to think about PowerPC at a high level, it’s worth considering that on paper, it was supposed to give Apple the kind of control over its processor destiny that it clearly craved.

On its own, Apple was not powerful enough to build the kinds of CPUs it hoped to build its computers around, so it teamed up with two companies with proven track records for producing chips and had them do much of the heavy lifting.

The PowerPC architecture was the most successful part of a partnership that aimed to produce the future of computing in numerous forms, including with hardware and software.

Upon its 1994 release in the Mac, the PowerPC impressed the heck out of Mac users, thanks to how significant an upgrade it was over the 68000 line. As longtime public-television tech journalist Stewart Cheifet put it at the start of a Computer Chronicles episode:

Is the PowerPC worth it? You’ll get a resounding “yes” from the folks here at Star Graphics, a pre-press shop in Foster City, California. They used to use a pretty powerful computer here, a Quadra 950, but using the 950 it used to take a minute or so to preview a complex graphic like this one. Now it takes about 10 seconds. Using the 950, it would take hours to perform a function called “trapping.” Now it takes about 20 minutes. Why? Because they’ve switched to the PowerPC.

But even then, it was not exactly an unvarnished success. IBM and Motorola joined Apple in this endeavor with the goal of creating a next-generation standard to drive the technology industry forward … but after all that, the only big company that was primarily using PowerPC chips in its PCs was Apple. You could certainly get a PowerPC-based computer from IBM in the mid-1990s (here’s an example), but its personal computer business, the one it later sold to Lenovo, was largely x86-based.

A 1995 InfoWorld article diagnosed the problem facing IBM with the PowerPC architecture: “It needs volume to build the necessary infrastructure to compete in price and third-party support with Intel and Microsoft, and without that infrastructure, third parties are unlikely to support the PowerPC,” authors Ed Scannell and Brooke Crothers wrote. “IBM, however, has been very slow in trying to create that infrastructure.”

The result, over time, was that the PowerPC never seriously competed with Intel as a mainstream PC platform in its own right. (It was much more successful in video games, however; during the seventh generation of the console wars, all three primary game platforms—the Nintendo Wii, the Xbox 360, and the PlayStation 3—used the POWER instruction set architecture in their respective processors, with the Wii using a direct successor to the processor in the original iMac G3. Nintendo also used PowerPC for the GameCube and Wii U.)

But for many years, Apple still greatly benefited from access to this architecture, which was so advanced that, when the G4 processor was first introduced, it was technically classified as a weapon by the U.S. government thanks to export limits on processing power at the time.

A four-chip IBM POWER4 module, of the kind seen in servers. (via IXBT Labs)

And in 2001, a PowerPC-based chip actually pulled off the multi-core trick that Apple had dreamed of for itself years prior with Project Aquarius. That chip, the IBM POWER4 microprocessor, became the first commercially available multi-core microprocessor, and it was also one of the first processors to top the symbolic 1-gigahertz mark.

Two years after the release of that server-targeted chip, a single-core version came to the Mac as the G5 processor—the company’s first 64-bit processor, at a time when Intel’s x86 processors were only available as 32-bit chips. But despite its sheer power, the G5’s architectural and manufacturing challenges ensured it would eventually be cast aside by the company whose specific needs willed it into existence.

“The PowerPC G5 changes all the rules. This 64-bit race car is the heart of our new Power Mac G5, now the world’s fastest desktop computer. IBM offers the most advanced processor design and manufacturing expertise on earth, and this is just the beginning of a long and productive relationship.”

— Steve Jobs, in a June 2003 press release announcing the release of the PowerPC G5 processor, which was used in the Power Mac G5, the company’s first 64-bit computer. The “long and productive relationship” effectively got its pink slip two years later, on the very same Worldwide Developers Conference stage where Jobs introduced the G5 chip to the world.

It was never a good idea to make Steve Jobs explain the limitations of a processor line at his biggest keynote of the year.

The processor (and relationship) limitations that tore apart Apple’s long partnership with IBM and Motorola

In the span of two years, Apple CEO Steve Jobs had gone from excitedly announcing the massive benefits of the G5 processor to a captive audience at WWDC to revealing Apple’s desire to scrap the entire PowerPC platform.

There were a lot of reasons for this, but one of the most embarrassing might have been the 3-gigahertz problem. See, when Apple announced the Power Mac G5 in 2003, Jobs claimed the company would be shipping a 3-gigahertz machine within a year, which turned out to be a bit more ambitious than the G5 was actually capable of.

After discussing a 2.5-gigahertz chip upgrade at the 2004 edition of WWDC, Jobs provided this fairly straightforward mea culpa:

I want to talk about 2-and-a-half gigahertz, because I stood up here a year ago and said we’d have 3 gigahertz within a year. What happened? What happened was: The G5, as you know, is a very complex chip, and in the semiconductor industry to make things run faster they traditionally shrank the geometries, and so the PowerPC was being made in 130-nanometer geometries. And in the last year the semiconductor industry has gone from 130 nanometer to 90 nanometer expecting everything would just get faster, no problem. It hit the wall. The whole industry hit the wall at 90 nanometer, and it’s been a lot harder than people thought. And so, the speed increases have been very small compared with what we’ve been used to for the last five years.

Jobs didn’t sound particularly excited to be making this revelation on stage, possibly in part because it hinted at another weakness of the G5 transition: the PowerBook problem. As Low End Mac explains, while IBM and Motorola had in the past made mobile-specific versions of various PowerPC processor generations available, there was at times a significant delay between a PowerPC generation hitting the desktop and its portable equivalent emerging. The PowerBook G4, for example, didn’t come around until 2001, a year and a half after the Power Mac G4 first went on sale. But the architectural issues that capped the G5 below 3 gigahertz seemed to suggest that this processor, originally developed for high-end servers and workstations, wouldn’t easily scale down to the power-sipping needs of a laptop.

There was a reason that the PowerBook G4 didn’t become the PowerBook G5. (mich1008/Flickr)

These days, an Intel-esque solution to this conundrum might look like this: Develop a laptop chip with multiple cores to get more processing power out of the last generation. But multi-core CPUs were still new and their potential untested, and that left a generation of Mac laptops stuck in limbo. Ultimately, the PowerBook line topped out in 2005 with a single-core G4 processor that had four times the clock speed of the original G4, introduced six years earlier.

And then there’s the relationship-management aspect of this shift. As a 2009 CNET piece explains, a variety of factors, including access to Windows and a decaying partnership with IBM, drove the move. Price was another factor, and an area where Apple needed IBM to budge. Unfortunately, IBM couldn’t.

“Apple was paying a premium for IBM silicon, he said, creating a Catch-22,” wrote Brooke Crothers (ironically the same Brooke Crothers cited in a completely different article above). “IBM had to charge more because it didn’t have the economies of scale of Intel, but Apple didn’t want to pay more, even though it supposedly derived more from an inherently superior RISC design as manifested in the PowerPC architecture.”

For these reasons and others, Jobs found himself on the WWDC stage, two years after introducing the G5 and one year after admitting its technical limitations, announcing a move to Intel.

“Looking ahead Intel has the strongest processor roadmap by far,” Jobs said of the move. “It’s been 10 years since our transition to the PowerPC, and we think Intel’s technology will help us create the best personal computers for the next 10 years.”

Power Mac G5

The Late 2005 model of the Power Mac G5, one of Apple’s last PowerPC models before it switched to Intel. (Wikimedia Commons)

Despite the announcement, the Mac’s three G5 models—the Power Mac G5, the iMac, and the Xserve—kept shipping. In late 2005, Apple even released a four-core model—its first machine with multi-core processors (two dual-core chips, in fact). But Mac diehards admittedly felt a little deflated by the whole situation. As longtime Mac journalist John Siracusa put it in a 2005 Ars Technica article:

As far as I can recall, this is the first new top-of-the-line Mac that has ever been introduced with a slower clock speed than its predecessor. Yet the Mac community is making little fuss. Why? Because we all know that the Quad might as well be called the Power Mac Lame Duck. It’s the last glimmer of a fading PowerPC rebellion. We all understand why it’s not 3GHz. There’s just a skeleton crew over at IBM now. We’re coasting on the fumes of an Apple/IBM relationship that ran out of gas months ago.

Some of this is perception—it’s now widely understood that multi-core processors tend to have lower clock speeds than single-core equivalents, something less understood at the time—but much of it was disappointment that an ambitious promise to Apple’s customers wasn’t kept, for reasons as understandable as they were frustrating.
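The perception math is easy enough to sketch out. Here’s a back-of-the-envelope illustration in C, using hypothetical clock speeds rather than the Quad’s actual specs, of why a slower-clocked multi-core chip can still win on parallel work while looking like a downgrade on a spec sheet:

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical numbers, for illustration only. */
    double single_core_ghz = 2.7;  /* a faster-clocked single-core part */
    double per_core_ghz    = 2.5;  /* a slower-clocked dual-core part   */
    int    cores           = 2;

    /* Work that parallelizes well sees roughly cores x clock of aggregate
       throughput; single-threaded work only sees the per-core clock. */
    printf("single-threaded view: %.1f GHz vs. %.1f GHz (a downgrade)\n",
           single_core_ghz, per_core_ghz);
    printf("parallel view: %.1f GHz vs. %.1f GHz-equivalents (an upgrade)\n",
           single_core_ghz, per_core_ghz * cores);
    return 0;
}
```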

The interior of the Summit supercomputer, which uses IBM POWER9 processors. (Jason Richards/Oak Ridge National Laboratory/Flickr)

IBM has kept improving its POWER line of processors without Apple as its primary client, with the next version, POWER10, expected next year. And the legacy PowerPC line still has some uses in niche areas—for example, you can buy a brand-new Amiga computer with a PowerPC chip derived from the dual-core G5 architecture even today.

But the idea of the PowerPC as a mainstream desktop computing platform effectively died with the end of the Apple/IBM partnership.

Modern Intel processors, which became available in multi-core formats right as Apple began using them, have been a major boon for the uptake of macOS over the years.

In the early days of the Apple/Intel partnership, Intel’s chips acted as something of a “pressure valve” for the processor limitations that the Power Mac G5 had created for Apple’s lineup. They helped break a plateau in Apple’s laptops, which hadn’t been able to take advantage of the 64-bit architecture that the PowerPC G5 had promised consumers.

In many ways, the transition from PowerPC reflected a slowly degrading relationship between three technology giants that were destined to move in different directions. IBM was more comfortable with the server and embedded use cases for its POWER architecture, which it eventually open-sourced. Motorola got out of the chip business entirely, spinning it off as Freescale. And Apple got tired of waiting for processors that met its timetables and specifications.

Apple is likely feeling that same kind of tired about Intel these days, a company that continues to produce perfectly good chips but has struggled to maintain its former level of innovation, with a couple of small strategic mistakes turning into unexpectedly big ones. (As Ben Thompson wrote in Stratechery on Tuesday, Intel was in the running to put its chips in the original iPhone, but wouldn’t budge on price. Bad move.) Its longtime nemesis AMD is winning the battle for more cores at a cheaper price. And Apple apparently felt so bored with Intel’s processor lineup that it left entire lines of products—the Mac mini and Mac Pro, to name two—to languish for years at a time.

But like IBM with the PowerPC, Intel is struggling to keep up with the demands of a tough client. The processors are good, but they don’t arrive at the speed and cadence Cupertino likes, and when those chips are delayed, Apple has to delay its product releases, which means the Mac’s release cycle can’t run like clockwork the way the iPhone’s does. Apple doesn’t like companies that can’t keep it happy—which is why Nvidia GPUs haven’t appeared in its computers for many years.

There are other options in the x86 world, of course—AMD’s Ryzen and Threadripper lines look mighty tempting these days—but Apple already has some impressive mobile chipsets that might scale up far better than the G5 scaled down.

So now, apparently, Apple looks to its in-house ARM chipsets to give it that long-wanted vertical integration, 35 years in the making. Good thing it has the money to throw around this time.

--

Find this one an interesting read? Share it with a pal! And thanks to The Prepared for sponsoring today’s issue—be sure to subscribe!


Your time was just wasted by Ernie Smith

Ernie Smith is the editor of Tedium, and an active internet snarker. Between his many internet side projects, he finds time to hang out with his wife Cat, who's funnier than he is.

Find me on: Website Twitter