Slot Wars
The battle to replace the standard expansion slot in the IBM PC reflected an effort by two sides of the PC world to gain control. Spoiler: The clone-makers won.
Today in Tedium: In the history of the IBM-compatible personal computer, we know who the winners and losers are. The biggest winner was most assuredly Microsoft, followed by the many clone makers that cropped up on Iowa farms, in dorm rooms, and inside the pages of Computer Shopper. The biggest loser was possibly IBM, whose architecture became the de facto standard but whose exclusivity slipped through its fingers as other companies easily replicated its off-the-shelf hardware. In the late 1980s, IBM wanted to reassert control. Its strategy for doing so involved a lot less off-the-shelf—and a lot more proprietary. But all those clone-makers weren’t ready to fall over quite so quickly. Today’s Tedium talks internal slots. — Ernie @ Tedium
Today’s GIF comes from a YouTuber showing off an EISA board.
The engineer who gave the IBM PC its defining architecture
During the early period of the IBM PC, the expansion slots that commonly came with its machines didn’t really have a name. At least, not one that consumers would remember.
But this technology, which would eventually become known as the Industry Standard Architecture, nonetheless set the stage for a diverse, upgradeable computer ecosystem that lasts to this day.
This work can, in large part, be credited to Mark E. Dean, one of the most prominent black engineers of the early PC computing era, and one whose work directly influenced the basic shape of the IBM PC. Of the nine earliest patents that IBM received for the machine, Dean holds at least partial credit for three.
(He also, side note, was later responsible for leading the team that developed the first 1-gigahertz processor.)
Dean was part of the small team developing PCs in the mainframe-heavy world of IBM, and it was clear that PC users would need to update and maintain their own equipment, a departure from IBM’s more controlled mainframe strategy. Fortunately for IBM, Dean had already laid the groundwork for a more user-maintainable upgrade approach while in grad school.
In a 2015 article for The Bent, the magazine of the engineering honor society Tau Beta Pi, the story of Dean’s direct influence on the structure of the IBM PC was laid out:
In 1982, he was 25 and had just earned his master’s degree from Florida Atlantic University. His thesis project was a high-performance graphics terminal.
At the time, the state of the art was the vector terminal, a complicated device the size of a small refrigerator. Dean found a simpler way. By taking advantage of faster 16-bit processors and low-cost memory, he was able to store information about each pixel and paint it onto the screen. This made it possible to take information from a mainframe and display it graphically.
It also addressed IBM’s PC problem. The PC did all its processing in an 8-bit processor and had no way to off-load some of that computing burden to other devices on the system. This made it hard to expand and modify. By tweaking his master’s project, Dean developed a bus that would enable the PC’s processor to send some data to a graphics card or a modem, which would carry out any additional processing remotely. This opened the door to radical PC customization.
Dean’s work, for which he received a patent with Dennis Moeller in 1984, was not known as the Industry Standard Architecture upon its creation. In fact, it had a decidedly less attention-grabby name: The I/O Channel.
Certainly, it was snappier than PCMCIA, but it was a name only an engineer could love.
“It is mandatory reading for anyone wishing to work intimately with the PC’s extensive hardware and software features. This manual could even serve as an excellent textbook at the college level.”
— Norman McEntire, a reviewer for PC Magazine, explaining the appeal of the IBM Personal Computer Technical Reference Manual. The book, released in 1982, was detailed in part because IBM hoped to prevent clones of its system from being made—if you read the book, the thought process went, you couldn’t build a clone without violating IBM’s copyright. This protection was worked around, however, through the now-famous “clean room” process, in which engineers who had never seen IBM’s code re-created compatible PC BIOS software from a functional specification.
The confusing way that the Industry Standard Architecture got its name
So, how did this infrastructure get its best-known name? Well, the answer is intertwined with the rise of the clone market.
Without diving too deep into the rise of the clones, manufacturers took advantage of the fact that IBM mostly used off-the-shelf parts for its machines, then created devices that were closely compatible with those specifications. This was made possible through the release of the IBM Personal Computer Technical Reference Manual. Its existence was arguably the reason the clean-room approach was even possible.
“It had all the logic and software used to build a machine, so a technician could debug or fix it, like a television set,” Mark E. Dean told The Bent. “Making that information public is what created the clone industry. That ecosystem of PCs, clones, peripherals, and software is why the PC won over Apple. It was an accident.”
This approach was followed throughout the ’80s, especially after the 1984 release of the IBM PC/AT, which is when the clone market really picked up. That machine included 16-bit expansion slots, which were backwards-compatible with the 8-bit slots on the original IBM PC.
Companies like Compaq grew quite large off the back of the IBM PC market, but by the late ’80s, IBM was ready to reassert its dominance over the market.
Its solution to this problem was the IBM Personal System/2, or PS/2, an upgraded version of the PC that included more modern (read: proprietary) components and proved to be IBM’s last big hurrah as the biggest player in town. This system architecture, whose design language evoked an IBM take on the Macintosh, included a number of new pieces of technology that would become common parts of the PC experience over the next decade, including dedicated keyboard and mouse ports, as well as the well-known Video Graphics Array standard, which allowed for 256 colors at 320x200 resolution and 16 colors at 640x480.
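For a sense of how approachable VGA turned out to be, consider its most famous mode, 13h: a flat 64,000-byte framebuffer starting at segment 0xA000, one byte per pixel. Here’s a minimal sketch of plotting a pixel, assuming a 16-bit DOS compiler such as Borland Turbo C; the BIOS call and memory layout are standard VGA fare, not anything specific to this story:

```c
/* Minimal sketch: switch to VGA mode 13h (320x200, 256 colors) and plot
   one pixel. Assumes a 16-bit DOS compiler (e.g., Borland Turbo C). */
#include <dos.h>

int main(void) {
    union REGS r;
    unsigned char far *vga = (unsigned char far *)MK_FP(0xA000, 0);

    r.x.ax = 0x0013;           /* BIOS int 10h, AH=0: set video mode 13h */
    int86(0x10, &r, &r);

    /* The framebuffer is linear: offset = y * 320 + x, one byte per pixel. */
    vga[100 * 320 + 160] = 15; /* bright white pixel near the center */

    return 0;
}
```

That dead-simple framebuffer is a big part of why the 320x200 mode became the go-to target for DOS games.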
It also included a new bus that was incompatible with the old one: the Micro Channel Architecture (MCA). MCA had a number of technical improvements—it was a faster, primarily 32-bit bus, which made it suitable for more advanced contexts such as networking or higher-end graphics, and it allowed for a degree of what would later be called “plug-and-play”—and it addressed a variety of technical weaknesses that had emerged over the years with PC/AT expansion cards.
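To make those conflict headaches concrete, here’s a hypothetical sketch of the situation ISA left users in, with every name invented for illustration rather than drawn from any real API: two cards whose jumpers happen to claim the same interrupt, with no way for software to reconfigure them.

```c
/* Hypothetical sketch of the problem MCA's software configuration solved.
   On ISA, a card's IRQ and I/O base were fixed by physical jumpers, and
   nothing stopped two cards from claiming the same resources. All names
   here are illustrative. */
#include <stdio.h>

struct isa_card {
    const char *name;
    int irq;     /* set by a jumper on the physical card */
    int io_base; /* likewise */
};

int main(void) {
    struct isa_card cards[] = {
        { "Sound card", 5, 0x220 },
        { "Modem",      5, 0x2F8 }, /* same IRQ: a silent conflict on ISA */
    };
    int n = sizeof(cards) / sizeof(cards[0]);

    /* Software could only notice the clash after the fact; fixing it meant
       opening the case and moving a jumper. MCA cards instead reported and
       accepted their configuration in software, via Programmable Option
       Select. */
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (cards[i].irq == cards[j].irq)
                printf("IRQ %d conflict: %s vs. %s\n",
                       cards[i].irq, cards[i].name, cards[j].name);
    return 0;
}
```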
The VGA standard largely found support among clone manufacturers, but those same companies quickly figured out that MCA wasn’t worth their time. They weren’t interested in paying per-system royalties to IBM, and there was evidence that consumers might side with the clone-makers: for all of MCA’s strengths, the old expansion slots were tried and true.
One result of this? The industry, led by Compaq, gave the existing bus a new name, in part to avoid trademark concerns related to the IBM PC/AT name. It came to be known as the Industry Standard Architecture. It wasn’t as technically advanced as MCA, but it was a genuine industry standard by this point, and it required no licensing fees.
It was a technology invented by IBM, but at that moment, the clone-makers took ownership of it. Soon, they would try to push it forward.
1989
The year that the Video Electronics Standards Association first launched. The technical standards group, founded by NEC, came about to push forward an open Super VGA graphics standard as a successor to IBM’s proprietary VGA. In practice, however, Super VGA was a very loose standard, and implementations varied widely; for all its faults, VGA was fairly standardized and worked roughly the same on every machine in which it was installed. VESA nonetheless managed to stick around, and many of its standards (most famously DisplayPort and the VESA mount widely used for flat-screen monitors) remain common to this day.
The moment the clone-makers asserted their authority over the PC industry
The PS/2 was clearly IBM’s attempt to put the genie back in the bottle as far as the personal computer market went, while gaining some form of control over licensing in the process. But all these other companies that had gained prominence during the 1980s off the back of what accidentally proved to be an open architecture weren’t willing to go out so quietly.
At first, this meant generally ignoring what Big Blue was doing. In a 1987 New York Times article, Compaq made it clear that it would not be following IBM down the Micro Channel path. After years of following the leader, Compaq drew a line in the sand: the existing expansion slot, for all its limitations, was good enough for PC users, and Micro Channel wasn’t a compelling reason to abandon it.
“It’s very clear that what’s not happening yet is a major shift to the PS/2,” Compaq President Rod Canion told the newspaper. “You can’t dig a hole and bury eight or nine million PCs.”
Soon enough, it became clear that fellow clone-makers weren’t going to just lie down and let IBM decide a system architecture that cut them out of the equation. So, in September of 1988, the leaders of Compaq and a number of other major PC manufacturers of the era—AST Research, Epson, Hewlett-Packard, NEC, Olivetti, Tandy, Wyse Technology, and Zenith Data Systems—offered an alternative. The name of this “No Homers” club of clone-makers? The Gang of Nine.
Canion, of course, wrote a book about this era of computing, and fittingly, it’s called Open: How Compaq Ended IBM’s PC Domination and Helped Invent Modern Computing. In the book, he described the launch of the Extended Industry Standard Architecture (EISA), noting that the design avoided harming the existing add-on ecosystem while helping to move things forward. Here’s what Canion stated during the announcement:
EISA is a true industry standard—just like its predecessor. There is broad support from PC manufacturers who will incorporate the EISA bus into their new industry-standard personal computers. There is broad support from system software vendors, who are already working to ensure that new operating systems will take advantage of the higher performance of the 32-bit bus while maintaining compatibility with today’s applications. And numerous third-party peripheral and board manufacturers already are developing new products that’ll work with the new bus. Availability of logic chips required to support the new bus helps ensure support from the broadest range of suppliers. And because EISA is compatible with their installed base of PCs, and is painless to adopt, we are confident that EISA will achieve broad support from the user community as well.
The move to create EISA and the industry support it received—beyond the clone-makers, Microsoft and Intel appeared at the event—represented perhaps the brashest move to emerge from the growing spate of clone-makers yet.
In one telling comment from the period, a 1989 InfoWorld article wondered aloud if Compaq had leapfrogged IBM: “Has Compaq replaced IBM as the company that defines the industry standard?”
It seems wild, looking back, that there was legitimate discussion of Compaq-compatible computers in 1989, especially since the company doesn’t even exist anymore and instead is frequent Tedium fodder.
1987
The year that NuBus, a competing standard developed at MIT, was officially standardized by the IEEE. This bus approach, widely used in non-IBM ecosystems such as the Macintosh and NeXT, was initially developed for the NuMachine, an early networking-focused workstation whose technology was later acquired by Texas Instruments. While the bus architecture eventually fell into obscurity, it proved influential on the Peripheral Component Interconnect bus that Intel later developed.
Why EISA, ultimately, didn’t win, and what did instead
In the end, the EISA bus proved more successful as a way for clone-makers to assert technical independence from IBM than it did as a consumer-focused product. There were a few reasons for that:
EISA had technical limitations. For all of MCA’s licensing issues, it represented a significant technical step forward over ISA, in part because the bus architecture was significantly faster and it solved issues with conflict resolution, interrupts, and device addresses that were common with ISA. While EISA was an improvement over the existing market standards, its backwards compatibility saddled it with a slower bus speed that made it a bad fit for certain use cases, such as graphics cards.
Implementation was expensive. While not as costly as MCA would have been for clone-makers to implement, the costs of moving to the newer standard limited its uptake at the consumer level, and EISA found itself more popular higher up the food chain—so much so that, at one point, even IBM released systems that supported it.
The market wasn’t ready to upgrade. The ISA system bus remained popular for roughly a decade after IBM tried to ditch it in favor of MCA, despite its weaknesses. This was in part because many of the cards that relied on ISA, such as modems and sound cards, worked fine in practice without an extended bus, even if they weren’t plug-and-play; non-consumer markets that could take advantage of EISA’s advantages, such as in the world of servers, gravitated towards it. In a way, EISA’s success on the market was less as a bus on its own and more as a defensive move, as it allowed the PC clone industry to block MCA from gaining any significant ground in the consumer market.
The staying power of ISA proved problematic in some areas and led to periods when alternate standards emerged. The best known of these was the VESA Local Bus standard for high-end graphics cards, which offered the oomph that EISA cards lacked but came saddled with significant technical problems. (VESA Local Bus was so intertwined with the 486 processor that it didn’t work correctly past a certain clock speed, and the technology never managed the leap to the Pentium.)
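The speed gap here is easy to put in rough numbers: peak bandwidth is approximately bus width times clock speed. The figures below use the commonly cited clocks and widths for each bus; sustained real-world throughput was considerably lower across the board.

```c
/* Back-of-the-envelope peak bandwidth: width (bytes) x clock (MHz) = MB/s.
   Commonly cited clocks and widths; sustained throughput was much lower. */
#include <stdio.h>

int main(void) {
    struct { const char *bus; int bits; double mhz; } buses[] = {
        { "ISA (16-bit)",   16,  8.33 },
        { "EISA",           32,  8.33 },
        { "VESA Local Bus", 32, 33.0  },
        { "PCI",            32, 33.0  },
    };
    for (int i = 0; i < 4; i++)
        printf("%-16s ~%6.1f MB/s peak\n",
               buses[i].bus, (buses[i].bits / 8) * buses[i].mhz);
    return 0;
}
```

Seen that way, it’s clear why graphics card makers routed around EISA entirely.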
EISA didn’t work to the degree that its inventors probably hoped, but it was nonetheless valuable because it gave the market extra time to get its ducks in a row. By the time the market was ready to move to something else, a more broadly embraced standard, the Intel-developed Peripheral Component Interconnect bus, had emerged.
Intel’s decision to build a bus of its own initially proved controversial: VESA already had a standards body behind its competing approach, and Intel ultimately backed away from supporting it. A 1992 San Francisco Examiner article laid out the concerns of VESA members such as Tseng Labs product manager Ron McCabe.
“It’s a political nightmare. We’re extremely surprised they’re doing this,” McCabe told tech journalist Gina Smith. “We’ll still make money and Intel will still make money, but instead of one standard, there will now be two. And it’s the consumer who’s going to be hurt in the end.”
But by the mid-’90s, Intel’s bet (helped along by the VESA standard’s technical failings) had been vindicated, and PCI quickly superseded both VESA Local Bus and EISA; Intel later built a PCI-derived standard, the Accelerated Graphics Port (AGP), specifically for video cards. Heck, PCI even replaced NuBus on the Macintosh, a move that helped set the stage for Apple’s move to Intel processors a decade later.
EISA lived just long enough to ensure that PCI—with all of the benefits of a modern bus and none of the IBM-centric baggage—would become the standard.
We still use its successor, PCIe, today.
The conversation about EISA is worth having at this juncture, in no small part because these standards debates are still happening decades later, just with other parts of the computer—particularly the ports on the side.
That said, despite the industry’s full embrace of PCIe more than a decade ago, there was a new connector standard attempt introduced just a few years ago—one that somehow manages to combine the bad parts of MCA and the good parts of EISA.
Apple’s MPX slot for the 2019 Mac Pro, effectively an add-on to the existing PCIe standard that drives more power to cards, is an example of a proprietary slot that appears, externally, to exclude other manufacturers, like MCA did. However, it does so while maintaining backwards compatibility with the prevailing standard, like EISA did. You can use PCIe cards without a problem, even if you won’t get the advantages of MPX.
And since it came out, few devices have been made to take advantage of this new technology Apple invented—essentially proprietary AMD graphics cards, a couple of storage adapters, and that’s about it. A MacRumors user suggested the standard was “a proprietary dead end,” and now that it’s had some time in the market, it’s hard to disagree. Which, in some ways, is too bad. In a world where metal risers to hold up your GPUs are actually a thing and plugging cables into your high-end NVIDIA card seems to come with an unwanted melting side effect, Apple appears to have been onto something with the idea of a slot that got rid of the annoying wires. Too bad Apple put MPX in a computer that costs $6,000.
Ultimately, the tale of MPX probably won’t be as interesting to look back on as the tale of MCA and EISA, an important proxy battle that set the stage for an open PC industry that was less an “accident” and more a way for manufacturers to control their own destinies.
Perhaps we didn’t end up using all that many EISA cards in the end. But as consumers, we directly benefited from their existence.
It ensured the IBM PC would eventually become just the PC.