Plug and Pray

Why an early design decision around the IBM PC created the need for an innovation called plug and play—something we very much take for granted today.

By Ernie Smith

Today in Tedium: You know something amazing? You can buy a 30-year-old keyboard from eBay and, with the right adapters, it will work with a modern machine. I could hardly believe it, either, but the adapters really said it all—AT port to PS/2 port, PS/2 to USB, USB to Thunderbolt dock, Thunderbolt dock to my M1—and as a result, all week, I’ve been writing on the same model of keyboard I used as a kid 29 years ago. It’s pretty wild, and it probably wouldn’t have been possible a few decades ago. In one sense, when it comes to peripherals, it’s all electrical currents when you break things down. But in another, getting all these parts to play nicely took a while—and the thing that made it work was something called plug and play. Today’s Tedium ponders the simple beauty of being able to plug stuff into modern computers without having to fiddle with DIP switches. — Ernie @ Tedium

Today’s GIF is from a video by DaveJustDave, who took part in #DOScember last year.


An ad for a radio that uses the predecessor phrasing “plug in and play” to describe its functionality. (New York Daily News/Newspapers.com)

Before it was known as “plug and play,” the concept gained its wings as “plug in and play”

Like the term multimedia, which I covered just a few weeks ago, plug-and-play had a real moment in the sun in the early 1990s, as manufacturers fell over one another trying to make clear that their peripherals could be installed into your computer without a whole bunch of extra headaches.

But the roots of plug-and-play are a bit more interesting in that they highlight two separate stories—a linguistic evolution and a technical evolution. At some point the two came together to tell a single story.

Perhaps the phrase needed to shed a word, emphasizing its immediacy, on its way to computer ubiquity. See, it was originally called “plug in and play,” a reference to the fact that the thing being plugged into was usually the electrical system itself. Given that electricity didn’t even reach many rural Americans until the tail end of the Great Depression, it makes sense that the phrase entered the lexicon around this time.

The earliest references I could find to this extended version of the term came about in the 1930s, at a time when early radios and television sets, equipped with vacuum tubes, were first making their way into American homes. One of the earliest references I found dates to 1930 in the New York Daily News, when a radio shop, Michaels Bros., used the term; by the end of the 1930s, mainstream magazines such as LIFE and Popular Science were using it as well. In many ways, it didn’t just reflect the rise of electronics; it reflected the rise of marketing.


The first Plug N’ Play product. (Fort Worth Star-Telegram/Newspapers.com)

Eventually, the phrase started to evolve into something more like its modern form. Perhaps the first example I spotted that fits the bill came about in the early 1960s, when the Dynamic Instrument Corporation came up with a device intended to recharge the batteries in a transistor radio, allowing the radio to keep working long after normal batteries would have been thrown out. Plug ’N Play, as the device was called, was the first device patented under that name or any direct variant, according to the U.S. Patent and Trademark Office.

The term tended to fade in and out during the 1960s and 1970s—in one case, used for television sets with built-in degaussing capabilities—until the phrase started making a comeback of sorts in the 1980s, when technology companies started to use it to describe their product lines.

In the computer industry, one of the earliest companies to use it was the Japanese company Okidata, whose printer lines represent some of the earliest mainstream uses of the term in the technology press. The company sold a kit for its Microline 92 and Microline 93 printers that enabled automatic support for IBM machines without additional programming. At the time, as PC Tech Journal noted, getting a printer not made by IBM to work with the computer line was extremely difficult, but Okidata had made it work with this add-on.

For people who have an Okidata ML92 or ML93 and who want IBM compatibility without programming, Okidata offers the Plug ’N Play interface kit for $49. These two replacement PROMs are installed in the printer’s control circuit board, and the resulting Okidata-cum-IBM-Graphics printer has a combination of features from both printers. Correspondence quality printing, proportional spacing, and 160 CPS data processing mode are added to the IBM Graphic printer’s repertoire. Regrettably, some standard Okidata features—notably, downline loadable character generation and printing at 12 and 6 CPI—are made unusable.

But the plug-and-play this references is not the plug-and-play we actually got. It was similar, in that it promised a certain kind of compatibility, but on its own it was not the same thing. The real thing was being perfected in other computer lines at the time, under terms that largely obscured what they were actually doing.

1970

The year that Pierre Schwab, a developer for McGraw Edison Corp., first applied for a patent for physical dual in-line package (DIP) switches, which let users change settings on a circuit board by hand to match their system’s technical needs. Early PCs relied on DIP switches, or the more spartan jumper block, to manage the way systems distributed resources.


This landmark PC had just one problem. (Steve Petrucelli/Flickr)

Why early home platforms were actually better at detecting hardware than the early IBM PC

The IBM PC, in its original form, turns 40 years old this month—and in that form, it gave us a lot of really great things. One of the things it did not give us was a properly considered approach to installing and setting up peripherals, which created a lot of headaches for early users, likely fueling the growth of more complex IT departments and giving printers their reputation for being really difficult to install.

In many ways, this was a direct effect of the very thing that gave the PC its long-term power. By choosing to go with off-the-shelf parts or existing technologies rather than designing from the ground up, the PC gained a high level of compatibility due to its non-proprietary nature and the ease of cloning the machine. (In the case of its expansion ports, for example, the developers basically copied the work being done on the IBM System/23 Datamaster, a system developed from the ground up during the same period. These expansion ports are known today as the Industry Standard Architecture, or ISA.)

But as a result of that non-proprietary nature, the developers of the IBM PC left users to deal with one of the biggest headaches of managing hardware at the time—the interrupt request (IRQ).

This basic element of the PC, while largely taken for granted today, was effectively how a piece of hardware could tell the processor, HEY, pay attention to me! During the early days of the IBM PC, this functionality had to be managed by individual users, and honestly, it kind of sucked. Part of the challenge was that early Intel processors only had limited access to interrupts (before the 286, just eight IRQs total; 16 after that), and as a result, problems could emerge if, for example, a sound card and a printer were fighting for the same resource at the same time.

“One of the most common ways to signal the CPU is through the use of an IRQ,” PC Magazine noted in 1992. “However, the task of assigning IRQs when adding new expansion cards to your system can be a nightmare.”

(That’s where the DIP switches and jumper blocks came into play.)
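
To make the stakes concrete: every ISA card that wanted attention had to be tied, via a physical switch or jumper, to one of those few IRQ lines, and software then had to hook the matching interrupt vector. Here’s a minimal sketch of what that looked like in DOS-era C, assuming a Borland/Turbo C-style compiler (getvect, setvect, and outportb come from its dos.h). IRQ 7, used here, was the first parallel port’s usual line.

```c
/* Minimal sketch: hooking IRQ 7 (typically the first parallel port)
   in DOS-era C. Assumes a Borland/Turbo C-style compiler, where
   getvect/setvect/outportb live in <dos.h>. On the PC, hardware
   IRQs 0-7 map to interrupt vectors 0x08-0x0F, so IRQ 7 is 0x0F. */
#include <dos.h>

#define IRQ7_VECTOR 0x0F

static void interrupt (*old_handler)(void);

static void interrupt lpt1_handler(void)
{
    /* ...service the printer here... */
    outportb(0x20, 0x20);  /* send end-of-interrupt to the 8259 PIC */
}

void hook_printer_irq(void)
{
    old_handler = getvect(IRQ7_VECTOR);  /* remember whoever was there */
    setvect(IRQ7_VECTOR, lpt1_handler);  /* claim the vector */
}

void unhook_printer_irq(void)
{
    setvect(IRQ7_VECTOR, old_handler);   /* restore on exit */
}
```

If a sound card happened to be jumpered to IRQ 7 as well, a common factory default, both devices would fire the same vector, and one of them simply lost.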


The thing with the keyboard is actually just the computer; the massive box behind it is what holds all the expansion cards. (Wolfgang Stief/Flickr)

Other manufacturers that weren’t dealing with x86 hardware were aware that these were issues normal people would not want to deal with. Texas Instruments, for example, had created an expansion system for its TI-99/4 and TI-99/4A machines that was effectively plug-and-play without ever using that term to describe itself. It was incredibly awkward and became unwieldy over time, but that was largely because the hardware took up a massive amount of physical room due to the way TI chose to handle expansion. (The computer itself managed the adding of peripherals just fine.) As Benj Edwards of How-To Geek recently described the hardware:

Expansion on the TI-99/4A was a little weird. TI initially released several different “sidecar” modules for the 99/4 that plugged into a port on the right side of the computer. These modules included a disk drive controller, a 32K RAM expansion, an RS-232 interface, a speech synthesizer, and even a printer. If you plugged them all in at once, you got an ungainly peripheral train that barely fit on a desk.

TI eventually tried to solve the add-on pile-up by creating a dedicated case for the peripherals, but it ended up being far larger than the computer itself and extremely expensive.


A Panasonic FS-A1ST MSX Turbo R, one of the many devices built on the MSX standard. (Tilemahos Efthimiadis/Flickr)

And the MSX, the platform Microsoft forgot, also sported a big advantage over the IBM PC because of how it managed peripheral additions, abstracting away the driver installation process by shipping the necessary code in ROM and generally making the process painless for the end user.
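
The mechanism was charmingly simple: at boot, the MSX BIOS walks its slots looking for ROMs that announce themselves with the ASCII signature “AB,” then jumps to the initialization address stored in the ROM header, so a cartridge effectively carries and installs its own driver. The real logic is Z80 code in the BIOS; here’s a rough transliteration into C, where read_slot_byte and call_in_slot are hypothetical helpers standing in for the slot-switching machinery.

```c
/* Rough sketch of the MSX BIOS boot-time ROM scan, transliterated
   from Z80 into C. read_slot_byte() and call_in_slot() are
   hypothetical helpers standing in for the slot-switching hardware. */
#include <stdint.h>

uint8_t read_slot_byte(int slot, uint16_t addr);  /* assumed elsewhere */
void    call_in_slot(int slot, uint16_t addr);    /* assumed elsewhere */

void scan_for_extension_roms(void)
{
    static const uint16_t bases[] = { 0x4000, 0x8000 };

    for (int slot = 0; slot < 4; slot++) {
        for (int i = 0; i < 2; i++) {
            uint16_t base = bases[i];

            /* Extension ROMs start with the two-byte signature "AB". */
            if (read_slot_byte(slot, base) != 'A' ||
                read_slot_byte(slot, base + 1) != 'B')
                continue;

            /* The next two bytes hold the INIT entry point; calling it
               lets the cartridge install its own hooks, no user setup. */
            uint16_t init = read_slot_byte(slot, base + 2) |
                            (uint16_t)(read_slot_byte(slot, base + 3) << 8);
            if (init != 0)
                call_in_slot(slot, init);
        }
    }
}
```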

But because IBM decided that simply getting something to market was more important than making sure it was easy to manage, a lot of people who didn’t actually need to know about interrupt requests knew all about them during the early days of the PC.

1987

The year NuBus, an expansion-card interface developed at MIT, was first standardized by the Institute of Electrical and Electronics Engineers (IEEE). The standard became the primary connection method for Apple and NeXT computers during the late ’80s and early ’90s, with a focus on plug-and-play interfacing, making it one of the first widely used card formats with plug-and-play support … though it ultimately never made the jump to the PC, even though it was entirely possible. (One distinctive feature of NuBus was its reliance on a pin-based interface, rather than the edge connectors used by modern expansion cards.)


An example of an expansion card from the mid-’90s. Up front: jumpers that had to be set by hand. (Luke Jones/Flickr)

How plug and play eventually swooped in and saved the day for a growing industry of PC users

By the late 1980s, computer makers had decided that enough was enough with all these address conflicts, leading to two separate approaches to working through the frustration—which I detailed in this 2019 piece on the battle between IBM’s proprietary Micro Channel Architecture (MCA) and the clone-makers’ more-open Extended Industry Standard Architecture (EISA).

Ultimately, neither of these expansion slot formats actually won over the intended audience, and that meant users were still reliant on ISA, which was the root of many of the issues around device conflicts in the first place.

The work on EISA led to the creation of a half-solution for making ISA cards plug-and-play-able. Basically, cards could ship with software that configured them automatically, so jumpers would not need to be changed on the card itself—but this created new issues, like the need to update configuration files on the machine.

Eventually, this led to something of a legitimate standard, which Microsoft and Intel called Plug and Play ISA, built on the idea that cards could configure themselves with the addition of a small amount of extra hardware to manage the process. A 1993 Computerworld article put the added cost of supporting Plug and Play ISA at 25 cents per card, and the standard was fully implemented in time for Windows 95.
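
That extra hardware was essentially a tiny state machine on each card, and waking it up was delightfully odd. Per my reading of the Plug and Play ISA spec, the host writes a 32-byte “initiation key” to the standard’s ADDRESS port at 0x279, a sequence generated by a simple 8-bit linear-feedback shift register that every card verifies byte by byte before entering configuration mode. A hedged sketch of that handshake, with outb standing in for whatever port-output routine your platform provides:

```c
/* Sketch of sending the Plug and Play ISA "initiation key", per my
   reading of the spec: two zero writes reset the cards' comparators,
   then 32 bytes from an 8-bit LFSR (seed 0x6A, feedback = bit0 XOR
   bit1) go to the ADDRESS port. outb() is a stand-in for whatever
   port-output primitive your platform provides. */
#include <stdint.h>

#define PNP_ADDRESS_PORT 0x279

void outb(uint16_t port, uint8_t value);  /* assumed elsewhere */

void send_initiation_key(void)
{
    uint8_t lfsr = 0x6A;

    outb(PNP_ADDRESS_PORT, 0x00);  /* reset the key comparators */
    outb(PNP_ADDRESS_PORT, 0x00);

    for (int i = 0; i < 32; i++) {
        outb(PNP_ADDRESS_PORT, lfsr);
        uint8_t feedback = (uint8_t)(((lfsr & 1) ^ ((lfsr >> 1) & 1)) << 7);
        lfsr = (uint8_t)((lfsr >> 1) | feedback);  /* 0x6A, 0xB5, 0xDA... */
    }

    /* Cards that see this exact sequence wake up and can then be
       isolated one at a time and assigned conflict-free resources. */
}
```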

The plus side was that new peripherals supported this standard right out of the gate; the downside was that old peripherals did not. But Microsoft added a little voodoo magic to Windows 95 that tried to make sense of legacy hardware, too, resolving conflicts where it could. It wasn’t perfect, as anyone who used a computer in 1995 could tell you, but it was significantly better than what came before—and Microsoft specifically advertised that point. It was good enough that people who hate upgrading had a real reason to upgrade to Windows—especially in the IT department.

“The software may still require occasional intervention to help it get all the settings correct, but it cuts down on the amount of time required to set up hundreds or even thousands of computers,” IT consultant Linda Musthaler wrote in Network World in 1995.

Over time, hardware came to be developed with plug and play in mind. Many of the devices you use today—whether SATA hard drives, PCI Express cards, or USB interfaces—have the ability to configure themselves, so there’s no direct conflict with the machine you’re plugging them into. Which is why, when I plug in a 30-year-old keyboard via USB (given the right adapters), it works right away.
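
The negotiation never actually went away; it just became the bus’s job. Every USB device, for example, carries descriptors that identify it the instant it’s plugged in, and the host reads them automatically. You can peek at that self-identification yourself. Here’s a small sketch using the cross-platform libusb-1.0 library; the build line assumes a typical Linux setup.

```c
/* List the vendor/product IDs of attached USB devices via libusb-1.0.
   Each device describes itself through standard descriptors, the
   modern descendant of plug and play.
   Build on a typical Linux box: cc listusb.c -lusb-1.0 */
#include <stdio.h>
#include <libusb-1.0/libusb.h>

int main(void)
{
    libusb_context *ctx = NULL;
    libusb_device **devices;

    if (libusb_init(&ctx) != 0)
        return 1;

    ssize_t count = libusb_get_device_list(ctx, &devices);
    for (ssize_t i = 0; i < count; i++) {
        struct libusb_device_descriptor desc;
        if (libusb_get_device_descriptor(devices[i], &desc) == 0)
            printf("Found device %04x:%04x\n", desc.idVendor, desc.idProduct);
    }

    libusb_free_device_list(devices, 1);  /* also unreference devices */
    libusb_exit(ctx);
    return 0;
}
```

No jumpers, no reboot, no configuration file: the device simply introduces itself.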

It’s wild to consider, but we can now take for granted something that was once a maddening, never-ending source of frustration for early generations of PC users.

“One problem Linux has in a Microsoft-dominated world is default Plug and Play settings. Left alone, Linux does well with Plug and Play. However, any Plug and Play settings in a computer BIOS can keep Linux from detecting an otherwise Plug and Play device.”

— Michael Jang, the author of the 2006 book Linux Annoyances for Geeks, discussing the challenges that legacy ISA support could create for Linux machines. Even as plug-and-play support for ISA improved later in its life, as hardware manufacturers built with it in mind, it was never 100 percent reliable. But as the PCI standard (and, later, PCI Express) took over, most of these issues faded into the rear-view mirror.

In the Computerworld article discussing the creation of the Plug and Play ISA standard, there’s a great quote from Brian Belmont, a manager at Compaq, highlighting just how absurd the PC’s situation looked in the context of more common consumer electronics.

“It’s like saying you have to open the TV set to plug in the VCR, and then when you do it, the thing doesn’t work properly,” he said.

As I pointed out earlier in the piece, the terminology of plug and play technically predates the concept by many decades. And even in those early days, with electronics like television sets and VCRs, it just kind of worked. That’s what consumers expect from most types of electronics—and it’s why machines that were built for the home, like the MSX and TI-99/4A, were designed with these considerations in mind.

The IBM PC wasn’t, and in some ways, that says a lot about its roots. It wasn’t intended to be the basis for modern computing, at least not to the degree it actually became. It’s sort of like writing a tweet that goes viral: you had no idea it would take off, so you couldn’t plan for it being used in ways you never expected.

I think that if IBM had known it was building a machine for home users and not just offices, it might have put more work into coming up with a technical solution to the jumper problem. It wasn’t building for the eventual target audience of many PCs—regular people who, understandably, had a bias toward buying things that just worked, with no additional fuss.

As a result, it was a problem that took 15 years to fully solve, a design decision whose failings couldn’t have been foreseen at the time.

It may have been one of the final elements needed to ensure that computing was something mere mortals could do.

--

Find this one an interesting read? Share it with a pal! And thanks to The Daily Upside for sponsoring. Give their newsletter a look!


Your time was just wasted by Ernie Smith

Ernie Smith is the editor of Tedium, and an active internet snarker. Between his many internet side projects, he finds time to hang out with his wife Cat, who's funnier than he is.

Find me on: Website Twitter