A Kernel Of Failure
How IBM bet big on the microkernel being the next big thing in operating systems back in the ’90s—and spent billions with little to show for it.
Today in Tedium: In the early 1990s, we had no idea where the computer industry was going, what the next generation would look like, or even what the driving factor would be. All developers knew back then was that the operating systems available in server rooms or on desktop computers simply weren’t good enough, and that the next generation needed to be better—a lot better. This was easier said than done, but the problem seemed to rack the brains of one company more than any other: IBM. Throughout the decade, the company was associated with more overwrought thinking about operating systems than any other, with little to show for it in the end. The problem? It might have gotten caught up in kernel madness. Today’s Tedium explains IBM’s odd operating system fixation, and the belly flops it created. — Ernie @ Tedium
Want another great newsletter in your inbox? Join 60,000 others by subscribing to the weekly Hacker Newsletter. From interesting technology to startups to everything in-between, you’ll find great reads each week. Sign up today!
Today’s Tedium is sponsored by Hacker Newsletter. (See yourself here?)
1991
The year that the open-source operating system Linux was first announced by Linus Torvalds. (“Just a hobby, won’t be big and professional like gnu,” he said. Oh, how wrong he was.) It was an announcement that later proved fundamental to modern computing, but that change was not a given, and Torvalds’ formative attempt at building a kernel wouldn’t be the only one trying to get off the ground that year. IBM was working on some stuff in this department.
How IBM became the poster child of operating system failure during the ’90s
In extremely simplified form, the popular plot line of the personal computer revolution goes like this: IBM wanted to create desktop computers for businesses, and did so using off-the-shelf parts, including the operating system—which business partner Microsoft bought from some guy.
Microsoft then realized its deal with IBM wasn’t exclusive, hardware companies figured out how to reverse-engineer early PCs, and Apple did some stuff that Microsoft later “borrowed,” and BOOOOOOOM! Personal computer revolution, just add water. Of course, as anyone who has ever owned a Commodore 64, ZX Spectrum, or MSX knows, the truth is more complicated than that, and leaves out a whole lot of history—including some created by the very companies involved in the original PC revolution.
Nonetheless, the rise of the IBM PC clone and the software that drove it left a lot of companies on the sidelines by the early ’90s.
Ironically, one of those companies was IBM.
Certainly, IBM by no means struggled during this era—the company released the first ThinkPads and was such a monolith, moving in so many directions, that there was no way it was going to fall apart. But IBM tasted the fruit of the PC’s success for far too short a time before clone-makers turned the company into just another player in the business computing market, and IBM wanted to do better than that.
…
Now, if you’re somewhat tech-savvy, you might assume that, given this lead-up, I’m about to talk about OS/2, the operating system that was at the center of a messy divorce between Microsoft and IBM. But while OS/2 is certainly a player in this story, it’s not the only one. IBM had bigger plans than just that.
It wanted to create the kernel on which all major operating systems were based—to the point where it had not one, but two simultaneous projects going on at once to create brand new operating system technology.
Both projects were audacious, but one is probably better-remembered than another thanks to a certain anecdote that’s more closely associated with a different company: The tale of the blue, red, and pink cards.
Three sets of cards, two sets of fates
After a staff reorganization at Apple in the late 1980s, an offsite meeting was held with the goal of scoping out the future of Mac OS in a post-Steve era, as Jobs and Wozniak had both left the company by this time.
There’s more to the story, but basically the cards were the product of a brainstorming session: The blue cards represented immediate changes to Mac OS, the pink longer-term changes, and the red even more advanced changes (in this context, think of red as a deeper pink).
The cards ultimately represented marching orders for two teams: The blue cards led to new features in Mac OS System 7, while the pink and red cards were part of longer-term plans. Unfortunately, the smaller pink team struggled to implement its features in a reasonable amount of time and became a victim of “scope creep,” with the decision made at some point to develop an entirely new operating system built around a microkernel, a model in which a minimal layer of software handles only core tasks such as scheduling and message passing, with everything else running as services on top. The work was eventually separated from Mac OS entirely.
This is where IBM comes back into play. The pink team’s work was long-running but highly advanced by the early ’90s, and dovetailed nicely into IBM’s plans for a new line of processors based on its POWER architecture.
The collaboration led to Apple’s use of the related PowerPC processor line (which I of course know all about), but it also led to a new company, called Taligent—in which Apple’s pink team and members of IBM’s own staff would work on this future operating system that somehow proved too big for Mac OS. The headline in The New York Times in October of 1991 was shocking for observers: “IBM Now Apple’s Main Ally.”
A 1994 PC Magazine column by voice of his generation John C. Dvorak laid out the company’s ultimate plan: “Taligent’s role in the world is to create an environment in which all the applications we buy individually are built directly into the operating system. Because the apps are programmable, you can put together your own custom-made suites. Taligent could mean the end of all applications as we know them.”
Bold stuff. However, it wasn’t IBM’s only project on this front. Nor was it the most audacious one.
“It will be either the definitive operating system of tomorrow or a massive flop.”
— John C. Dvorak, correctly diagnosing IBM’s efforts on its much-hyped Workplace OS, a microkernel-based project it was working on throughout the early ’90s.
One microkernel to rule them all
While Taligent was just getting airborne, employees of IBM were already working on a microkernel of their own, based on existing work done on the Mach microkernel at Carnegie Mellon.
The name of IBM’s endeavor? Workplace OS. The mission? To become the operating system at the center of every other operating system, no less.
As IBM had interests in numerous operating systems at the time—beyond OS/2 and Taligent, it also had legitimate say in the direction of MS-DOS, Windows, and the POSIX operating system standard, along with its own in-house operating systems OS/400 and AIX—it was perhaps the closest thing to the center of the operating system world. And with interest in microkernels rising, in part due to their perceived reliability benefits, IBM was in a position to push its vision forward, driven in part by the belief that a standard base for its different operating systems would save money.
(Saving money was expensive, however: As University of California-Riverside researchers Brett D. Fleisch and Mark Allan A. Co noted in a 1997 post-mortem, IBM spent nearly $2 billion trying to get Workplace OS off the ground, approximately 0.6 percent of IBM’s total revenue over the five-year period.)
Workplace OS—whose kernel ambitions effectively torpedoed Taligent’s own microkernel, by the way, though not the Taligent project itself—was audacious, to say the least. The concept was conceived around something called the “Grand Unification Theory of Operating Systems,” or GUTS, which effectively aimed to build standard subsystems around common operating systems, so that different pieces of software could use the same basic services, even if they were running on different operating systems.
The company’s OS/2 endeavor would be out front, but, conceivably, would be able to exist in pretty much every major operating system going forward, big or small—whether the OS was intended for a server rack or a personal digital assistant. (Don’t know what that last thing is? Ask your parents.)
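The GUTS pitch is easier to see in miniature. Here’s a hypothetical sketch—every class and method name below is invented for illustration, and Workplace OS’s real personality and subsystem interfaces were far more elaborate—of multiple OS “personalities” exposing their own native APIs while delegating to a single shared subsystem:

```python
# Hypothetical sketch of the GUTS idea: one shared subsystem,
# many OS "personalities" presenting their own native APIs on top.

class FileSubsystem:
    """Shared service: the filesystem is written once and reused
    by every personality instead of being reimplemented per OS."""
    def __init__(self):
        self.files = {"config.sys": "BUFFERS=20"}

    def read(self, path):
        return self.files[path]


class OS2Personality:
    """Presents an OS/2-flavored API on top of the shared subsystem."""
    def __init__(self, fs):
        self.fs = fs

    def DosRead(self, path):  # invented, OS/2-style entry point
        return self.fs.read(path)


class AIXPersonality:
    """Presents a Unix-flavored API on top of the same subsystem."""
    def __init__(self, fs):
        self.fs = fs

    def read(self, path):  # invented, POSIX-style entry point
        return self.fs.read(path)


shared_fs = FileSubsystem()        # one implementation...
os2 = OS2Personality(shared_fs)    # ...serving two personalities
aix = AIXPersonality(shared_fs)

assert os2.DosRead("config.sys") == aix.read("config.sys")
```

The payoff IBM hoped for: write each basic service once, and every personality, from OS/2 to AIX, inherits it for free.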
It was a good idea, if an audacious one, and one whose mission could charitably be seen as a way to make operating systems work more efficiently. In a less charitable light, it looked like IBM’s attempt to reassert dominance over the PC world it had created—dominance that had proved shakier than expected thanks to all the clones that showed up after the fact.
There was just one problem: The microkernel wasn’t ready for what IBM wanted to do with it.
“True, linux is monolithic, and I agree that microkernels are nicer. With a less argumentative subject, I’d probably have agreed with most of what you said. From a theoretical (and aesthetical) standpoint linux looses. If the GNU kernel had been ready last spring, I’d not have bothered to even start my project: the fact is that it wasn’t and still isn’t. Linux wins heavily on points of being available now.”
— Linus Torvalds, in a 1992 post on the Usenet group comp.os.minix, responding to a post by MINIX author Andrew S. Tanenbaum that claimed Linux was already obsolete, in part because it used a more traditional monolithic kernel, in which most basic parts of the operating system are managed within the kernel itself rather than separated out as services, as they would be under a microkernel. The debate between Torvalds and Tanenbaum was famous in part because Torvalds was directly inspired by Tanenbaum’s work on MINIX, yet ultimately chose the performance benefits of a monolithic kernel. (Spelling errors kept in for full effect.)
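The structural difference the two were arguing about can be caricatured in a few lines of Python. This is purely illustrative—neither Linux nor Mach is literally organized this way—but it captures the shape of the trade-off:

```python
# Caricature of the monolithic vs. microkernel split. All names
# are invented for illustration.

class MonolithicKernel:
    """Monolithic: the filesystem lives inside the kernel, so a
    syscall is a direct function call (fast, but a filesystem bug
    can take down the whole kernel)."""
    def __init__(self):
        self.files = {"/etc/motd": "hello"}

    def sys_read(self, path):
        return self.files[path]  # direct in-kernel call


class FileServer:
    """Microkernel: the filesystem runs as a separate server
    process; if it crashes, the kernel proper keeps running."""
    def __init__(self):
        self.files = {"/etc/motd": "hello"}

    def handle(self, op, path):
        if op == "read":
            return self.files[path]


class Microkernel:
    """The kernel itself only routes messages (IPC), which is why
    every request pays extra context-switch and copying costs."""
    def __init__(self, servers):
        self.servers = servers

    def send(self, name, op, path):
        return self.servers[name].handle(op, path)


mono = MonolithicKernel()
micro = Microkernel({"fs": FileServer()})
assert mono.sys_read("/etc/motd") == micro.send("fs", "read", "/etc/motd")
```

Same result either way, but the microkernel path adds a layer of indirection to every call, and that indirection is the performance tax Torvalds wasn’t willing to pay.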
Why IBM’s attempts to reframe the operating system didn’t work out
The microkernel concept that IBM embraced was by no means out of the realm of possibility.
There was already significant evidence that building an operating system based on the Mach kernel was worthwhile—after all, it was the starting point for NeXTSTEP, the operating system that would later form the basis of the modern MacOS. NeXTSTEP was solid as a rock at a time when many operating systems weren’t.
But IBM’s ambitions, mixed with some technical weaknesses of the microkernel model, ultimately sank the ship. IBM’s unusually aggressive operating system moves in the early ’90s, particularly its partnership with Apple, didn’t seem to make much sense to either outsiders or longtime partners.
In a 1991 New York Times article that laid out the messy split between Microsoft and IBM over OS/2, Steve Ballmer, then Microsoft’s senior vice president of systems software and later its CEO, implied the double-dipping on the operating system front directly created the chasm between the two companies.
“I can’t tell you what IBM’s strategy is, but I do know it’s not working together with us,” Ballmer said. “It’s to our benefit because their customers can’t understand it either.”
Ballmer was proved right, of course, as IBM’s complex projects in the operating system space failed to set the world ablaze. (OS/2, of course, did have its adherents.)
The problem with the microkernel, in IBM’s case at least, was twofold: One, putting it together and having it meet all these competing needs was incredibly complex, something reflected by the fact that IBM spent billions of dollars on the project; and two, microkernels, by design, sacrifice speed for stability—something IBM soon realized its server customers definitely would not accept, which caused the company to backtrack on the original Workplace OS promise. Per the University of California-Riverside post-mortem I highlighted earlier:
Internal discussion at IBM focused on AIX. Finally, at Comdex in 1993, IBM Chairman Louis Gerstner announced that the microkernel would not replace AIX. IBM realized that many AIX users would not accept performance penalties associated with microkernel systems. IBM was also concerned with the microkernel presenting a competitive impediment against high performance HP or Sun Unix systems that ran directly on the hardware. Instead, Gerstner told AIX customers that they would be able to migrate to Workplace OS later, if they were interested.
The AIX part of the plan was ultimately abandoned, but the Workplace OS effort nonetheless continued, with the goal of putting Workplace OS into OS/2-based PowerPC workstations. But when the company’s 64-bit PowerPC 620 chip ran into bad press, the Workplace OS endeavor was scuttled entirely, with only a little-seen PowerPC variant of OS/2 Warp to show for it.
So what about Taligent, Apple’s collaboration with IBM? Well, that joint venture faced a lot of challenges, many of which were caused by the culture clash of having straight-laced IBM alums leading more relaxed expats from Apple’s pink team. The collaboration was already audacious on paper; the reality was even harder to build.
“The technological challenge—to develop an object-oriented operating system to compete with Microsoft and Next—is tough,” a 1993 Fortune article noted. “The social engineering challenge—to create a new corporate culture out of the collision of two diametrically opposed operating philosophies—may be even tougher.”
And ultimately, Taligent wasn’t up to the task. The spinoff failed to release an operating system despite much pressure to do so, only eventually releasing a runtime system, CommonPoint, that saw some critical success but wasn’t a hit. At one point Hewlett-Packard joined in on the fun, only for the partnership to fall apart entirely by late 1995, with IBM left holding the bag.
“Taligent was one of several pie-in-the-sky fiascos that left Apple in such desperate straits that they had to buy NeXT,” Daring Fireball scribe John Gruber wrote in 2014.
Apple, which spent the 1990s facing one operating system crisis after another, did get a microkernel out of the whole deal in the end by acquiring NeXT, one based on Carnegie Mellon’s Mach microkernel technology, no less. It just wasn’t the one it had spent so much time and money working on.
“OS/2 for PowerPC was undoubtedly an interesting experiment, albeit a failed one. It is impossible to tell whether this failure was caused more by shortcomings of OS/2 for PowerPC or the failure—perhaps just falling far short of expectations—of the PowerPC platform as a whole.”
— Michal Necasek, a writer on OS/2 Museum, in a review of IBM’s OS/2 variant for the PowerPC, which only came out in a very limited release and didn’t have much in the way of software. In a deep irony, the one system released to the wild that was built around Workplace OS had no support to speak of, despite the “one kernel to rule them all” mindset that came to define the project.
It doesn’t feel like such a surprise in retrospect, but if I were to step into a time machine, go back 30 years, and tell everyone that a random programmer in Finland was going to beat IBM to the dream of creating a “universal” operating system, I’m sure I’d be laughed out of the room, and not just because I was wearing futuristic clothes like Marty McFly did in Back to the Future.
But when you break it down, Linux has basically fulfilled every requirement of the Workplace OS dream—it’s an operating system that’s often used as the basis for numerous other operating systems, with its solutions getting used everywhere from embedded systems to smartphones to servers to television sets. While Windows and MacOS don’t use the Linux kernel, of course (though Microsoft, increasingly, loves Linux), it’s a key element of Google’s Chrome OS, an operating system that knows a thing or two about turning some other company’s dream into reality. Really, the only thing it doesn’t do is improve IBM’s bottom line every time it’s used, Red Hat acquisition notwithstanding.
The microkernel vs. monolithic-kernel debate will likely never go away completely, in part because it’s a useful academic debate for computer scientists or commenters on Hacker News to have. And there are successful examples of both, of course: MacOS borrows elements from both concepts, while the embedded-focused QNX excels at the stability part of the microkernel equation.
(And, stealth edit, it’s worth noting that Google’s in-the-works Fuchsia OS, believed to be a replacement for both Android and Chrome OS, is based on a microkernel. So a pure microkernel might see its day in the sun just yet.)
In a world where it seemed like massive chunks of the population couldn’t be bothered to use anything other than Microsoft Windows, IBM thought it had an opportunity to build the next generation of operating systems.
Instead, it failed. Twice.
--
Find this one a fascinating read? Share it with a pal! And thanks again to Hacker Newsletter for the sponsorship throughout February. Thanks again for the support!