The Liaison
How firmware became the layer between our hardware and software experiences. It was only sorta like Halt and Catch Fire.
Today in Tedium: In 2006, the Nintendo Wii’s software was running a little behind its hardware, and that meant problems for the console’s initial release. How’d Nintendo deal with it? Well, on some early units, it included a firmware disc that installed the software the console needed to run. This disc, now considered one of the rarest pieces of Nintendo ephemera, wasn’t the first bit of installable firmware for a game console, but its long-rumored and since-confirmed existence reflected how firmware had reshaped the way we thought about our computing devices. Firmware, simply put, was a bit of a game changer for video game consoles, and for computing in general. Suddenly, we weren’t stuck with the device we bought. If it was buggy or unfinished, the manufacturer had a chance to fix it after the fact. Today’s Tedium discusses how firmware became the glue of computing. — Ernie @ Tedium
If you find weird or unusual topics like this super-fascinating, the best way to tell us is to give us a nod on Ko-Fi. It helps ensure that we can keep this machine moving, support outside writers, and bring on the tools to support our writing. (Also it’s heartening when someone chips in.)
We accept advertising, too! Check out this page to learn more.
“Firmware does a lot of the work that computers had to do before. But firmware doesn’t cost like computers cost. And firmware is tiny.”
— A Burroughs ad, dating to 1969, that is one of the first newspaper ads I can find that directly references firmware. The fact that Burroughs, a large company during this era, promoted firmware to the masses speaks to how quickly it gained prominence in computing—the term had only entered the lexicon two years prior.
The man who pinpointed firmware’s potential didn’t live to see it take over the computer
If you think about computing as a relationship between hardware and software, firmware neatly fits in the middle. It is a go-between, a liaison between the information you’re trying to process and the electronics that do the managing: the marshmallow between the chocolate and the graham cracker.
And it existed long before that fateful day when a few early Wii customers had to run a disc to get their machines working. We can thank Ascher Opler for introducing the world to this big, yet small idea. Opler, a computer scientist who worked with numerous luminaries of the era, described the concept in a January 1967 Datamation article titled “Fourth Generation Software.”
It was part of a series discussing the “fourth generation” of computing, as the magazine defined it, and the piece turned out to be the most important entry. Given that it coined a well-known and influential term, it’s fair to say the series succeeded at its goal.
Reading the moment when Opler, an executive at Computer Usage Education, dropped the term, you get the sense he knew something the rest of us did not:
In fourth-generation computers, many microprograms will be available from the manufacturer. Software and user specialists will also prepare and use their own. This should throw the whole field wide open.
To better understand the nature of microprogramming a no-order-set/no-data-structure computer, I believe it worthwhile to introduce a new word into our vocabulary: firmware. I use this term to designate microprograms resident in the computer’s control memory, which specializes the logical design for a special purpose, e.g., the emulation of another computer. I project a tremendous expansion of firmware—obviously at the expense of hardware, but also at the expense of software.
The article later discusses microprogramming, the installation of small programs that computers frequently need for low-level tasks, as well as the potential cost-reduction benefits of firmware. For the forthcoming generation of computing that Opler was writing about, he argued that microprograms were well-positioned to leverage slow-write/fast-read (SW/FR) memory, as the data would be permanently accessible, but not easily modifiable.
In fourth-generation equipment using SW/FR micromemory, microprograms can be prepared by the firmware specialist—manufacturer, programming company or the user—to carry out the specific interrupt and input/output control function specified by the user. This alone will go far to simplify control programs.
Further simplification can be obtained by making the data structure and order set work for, not against, the implementors of control programs. The basic implementation involves techniques of queue management, control block handling, table reference, internal sorting, pointer handling, etc.
Since microprogramming permits extensive data structuring for control program implementors, it will permit the addition of instructions to enqueue, dequeue control blocks, build search and sort tables of specified structure, etc. These new commands should prove a boon to expediting the running of supervisory programs.
In some ways, what Opler describes is an operating system, and in some ways, many firmware implementations can be seen as miniature or even full-fat operating systems, managing specific needs that would otherwise fall on the back of individual software programs. (It’s worth noting in this context that many early graphical operating systems, such as MacOS, were implemented in ROM, effectively making them firmware.)
Opler’s insights on computing sadly outlived him—he died of a heart attack just two years after writing what turned out to be a strongly predictive article. Between the time he wrote it and his 1969 passing, he had taken a role at IBM’s research lab, working closely with one of the company’s most prominent luminaries of the era, Arthur Anderson.
But his idea lived on. The Burroughs ad I mentioned above directly referenced the Fourth Generation concept that Opler had framed his discussion of firmware around. And soon enough, firmware would be more than just a mainframe maker’s selling point; it would be in every device under the sun, big and small.
What changed? Simply put, the technology caught up with Opler’s vision in short order.
EEPROM
A form of non-volatile memory that can be electrically erased but keeps its data between restarts. When EEPROM was first developed in the 1970s, upgrading firmware commonly meant physically handling programmable ROM chips; EPROM, for example, could only be erased by exposing the window on top of the chip to ultraviolet light. EEPROM, essentially a precursor to modern flash memory, allowed firmware updates through software, without having to open the machine. (If you hear the phrase NVRAM or PRAM in reference to your computer, you’re hearing about a modern manifestation of EEPROM.)
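For a sense of what working with EEPROM looks like in practice, here’s a minimal sketch using avr-libc’s EEPROM helpers on an AVR microcontroller (the boot-counter idea and the storage address are illustrative assumptions, not anything from a particular product):

```c
#include <avr/eeprom.h>  /* avr-libc's EEPROM read/write helpers */
#include <stdint.h>

/* Hypothetical example: a boot counter that survives power cycles.
   EEPROM cells are slow to write and wear out after roughly 100,000
   erase cycles, so eeprom_update_byte() skips the write entirely
   when the stored value already matches. */
#define BOOT_COUNT_ADDR ((uint8_t *)0x00)  /* illustrative EEPROM address */

uint8_t bump_boot_count(void) {
    uint8_t count = eeprom_read_byte(BOOT_COUNT_ADDR);
    eeprom_update_byte(BOOT_COUNT_ADDR, count + 1);
    return count + 1;
}
```

The property Opler anticipated is right there in the API: the data persists across restarts, but writing it is deliberately slower and rarer than touching ordinary RAM.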
How the BIOS became an essential element of computing
The best-known term for firmware, and one that often still gets used today, is the Basic Input/Output System, also known as the BIOS. The name is an excellent descriptor of how we think about the basic functions of computers to this day: it captures both what the system can do and how software plugs into those functions.
While often associated with the IBM PC, the BIOS actually predates it, appearing first in CP/M, one of the earliest operating systems written for microcomputers. Gary Kildall of Digital Research, a straight-up computing legend, came up with the name and the general concept: a layer of firmware that let the same software run on different machines, so long as each machine carried a BIOS tailored to its hardware.
(By the way, Kildall’s life story deserves a mention here—he is arguably the man who should have been Bill Gates, and the fact that he wasn’t is one of computing’s great failures.)
The IBM PC translated the concept of the BIOS to personal computers. The machine’s technical reference guide (yes, the same guide whose BIOS listings clone companies had to re-create through clean-room implementations) explained it as such:
The ROM resident Basic I/O System (BIOS) provides the device level control of the major I/O devices in the System Unit. The BIOS routines allow the assembly language programmer to perform block (diskette and cassette) or character (video, communications, keyboard and printer) level I/O operations without any concern for device address and operating characteristics. Additionally, system services such as time of day and memory size determination are provided. The goal is to provide an operational interface to the system and relieve the programmer from concern over hardware device characteristics.
That final line is perhaps the most important, and it underlines the essential nature of firmware to this day. If you’re building software for a computer, a video game console, or even, for some reason, an Android-based smart toaster, you need some sort of baseline for what to expect, one that doesn’t require you to know what every machine on the planet looks like. When there’s no guarantee that every machine will be exactly the same, you need a layer like the BIOS to establish the rules at play for each component, rules the operating system and its drivers then build on.
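To make that concrete, here’s a rough sketch of what leaning on the BIOS looked like from a DOS-era C program. It assumes a vintage compiler such as Turbo C, which shipped an int86() helper for raising software interrupts; the INT 10h “teletype output” service it calls is a real, documented BIOS routine, though the little wrapper function is my own:

```c
#include <dos.h>  /* union REGS and int86(), from DOS-era compilers */

/* Print a character without touching the video hardware directly.
   INT 10h, function 0Eh is the BIOS "teletype output" service; the
   routine in ROM figures out which display adapter is installed
   and drives it on the program's behalf. */
void bios_putchar(char c) {
    union REGS regs;
    regs.h.ah = 0x0E;              /* BIOS function: teletype output */
    regs.h.al = (unsigned char)c;  /* character to write */
    regs.h.bh = 0;                 /* video page 0 */
    int86(0x10, &regs, &regs);     /* raise the video services interrupt */
}
```

The program never has to know whether it’s talking to an MDA, a CGA, or whatever a clone vendor dreamed up; that indirection is the whole point of the layer.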
On the IBM PC, the BIOS was intended to be proprietary, unlike most of the machine’s components. It didn’t stay that way, with companies like Phoenix Technologies, American Megatrends, and Award Software developing the firmware that clone machines used. Compaq was one of the first to get around Big Blue, convincing Microsoft to ship it copies of MS-DOS and its BASIC program, then building a BIOS through an elaborate process of educated guesswork, which was only kind of like the first season of Halt and Catch Fire.
A 386 clone, booting with the help of an American Megatrends BIOS.
“We gave the black box inputs and saw what the results were,” said Gary Stimac, one of Compaq’s first employees, in 1984 comments to PC Magazine. “Typically, Microsoft gave us the specs, and we tested it—we didn’t look at the code.”
By the time Compaq was done, Stimac and similar figures knew more about the IBM PC than perhaps even IBM did.
We don’t technically use the BIOS anymore, having transitioned to the standardized Unified Extensible Firmware Interface (UEFI) starting in the late 2000s, but we remain tethered to its original intentions. While UEFI is more advanced, the purpose is similar: It creates a way for our computer’s innards to talk to the software we use. The BIOS was pretty long in the tooth by that point, as Maximum PC noted in 2006:
As old as it is essential, BIOS architecture is long overdue for a fundamental overhaul. We’ve gone through five major generations of Microsoft operating systems (DOS, Windows, Windows 3.1, Windows 98, Windows XP) and we’re poised to move to a sixth—Windows Vista—but BIOS architecture hasn’t changed much at all. It still relies on the 16-bit interface of x86-style CPUs. And it has very limited memory space—just 128 kilobytes—in which to execute the option ROM firmware that’s stored on expansion cards; this severely limits the number of cards that a system can host.
BIOS enhancements have been few and far between, with each improvement being laid atop the antiquated foundation of the original. The technology moved away from unalterable ROM (read-only memory) chips to a storage medium that allows for changeable settings and that can be flashed with entirely new versions. Because the underlying technology hasn’t changed, tweaking your BIOS remains fraught with danger for users who aren’t educated and careful. Updating your BIOS incorrectly—or just encountering a power outage during the flash process—still holds the potential to kill your motherboard.
(And as we highlighted just recently, yes, Intel was involved in herding the cats that gave us UEFI.)
BIOS has nonetheless stuck around as shorthand, the way many people still refer to solid-state drives as hard drives even though a new technology has effectively replaced the old one. It’s that small little piece of computing, a liaison between your hardware and software, that makes the whole thing work. Without it, we’d be pushing a boulder up a mountain with every single app.
Rockbox made it possible to run Doom on an iPod. Because of course it did.
Five notable examples of firmware you might know about
- Open Firmware. This open standard, developed by Sun Microsystems and standardized by the Institute of Electrical and Electronics Engineers (IEEE), found use in many non-PC computing platforms. Non-Intel machines, particularly Macs and Sun machines, used Open Firmware for years—and I gotta say, I still have memories of trying to remember all those weird commands.
- Rockbox. This firmware modification, available on many portable media players developed throughout the 2000s, expanded what devices like iPods could do—for example, adding support for additional audio formats.
- OpenWrt. This well-known open-source firmware, widely used in wireless devices, became popular after Linksys accidentally sold a popular router with Linux-based firmware—something we wrote about in a well-read 2021 piece.
- Smart TV apps. If you’ve ever used a Roku, you’ve used firmware. It’s perhaps the most visible example of a modern embedded system, and it also shows how firmware often straddles the line between operating system and embedded tool.
- Android custom ROMs. If you’ve ever wanted to modify your operating system, it is certainly more than possible with Android if your device is unlocked. Its installation process is similar to more traditional firmware, but when you’re done, you get a brand-new OS interface.
DFU
Device Firmware Upgrade, the name of the process that USB devices, particularly iPhones, use to get their firmware upgraded. Interestingly, Apple has brought this approach to its recent Apple Silicon-based Macs, requiring end users to run a tool called Apple Configurator to reinstall firmware when a serious error emerges. In a way, it highlights how Apple thinks about firmware on modern computers: as something that should work like it does on a smartphone.
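DFU is, in fact, a standardized USB device class (class 0xFE, subclass 0x01, per the USB DFU spec), which means DFU-capable hardware can be spotted generically. Here’s a minimal sketch, assuming libusb-1.0 is available; note that Apple’s own DFU-mode implementations differ in their details:

```c
/* Scan the USB bus for devices exposing a DFU interface.
   Build with: cc dfu_scan.c $(pkg-config --cflags --libs libusb-1.0) */
#include <stdio.h>
#include <libusb.h>

int main(void) {
    libusb_context *ctx = NULL;
    libusb_device **devs;
    if (libusb_init(&ctx) < 0) return 1;

    ssize_t n = libusb_get_device_list(ctx, &devs);
    for (ssize_t i = 0; i < n; i++) {
        struct libusb_device_descriptor desc;
        struct libusb_config_descriptor *cfg;
        if (libusb_get_device_descriptor(devs[i], &desc) < 0) continue;
        if (libusb_get_active_config_descriptor(devs[i], &cfg) < 0) continue;

        /* Check the first altsetting of each interface (a simplification;
           DFU can appear on other altsettings too). */
        for (int j = 0; j < cfg->bNumInterfaces; j++) {
            if (cfg->interface[j].num_altsetting < 1) continue;
            const struct libusb_interface_descriptor *ifd =
                &cfg->interface[j].altsetting[0];
            if (ifd->bInterfaceClass == 0xFE && ifd->bInterfaceSubClass == 0x01)
                printf("DFU-capable device: %04x:%04x\n",
                       desc.idVendor, desc.idProduct);
        }
        libusb_free_config_descriptor(cfg);
    }
    libusb_free_device_list(devs, 1);
    libusb_exit(ctx);
    return 0;
}
```

Tools like dfu-util work along these same lines, enumerating DFU interfaces before pushing a firmware image down to the device.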
These days, firmware is everywhere, quietly shaping the experiences we have with the devices we buy. Granted, the people making it aren’t burning the midnight oil in their garages, trying to recreate it in an effort to work around IBM.
The cloak-and-dagger stuff is no longer necessary, nor quite this perfectly lit.
But I do think our relationship with firmware has definitely changed in the nearly 60 years since Ascher Opler dropped the mic, revealing one of the most important concepts in computing. He stayed just long enough to share his best idea with us, then departed way too soon.
(Ascher Opler’s legacy sparks a great question for me: If you invented an important concept, but would not live to see it take over the world, what big idea would you share?)
In recent years, I’ve been keeping an eye on efforts to make firmware more user-modifiable and upgradeable, such as Libreboot, which relies on the coreboot firmware platform to replace one of the few areas of computing that has proven surprisingly impervious to user freedom—the boot process. I like the way Libreboot talks about all this on its site:
Libreboot is a community-oriented project, with a focus on helping users escape proprietary boot firmware; we ourselves want to live in a world where all software is free, and so, Libreboot is an effort to help get closer to that world. Unlike the big vendors, we don’t try to stifle you in any way, nor do we see you as a security threat; we regard the ability to use, study, modify and redistribute software freely to be a human right that everyone must have. Extended to computers, these are products that you purchased, and so you should have the freedom to change them in any way you like. When you see Intel talk about their Boot Guard (which prevents coreboot by only letting firmware signed by them be executed) or other vendors imposing similar restrictions, and you hear them talk about “security”, they are only talking about their security, not yours. In the Libreboot project, it is reversed; we see Intel Boot Guard and similar such technologies as an attack on your freedom over your own property (your computer), and so, we make it our mission to help you wrest back such control.
The thing is, if a computer manufacturer decides they don’t want to give us certain features in their firmware, they just won’t, limiting the experience for the end user and creating a mechanism of control that erodes trust. It means you’re stuck working around their decisions, rather than the other way around.
(It also, arguably, makes the computing experience a bit more big-brotherish, as anyone who has ever been locked out of a work laptop after getting laid off might tell you.)
And this firmware sticks around even if every other part of the experience is opened up. You can run a completely open stack on your machine—FOSS operating systems, FOSS applications, the whole bit—and still be stuck with a closed-source bootloader. Some manufacturers, such as System76, have worked around that by shipping open firmware, but they’re sadly few and far between.
I’ve had a little experience working around this stuff, thanks to my efforts to daily-drive a Hackintosh back in the day—and let me just say, it’s often much more complicated than it needs to be, sadly. If you have the technical knowledge, your computer should let you mess with its firmware options.
We already have an important lesson from computing history: When locked-down firmware gets worked around, the experience gets better for the entire industry.
My personal hope is that the future of firmware looks less like the first season of Halt and Catch Fire and more like Libreboot.
--
Find this one an interesting read? Share it with a pal! And come back for more next week.