Today in Tedium: In our modern era, we tend to choose devices with as many functions as possible, and bristle at the thought of an object with a single use case—hence why umbrellas can be so frustrating to carry around. But sometimes, a single use case is exactly the right level of functionality. This is something I’ve been thinking about recently after I got my hands on a fairly large radio that literally has one function: You turn it on and a specific station plays, and there’s no surface-level way to do anything else with it. But despite that, these devices opened up the world for some. Today’s Tedium talks about the unusual world of subcarrier radios. — Ernie @ Tedium
What the heck is a subcarrier and why does it hide radio stations?
In 1985, a South Florida Sun-Sentinel article discussed a potentially lucrative offering for the owners of FM radio stations: Ways to make extra money from parts of the signal they weren’t already using.
This was not an unusual thing at the time the article came out; it had actually been around for decades. But what the article highlighted was that this technology was used in numerous ways that the average radio listener was likely not even aware of—for background music, for stock reports, even to transmit computerized data.
And while it wasn’t a ton of extra money (a single lease brought in $1,400 a month, about $3,388 adjusted for inflation), it was a great way to raise additional revenue to support a radio station. During the early history of FM, it could be the difference between being in the red and being in the black for many stations.
The thing that allows many radio stations to do this is, essentially, a technical gap inside the FM broadcast signal. Subcarriers are hangers-on: areas of frequency that aren't needed for the primary signal, but can find secondary uses in more specialized contexts.
In practice, this worked out in two ways. First, the extra bandwidth could strengthen the primary technology, as seen most notably in analog color television, which carries its color information on a subcarrier, and in FM's own stereo capability. Second, completely unrelated niche services could ride on secondary signals that the primary receiver might not be able to access at all.
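To make that "gap" concrete: the composite signal that frequency-modulates an FM carrier stacks several services at different baseband offsets. Mono (L+R) audio sits below 15 kHz, a 19 kHz pilot tone flags stereo, the stereo difference (L−R) signal rides around 38 kHz, and SCA subcarriers like Muzak traditionally sat at 67 kHz (and later 92 kHz). Here's a minimal NumPy sketch of that multiplex; the tone frequencies and amplitudes are illustrative, not broadcast-exact:

```python
import numpy as np

fs = 400_000                 # baseband sample rate, Hz
t = np.arange(fs) / fs       # one second of samples

mono   = np.sin(2 * np.pi * 1_000 * t)            # L+R audio (0-15 kHz region)
pilot  = 0.1 * np.sin(2 * np.pi * 19_000 * t)     # 19 kHz stereo pilot
stereo = 0.3 * np.sin(2 * np.pi * 500 * t) \
             * np.sin(2 * np.pi * 38_000 * t)     # L-R, double-sideband around 38 kHz
sca    = 0.1 * np.sin(2 * np.pi * 67_000 * t)     # SCA service parked at 67 kHz

composite = mono + pilot + stereo + sca           # this is what modulates the carrier

# A plain mono receiver low-passes at ~15 kHz and never "hears" the SCA
# service; a subcarrier receiver band-passes around 67 kHz instead.
spectrum = np.abs(np.fft.rfft(composite))
freqs = np.fft.rfftfreq(composite.size, d=1 / fs)
band = (freqs > 60_000) & (freqs < 80_000)
peak = freqs[band][np.argmax(spectrum[band])]
print(f"SCA energy peaks at {peak:.0f} Hz")
```

The upshot is that the SCA program material is hiding in plain sight: every FM receiver gets the whole composite, but only a set built (or leased) to demodulate that upper slot can recover it.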
The most famous of those services was Muzak, a service that in so many ways reflected all the strengths and weaknesses of specialized radio services. While it predated FM technology, with music initially being distributed over electrical lines, it eventually came to use subcarriers during the 1960s and 1970s. As Tom Vernon writes in Radio World, getting a solid Muzak signal was not as easy as tuning in a standard FM broadcast:
The audio signal for a subcarrier service typically arrived at the station’s FM transmitter site via an equalized telephone line. From there, it usually went through some form of audio processing, and on to the station’s subcarrier generator and then to the exciter. Stations broadcasting subcarriers had to have an FCC type-approved SCA monitor and take regular meter readings of subcarrier frequency, injection level and modulation.
At the receive site, a rooftop antenna was always installed, as FM subcarriers are even more sensitive to multipath degradation than stereo. The installers usually carried an assortment of high-gain, cut-to-frequency, gamma-matched 72-ohm three- and five-element Yagi antennas in their trucks. Cut-to-frequency antennas usually yielded a gain of about 20 dB over broadband FM devices. For extreme reception issues, a stacking harness was available so that two or more antennas could be deployed.
Ultimately, the quality was lower than traditional FM broadcasts as well, helping to contribute to Muzak’s elevator music reputation. And soon enough, satellite (and later, internet) technology superseded subcarriers—being both cheaper and allowing for higher fidelity than an FM signal that was already splitting hairs of frequency.
So why was there a sudden rush in demand for subcarriers in the 1980s? Well, in 1983, the Federal Communications Commission changed its rules on subcarriers, effectively deregulating them and allowing their use without additional oversight.
The Sun-Sentinel noted that while, in practice, this mostly meant music applications, it broadened subcarrier use cases to “any legitimate communications purpose, whether or not broadcast-related.”
And this meant that, by the mid-1980s, they were widely being used beyond their original use case, especially in major markets.
But subcarrier broadcasts weren’t just a money-making endeavor—they had value in areas you might not even think about when listening to Top 40 radio.
Which gets to why a radio exists that can only pull in a single FM station.
The year that the Physicians Radio Network, a network supported by the six major pharmaceutical companies that specifically targeted doctors and other medical professionals, was launched. The service, which distributed news and other medical information to its listeners, was transmitted via devices that were given to doctor’s offices for free. (As I noted in my piece about Channel One, doctor’s offices are excellent examples of captive audiences.) The network faced challenges because of the complexity of its format—The New York Times reported in 1981 that the service was going off the air, only for the service to expand into a portable format by the mid-1980s. (Perhaps the drug industry found the right treatment to keep it alive.) While the service did not survive into the modern day, a spiritual successor, Doctor Radio, is available on the satellite radio service SiriusXM.
How radio-reading stations help to improve access to information for blind listeners
Anyway, now that you have a basic understanding of subcarrier radios, let me tell you about the radio I got a hold of, thanks to a suggestion by my pal Matthew Keys (who is doing cool stuff on niche media these days via his publication The Desk).
The radio I acquired has one user-accessible knob, for volume. It can pull in one station, 92.9 FM, and not much else. It is described as “solid state” technology, but it is very much not digital—when I unplugged the device, the radio kept playing for about ten seconds after it lost power. And despite having one date on the back—a service date of 1988—it likely is much older. To my discerning eye, it seems to date to the 1960s.
This is not a Swiss Army knife by any means. But for its use case, it didn't need to be. See, the radio, produced for the Radio Talking Book Network, was designed specifically so that blind people could listen to the newspaper, in full, in audio form. This concept, first developed by the public radio station KSJR-FM (later known as Minnesota Public Radio), was the first example of a radio reading service, which offers a selection of reading materials to blind listeners, usually newspapers, magazines, and books, with a lean toward recent periodicals.
The concept, while new to radio, was not completely without precedent. As I wrote in 2016, the first audiobooks came about because of a need to create audio publications that targeted blind audiences. These talking books also relied on innovative audio technology, using vinyl records for distribution decades before the format became popular for music and later moving to more esoteric RPM speeds such as 16⅔ and 8⅓ to help maximize the amount of space on a single record. (An issue of Newsweek takes a while to read out cover to cover, so the added space was necessary, see.)
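The arithmetic behind those slower speeds is simple: with groove spacing held constant, playing time per side scales roughly inversely with rotation speed. A quick sketch makes the point (the 22-minute LP side is an assumed ballpark figure for illustration, not a number from the original talking books):

```python
base_rpm = 33 + 1 / 3        # standard LP speed
base_minutes = 22.0          # assumed ballpark for one LP side at that speed

# Playing time scales roughly inversely with RPM at a fixed groove pitch,
# so halving the speed roughly doubles the runtime per side.
for rpm in (16 + 2 / 3, 8 + 1 / 3):
    minutes = base_minutes * base_rpm / rpm
    print(f"{rpm:5.2f} RPM -> about {minutes:.0f} minutes per side")
```

Under those assumptions, 16⅔ RPM yields roughly double and 8⅓ RPM roughly quadruple the runtime of a standard LP side, which is exactly the kind of headroom a cover-to-cover magazine reading needed.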
Minnesota’s Radio Talking Book Network helped expand this concept to daily periodicals. In many ways, it reflects the state’s track record of technological innovation during the 1970s, something also on display in the world of computers with the Minnesota Educational Computing Consortium, the state-owned software company that gave us The Oregon Trail.
Using a mixture of staff and volunteer programmers, the service helped to give blind listeners access to the newspaper in a form that they could use. In a 1973 broadcast on the student radio station KUOM, host Larry Davenport discussed how the radio station had “one of the most select audiences in the state”: audiences that could hear radio programmers turn news stories into detailed reports, and daily comics into miniaturized comedy routines.
(An important side note: The state-funded network also occupies an unfortunate footnote in the history of gay rights. The first chief announcer for the network, Thom Higgins, was fired from his job because of his sexuality and his public advocacy for gay rights; he was fired about a year after the public radio service began. Higgins later played a pivotal role in the organized protests against 1970s-era gay rights opponent Anita Bryant, “pieing” her at an event in Iowa, a moment most famously depicted in the movie Milk. I wrote about it in a Twitter thread here if you’d like to learn more about Higgins.)
In the years after the Radio Talking Book Network came to life, its format became widely copied around the country, particularly by public radio stations that maintain these services today. While many of these stations are accessible online, devices such as the one I got a hold of continue to be offered to audiences who need them. As shown by this list from the International Association of Audio Information Services, most states have at least one station dedicated to radio reading.
I’m still torn about whether I should tear my specific machine apart to see if I can re-tune it to the correct station for my area. (From what I’ve read online, it can be done.)
But ultimately, I’m just glad services like these exist to allow accessibility to the news for audiences that need them, even if much of the world doesn’t even know about them.
Microsoft tried using FM radio subcarrier signals to turn basically everything into a smart gadget
You may remember there was a period during the 1990s where “push” technology, essentially feeds of data, were breathlessly hyped as the next big thing, with services like PointCast trying to pitch the idea of the internet as a method for getting constant ticker tapes of information at all times. (As its Wikipedia page describes it, “The company’s initial product amounted to a screensaver that displayed news and other information, delivered live over the Internet.” When you put it that way, it doesn’t sound particularly innovative.)
Essentially, it was Twitter … if you couldn’t interact with the streams of information. Despite this seemingly modest business model, it was a bandwidth hog in the early days of the internet and was frequently banned in offices. (How a company managed to screw up the bandwidth requirements of a stream of information that could have been a basic text ticker? Who knows.)
PointCast was one of the earliest busts of the dot-com boom of the late 1990s, but its original idea carried on as streams of data flooded our eyeballs in the first decade of the 2000s. But what to do about the constrained data feeds?
During the 2003 Consumer Electronics Show, Microsoft offered up one vision on what a push-technology future could look like. With none other than Bill Gates helping to make the pitch, the company showed off a subscription network called DirectBand (also known as MSN Direct), which sent short text updates over radio signals to devices like watches, coffee makers, alarm clocks, and magnets using a technology called SPOT (Smart Personal Objects Technology). Essentially, Microsoft tried to invent the Internet of Things using 2003 technology … using FM radio. And they actually got a lot closer than you might expect!
While somewhat skeptical of its chances in the market, Washington Post writer Leslie Walker nonetheless noted that Microsoft’s idea had potential even if it wasn’t Microsoft itself that executed it.
“People in the audience snickered when Gates showed off a teeny magnet he had customized to receive traffic updates from Seattle,” Walker wrote. “But his point was that such info-magnets could have a gazillion other specialized uses and be affixed almost anywhere—on briefcases, wallets, key chains, watch fobs.”
It was a strange idea at the time despite basically directly predicting the success of the smart watch a little more than a decade later. In fact, Microsoft’s network helped bring to life some formative smartwatches, most notably a model from Fossil that literally used Dick Tracy branding.
Some analysts, such as the mobile tech journalist Kevin Tofel, have praised its simplicity: while it couldn’t do much (it could pull in information from 12 curated streams of data, as well as pull down personal data from Microsoft services), what it did do, it did well. And the pricing at the time wasn’t terrible: $59 per year for a watch that could pull in streams of curated information along with your email and your MSN messages.
The problem was that it was a one-way medium. Your friends could text you to your heart’s delight, but your Dick Tracy watch could not reply.
“Ultimately, it literally suffered from bad timing. Microsoft relied upon one-way FM radio for data just at the time cellular broadband was getting off the ground,” Tofel wrote for GigaOm in 2013. “Once that happened, few wanted a watch that could only receive very limited data through 12 MSN Direct channels.”
The Microsoft network was generally forgotten by most of the public throughout the first decade of the 2000s, but it did find some niche use cases, particularly in Garmin GPS devices, where its ability to pull in traffic notifications and weather forecasts was notably useful. But the technology was not successful enough to stick around, and by the end of 2009, Microsoft had announced the service would reach its end of life by 2012.
As with smartphones, Microsoft was a key innovator in smartwatches, coming up with advances that effectively proved themselves out in the market just a few years after it exited said market, only to lose the category entirely. Using FM subcarrier technology wasn’t perfect, but it was novel for its time and likely paved the way for future innovations.
Wanna learn more about the ins and outs of FM subcarrier broadcasting? Be sure to check out this video from Fran Blanche, an engineer and YouTuber who is well known for her guitar pedal designs and deep knowledge of how everyday electronics work. It’s a good one.
So in the end, this device I got for my collection of interesting junk, a radio that can only tune a single station, is a reminder that objects with a single use case can still find value with the right audience.
That audience might be doctors; it might be businesses in search of some elevator music. It could be people who need access to a foreign-language broadcast, who can’t otherwise get one in their native language from traditional FM stations. And yes, it could be those with vision disabilities who, just like everyone else, need access to reliable information.
Today, options abound for all of these things online, but it wasn’t that long ago that they didn’t. Most audiences wouldn’t be well-served by these tools, either because the information isn’t relevant to them or because they have more efficient ways of getting it.
But because of a quirk in the way radio works, these audiences were able to be served … while not affecting the people listening to Casey’s Top 40.
We know a lot about radios—we’ve had them in our lives for decades, and it doesn’t feel like they hide many surprises anymore. But perhaps a part of that is because we’ve gotten so used to their role in our lives.
Sometimes, the things we think we understand might actually still be full of surprises.
Find this one an interesting read? Share it with a pal!