The Internet, Through A Filter

As content filters re-enter the digital conversation, a look back at the internet filters of the ’90s, and the librarian who sold the Supreme Court on them.

By Ernie Smith

Today in Tedium: If there’s one thing I’ve learned in my years as a writer, it’s that when someone dislikes something enough to email about it, they start their message with “I read with great interest …” before diving into their gripe. With that in mind, I read with great interest Tumblr’s announcement about censoring adult content on its platform, which saddened me as a longtime Tumblr user—not because I was looking for that content, but because a creative outlet I once greatly appreciated was losing much of its freedom. The filter is terrible, of course, and its terribleness reminded me of the bad old days of early web filtering, when the internet was new and its capabilities poorly understood. And as the conversation heats up around the European Union’s Article 11 and Article 13—the latter of which would effectively require pervasive copyright filters on many platforms—now’s a good time to look into that history. I love you, Tumblr, but today’s Tedium is talking filters. — Ernie @ Tedium

“We want to find ways for parents to have an easier time exercising appropriate control over messages … that they feel are not appropriate for their own children.”

— Then-Vice President Al Gore, speaking in 1995 about his support for a computer chip designed to block certain kinds of programs in television sets. Gore’s support for the V-Chip, a parental-control tool, helped drive the passage of the Telecommunications Act of 1996, which included one of the first governmental attempts to block online content, the Communications Decency Act of 1996. That law, which barred the transmission of indecent content to children, helped kick off the uptake of web filtering software, though its indecency provisions were blocked by the courts just a few months after passage and struck down by the Supreme Court in 1997, thanks in no small part to the Electronic Frontier Foundation’s Blue Ribbon Campaign, one of the first successful examples of digital activism. (One portion of the CDA that has stood the test of time, Section 230, gives service providers immunity for content posted by their users.)

A box for Net Nanny, which (like many of its competitors) played itself up as a free-speech tool. (via eBay)

Five early examples of web filtering tools

  1. WebSense. Perhaps the best-known filtering program for many years, WebSense came about after its founder, a man named Phil Trubey, saw the potential of the web and realized that filtering products were necessary. “There were a few primitive home-based filtering products on the market when I came up with the concept,” Trubey told The Tampa Tribune in 1998. “But no one had yet introduced a solution that would monitor employees’ internet use.” Yes, while the proxy server-style software is probably best known in schools or libraries, its intended audience was the workplace—something reflected in the fact that the tool has evolved into the modern cybersecurity offering Forcepoint.
  2. Net Nanny. One of the earliest tools targeted specifically at parents, this content blocker initially took a very aggressive approach, relying on keyword filters rather than a list of blocked sites, an approach documented by PeaceFire, a digital resource that tracked content-blocking tools. (A minimal sketch of this kind of keyword filtering appears after this list.) “Every time ‘What’s your name?’ or a word such as ‘sex’ crosses the screen, Net Nanny shuts the whole thing down and the kid has to get Mom’s help to turn the computer back on,” founder Gordon Ross explained in a 1995 Montreal Gazette article, adding that the goal was to leave the ultimate decision up to the user, rather than the government. “Censorship should come from the computer operator alone, not from the state.” Net Nanny, after some ownership changes, is still actively sold today.
  3. SurfWatch. Dating to the same period as Net Nanny, SurfWatch likewise leaned on aggressive keyword-based blocking rather than a massive list of blocked sites, which became an issue after the tool blocked a page on the White House website featuring information about the Clinton and Gore families. (The problem? The URL was titled couples.html.) The company, however, was better than others at hearing out criticism from groups such as the Gay & Lesbian Alliance Against Defamation, and it eventually stopped blocking sites on the basis of sexual orientation. The software may have played a role in the striking down of the Communications Decency Act, as the company argued in federal court that its approach was more reasonable than an overarching law.
  4. CyberSitter. This platform, initially sold by Solid Oak Software, has long had a reputation for taking a more conservative approach to what it would block, something it did not shy away from after GLAAD published a 1997 report that called its software “homophobic.” Speaking to ZDNet about the report, Mark Kanter of Solid Oak leaned into that reputation even as competitors distanced themselves from similar accusations. “That has driven our sales to some extent—the fact that we have admittedly blocked gay and lesbian sites, that we have blocked NOW,” Kanter explained. The platform also had a reputation for aggressiveness, at one point threatening to block every site on PeaceFire’s ISP if the provider did not shut the site down. The tool is still sold today, including as a hardware appliance that filters at the network level.
  5. Cyber Patrol. After initially facing similar criticism from GLAAD over blocking LGBT content, the maker of this software added a GLAAD member to its oversight board in an effort to quell concerns about its platform. The company took a similar approach to hate speech, teaming with the Anti-Defamation League on a filter designed to tackle such content, one that forwarded blocked users to the ADL’s website. Cyber Patrol, at one point owned by Mattel, was more recently acquired by Content Watch, which also owns Net Nanny.
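
To make the two philosophies in that list concrete, here’s a minimal sketch in Python of how such a filter might work. This isn’t any vendor’s actual code: the keyword and host lists below are hypothetical, and the real products wrapped this sort of logic in proxy servers or system hooks.

    # A toy sketch of the two early filtering approaches: keyword matching
    # (Net Nanny/SurfWatch style) versus a curated list of blocked sites.
    # The lists here are hypothetical, for illustration only.

    BLOCKED_KEYWORDS = {"sex"}                  # hypothetical keyword list
    BLOCKED_HOSTS = {"blocked-example.com"}     # hypothetical block list


    def keyword_blocks(page_text: str) -> bool:
        """Naive keyword filtering: block if any banned word appears in the text."""
        lowered = page_text.lower()
        return any(word in lowered for word in BLOCKED_KEYWORDS)


    def blocklist_blocks(host: str) -> bool:
        """Blocklist filtering: block only hosts a human reviewer put on the list."""
        return host.lower() in BLOCKED_HOSTS


    # The keyword approach is cheap to run but indiscriminate: a news story
    # that merely mentions the word gets blocked alongside actual adult content.
    print(keyword_blocks("A legitimate news story that mentions the word sex"))  # True
    print(blocklist_blocks("whitehouse.gov"))  # False; it isn't on the list

The tradeoff is the one the rest of this piece keeps running into: keyword matching scales without human review but over-blocks wildly, while a hand-curated list under-blocks the moment the web grows faster than its reviewers.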

2000

The year that a piece of software called cphack was released by a pair of programmers. The tool allowed users to get around Cyber Patrol’s encryption to take a look at the full database of blocked sites, as well as to turn off the software. (This was possible because Cyber Patrol was installed on the local machine, rather than running as a proxy tool the way WebSense did.) Soon after it was developed, Cyber Patrol’s maker sued the hosts of the software—including PeaceFire, which frequently found itself at the center of debates around blocking software—and the defendants received legal representation from the American Civil Liberties Union. The distributors ultimately prevailed when the U.S. Copyright Office decided that “reproduction or display of the lists for the purposes of criticizing them could constitute fair use.”

A computer lab of the kind that requires filtering tools thanks to U.S. law. (Jonathan Reyes/Flickr)

How online filtering created a new front in the culture wars

While there were plenty of examples of networked technology before the late ’90s—the Free-Net, for one—things started to pick up after the web became a thing and started appearing in common settings; think schools, cybercafes, offices, and so on. Things had reached scale.

And this created a market for digital filters, which were intended to serve a role not that dissimilar to the V-Chip’s: block the bad stuff from being accessed online while letting most of the good stuff through. That sounded good in theory, but the problem was the same one the folks at Tumblr are running into right now: filtering software wasn’t very good, and the “I know it when I see it” approach to indecency and obscenity, famously articulated by Supreme Court Justice Potter Stewart in the 1964 case Jacobellis v. Ohio, breaks down online. When new webpages are being produced by the thousands or even millions each day, you can’t possibly see everything, and suddenly it becomes a matter of grappling with a whole lot of different standards of what’s safe and what’s not.

In other words, it was a First Amendment issue, and a knotty one at that. Installing the V-Chip in TVs? Compared to content filters on the internet, relatively painless—as soon as Al Gore was convinced, it became downright easy to make the case for it, because it empowered parents without actually blocking anything for folks who didn’t want to use it. While some broadcasters might have felt frustrated by a decision that could affect their ad revenue, it was easy for folks who weren’t in the target audience to ignore.

But internet filtering software had many more variables. The internet didn’t have a standardized rating system like television or movies. Anyone could create anything on it—and everyone did. Filtering had to account for far more edge cases, such as the English town of Scunthorpe, whose residents found themselves blocked by profanity filters because of an unfortunate string of letters buried in the town’s name.

Naturally, this became a challenge as Net Nanny, WebSense, and the rest filtered with differing sets of standards, meaning you were basically controlling content based on someone else’s opinion of what’s decent and what’s not—as well as how often they chose to update their filters. Something, inevitably, would get through. Kids are smart.
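
To see why a case like Scunthorpe trips these tools up, here’s a toy Python sketch, again not any shipping product’s code, of naive substring matching flagging an innocent place name (Essex is used here purely as an illustration of the same class of false positive) and of word-boundary matching avoiding that particular misfire.

    import re

    BANNED = ["sex"]  # hypothetical banned term, for illustration only


    def substring_blocks(text: str) -> bool:
        """Blocks if a banned term appears anywhere, even inside another word."""
        lowered = text.lower()
        return any(term in lowered for term in BANNED)


    def word_boundary_blocks(text: str) -> bool:
        """Blocks only whole-word matches, which spares Essex and Sussex."""
        return any(
            re.search(rf"\b{re.escape(term)}\b", text, re.IGNORECASE)
            for term in BANNED
        )


    for phrase in ["Essex County Council", "Sussex tourism board"]:
        print(phrase, substring_blocks(phrase), word_boundary_blocks(phrase))
    # Essex County Council True False
    # Sussex tourism board True False

Word boundaries fix the Scunthorpe class of false positive, but they do nothing about context: a page on sex education still gets blocked, which is exactly the sort of problem the editorial quoted below describes.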

A 1999 editorial in the Quad-City Times really nailed the problem, suggesting that handing the job to an automated filtering app, rather than an actual person, created complex problems that couldn’t be easily sorted out by algorithm.

“Unfortunately, the software is not very refined,” the editorial board wrote. “By restricting access to sites that mention sex, for example, WebSense blocks access to sites that contain legitimate news stories on such topics as impeachment of the president.”

You could also argue things the other way, though, as Oregon librarian David Burt successfully did. Burt, concerned that children would access indecent material at the library, started a group called Filtering Facts, which used Freedom of Information Act requests filed by Burt and a team of volunteers to highlight cases where indecent material had been accessed at libraries.

“I want to keep letting people know about the problem,” Burt told The New York Times in a 1999 article that raised his profile significantly.

The stance went against the party line of librarians at the time—that internet access wasn’t dangerous, and that the freedom the internet offered was more important. Many libraries declined to go along with the FOIA requests, noting that existing laws prevented them from revealing who had accessed certain kinds of information.

Dangerous Access

Burt’s research, which often took aim at traditional bodies such as the American Library Association, found support from the Family Research Council, which published his findings in a report titled Dangerous Access. The report advocated for legislation to require filtering tools.

“The failure of many libraries to prevent these incidents combined with the demonstrated effectiveness of filtering software supports the appropriateness of legislation to require the use of filters in public libraries,” the report’s introduction stated.

Burt’s work directly helped drive the passage of a piece of legislation called the Children’s Internet Protection Act, which required libraries and schools to install filters on their computers if they wanted access to federal funding. (That last part is key: the law tied filtering to funding, not to an outright censorship mandate.)

But his work went deeper than the report itself: he spoke at congressional and regulatory hearings before and after the law’s passage, and when the American Library Association and the American Civil Liberties Union sued over the law, the Department of Justice brought him on as a consultant in the ensuing legal battle. He had filed too many FOIA requests to simply let the issue go once the report was published.

Dangerous Access, and Burt’s work on it, were directly cited in the Supreme Court’s 2003 decision upholding the law; the court reasoned that the requirement did not violate the First Amendment because libraries could turn the filter off for adult patrons who asked. The librarian who went against the grain ended up changing the law.

Burt, these days, is a Microsoft employee who heads up the company’s compliance efforts on stuff like GDPR—which makes sense, as parental controls are a compliance issue when you break it down.

Internet filtering, love it or hate it, is here to stay.

In many ways, online filtering has blended into the background by this point, its legal battles largely decided in the U.S. For years, it was an uncomfortable legal conflict; now, it’s the target of lighthearted jokes as Starbucks decides to turn on the Wi-Fi content filters that most other retailers already tend to use.

Even Disney got in on the action relatively recently by releasing its Circle device, which effectively allows parental controls at a network level. Whether or not you think it’s censorship, the fact is, the issue is basically decided at this point.

(Online censorship and filtering schemes by other countries, such as China and more recently Turkey, are reminders that things could have gone very differently in the U.S. if not for advocates like the ACLU and EFF standing up for our First Amendment rights.)

Of course, there are always tweaks happening. The passage earlier this year of the Fight Online Sex Trafficking Act (FOSTA), which took aim at Section 230 of the Communications Decency Act, is believed to have played a role in Tumblr’s decision to aggressively censor its platform, despite the fact that the type of censorship it’s trying to do—based on images, rather than text, because a lot of Tumblr posts don’t have any text—is basically impossible to do well.

Tumblr is owned by Oath, which is owned by Verizon. It’s a private company. It can do what it wants, really—as frustrating as that is. (May I suggest Mastodon instead?)

But content filters are imperfect, and there’s momentum in favor of their increasing use, particularly in the European Union, where the desire to protect copyright is leading to decisions that could permanently screw up online culture—most notably in the form of Article 13, a move to require platforms to check uploads against a database of copyrighted material and block any matches. It’s so bad that, as the EFF’s Cory Doctorow recently noted, EU politicians have taken to arguing that no, it’s not filtering, even though it is.

It’s one thing to block stuff that people don’t want their kids to see—whether at home or at the library. It’s another entirely to filter out the stuff that makes the internet, well, the internet.

Your time was just wasted by Ernie Smith

Ernie Smith is the editor of Tedium, and an active internet snarker. Between his many internet side projects, he finds time to hang out with his wife Cat, who's funnier than he is.