Today in Tedium: Every time I use pieces of the early internet, I get this warm feeling in my chest. It’s hard to describe, but I imagine it’s a feeling not unlike the one that went through the crowd during the Sex Pistols concert at Manchester’s Lesser Free Trade Hall on June 4, 1976. There’s a sense of purity and simplicity there that is hard to recapture through other means—the sense that I’m witnessing something culturally important that, in its own way, could change the world. It feels unadulterated, without the frayed ends and sense of familiarity that come with years or even weeks of constant use. And it’s one of those things where, if you feel it once, it’s kind of like a drug. I had that feeling recently when I was reading up on Gopher, a part of the internet that got overshadowed by the World Wide Web, but in its own quiet way, still lingers on. I wanted to check its pulse—and while it’s not a hustle-and-bustle affair in the way that, say, Twitter is, it carries on. Tonight’s Tedium talks about the Gopher scene in 2017. Yes, there still is one. — Ernie @ Tedium
Are you in NYC, San Francisco, Chicago, Los Angeles or Washington DC? We do unconventional tours at the best museums in your city. It’s kind of like Tedium for museums. We don’t talk about the most famous paintings or the newest collection; we find the esoteric stories that make even the “boring art” amazing.
Today’s issue is sponsored by Museum Hack. (You can sponsor us, too.)
70
The standard port number that Gopher uses for online connections, a standard set in stone in 1993 by the Internet Assigned Numbers Authority. (The web generally uses port 80, while Telnet uses port 23, and FTP port 21.) Despite being in heavy use throughout the early ’90s, the technology faded from use as the web became more common, and as a result, it’s difficult to find a modern tool that allows you to connect to Gopher sites. (One exception is Matt Owen’s Gopher Browser, a client for Windows that came to life relatively recently.) The Overbite Project, located at Floodgap Systems, has a list of preferred clients, if you’re interested in hopping on board.
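Because the protocol is so small, you can speak it with nothing but a raw socket: a client connects to port 70, sends a selector string terminated by a CRLF, and reads until the server closes the connection. Here’s a minimal sketch in Python—`gopher.floodgap.com` is Floodgap’s real server, but the helper names are our own:

```python
import socket

def gopher_request(selector: str) -> bytes:
    # A Gopher request is just the selector string followed by CRLF.
    # An empty selector asks for the server's root menu.
    return selector.encode("ascii") + b"\r\n"

def gopher_fetch(host: str, selector: str = "", port: int = 70) -> bytes:
    # Connect, send the selector, then read until the server hangs up.
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(gopher_request(selector))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)

# e.g. gopher_fetch("gopher.floodgap.com") returns Floodgap's root menu
```

That’s the whole client side of the protocol—no headers, no negotiation, no state.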
Five things you should know about Gopher’s history
- How it was born: In 1991, a group of computer scientists that worked in the Microcomputer Center at the University of Minnesota built a lightweight method of accessing and distributing information online. The design of the system was such that the server load was very modest. The university at first disowned the project, but public interest kept it alive.
- What it was like: It was designed for a text-based interface, and it showed. Highly structured around a file system, it focused less on appearance and more on organization. The result particularly shines in Lynx, the text-based web browser.
- Interesting quirks: Archie Comics got a lot of love on the early internet, particularly on Gopher. Mimicking the Archie FTP search tool, a team at the University of Nevada-Reno came up with Veronica, a search tool specifically built to search the entire Gophersphere. It was Gopher’s version of Google, without the highly commercial element. A variation of Veronica, Jughead, was created for searching on a single Gopher server.
- Notable users: Perhaps the most famous user of Gopher in 1993 was Adam Curry, the MTV VJ and later podcast innovator who purchased the mtv.com domain and used the domain to host an unofficial online presence for the TV network. (When he left MTV, Curry’s ownership of a valuable three-letter domain led to a messy legal battle.)
- The turning point: Two things happened in 1993 that ultimately proved greatly damaging to Gopher as a medium. First, the release of the graphical NCSA Mosaic, which supported Gopher but ultimately focused on web technologies, eventually helped the web surpass Gopher in uptake. And second, the University of Minnesota, which had not properly resourced Gopher, requested a licensing fee for for-profit uses of the platform, and did so in a way that scared off even non-commercial users. By the late ’90s, web browsers had stopped supporting Gopher, hastening its return to obscurity.
An ASCII-art rendering of Floodgap founder Cameron Kaiser. (via Kaiser’s homepage on Gopher)
This guy might be the most influential figure in the Gophersphere in 2017
Cameron Kaiser has a lot of time, heart, and soul invested in Gopher. But don’t mistake his passion for the protocol and its many servers for mere nostalgia. He sees Gopher as structurally better than the Web in a number of hugely important ways.
“I like a lot of things about Gopher—its easy parsing, the simple protocol, low bandwidth and computing requirements and relatively few moving parts,” he explained to me in an interview. “I think the Web has gone the wrong direction on all of these attributes, and I didn’t want to see Gopher go away in its shadow.”
The operator of Floodgap Systems, who has been active on Gopher since 1993 and has operated his own servers since 1999, has found himself in the position of being the Gopher protocol’s most important steward.
Among the things that Floodgap does that are valuable for Gopher: It watches over a sizable repository of unique content on its own Gopher server; it maintains a list of active and recently updated Gopher servers, so they can be easily found and used; it hosts the only active Veronica-2 search engine on the entire Gophersphere; it keeps a list of clients for each platform; and, most importantly for people who don’t have access to such clients, it offers a web-based proxy for accessing Gopher sites.
While he points out there are some weaknesses in the technology he offers, it’s hard to ignore the impressiveness of what’s mostly a one-man shop. He points in particular to the strides of his Veronica-2 system.
“Even though Veronica-2 is hardly Google-class, I’m proud of how much it has indexed, that the system is also aggressive about expiring servers that are gone, and the fact that it gives people a reliable foothold into Gopherspace to look at what’s there,” he noted by way of example. “Floodgap is also one of the few sites providing automatically maintained news and weather; there is a battery of systems on the backend that find, convert and index content for use and it all runs generally without intervention.”
Why put in all this work? In large part, it’s because he sees Gopher as an extremely important platform, one that is both structurally consistent and is designed to put the power of the interface into the hands of the user—unlike a website where the visual look and functionality is driven by the developer. This, notes Kaiser, holds benefits specifically for machines of an older vintage.
“The retro community is discovering the ugly truth: If it can’t browse the Web, people think it’s not useful as a computer,” he explained. “And a 1MHz 6502 or an old 68K Mac can’t browse the modern web. But they can browse Gopher because the protocol and interface makes little demand on the client, which happily by simple convergence is also Web-like, and there are many resources out there that are still hosted on Gopher.”
“Gopher is the information without the flair, the HTML without the Javascript. Gopher gives me what I want when what I want is to read stuff, not like/comment/interact/favorite/share etc. I’m a big fan of all of those things, but sometimes I just want to read a thing on an old computer and follow a few links. Gopher lets me do that. It’s ultimate Old Web and I am one of those ultimate Old Web ladies who still uses Lynx occasionally just so some BOFH will see it in their web logfiles and, hopefully, smile.”
— Jessamyn C. West, a Vermont librarian and onetime MetaFilter employee, discussing why she worked to convince the community site to bring back its long-dormant Gopher server, which it relaunched last year after a 15-year hiatus. (BOFH, in case you’re wondering, is “Bastard Operator From Hell,” a fictional sysadmin that dates back to the Gopher era.) So how much use is the Gopher version of MetaFilter getting? According to site operator Josh Millard, the read-only server is generally pretty quiet and allowed to live on its lonesome, but it does have a certain appeal for some types of users, especially on long comment threads, when CSS and Javascript can slow down the page. “It’s definitely got some appeal as a lightweight option for the nuclear bunker,” Millard said. MetaFilter is by far the best-known mainstream site in the modern-day Gophersphere, but it’s far from the only one.
People are still doing innovative things with Gopher, even now
Last month, the long-dormant search engine AltaVista made a surprising comeback onto the internet, in all its late-’90s glory.
No, Verizon didn’t get any weird ideas about reviving the name after completing its recent acquisition of Yahoo. Instead, a young hacker-type who works for CloudFlare launched a brand new version of AltaVista, based on a 20-year-old server app called AltaVista Personal, for the simple purpose of creating a Gopher search engine.
“The idea was originally a concept I had to prove to a friend you can still run 1996 software in a modern system,” Ben Cox explained. “Gopher is a conveniently retro data source!”
Cox, who is 22—and was therefore a toddler when AltaVista’s server software was first released—noted that much of his work is based around the intricacies of the HTTP and HTTP/2 protocols, making working in Gopher a comparative cakewalk.
“Unlike HTTP and HTTP/2, where there are lots of odd rules you may have to follow, Gopher has very few rules you have to follow, and most of them involve the logic behind serving the directory pages, not content itself,” he explained. “This makes it a great hobby project, since it’s entertaining to use, and the edge cases aren’t likely to be frustrating to deal with.”
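The directory rules Cox refers to fit in a sentence: per RFC 1436, each menu line is a one-character item type glued to a display string, followed by a selector, a host, and a port, all tab-separated, with a lone `.` marking the end of the listing. A sketch of a parser for that format (the function name is ours):

```python
def parse_menu_line(line: str):
    # A Gopher menu line (RFC 1436): item-type character + display string,
    # then selector, host, and port, separated by tabs.
    line = line.rstrip("\r\n")
    if line == ".":
        return None  # a lone dot terminates the listing
    item_type, rest = line[0], line[1:]
    display, selector, host, port = rest.split("\t")[:4]
    return {"type": item_type, "display": display,
            "selector": selector, "host": host, "port": int(port)}
```

Item type “0” is a text file, “1” is another menu, “7” is a search—and that one-character hint is essentially all the metadata the protocol ever sends.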
(In case you’re in the mood to try to build your own Gopher AltaVista server, he helpfully put the code up on GitHub.)
He’s not the only hobbyist cracking Gopher’s bones. A slightly older project that added a lot of value to Gopher as a whole is Gopherpedia, which (as you might guess) is a Gopher version of Wikipedia.
In a text-only interface like Lynx, it feels utterly natural, like Wikipedia was made for this format. I know I was smitten. But creator Colin Mitchell says that he sees the tool as being better for some use cases than others, due in no small part to its lack of hyperlinks.
“I hear from a lot of people that they use Gopherpedia because it works really well on low-bandwidth connections. If you know exactly what you want to read about, you can look it up and start reading without loading all the extra chrome that comes with Wikipedia,” Mitchell told me in an interview. “On the other hand, I think Gopherpedia really suffers from the lack of hyperlinks, because one of the great things about wikipedia is the serendipity of finding really interesting links in an article you’re reading.”
So why Wikipedia? Turns out Mitchell had spent some time working on a Ruby-based Gopher server named Gopher 2000, and wanted a project that put the server through its paces. He picked the largest thing possible, of course.
“I like to joke that it’s probably the biggest site in Gopherspace in terms of content, but I think that must actually be true,” he added.
While not officially sanctioned by the Wikimedia Foundation, it’s polished enough that it seems like it should be. (While the server runs into the occasional hiccup, it’s quite slick for a service that 50 to 100 users rely on daily.) And he’s still making improvements. At first, the platform imported Wikipedia articles en masse, but eventually he moved to an API-based interface “so in theory it’s always up to date.”
So what drives projects like these, anyway? Clearly, the public benefit of these ideas is relatively small. A big part of it might simply be that it’s good for practice. Mitchell cited his work on Gopherpedia as a boost to his skills with the Ruby programming language, for example.
“I’ve gained a lot of respect for early internet technologies, and an interest in keeping them alive as much as possible,” Mitchell noted.
“Gopher, naturally, will never be what it was,” Cameron Kaiser admitted in his comments on the platform.
It’s not 1993, and pretty graphics won out, even though they use a whole lot more bandwidth, put stress on servers, and inevitably force us to use more powerful technology than we really need for basic tasks.
That said, nearly everyone I talked to for this piece spoke up on how Gopher’s general capabilities—in particular, its ease of use—remain a virtue even into 2017. It’s not hard to get a server on Gopher, for example, if you have even a touch of technical interest. The protocol is dead simple. There’s even a tool for converting WordPress posts directly into a “phlog,” the Gopher variation of a blog.
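To give a sense of just how dead simple it is: a whole phlog server fits in a few dozen lines. Here’s a sketch using Python’s standard `socketserver` module—the menu items are made up, and the port is 7070 because a real public server would bind the privileged port 70:

```python
import socketserver

HOST, PORT = "localhost", 7070  # real public servers listen on port 70

def menu_bytes(items):
    # items: (type_char, display, selector, host, port) tuples,
    # rendered as tab-separated menu lines ending with a lone dot.
    lines = ["%s%s\t%s\t%s\t%d" % it for it in items]
    lines.append(".")  # a lone dot ends a Gopher listing
    return ("\r\n".join(lines) + "\r\n").encode("ascii")

class GopherHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Read the selector line; this toy server ignores it and
        # answers every request with the same phlog-style menu.
        self.rfile.readline()
        self.wfile.write(menu_bytes([
            ("i", "Welcome to a tiny phlog", "", "null.host", 1),
            ("0", "First post", "/posts/1.txt", HOST, PORT),
        ]))

def serve(port: int = PORT):
    # Call serve() to actually bind the port and run forever.
    with socketserver.TCPServer((HOST, port), GopherHandler) as server:
        server.serve_forever()
```

Point any Gopher client (or Floodgap’s web proxy) at the running server and the menu just appears—no templates, no TLS, no build step.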
It’s with these phlogs—particularly those located on the SDF Public Access UNIX System, a nonprofit service which Jessamyn West describes as “what I remember Gopher servers to be like”—that the true potential of Gopher is laid bare. It feels intimate in a way that the web hasn’t since perhaps the earliest days of LiveJournal.
Gopher feels like the place to go if you want to pretend Donald Trump doesn’t exist for half an hour. If you took Facebook and removed all commercial influence from it—along with the cruft that such influence brings—you might get something like Gopher.
Of course, the question is, is there room for something like this on the modern-day internet? Kaiser suggests there is—especially as more machines become “vintage,” unable to keep up with the ever-increasing system requirements of the web.
“There’s very little barrier to entry and it’s conceptually simple to understand and get up and running,” Kaiser said of Gopher. “And that will ensure its long term survival even if only at a very low level into the future.”
Sure, it doesn’t look like much, but perhaps looks were never the point.