No Shortage of GPUs Here

What makes a GPU a GPU, and when did we start calling it that? Turns out that’s a more complicated question than it sounds.

By Andrew Egan

Hey all, Ernie here with a piece from Andrew Egan, who has a story to share about GPUs, the chips whose economics are really annoying a lot of gamers right now.

Today in Tedium: Names change. Perhaps the most jarring element of the recent, widely reported “alien” activity isn’t so much descriptions of sonic-boomless sonic flight but that UFOs (unidentified flying objects) are now called UAPs (unidentified aerial phenomena). Companies rebrand; Google became Alphabet and the Washington Football Team decided that was a good idea. With technology, terminology tends to become antiquated as industries progress beyond understanding their own achievements. Today’s Tedium looks at changes to the GPU (now the graphics processing unit) acronym and how it heralded a new era of computing applications, while frustrating an obvious customer base. — Andrew @ Tedium


Keep Us Moving! Tedium takes a lot of time to work on and snark wise about. If you want to help us out, we have a Patreon page where you can donate. Keep the issues coming!

We accept advertising, too! Check out this page to learn more.


Ad from Tektronix Interactive Graphics, an alleged forebear of GPU technology. Or were they?

Most of the history of GPUs doesn’t really count

Computers compute numbers; most people probably understand that. Using computers in a way that feels “intuitive” to people has generally meant using some type of graphics. Or, more importantly, making the numbers look pretty. (Now you know why your math teacher wanted you to show your work. Same principle.)

How mainstream consumers came to expect graphical interfaces for working with computers is a long, fascinating history covered by many books and at least one excellent made-for-TV movie. On the backend, however, the story was just beginning.

While Windows and Apple were gaining acceptance for their point-and-click interfaces, hardcore computer users, i.e. gamers, needed those points and clicks to register a lot faster. They also wanted the graphics to look more realistic. Then they started asking for things like online multiplayer, instant chat, and a slew of other features we expect nowadays but that seemed like a lot to ask in the early to mid-1990s.

One could attribute the rise of technically demanding gameplay to the 1993 classic Doom, as well as its successor Quake, which drove consumer interest in dedicated graphics cards. However, after talking with an expert who has watched the field develop from the beginning, it’s clear the history of GPUs just isn’t that simple.

Dr. Jon Peddie first got involved in the computer graphics industry in the 1960s when he was part of a team that made 3D topographic maps from aerial photography, leading to the creation of his company, Data Graphics. By the early 1980s, he was considering retirement and a career writing sci-fi (sounds nice) when he noticed an explosion in the field that was hard to ignore. Practical applications for high performance graphics were initially driven by CAD and GIS companies, though the video game explosion of the ’80s would change that.

“Gaming was (and still is) the driver because [of] the volume of the customers,” Peddie said in an email. “The other users of 3D and GPUs were engineering (CAD, and molecular modeling), and the movies. But that market had (in the ’80s and ’90s) maybe 100k users total. Consumer 3D had millions. But, the pro market would pay more—thousands to tens of thousands, whereas the consumer would pay a few hundred. So the trick was to build enough power into a chip that could, in a final product, be sold for a few hundred.”

At this point in computing history, the acronym GPU had already been introduced into the tech lexicon. This blast-from-the-past article from a 1983 edition of Computerworld details the Tektronix line of graphics terminals. But if you look a little closer, GPU didn’t yet stand for “graphics processing unit.” Instead, this iteration stood for “graphics processor unit.” Small difference, but this is Tedium, so it must have a big impact, right? No. So is there even a difference?

“None,” Peddie explains. “Tense at best case. English is not the first language for a lot of people who write for (on) the web.”

Okay, fair enough. But this isn’t actually the problem or even the interesting element of GPU history to consider, Peddie points out. It’s the fact that before 1997, the GPU didn’t actually exist, even if the acronym was being used. A proper GPU, it turns out, requires a transform and lighting (T&L) engine.

“Why shouldn’t, couldn’t, a graphics chip or board developed before 1997 be called a GPU?” Peddie asks. “It does graphics (albeit only in 2D space). Does it process the graphics? Sure, in a manner of speak[ing]. It draws lines and circles—that’s processing. It repositions polygons on the screen—that’s processing. So the big distinction, that is a GPU must do full 3D (and that requires a T&L).”
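
To make Peddie’s distinction a bit more concrete, the “transform and lighting” in a T&L engine is ordinary matrix and dot-product math done for every vertex of a 3D model. Below is a minimal, purely illustrative sketch in Python; the function name, the 4×4 matrix setup, and the simple Lambertian shading are my own stand-ins rather than anything from a specific chip. It shows the per-vertex work that, before 1997, was left to the CPU.

```python
import numpy as np

def transform_and_light(vertex, normal, model_view, light_dir, base_color):
    """Toy version of the per-vertex work a T&L engine offloads from the CPU."""
    # Transform: move the vertex into view space with a 4x4 matrix
    # (homogeneous coordinates, so we append a 1).
    v = model_view @ np.append(vertex, 1.0)
    # Rotate the surface normal with the upper-left 3x3 of the same matrix,
    # then re-normalize it.
    n = model_view[:3, :3] @ normal
    n = n / np.linalg.norm(n)
    # Lighting: simple Lambertian (diffuse) shading -- brightness falls off
    # with the angle between the surface normal and the light direction.
    intensity = max(float(np.dot(n, light_dir)), 0.0)
    return v[:3], np.array(base_color) * intensity

# Example: one vertex, a 90-degree rotation about the Z axis, light along +Z.
rot_z = np.array([[0.0, -1.0, 0.0, 0.0],
                  [1.0,  0.0, 0.0, 0.0],
                  [0.0,  0.0, 1.0, 0.0],
                  [0.0,  0.0, 0.0, 1.0]])
position, shaded_color = transform_and_light(
    vertex=np.array([1.0, 0.0, 0.0]),
    normal=np.array([0.0, 0.0, 1.0]),
    model_view=rot_z,
    light_dir=np.array([0.0, 0.0, 1.0]),
    base_color=(1.0, 0.2, 0.2),
)
print(position, shaded_color)  # rotated position and lit color
```

A fixed-function T&L engine like the one in the GeForce 256 did this sort of math for millions of vertices per second in hardware; a 2D chip that only drew lines and shuffled polygons around the screen never touched it at all.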

Ultimately, like much of tech history, the story quickly becomes about competing claims between an industry leader and a forgotten innovator.


The Glint 3D graphics chip by 3Dlabs, arguably the first company to produce a true GPU. Largely used for “high-end 3D CAD applications,” it was released in November 1994 (not the first “real” GPU, but still a cool graphics chip). Though first to market, 3Dlabs would not enjoy the economies of scale available to its competitors, like Nvidia.

Bragging rights are claimed by the winners

Let’s get this out of the way, since it’s a common mistake: the first PlayStation did not contain the first mass-market GPU. That belief comes from the powerful marketing efforts of Sony and Toshiba. As Peddie explains, “The original PlayStation [had] a geometry transformation engine (GTE), which was a co-processor to a 2D chip that was incorrectly labeled (by marketing) as a GPU.”

Marketing is a big element of this era of GPU history, which falls just before true GPUs actually arrived. The breakthrough for a true 3D GPU was on the horizon and plenty of companies wanted to get there first. But the honor would go to a little outfit from the UK imaginatively called 3Dlabs. The specific innovation that gave 3Dlabs the title of first accurately named GPU was its development of a two-chip graphics processor that included a geometry processor known as a transform and lighting (T&L) engine. Compared with its competitors, 3Dlabs focused on the CAD market, though it was trying to make inroads with the larger consumer market by partnering with Creative Labs.

A technology demo highlighting the 3Dlabs Glint chipset.

The smaller size and professional focus of 3Dlabs meant there were still plenty of “firsts” to be had in the consumer GPU market.

The graphics-card sector was incredibly busy during this period, with one-time big names such as Matrox, S3, and 3Dfx competing for mindshare among Quake players.

But the winners write the history books, and a dominant player emerged during this period. By late 1999, Nvidia was ready to release the first mass-market consumer GPU with integrated T&L, known as the GeForce 256.

“That, by Nvidia’s mythology, was the introduction of the GPU, and they claim the invention [of it],” Peddie explains. “So you can slice and dice history as you like. Nvidia is at $10 billion on its way to $50 billion, and no one remembers 3Dlabs.”

(Side note: Nvidia is and always will be a noun and not an acronym, despite the widespread belief that it is one.)

Pretty soon, the market would be loaded with competing GPUs, each aiming at its own particular niche. Canadian manufacturer ATI Technologies, later purchased by Nvidia’s biggest competitor, AMD, attempted to differentiate its entry into the market by calling its GPU a VPU, or visual processing unit, even though the two were the same thing. This effort didn’t last.

“ATI gave up, they couldn’t stand up to Nvidia’s superior (and I mean that) marketing skills, volume, sexiness, and relentless push,” Peddie says.

By the early 2000s, major players like Nvidia had come to dominate the consumer market, quickly becoming villains to gamers everywhere. Interestingly enough, that market consolidation helps explain why high-end graphics cards are so hard to find nowadays.

The Nvidia RTX 3080. Behold! One of the most coveted items in the world. And it’s not even the top of the line.

So who do we blame for that GPU shortage, anyway?

If you’ve gotten this far into an article on GPU history and naming convention, I bet you’re wondering when I’m going to get to the Great GPU Shortage of 2020 (and probably beyond).

For those who don’t know what I’m talking about, the gist is this: the price of higher-end GPUs has exploded in recent months, if you can even find them.

For example, the folks over at Nvidia have three models of graphics cards that are generally sought after by gamers:

  • RTX 3090: MSRP $1,499
  • RTX 3080: MSRP $699
  • RTX 3070: MSRP $499

The individual merits of these models can be (and very much are) debated relative to their price points and performance. However, scarcity has sent resale prices for these GPUs through the roof. Current listings price the middle-tier RTX 3080 at $1,499, while the 3090 and 3070 are nearly impossible to find. One listing for a 3090 on eBay is over $3,000 at the time of writing.

The AMD line of graphics cards also deserves a mention here. Though its cards are not as highly sought after because, traditionally, they haven’t been as powerful, AMD has nonetheless been affected by the same supply chain limitations on GPU manufacturing. Like the Nvidia line, the AMD RX 6700, 6800, and 6900 models have seen similar price spikes, with most fetching more than twice their original value in resale markets.

(Ed note from Ernie: Some fun context here—my old Xeon has a refurbished AMD RX 570, a card I paid slightly more than $100 for on the Newegg website in mid-2019. That same card, which is basically a budget model and was already a little old at the time I purchased it, currently sells for $599 on Newegg’s website.)

Clearly there is heavy demand, and capitalism is usually pretty good at meeting it. As with many things that went wrong in 2020, a good bit of the blame is being placed on COVID-19. Manufacturing hubs in China and Taiwan, along with most of the world, had to shut down. While much of the work in hardware manufacturing can be automated, the delicate nature of GPUs requires some degree of human interaction.


A flow chart describing the current shape of the GPU industry. (courtesy Dr. Jon Peddie)

Still, this explanation oversimplifies trends that had been building in the graphics industry long before COVID-19 hit. Again, I’ll let Dr. Peddie explain:

About 15 [plus] years ago, the manufacturing pipeline was established for GPU manufacturing (which includes sourcing the raw silicon ingots), slicing and dicing the wafers, testing, packaging, testing again and finally shipping to a customer. All the companies in the pipeline and downstream (the OEM customers who have a similar pipeline) were seeking ways to respond faster, and at the same time minimize their inventory. So, the JIT (just in time) manufacturing model was developed. This relied on everyone in the chain providing accurate forecasts and therefore orders. If one link in the chain broke everyone downstream would suffer … When governments shut down their countries all production ground to a halt – no parts shipped—the pipeline was broken. And, when and if production could be restarted, it would take months to get everyone in sync again.

At the same time people were being sent home to work, and they didn’t have the tools needed to do that. That created a demand for PCs, notebooks especially. [Thirty to forty percent] of PCs have two GPUs in them, so the demand for GPUs increased even more.

And then [crypto] coins started to inflate … Now the miners (people who use GPUs to monitor and report …) were after every and any GPU they could get their hands on. That caused speculators to buy all the graphics boards and offer them at much higher prices.

So, the supply line got hit with a 1-2-3 punch and was down for the count.

And that was him keeping a long story short. To put it plainly, companies that make GPUs were operating on a thin margin of error without the ability to predict the future. And that explanation applies more to the general GPU market, only tangentially reaching the higher-end customers.

Another point of frustration to add here was the unfortunate timing of the latest generation of video game consoles in 2020, which also meant a new generation of video games. The highly anticipated PlayStation 5, along with Cyberpunk 2077, was met with numerous supply and technical issues upon launch. Cyberpunk players reported inconsistent experiences, largely dependent on the hardware the game was being played on. On the differences between the game on a PS4 and a PS5, one YouTuber commented, “At least it’s playable on PS5.”

While Dr. Peddie expects the shortage to self-correct by the first quarter of 2022 (hooray …), he is not optimistic about the industry avoiding such missteps in the future.

“The [next] problem will be double-ordering that is going on now and so we have the prospect of a giant slump in the semi market due to excess inventory,” he concludes. “Yin-yang—repeat.”

There is a lot to learn from history, even when it’s fairly recent. While it might be tempting to pin everything on the market’s failure to meet demand, the story is obviously more complicated. Though GPUs have become something billions of people rely on daily, the highest-performance parts remain the concern of a few with niche interests.

Still, the larger market should pay attention to frustrated gamers, at least on this point. Their needs push the industry toward innovations that later become standard in more common devices. With each iteration, those advanced graphics trickle down to people who hadn’t noticed them before but now expect them.

After all, if it doesn’t have painstakingly realistic 3D graphics, can we even call it a phone anymore?

--

Find this one an interesting read? Share it with a pal!

Oh, and a lot of appreciation for Dr. Jon Peddie for his time and his insight. Quite conveniently, he is also working on a book called “The History of GPUs,” which will go into much greater detail about this fascinating slice of computing history.

Andrew Egan

Your time was just wasted by Andrew Egan

Andrew Egan is yet another writer living in New York City. He’s previously written for Forbes Magazine and ABC News. You can find his terrible website at CrimesInProgress.com.

Find me on: Website