18,000 Waters
One of the many issues with AI is responsibility: guardrails seem to take a backseat to growth. AI-enabled fast-food drive-thrus shouldn’t have better backstops than ChatGPT, yet they do.
Earlier this week, I found myself in a bit of a weird situation from an editorial standpoint. I wrote a piece for a local magazine about vibe coding that I was pretty happy with, highlighting how it was helping to generate some real excitement within the local community.
I was happy to highlight some of the very smart people making an impact on this scene.
But, around the same time I posted, there was another story on my mind, this one involving ChatGPT. You likely saw the headline; it was in the New York Times (see our policy on linking them), and it was, to put it lightly, stomach-churning. The ethical questions Adam Raine’s death raises are serious, and they are the subject of a high-profile lawsuit.
(It’s serious enough that OpenAI made immediate changes to ChatGPT in response to the news.)
And there are other stories of this type out there, too. The elderly man with memory issues who set out for New York by himself in a quixotic attempt to meet a Meta-operated chatbot, only to die during the trip. The former tech executive who came to believe everyone in his life (except ChatGPT) was turning on him, eventually leading him to kill his mother in a murder-suicide.
It was a heavy news week if you casually follow trends around LLMs.
Fortunately for me, a much lighter story that touches on many of the same issues hit around the same time. That story involved people who had figured out an effective way to break AI at drive-thrus: placing comical orders completely outside the understanding of your average AI bot. The video above, which has been uploaded in numerous places, shows a guy killing the AI by asking for 18,000 water cups.
(I have also seen variants of this involving people going to Wendy’s and recreating the order from I Think You Should Leave’s classic “pay it forward” sketch. You know the one—“55 burgers, 55 fries,” and so on. Trust-fund kids with fast-food budgets know how to screw with a drive-thru.)
That led to an excellent headline over at the BBC: “Taco Bell rethinks AI drive-through after man orders 18,000 waters.”
Sponsored By … You?
If you find weird or unusual topics like this super-fascinating, the best way to tell us is to give us a nod on Ko-Fi. It helps ensure that we can keep this machine moving, support outside writers, and bring on the tools to support our writing. (Also, it’s heartening when someone chips in.)
We accept advertising, too! Check out this page to learn more.
Each of these stories, in its own way, hints at the same question: Where’s the line? At what point does AI over-encroach on our lives?
This issue comes up a lot. And in my head, I’m connecting the dots in a weird way. Earlier this year, I wrote an issue comparing the use of AI to having a bionic arm. But I think the metaphor falls apart if you, as a user, communicate with AI in an addictive fashion. Whether you realize it or not, your agency is slowly being taken from you, which can become dangerous when mixed with other mental health issues. At that point it becomes a bionic suit: you’re still in there, but the AI is doing most of the work. That is not a metaphor anyone should want to find themselves inside.
About six months ago, ChatGPT added a “memory” feature that allows it to remember all of your past chats. That’s nice if you want to keep a broader conversational context going, but when that broader context is unsafe, it could make things far worse. The feature didn’t exist in time to further deepen ChatGPT’s conversations with Adam Raine, but in light of that story, it comes off as a risky move.
And the intense competitive pressure around this space suggests others might do the same. Anthropic, which generally has a better reputation for safety than OpenAI does, recently added a similar feature to Claude, according to The Verge. Its version, at least for now, includes some important limitations; in particular, it lets you reference specific chats rather than building a profile of you from your entire chat history.
But what if the pressure to match OpenAI’s approach grows?
Often the AI is described as “crashing” when a human takes over amid a weird fast-food order, but I think something else is really going on. It’s a checks-and-balances system that keeps things from going off the rails. The AI has to go to someone above its pay grade, specifically a human, to make sure the order isn’t wrong or an elaborate joke.
That feels like more control than you see from most mainstream LLM implementations.
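To make the backstop idea concrete, here’s a minimal, entirely hypothetical sketch of how such a check might work. None of these names or thresholds come from Taco Bell’s actual system; it’s just an illustration of the escalate-to-a-human pattern:

```python
# Hypothetical drive-thru order backstop (illustrative only, not any
# vendor's real system): if an order looks absurd, stop and hand off
# to a human instead of letting the AI keep going.

MAX_REASONABLE_QTY = 25  # assumed threshold; a real system would tune this


def process_order(items):
    """items: list of (name, quantity) tuples parsed from speech."""
    for name, qty in items:
        if qty > MAX_REASONABLE_QTY:
            # Above the AI's pay grade: escalate rather than fulfill.
            return escalate_to_human(items, reason=f"{qty} x {name}")
    return confirm_order(items)


def escalate_to_human(items, reason):
    return {"status": "human_takeover", "reason": reason}


def confirm_order(items):
    return {"status": "confirmed", "items": items}


# A normal order goes through; 18,000 waters gets kicked upstairs.
print(process_order([("taco", 2), ("water cup", 1)]))
print(process_order([("water cup", 18000)]))
```

The interesting design choice is that the system doesn’t try to be clever about edge cases; it simply refuses to act on them without a person in the loop.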
The question we should be asking is not why someone can order 18,000 waters from an AI chatbot at Taco Bell. It’s why that interaction has a backstop while other, more serious AI implementations don’t. Sometimes you need a reminder that this is not real.
AI-Free Links
New rule: If you run something for 22 years, you have to give people more than a month of notice before you take it offline. Sad for Typepad fans.
I saw a really terrible movie this evening, a classic piece of crap called Grizzly II. The notable thing about this movie is that it sat in a vault for 37 years, during which time three actors who appeared in its opening scene (Charlie Sheen, Laura Dern, and George Clooney) became major stars. The movie was never finished at the time, so they later tried to complete it using a whole lot of modern stock footage. It’s on Netflix if you’re curious.
A dream scenario for scroungers: a game enthusiast managed to uncover an extremely rare Famicom game selling for $12 at a well-known video game chain (the Pink Gorilla location in Las Vegas). Igo Meikan, a collection based around the classic board game Go, generally goes for four figures on eBay and elsewhere. “Proof that sometimes you can still get one over on the pros,” says Pink Gorilla co-owner Kelsey Lewin.
--
Find this one an interesting read? Share it with a pal! And back at this in a couple of days.