You might not be familiar with the phrase “peanut butter platform heels,” but it apparently originates from a scientific experiment in which peanut butter was transformed into a diamond-like structure under very high pressure (hence the “heels” reference).
Except this never happened. The phrase is complete nonsense, but it was given a definition and backstory by Google AI Overviews when writer Meaghan Wilson-Anastasios asked about it, as per this Threads post (which contains some other amusing examples).
The internet picked this up and ran with it. Apparently, “you can’t lick a badger twice” means you can’t trick someone twice (Bluesky), “a loose dog won’t surf” means something is unlikely to happen (Wired), and “the bicycle eats first” is a way of saying that you should prioritize your nutrition when training for a cycle ride (Futurism).
Google, however, is not amused. I was keen to put together my own collection of nonsense phrases and apparent meanings, but it seems the trick is no longer possible: if you try to get an explanation of a nonsensical phrase, Google will now either refuse to show an AI Overview or tell you you’re mistaken.
If you go to an actual AI chatbot, it’s a little different. I ran some quick tests with Gemini, Claude, and ChatGPT, and the bots attempt to explain these phrases logically, while also flagging that they appear to be nonsensical and don’t seem to be in common use. That’s a much more nuanced approach, with context that has been lacking from AI Overviews.
Now, AI Overviews are still labeled as “experimental,” but most people won’t take much notice of that. They’ll assume the information they see is accurate and reliable, built on material scraped from web articles.
And while Google’s engineers may have wised up to this particular type of mistake, much like the glue-on-pizza one last year, it probably won’t be long before another similar issue crops up. It speaks to some basic problems with getting all of our information from AI rather than from references written by actual humans.
What’s going on?
Fundamentally, these AI Overviews are built to provide answers and synthesize information even if there’s no exact match for your query—which is where this phrase-definition problem starts. The AI feature is also perhaps not the best judge of what is and isn’t reliable information on the internet.
Looking to fix a laptop problem? Previously you’d get a list of blue links from Reddit and various support forums (and maybe Lifehacker), but with AI Overviews, Google sucks up everything it can find from those pages and tries to patch together a smart answer, even if no one has had the specific problem you’re asking about. Sometimes that can be helpful, and sometimes you might end up making your problems worse.