Google’s new AI search is getting things dangerously wrong

If you regularly use Google search, you’ve probably noticed a difference in your results recently – AI Overviews at the top of the page. You might have also noticed that they are often wrong.

From prestige media outlets to social media users, there has been backlash to the artificial intelligence-generated results that now occupy that valuable online real estate. Multiple outlets, including The New York Times, Ars Technica and NBC News, have chronicled the ways in which AI Overviews gets things wrong.

“The new technology has since generated a litany of untruths and errors – including recommending glue as part of a pizza recipe and the ingesting of rocks for nutrients,” said the Times.

NBC offered this example: “An NBC News search for ‘how many feet does an elephant have’ resulted in a Google AI Overview answer that said ‘Elephants have two feet, with five toes on the front feet and four on the back feet.’”

Ars Technica broke down incorrect results from Google’s AI Overviews into the following categories: treating jokes as facts, bad sourcing, answering a different question, problems with math and reading comprehension, issues understanding that different people can have the same name, and results that are just plain weird.

According to the outlet, some of the results have been “hilariously or even dangerously wrong.”

“Factual errors can pop up in existing LLM [large language model] chatbots as well, of course,” Ars Technica explained. “But the potential damage that can be caused by AI inaccuracy gets multiplied when those errors appear atop the ultra-valuable web real estate of the Google search results page.”

However, the outlet also said that Google’s AI feature appeared to be improving quickly. In a Friday blog post, Google said that large language models’ “reliability in real-world deployment is sometimes compromised by the issue of ‘hallucination’, where such models generate plausible but nonfactual information,” and that hallucination often happens when a model is prompted with open-ended questions.

Google also said it is working to make AI responses more accurate and reliable.

“The examples we’ve seen are generally very uncommon queries and aren’t representative of most people’s experiences,” a Google spokesperson told Ars Technica of the incorrect results people have been getting via search.
“The vast majority of AI Overviews provide high quality information, with links to dig deeper on the web.”

Google hopes to bring AI Overviews to more than 1 billion users by the end of the year, the company said in a blog post. The Verge reported that there isn’t a way to disable the feature, but it did offer some tips for getting around it for people who don’t feel like scrolling past to reach meatier sources – one example is sketched below.
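One workaround that circulated widely at the time – though not necessarily among The Verge’s specific tips – is Google’s “Web” search filter, which shows traditional link results without AI Overviews. The filter can be selected from the row of tabs beneath the search box, or reached by appending the informal udm=14 parameter to a search URL – a convenience Google could change at any time:

https://www.google.com/search?q=how+many+feet+does+an+elephant+have&udm=14

Either route loads the same stripped-down results page.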

Misinformation related to AI programs isn’t new. For example, when Audacy asked ChatGPT last year for help finding the best burger restaurants in Dallas, Texas, we had to note that at least one of the recommended locations had closed. However, misinformation threats aren’t limited to AI.

According to a 2022 study published in a Springer Nature journal, “the spread of misinformation in social media has become a severe threat to public interests.”

Whether web users find information via AI Overviews, a social media post or a Wikipedia article, it is best to dig until you find the original source of the information – and to look for multiple sources that confirm it.
