When Google’s new AI Overview feature generated bizarre and misleading answers to search queries last week, the company initially downplayed the notion that the technology had issues. However, on Thursday, the head of search, Liz Reid, acknowledged that these mistakes highlighted areas for improvement. “We wanted to explain what happened and the steps we’ve taken,” Reid wrote.
Reid’s post specifically addressed two viral and incorrect AI Overview results. One result suggested that eating rocks could be beneficial, while another recommended using nontoxic glue to thicken pizza sauce.
Rock eating is an unusual topic with limited online discussions, so there are few reliable sources for a search engine to reference. Reid explained that the AI tool had found a satirical article from The Onion, reposted by a software company, and mistakenly interpreted it as factual.
Regarding the glue suggestion for pizza, Reid attributed this error to the AI misinterpreting humorous content from discussion forums. “Forums are often a great source of authentic, first-hand information, but in some cases can lead to less-than-helpful advice, like using glue to get cheese to stick to pizza,” she noted.
Reid advised users to scrutinize AI-generated cooking suggestions carefully before acting on them.
Reid also argued that judging Google’s new search feature on the basis of viral screenshots is unfair. She claimed the feature underwent extensive testing before launch and that company data shows users value AI Overviews, pointing to the increased time they spend on pages discovered through the feature.
Why did these embarrassing errors occur? Reid attributed them in part to an internet-wide stress test that wasn’t always conducted in good faith, with some users deliberately trying to trip up the feature. “There’s nothing quite like having millions of people using the feature with many novel searches. We’ve also seen nonsensical new searches, seemingly aimed at producing erroneous results,” she said.
Google contends that some widely circulated screenshots of AI Overviews were faked, a claim WIRED’s own testing supports. For instance, a widely viewed social media post showed a screenshot claiming AI Overviews had said cockroaches can live in human penises; on closer examination the screenshot appeared to be fabricated, and WIRED could not replicate the result.
Even reputable outlets like The New York Times were taken in by fake AI Overview screenshots. The Times issued a correction to a claim that AI Overviews had suggested jumping off the Golden Gate Bridge as a remedy for depression, clarifying that the screenshot was a social media meme. Reid added that AI Overviews never returned dangerous advice on topics like leaving dogs in cars, smoking while pregnant, or depression.
Reid acknowledged that not everything was perfect with Google’s new search feature. The company made over a dozen technical improvements to AI Overviews, including better detection of nonsensical queries, reducing reliance on user-generated content from sites like Reddit, offering AI Overviews less frequently in unhelpful situations, and strengthening guardrails on important topics like health.
Reid’s post gave no indication that Google plans to significantly scale back AI Overviews, saying only that the company will continue to monitor user feedback and adjust the feature as needed.