Google is finally explaining what the heck happened with its AI Overviews.
For those who aren't caught up, AI Overviews were introduced to Google's search engine on May 14, taking the beta Search Generative Experience and making it live for everyone in the U.S. The feature was supposed to give an AI-powered answer at the top of almost every search, but it wasn't long before it started suggesting that people put glue in their pizzas or follow potentially fatal health advice. While they're technically still active, AI Overviews seem to have become less prominent on the site, with fewer and fewer searches from the Lifehacker team returning an answer from Google's robots.
In a blog post yesterday, Google Search VP Liz Reid clarified that while the feature underwent testing, "there's nothing quite like having millions of people using the feature with many novel searches." The company acknowledged that AI Overviews hasn't had the most stellar reputation (the blog is titled "About last week"), but it also said it discovered where the breakdowns happened and is working to fix them.
"AI Overviews work very differently than chatbots and other LLM products," Reid said. "They're not simply generating an output based on training data," but instead running "traditional 'search' tasks" and providing information from "top web results." Therefore, she doesn't attribute errors to hallucinations so much as to the model misreading what's already on the web.
"We saw AI Overviews that featured sarcastic or troll-y content from discussion forums," she continued. "Forums are often a great source of authentic, first-hand information, but in some cases can lead to less-than-helpful advice." In other words, because the robot can't distinguish between sarcasm and genuine help, it can sometimes present the former as the latter.
Similarly, when there are "data voids" on certain topics, meaning not a lot has been written seriously about them, Reid said Overviews was accidentally pulling from satirical sources instead of legitimate ones. To combat these errors, the company has now supposedly made improvements to AI Overviews, saying:
- We built better detection mechanisms for nonsensical queries that shouldn't show an AI Overview, and limited the inclusion of satire and humor content.
- We updated our systems to limit the use of user-generated content in responses that could offer misleading advice.
- We added triggering restrictions for queries where AI Overviews were not proving to be as helpful.
- For topics like news and health, we already have strong guardrails in place. For example, we aim to not show AI Overviews for hard news topics, where freshness and factuality are important. In the case of health, we launched additional triggering refinements to enhance our quality protections.
All these changes mean AI Overviews probably aren't going anywhere soon, even as people keep finding new ways to remove Google AI from search. Despite social media buzz, the company said "user feedback shows that with AI Overviews, people have higher satisfaction with their search results," going on to talk about how dedicated Google is to "strengthening [its] protections, including for edge cases."
That said, it looks like there's still some disconnect between Google and users. Elsewhere in its post, Google called out users for "nonsensical new searches, seemingly aimed at producing erroneous results."
Specifically, the company questioned why someone would search for "How many rocks should I eat?" The idea was to illustrate where data voids might pop up, and while Google said these questions "highlighted some specific areas that we needed to improve," the implication seems to be that problems mostly appear when people go looking for them.
Similarly, Google denied responsibility for several AI Overview answers, saying that "dangerous results for topics like leaving dogs in cars, smoking while pregnant, and depression" were faked.
There's certainly a tone of defensiveness to the post, even as Google spends billions on AI engineers who are presumably paid to find these kinds of mistakes before they go live. Google says AI Overviews only "misinterpret language" in "a small number of cases," but we do feel bad for anyone sincerely trying to up their workout routine who might have followed its "squat plug" advice.