Google’s AI search tool tells users to ‘eat rocks’ for their health


Google’s new artificial intelligence search tool has advised users that eating rocks can be healthy and suggested gluing cheese to pizza, prompting ridicule and raising questions about the company’s decision to embed an experimental feature into its core product.

“Eating the right rocks can be good for you because they contain minerals that are important for your body’s health,” Google’s AI Overview responded to a query from the Financial Times on Friday, apparently in reference to an April 2021 satirical article from The Onion headlined “Geologists recommend eating at least one small rock per day”.

Other erroneous answers included recommending that glue be mixed into pizza sauce to increase its “tackiness” and stop the cheese from sliding off, advice that appears to have been based on a joke made 11 years ago on Reddit.

More seriously, when asked “how many Muslim presidents the US has had”, the AI Overview responded: “The United States has had one Muslim president, Barack Hussein Obama”, echoing a falsehood about the former president’s religion pushed by some of his political opponents.

Google said: “The vast majority of AI Overviews provide high-quality information, with links to dig deeper on the web. Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce.

“We conducted extensive testing before launching this new experience, and as with other features we’ve launched in Search, we appreciate the feedback. We’re taking swift action where appropriate under our content policies, and using these examples to develop broader improvements to our systems, some of which have already started to roll out.”

The errors in Google’s AI-generated answers, known as “hallucinations” or fabrications, are an inherent feature of the systems underpinning the technology. The models that power the likes of Google’s Gemini and OpenAI’s ChatGPT are predictive: they work by choosing the most likely next word in a sequence, based on the data on which they were trained.
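That mechanism can be illustrated with a toy sketch. The Python below trains a simple bigram model, a drastic simplification that is in no way Google’s actual system, on an invented miniature corpus, then generates text by always picking the word seen most often next in training. Because the satirical line dominates the corpus, the statistically “best” continuation repeats the joke; truth never enters the calculation.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: a bigram model. For illustration only;
# real systems such as Gemini use vastly larger neural networks,
# but the core idea of predicting a likely continuation is the same.
# The corpus is invented, and the satirical "advice" appears twice,
# so it outweighs the sensible sentence.
corpus = (
    "geologists recommend eating one small rock per day . "
    "geologists recommend eating one small rock per day . "
    "doctors recommend eating one apple per day ."
).split()

# Count how often each word follows each preceding word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word` in training."""
    followers = bigrams[word]
    return followers.most_common(1)[0][0] if followers else "."

# Generate one word at a time, always taking the likeliest next word.
word, output = "geologists", ["geologists"]
for _ in range(8):
    word = predict_next(word)
    output.append(word)

print(" ".join(output))
# -> "geologists recommend eating one small rock per day ."
# The model has no notion of truth: it reproduces whatever
# continuation was most frequent in its training data, joke or not.
```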

While the companies building generative AI models — including OpenAI, Meta and Google — claim the latest versions of their AI software have reduced the occurrence of fabrications, they remain a significant concern for consumer and business applications.

For Google, whose search platform is trusted by billions of users because of its links to original sources, “hallucinations” are particularly damaging. Its parent company Alphabet generates the vast majority of its revenue from search and its associated advertising business.

In recent months chief executive Sundar Pichai has come under pressure, internally and externally, to speed up the release of new consumer-focused generative AI features after being criticised for falling behind rivals, in particular OpenAI, which has a $13bn partnership with Microsoft.

At Google’s annual developer conference this month, Pichai laid out a new AI-centric strategy for the company. It released AI Overviews, brief Gemini-generated answers to queries that appear at the top of many common search results, to millions of US users under the tagline “Let Google do the Googling for you” and a promise to take the “legwork out of searching”.

The teething issues faced by AI Overviews echo the backlash in February against its Gemini chatbot, whose image-creation tool produced historically inaccurate depictions of different ethnicities and genders, such as women and people of colour as Viking kings or German soldiers from the second world war.

In response, Google apologised and suspended image generation of people by its Gemini model. It has not reinstated the feature.

Pichai has spoken of Google’s dilemma: keeping up with rivals while acting ethically and remaining the search engine of record, widely relied upon to return accurate and verifiable information.

At an event at Stanford University last month, he said: “People come to search at important moments, such as the medicine dosage for a three-month-old child, so we have to get it right . . . that trust is hard earned and easy to lose.”

“When we get it wrong people let us know, consumers have the highest bar . . . that is our north star and where our innovation is channelled towards,” Pichai added. “It helps us make the products better and get it right.”
