After the new feature suggested actions such as eating rocks or adding glue to pizza sauce, the company plans to limit the search queries that produce such summaries

Google announced on Thursday that it will refine and adjust the AI-generated summaries it shows in search results. The announcement came in a blog post explaining why the feature had produced bizarre and inaccurate answers, such as suggesting that people eat rocks or add glue to pizza sauce. The company will now restrict the types of searches that trigger an AI-generated summary.

Liz Reid, Google’s head of search, said the company has put in place several restrictions on the types of searches that will produce AI-generated summaries, and has “limited the inclusion of satire and humor content.” Google is also taking action against a small number of AI-generated summaries that violate its content policies, which it says appear in fewer than 1 in 7 million unique search queries where the feature is used.

Google recently launched the AI Overviews feature in the US, but it quickly drew attention for generating misleading information. The tool appeared to pull from satirical sources such as The Onion and joke Reddit posts, producing bizarre answers. These AI missteps soon became a meme, with fabricated screenshots of absurd and dark responses spreading widely on social media alongside the tool’s genuine errors.

Google promoted its AI Overviews feature as a key element of its broader strategy to integrate generative artificial intelligence into its core services. However, its launch resulted in another round of public embarrassment for the company, similar to earlier incidents this year. Google faced criticism and ridicule after its AI image generation tool mistakenly depicted people of color in historically inaccurate contexts, such as portraying Black individuals as World War II German soldiers.

In its blog post, Google summarized the issues with AI Overviews and defended the feature. Reid explained that some of the inaccuracies stemmed from gaps in information caused by uncommon or unique search queries. She also pointed to intentional efforts to manipulate the feature into generating incorrect answers.

“There’s a unique challenge in handling the millions of novel searches made by users,” Reid wrote. “We’ve encountered new and nonsensical queries, seemingly designed to generate misleading outcomes.”

Numerous viral posts stemmed from peculiar searches such as “how many rocks should I eat,” which generated a result based on an article in The Onion headlined “Geologists Recommend Eating at Least One Small Rock Per Day.” Other examples, however, appeared to originate from more sensible queries. An AI expert shared an image of an AI Overview asserting that Barack Obama had been the first Muslim US president, a widely debunked right-wing conspiracy theory.

“By reviewing examples from recent weeks, we identified patterns where our results were inaccurate, leading us to implement over a dozen technical enhancements to our systems,” Reid explained.

While Google’s blog post portrays the problems with AI Overviews as largely isolated cases, several AI experts have said the issues reflect broader challenges with AI’s ability to assess factual accuracy and with the complexity of automating access to information.

Google stated in its blog post that “user feedback shows” greater satisfaction with search results thanks to AI Overviews, but the broader implications of its AI tools and changes to search remain uncertain. Website owners fear that AI summaries could harm online media by diverting traffic and advertising revenue away from their sites, while some researchers worry about Google further consolidating control over what content is visible online.
