
Reports suggest that Google’s AI-generated search summaries are suffering from AI hallucination, with the tool at one point recommending that users add glue to pizza.

The AI Overviews suggestion appears to have been sourced from a Reddit comment made more than ten years ago.

Google’s recently launched AI-driven search tool, AI Overviews, is receiving criticism for offering misleading and peculiar responses to users’ inquiries. In a recent case, a user sought advice from Google regarding cheese not adhering to their pizza. While they likely anticipated a practical resolution to their culinary issue, Google’s AI Overviews feature proposed an unconventional solution. According to recent reports on X, this incident is not unique, with the AI tool offering bizarre suggestions to other users as well.

AI Hallucination: An Odd Encounter with Cheese and Pizza

The problem surfaced when a user asked Google why cheese wasn’t sticking to their pizza. In response, Google’s AI Overviews proposed several fixes, including mixing in more sauce and letting the pizza cool. One suggestion, however, stood out as particularly strange: according to the screenshot shared, the tool recommended adding ⅛ cup of non-toxic glue to the sauce for added stickiness.

On closer investigation, the source turned out to be a Reddit comment from 11 years ago that was clearly more of a joke than a genuine culinary tip. Despite this, Google’s AI Overviews feature, which still carries the label “Generative AI is experimental,” presented it as a legitimate suggestion in response to the query.

Another inaccurate response from AI Overviews surfaced recently when a user reportedly asked Google, “How many rocks should I eat?” Citing UC Berkeley geologists, the tool advised, “It is recommended to eat at least one rock per day as rocks contain minerals and vitamins essential for digestive health.”

The Issue Behind the False Responses

Issues like these have been cropping up regularly since the rise of generative artificial intelligence (AI), giving rise to a phenomenon known as AI hallucination, in which a model confidently presents fabricated or distorted information as fact. While companies acknowledge that AI chatbots can make mistakes, instances of these tools distorting facts and producing factually incorrect, and sometimes bizarre, responses have been on the rise.

Google is not the only company whose AI tools have delivered inaccurate responses. OpenAI’s ChatGPT, Microsoft’s Copilot, and Perplexity’s AI chatbot have all reportedly experienced AI hallucinations, and in several instances the source has been traced back to a Reddit post or comment made years ago. The companies behind these AI tools are aware of the issue, with Alphabet CEO Sundar Pichai telling The Verge, “These are the kinds of things we need to keep improving on.”

During an event at IIIT Delhi in June 2023, Sam Altman, CEO and Co-Founder of OpenAI, discussed AI hallucinations, stating, “It will take us about a year to perfect the model. It is a balance between creativity and accuracy, and we are working to minimize the issue. Currently, I have the least trust in the answers generated by ChatGPT compared to anyone else on this Earth.”
