Google has quietly removed AI overviews from some medical searches after experts warned the feature was delivering misleading and potentially dangerous health advice. Users searching common questions about blood tests or cancer nutrition no longer see AI-generated summaries at the top of results. The change follows a recent investigation that raised serious concerns about accuracy and patient safety, and it suggests the company is reassessing whether its AI can be trusted with health information. Medical searches are among the most sensitive queries users make online, and errors in this area can directly affect life-and-death decisions. That context makes the removal notable, even without a formal announcement.
The controversy gained traction after an investigation revealed multiple examples of incorrect medical guidance. In one case, Google’s AI overview advised people with pancreatic cancer to avoid high-fat foods. Medical experts described that recommendation as the exact opposite of what patients typically need. High-fat diets are often crucial for maintaining weight and strength in pancreatic cancer care. Experts warned that following the AI’s advice could worsen outcomes or even increase mortality risk. Another example involved inaccurate explanations of liver function tests. Such misinformation could cause patients with serious liver disease to believe their results are normal. Doctors described these errors as alarming given how authoritative AI summaries can appear.
As of January 11, AI overviews no longer appear for several medical questions that previously triggered summaries. Searches like “what is the normal range for liver blood tests” now return standard search results only. The removal appears to be targeted rather than a full shutdown of health-related AI features, and Google has not publicly listed which medical topics are affected. The absence was first noticed by reporters revisiting queries highlighted in the investigation, which suggests the company acted quickly under external scrutiny. However, the lack of transparency has left users guessing about the scope of the change. For now, the silence is fueling debate rather than calming it.
Google declined to comment directly on the specific removals but defended the broader system behind AI overviews. A company spokesperson said Google invests heavily in quality, especially for health-related topics. According to the statement, clinicians on the company's staff reviewed the examples shared with it and found that many were supported by reputable sources. Google also argued that some issues involved missing context rather than outright inaccuracies; in those cases, the company says it works on broader improvements and takes action under existing policies when necessary. Still, critics argue that even rare mistakes are unacceptable in medical contexts. Trust, once shaken, is hard to rebuild.
This incident adds to a growing list of AI overview missteps that have already drawn public ridicule. Earlier examples included bizarre suggestions such as adding glue to pizza or eating rocks. While those errors were mostly humorous, medical misinformation carries far higher stakes. The feature has also been linked to multiple lawsuits over misleading or harmful advice. Each controversy raises fresh questions about whether generative AI belongs at the top of search results. Experts warn that users may over-trust AI summaries because they appear definitive and concise. That perception can discourage people from checking primary sources. Health professionals say caution should outweigh convenience.
The removal signals that even tech giants are still struggling to deploy AI safely in healthcare-adjacent spaces. Medical information demands accuracy, nuance, and constant updating, all of which challenge large language models. Regulators and clinicians are watching closely as AI tools become more embedded in daily life. For users, the episode is a reminder to treat AI-generated health advice with skepticism. Doctors continue to stress that search results are not a substitute for professional care. Google’s next steps will likely shape how AI is used in sensitive searches going forward. Whether this pause leads to meaningful reform remains to be seen.