Google removes some AI health summaries over safety concerns: Following reports of misleading results, the tech giant acted, but the rollback appears selective

We’ve all done it: a headache, a weird report, a strange symptom… and straight to Google we go. But what if Google’s quick answer sounds confident and still gets it wrong? That’s exactly what raised alarm bells after Google’s AI gave shaky medical advice, forcing the company to quietly remove some AI summaries from health searches.

What exactly happened?
According to an investigation by The Guardian, Google removed its ‘AI Overviews’ feature from some medical searches after it was found to be giving misleading or over-simplified health information. AI Overviews are the short, ready-made answers that appear at the top of Google search results. They’re meant to save time, but in health matters, shortcuts can be risky. The Guardian reported that for certain medical questions, the AI gave neat, simple answers where real medicine is never that simple.

Example: Liver test results without full context
One of the searches flagged was: “What is the normal range for liver blood tests?” The AI reportedly showed fixed number ranges but did not explain that liver test results depend on many individual factors. Doctors warned that without this context, people might think, “My numbers look fine, so I must be okay,” when in reality they might still need treatment or further tests. This could delay diagnosis, which can be dangerous in serious conditions.

Which health searches lost AI answers?
After the report, The Guardian noticed that AI Overviews disappeared from some direct questions, including:
- “What is the normal range for liver blood tests?”
- “What is the normal range for liver function test?”
But the fix wasn’t complete. When reporters tried slightly different wording, such as:
- “LFT reference range”
- “LFT test reference range”
the AI summaries still appeared, showing that the system could still be triggered by small changes in wording. The removal, in other words, looked selective rather than full-scale.

Why doctors are worried about AI health advice
Medical experts say healthcare is not one-size-fits-all. The Guardian also found cases where AI Overviews gave dangerous diet advice. In one example, people with pancreatic cancer were told to “avoid high-fat food.” But doctors say patients with this condition often need more fat, not less, so wrong advice like this could actually make health outcomes worse. Health specialists argue that even if some facts are technically correct, missing medical nuance can still cause harm, especially when users trust AI answers as final and authoritative.

What did Google say?
Google did not confirm which searches were changed. A company spokesperson said: “We do not comment on individual removals within Search. In cases where AI Overviews miss some context, we work to make broad improvements, and we also take action under our policies where appropriate.” Google also said that its internal team of clinicians reviewed the examples and felt that in many cases the information was not inaccurate and was supported by high-quality websites. But critics argue that in medicine, being technically correct is not enough if important details are missing.

Not the first time AI Overviews caused trouble
This isn’t Google’s first AI health headache. When AI Overviews launched in May last year, the feature went viral for some truly bizarre advice, including telling users to put glue on pizza so the cheese doesn’t slide off, and to eat a small rock every day for vitamins.
After public backlash, Google briefly pulled the feature and later reintroduced it with changes. Now, with health advice again under scrutiny, experts say the problem isn’t just funny mistakes; it’s real-world safety.

AI health advice can’t be taken at face value
AI can be helpful for quick information, but when it comes to health, context matters as much as facts.
The Guardian’s investigation shows that even polished AI answers can quietly miss crucial details, and in medicine, that can make all the difference.
