Google Search Gave Risky Medical Info — Then It Was Removed

Google has quietly removed its AI-generated summaries from certain health-related search results after concerns emerged about the accuracy and safety of the medical information being presented. The move follows an investigation by The Guardian, which highlighted how Google’s AI Overviews were offering potentially misleading health guidance to millions of users.

AI Overviews are designed to provide quick, summarized answers at the top of Google Search results. While the feature aims to save time, its application to medical queries has raised serious red flags. According to the investigation, searches such as “what is the normal range for liver blood tests” returned numerical values that lacked critical medical context.

The reported ranges did not take into account factors such as age, sex, ethnicity, or nationality, all of which play an important role in interpreting medical test results. Health experts warn that presenting generalized numbers without these variables can create a false sense of reassurance and delay professional medical consultation.

Following the report, Google removed AI Overviews from some health searches, an implicit acknowledgment of the issue. However, the company has not publicly explained exactly which queries were affected or whether similar risks exist in other medical topics.

This development has reignited debate around the reliability of AI-generated health information. Medical professionals have consistently cautioned against using search engines or AI tools as substitutes for qualified medical advice. Even small inaccuracies in health-related information can have serious consequences when users make decisions based on what they read online.

The Guardian’s findings underline a broader challenge facing generative AI systems. While these tools are effective at summarizing large volumes of data, they can struggle with nuance, exceptions, and individualized interpretation, which are essential in healthcare.

Google has previously stated that health and safety are high-priority areas where it applies stricter quality controls. Despite this, the incident suggests that safeguards may not always work as intended when AI features are scaled to a global audience.

Digital rights advocates say the episode highlights the need for greater transparency around how AI-generated content is tested, reviewed, and corrected. They argue that users should clearly understand when information is generated by AI and what its limitations are, especially in sensitive areas like health.

For users, the incident serves as a reminder to approach online medical information with caution. Search results, whether AI-generated or not, should not replace consultations with doctors or certified healthcare professionals.

As AI becomes more deeply integrated into everyday tools like search engines, the pressure on tech companies to ensure accuracy and accountability continues to grow. Google’s decision to remove AI Overviews from certain medical searches may reduce immediate risk, but it also raises questions about how trustworthy such features are in their current form.

The situation underscores a critical reality of the AI era: convenience and speed must not come at the cost of safety, particularly when people's health is involved.