
AI Not a Reliable Source of News, EU Media Study Finds

A new study by the European Broadcasting Union (EBU) has found that artificial intelligence assistants such as ChatGPT, Copilot, Gemini, and Perplexity are currently unreliable for accessing accurate news, with nearly half of their responses containing factual or contextual errors.

The extensive analysis, conducted between late May and early June 2025 by 22 public media outlets across 18 mostly European countries, revealed that 45 percent of all AI-generated answers about recent news events contained “at least one significant issue.”

The study found that AI assistants often confused satire with real news, misreported facts, used outdated information, and, in some cases, invented events altogether.

“AI assistants are still not a reliable way to access and consume news,” said Jean Philip De Tender, Deputy Director General of the EBU, and Pete Archer, Head of AI at the BBC.

The report evaluated responses from four popular AI tools — OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity — using a set of standardized news-related questions. Each AI system was tested in multiple languages and across various European regions.

According to the findings, one in every five responses contained major factual inaccuracies, including hallucinated details and misattributed quotes.

Among the four assistants, Google’s Gemini performed worst, showing significant issues in 76 percent of responses — more than double the rate of its competitors. Researchers cited “poor sourcing performance” and “misinterpretation of satirical content” as primary factors.

One of the most striking examples came when participants asked, “Who is the Pope?” At the time of testing, Pope Francis had passed away and been succeeded by Leo XIV — but ChatGPT, Copilot, and Gemini all incorrectly named Francis as the current Pope.

In another instance, French broadcaster Radio France asked about a supposed incident involving Elon Musk at Donald Trump’s inauguration. Gemini misinterpreted a satirical article and falsely responded that Musk had performed a Nazi salute, underscoring the assistants’ difficulty in distinguishing parody from legitimate news.

Outdated information was a recurring issue across the study’s 3,000 analyzed responses. While AI assistants performed better with general knowledge questions, their accuracy significantly declined when dealing with recent or evolving news events.

Despite these shortcomings, the study noted that AI chat assistants are becoming a popular source of news among younger audiences. According to the Reuters Institute’s Digital News Report 2025, 15 percent of people under 25 use AI tools at least weekly to get news, often in place of visiting traditional media outlets directly.

The EBU urged caution among users and recommended that AI-generated news content be treated as supplementary information, not a primary source. The organization also called on developers to improve transparency, data freshness, and source attribution in AI systems to ensure public trust in digital information.

As the report concludes, AI tools may excel at summarizing existing information, but they still struggle with verification, recency, and distinguishing fact from fiction — making them unreliable substitutes for professional journalism.