Introduction to AI Assistants and News Content
A major new study involving 22 public media organizations has found that four of the most commonly used AI assistants misrepresent news content 45% of the time, regardless of language or territory. Journalists from the participating public broadcasters evaluated responses from four AI assistants, or chatbots: ChatGPT, Microsoft’s Copilot, Google’s Gemini and Perplexity AI.
Methodology and Findings
Measuring criteria such as accuracy, sourcing, provision of context, editorial judgment and the ability to distinguish fact from opinion, the study found that almost half of all responses had at least one significant problem, while 31% had serious attribution problems and 20% contained serious factual errors. The study also found that 53% of the AI assistants’ answers to the participating journalists’ questions had significant problems, with 29% showing specific accuracy issues.
Examples of Factual Errors
Examples of factual errors included assistants naming Olaf Scholz as the sitting German chancellor even though Friedrich Merz had taken office a month earlier, and, in another case, identifying Jens Stoltenberg as NATO Secretary General after Mark Rutte had already assumed the post.
The Rise of AI Assistants in News Consumption
AI assistants have become an increasingly common way for people around the world to access information. According to the Reuters Institute Digital News Report 2025, 7% of online news consumers use AI chatbots to get news, rising to 15% of those under 25.
Systemic Failures in AI Assistants
The study’s authors say it confirms that AI assistants systematically distort news content of all kinds. “This investigation shows conclusively that these failures are not isolated cases,” said a deputy director general of the European Broadcasting Union (EBU), which coordinated the study. “They are systemic, cross-border and multilingual, and we believe this threatens public trust. If people don’t know what to trust, they end up trusting nothing at all, and that can deter democratic participation.”
Unprecedented Study
This is one of the largest research projects of its kind to date and builds on a BBC study from February 2025. That earlier study found that more than half of all AI responses it reviewed had significant problems, while almost a fifth of responses that cited BBC content as a source introduced factual errors of their own. In the new study, media organizations from 18 countries, working across multiple language groups, applied the same methodology to 3,000 AI responses.
Call to Action
The broadcasters and media organizations behind the study are calling on national governments to take action. They urge the EU and national regulators to enforce existing laws on information integrity, digital services and media pluralism, and they emphasize that independent monitoring of AI assistants must be a priority going forward, given the speed at which new AI models are being introduced.
Campaign for Facts In: Facts Out
The EBU has joined forces with several other international broadcast and media groups to launch a joint campaign called “Facts In: Facts Out,” which calls on AI companies themselves to take more responsibility for how their products process and disseminate news. The campaign’s demand is simple: when facts go in, facts must come out. AI tools must not compromise the integrity of the news content they draw on.
