AI search engines give incorrect answers at an alarming 60% rate, study says
Summary
A report from the Columbia Journalism Review’s Tow Center for Digital Journalism has revealed serious accuracy problems with generative AI tools used for news searches.
When asked to identify news content, the models incorrectly answered more than 60% of queries, with paid versions of the tools sometimes performing worse than the free versions.
The models frequently attempted to answer even when they were unsure, and the research also highlighted problems with citations and with publishers' control over how their content is used.
These findings raise serious concerns about the reliability of AI-driven search tools, which are becoming increasingly popular as alternatives to traditional search engines.
Researchers who conducted the study said the bots ignored protocols set by publishers to prevent unauthorised access. Perplexity, for example, correctly identified all 10 excerpts from paywalled National Geographic content, even though the magazine prohibits access by Perplexity's web crawlers.