Mistrust of artificial intelligence-generated news is on the rise.

Source: Luca de Tena Foundation
A new study, conducted by researchers at the University of Minnesota and the University of Oxford, led by Benjamin Toff and Felix M. Simon and titled "Or They Could Just Not Use It: The Paradox of AI Disclosure for Audience Trust in News," sheds light on the complicated dynamics between technological innovation in the media and public trust in AI-produced news.

Readers have little confidence in news written by artificial intelligence. A few days ago, a survey revealed that media users in Switzerland do not want AI-generated information on important topics; now, research in the United States reaches the same conclusion and confirms this rejection.
Readers demand that outlets specify whether a news item has been generated by AI
Despite the growing integration of AI into everyday life, and increasingly into newspaper newsrooms, the study reveals deep distrust among US citizens toward news written by AI. Only 10% of respondents believe that AI is better than humans at crafting journalistic content.
This skepticism is especially pronounced among those with high prior trust in the media and a solid understanding of journalism. Even so, according to the study, AI-written articles are not perceived as less accurate or less fair than human-written ones.
The research finds that more than 80% of participants believe it is crucial for media outlets to inform readers when AI is used in news production. In addition, 78% of respondents want an explanatory note detailing how AI was employed in creating an article, and half of the participants favored including a byline attributing the work to AI.
A particularly interesting finding is that public trust tends to recover when articles indicate the sources used to create them. The results suggest that negative effects on perceived trustworthiness are largely offset when articles disclose the list of sources used to generate the content. This underscores the importance of transparency in the media's use of AI technologies.
The study highlights a crucial challenge for newsrooms around the world: the need for clear, transparent communication about how AI is used to produce news. Such an approach could be essential to maintaining, or even reinforcing, public trust in an era of growing skepticism toward the media, the researchers conclude.
Sample and recruitment
For the research, 1,483 English-speaking participants in the U.S. were recruited through the Prolific platform. Participants were randomly assigned to conditions: a control group read stories attributed to a fictional news organization, while a treatment group saw the same stories with labels indicating that AI had been used to produce them; some treatment participants also saw lists of the sources used to generate the articles. Participants then answered questions assessing their perceptions of the news' reliability, accuracy, and bias.
Mean responses were compared between the control and treatment groups to estimate treatment effects, with particular attention to differences in perception according to participants' prior trust in news and their knowledge of journalistic procedures.
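To make this kind of analysis concrete: comparing mean responses between groups amounts to estimating a difference in means and testing whether it is statistically distinguishable from zero. Below is a minimal Python sketch of such a comparison; the file name, column names, and scores are hypothetical illustrations, not the study's actual data or code.

```python
# Minimal sketch of a between-groups treatment-effect comparison.
# Assumes a CSV with hypothetical columns: "condition" ("control" or
# "ai_label") and "trust_score" (e.g., a 1-7 perceived-trust rating).
import pandas as pd
from scipy import stats

df = pd.read_csv("responses.csv")  # hypothetical data file

control = df.loc[df["condition"] == "control", "trust_score"]
treatment = df.loc[df["condition"] == "ai_label", "trust_score"]

# Treatment effect estimated as a simple difference in group means.
effect = treatment.mean() - control.mean()

# Welch's two-sample t-test (does not assume equal variances).
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

print(f"Difference in mean trust: {effect:.3f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

The same comparison could then be repeated within subgroups (e.g., splitting participants by prior trust in news) to examine the heterogeneous effects the study describes.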
