Ten practical tips for effectively integrating AI into the media

Source: Luca de Tena Foundation
Damian Radcliffe, Senior Lecturer in Journalism at the University of Oregon, shared ten practical tips for effectively integrating AI into the media during a workshop organized by the New York Press Association.
The tips range from setting objectives to innovating in content creation. What follows is a summary and contextualization of the advice, drawn from Radcliffe's talk “How is AI changing journalism?”.
Define objectives and KPIs: before implementing AI solutions, it is crucial to establish clear and measurable objectives. Radcliffe cites an LSE survey that reveals how news organizations are implementing AI in their operations: 90% use AI for news production, including tasks such as fact-checking, proofreading, and summary writing; 80% employ AI for news distribution, with tools for content personalization and recommendation; and 75% use it for newsgathering, leveraging AI to spot trends and discover news through tools such as transcription and text extraction from images.
Clear guidelines for use: Radcliffe highlights the following as key points to consider: oversight, transparency (e.g., labeling content when AI is used), prohibited versus permitted uses, responsibility and accountability, privacy and confidentiality, cautious experimentation, strategic use, training, and bias.
Transparency with the public: here, the professor stresses the need to report clearly when and where content is produced using AI and to publish guidelines on AI use. He gave examples of several publications that have adopted explicit policies on the use of AI, such as the Associated Press, The Guardian, and Insider, and dwelt in particular on Wired, which has published a detailed policy on the use of AI in its content, including the following points:
Not publishing stories with AI-generated text
Not publishing AI-edited text
Experimenting with AI to suggest headlines or text for short social media posts
Testing AI to generate story ideas
Experimenting with AI as a research or analysis tool
Publishing AI-generated images or videos only under certain conditions
Not using AI-generated images in place of stock photography
Among other points
Copyright and intellectual property protection: Radcliffe emphasizes the importance of learning from past mistakes and warns against relying on technology platforms to develop business models in the future. He proposes several specific strategies that media outlets can adopt to protect their content and intellectual property in the age of AI:
Opt-out: use forms such as the OpenAI Data Opt-Out Request to exclude proprietary content from AI model training.
Block web crawlers: implement rules in the robots.txt file to block the OpenAI web crawler, known as “GPTBot” (a minimal example appears after this list).
Update terms and conditions: follow the lead of The New York Times, which updated its terms of service on August 3, 2023, to prohibit the use of its content in training machine learning or artificial intelligence systems.
Licensing content: encourage the formation of consortia, especially among smaller players, similar to what the Associated Press has done, to better control how their content is used.
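By way of illustration, blocking GPTBot usually comes down to two lines in a site's robots.txt file, following OpenAI's published crawler documentation. This is a minimal sketch; each publisher must decide for itself which paths to disallow:

    User-agent: GPTBot
    Disallow: /

Note that this addresses only OpenAI's crawler; other AI-related crawlers use different user-agent names and require their own rules.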
Managing cultural change: Damian Radcliffe highlights how the media can intelligently integrate artificial intelligence to ease the transition and minimize resistance. He cited Norway's leading newspaper, Verdens Gang, which has implemented AI-generated summaries of human-written news stories, as an example, and quoted Paula Felps of INMA (International News Media Association), who argues that this type of use “minimizes the risk of things like hallucinations and could be a good way to introduce technology into newsrooms because journalists might be more receptive to using AI for routine tasks.”
Develop new roles and competencies: with the advent of AI, new roles emerge and new skills are needed. Training employees not only in the use of new tools but also in competencies such as data interpretation and ethical oversight of the technology is essential to harnessing AI fully.
Recognize labor challenges: regarding the growing impact of artificial intelligence on labor issues, Damian Radcliffe cited guidelines established by the APP-MCJ Guild and the AP News Guild on the use of AI in journalism work. The guidelines emphasize that AI technology, including machine learning and deep learning, should be limited to supplementing the gathering, organizing, recording, or maintaining of information.
Manage risks: AI implementation carries associated risks, from technical errors to ethical issues related to privacy and misinformation. Damian Radcliffe pointed out that this technology is not foolproof, citing examples from CNET, MSN, and Gannett, where AI-generated content still required edits and corrections. He also noted that many AI-generated images can look strange.
Adapt to changing consumer habits: AI also changes the way audiences consume news. Media must be attentive to these changes and adapt their strategies to meet new consumer demands and expectations, while ensuring the quality and reliability of content.
Innovate and do something new: Finally, AI offers the opportunity to innovate in content creation. From interactive formats to personalized narratives, AI can help explore new forms of journalism that were not possible before, opening up new avenues for audience connection. Damian Radcliffe quoted Louise Story at Nieman Lab, who raises an interesting thought: instead of focusing on how to get AI to do all the tasks that human reporters do, why not create new types of content that capture readers' attention and go beyond what human teams can create?
