How to establish a clear policy on the use of AI in media: the example of the ICFJ.

Source: Fundación Luca de Tena

Image credit: MingPhoto / Depositphotos

The rapid advance of generative artificial intelligence is raising fundamental questions about how to integrate it into the media in areas such as employment (not replacing people, but helping them in their work), privacy, copyright, and the veracity of information, among others.

However, comparatively few media outlets have developed clear policies on its use, so that journalists know how to take advantage of it within a framework of transparency and support for journalistic quality.

In fact, many journalists complain that the outlets they work for have neither clear policies on the subject nor training plans.

Some outlets and journalistic organizations, however, have decided to draft working proposals and share them, both to make their own position clear and to serve as a reference for organizations in similar circumstances, since no widely adopted standards yet exist.

The International Center for Journalists (ICFJ) is one of the latest organizations to undertake the task of developing an AI use policy and sharing it publicly, so that it can serve as a reference.

Motivation for the creation of an AI use policy

ICFJ identified a critical need to standardize the use of AI tools within its organization, driven by two main perspectives:

-AI concerns: how AI could affect employment, its impact on the journalism community, the rise of misinformation, and safety in its use.

-Potential of AI: opportunities to optimize organizational processes, analyze program impacts and empower journalism.

To address these issues, ICFJ formed a working group with members from various departments and hierarchical levels, ensuring that the resulting policy was enforceable.

The first objective of the AI working group was to establish some principles that would guide them through the process. Specifically:

-Ensure AI does no harm.

-Protect the rights, privacy, and original work of others.

-Use content and data with consent.

-Maintain transparency.

"After identifying these principles, we began our research. We started with AI Adoption for Newsrooms: A 10-Step Guide. We read articles, listened to podcasts, talked to colleagues, and attended webinars and in-person discussions. Once we were confident in our knowledge, we began drafting the policy."

Here are some of the key points of the document that emerged from this process (the AI Use Policy):

1. Basic principles

-Do no harm: avoid harm to individuals or the community.

-Protect rights and privacy: safeguard the rights of third parties and the privacy of information.

-Consent for the use of data and content: use only data and content with the explicit consent of those involved.

-Transparency: be clear about how and for what purposes AI is used.

2. Recommended practices

-Use of AI to generate drafts, analyze data, and automate repetitive tasks, always subject to human review and editing.

-Prohibition of publishing AI-generated material without human review.

3. Ethical deployment

-Transparency: inform employees and the public about the use of AI.

-Accuracy and verification: require human verification to counter AI errors and fabrications.

-Bias and fairness: address and mitigate potential biases in AI tools.

-Intended use: limit the use of AI to legitimate and safe purposes.

4. Risk management and security

-Risk assessment and mitigation before adopting any AI tool.

-Protection of confidential data and adherence to organizational security policies.

5. Training and collaboration

-Regular staff training on the responsible use of AI.

-Promotion of collaboration with the journalism community to address AI-related challenges.

6. Policy compliance

-Compliance measures enforced by the information security team.

-Established processes for handling exceptions and non-compliance, with regular policy reviews.

Conclusions of the process

ICFJ has also shared what it learned during the process, in the form of recommendations:

-Need for continuous learning: researching AI is complex and requires a constant, proactive, educational approach.

-Organizational collaboration: the creation of collaborative spaces (such as Slack channels) to facilitate research and policy development is suggested.

-Importance of communication: it is crucial to keep all staff informed about the purpose and impact of the policy.

-Streamlined review process: the review team must understand how the policy affects all departments, beyond simple copyediting.

-Regular updating: incorporate a process within the policy to regularly review and update the document as new AI tools emerge.
