
AI and the Future of Journalism: An Issue Brief for Stakeholders

Affiliation
Columbia University
Summary

"Accuracy and credibility are paramount for quality news outlets and worries about the 'hallucinations' of Generative AI outputs are top of mind."

This brief, published as part of the United Nations Educational, Scientific and Cultural Organization (UNESCO)'s World Trends in Freedom of Expression and Media Development series, discusses the challenges that the development of generative artificial intelligence (AI) poses to journalism. It analyses key issues around generative AI's impact on copyright, working methods, and business models, and it offers recommendations for different stakeholders on AI and the future of journalism.

The brief examines how journalism is taking advantage of AI and the media's role in creating public awareness of AI, but it also looks at the dangers AI poses to journalism and media outlets, especially with regard to copyright and intellectual property. The brief highlights legislation in place or under consideration to protect the copyright of journalistic work, and it outlines guidelines and recommendations issued by civil society and media organisations to protect freedom of speech and the viability of journalism in the face of AI.

A focus of the brief is the impact of large language models (LLMs), advanced AI models trained on vast amounts of data to understand and generate human-like text. These models, such as GPT by OpenAI and Llama by Meta, can perform a variety of language tasks, including translation, summarisation, and question answering. The dangers to media outlets and freedom of expression are numerous. For example, because LLMs can synthesise and analyse data from large numbers of sources, individuals may be discouraged from consulting the original sources, which, in turn, has revenue implications for media outlets. In addition, LLMs can reflect, and even amplify, prejudices and distortions in the data on which they are trained: if that data reflects longstanding racial or gender prejudices, those prejudices will be reflected in the LLMs' output. Finally, the delegation of decision-making to AI undermines the transparency of decision-making processes because, as the brief explains, even those constructing these models do not fully understand how they work.

Some of the key messages highlighted in the brief are:

  • Journalism organisations are embracing generative AI as part of newsroom practice but are wary of the economic impact of large companies profiting from their content if used without proper permission and compensation.
  • Media outlets around the world have issued codes of conduct that stress respect for audience data, authenticity of content, disclosure when AI is used, transparency, diversity and integrity of information, and the right to remuneration by the AI firms.
  • Without agreements on copyright and intellectual property, the current business model of journalistic creation will be undermined, posing a serious threat to cultural diversity. The models themselves will be unreliable if not trained on quality information.
  • In addition to being affected by the growth of generative AI, news organisations and journalists are covering the topic and educating audiences about the risks and potential benefits. 
  • Concentration in the AI sector will have profound implications for the entire world, and competition authorities are weighing whether new regulations are needed.
  • Creators, publishers, and journalists have made the case for updating copyright regulations or at least stringently enforcing the ones that exist and making sure they take into account new technologies.
  • Some journalism organisations are signing licensing deals with large generative AI firms. In doing so, these organisations are holding the line in requiring compensation and are helping to set a market for licensing content.
  • Key areas to consider include preserving authenticity and the diversity of languages and cultures, and ensuring the transparency of information. Maintaining competitive markets may help avoid further deterioration of the cultural and news ecosystem.

The report offers recommendations for AI companies, publishers and newsrooms, states, and international agencies. The following are just some of the recommendations: 

Recommendations for AI companies:

  • Human-rights-based governance - Under any regulatory arrangement, AI companies should be able to demonstrate the systems or processes they have established to ensure ongoing human rights due diligence, as well as risk mitigation measures. These systems should be reviewed periodically, and the reviews should be made public.
  • Media viability and diversity - Generative AI companies should create transparent frameworks and standards for collaboration with publishers and creators, with a focus on diversity and inclusivity (including cooperation beyond dominant, English-speaking outlets). They should actively collaborate with, and seek input from, journalists, publishers, and media outlets to improve existing features and to develop new products that support high-quality, pluralistic journalistic content.
  • Transparency - Generative AI companies should regularly report to the public and the governance system on how they adhere to the principles of transparency, including transparency with regard to data collection and web scraping practices.
  • Web scraping - Generative AI companies that use web crawlers and other scraping techniques to collect non-personal, publicly available data should provide websites and content owners with effective tools to prevent unwanted automated data extraction, whether partial or complete.
  • Attribution - Generative AI companies should urgently focus on improving attribution mechanisms and enable users to identify and connect with journalists, media sources, and publishers by accurately and systematically citing and linking their content.
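One existing mechanism of the kind the web-scraping recommendation describes is the Robots Exclusion Protocol (robots.txt), which several AI companies say their crawlers honour. As an illustrative sketch only (the user-agent tokens shown, such as OpenAI's GPTBot and Common Crawl's CCBot, are real published tokens, but coverage varies by company and compliance is voluntary), a publisher wishing to remain in search indexes while opting out of AI training crawls might write:

```
# Hypothetical publisher robots.txt:
# allow a traditional search crawler, but disallow
# known generative-AI training crawlers by user-agent.

User-agent: Googlebot
Allow: /

User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```

Because such directives are advisory rather than enforceable, the brief's call for "effective tools" goes beyond robots.txt toward mechanisms with clearer accountability.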

Recommendations for publishers/newsrooms:

  • Adopt clear policies, or update existing ones, on the use of generative AI, and communicate these policies to audiences. Ensure human oversight and involvement at all levels of the process.
  • Clearly label and disclose to audiences the use of generative AI to report and analyse data. In the same way, artificially generated outputs, including images and audio, should be labelled as such.

Recommendations for states:

  • Governments can strive, through regulatory processes, to prevent AI tools from being controlled by only a few entities and to ensure that their development and deployment adhere to international human rights standards, including the protection of privacy, intellectual property rights, labour rights, and freedom of expression.
  • Regulatory authorities must also take care, as broad limitations on AI-generated content could end up creating greater restrictions on freedom of expression.

Recommendations for intergovernmental organisations:

  • UNESCO could convene multi-stakeholder meetings to discuss standards of valuing content and the rights of journalism organisations and content creators.
  • UNESCO has recommended that governments promote media and information literacy.
  • UNESCO also supports a holistic approach to supporting media viability, including financial support for public interest news - with the proviso that editorial independence is respected.
Source

UNESCO website, October 2, 2024. Image credit: UNESCO