Development action with informed and engaged societies
As of March 15, 2025, The Communication Initiative (The CI) platform is operating at a reduced level, with no new content being posted to the global website and registration/login functions disabled. (La Iniciativa de Comunicación, or CILA, will keep running.) While many interactive functions are no longer available, The CI platform remains open for public use, with all content accessible and searchable until the end of 2025.

Please note that some links within our knowledge summaries may be broken due to changes in external websites. The removal of the USAID website has, for instance, left many links broken. We can only hope that these valuable resources will be made available again soon. In the meantime, our summaries may still be of help, as they capture key insights gleaned from those resources.

A heartfelt thank you to our network for your support and the invaluable work you do.

Setting Democratic Ground Rules for AI: Civil Society Strategies

Affiliation
National Endowment for Democracy's International Forum for Democratic Studies
Summary

"Whether by training professional gatekeepers who shape public opinion or by modeling rights-respecting approaches to AI deployment, civil society has an important role to play in determining how democratic societies will utilize and live with AI."

In recent years, organisations focused on digital issues such as internet freedom, data privacy, and information space integrity have spoken out about how advances in artificial intelligence (AI) present challenges to democratic principles such as privacy, transparency, accountable governance, and non-discrimination. Civil society organisations (CSOs) have a role to play in working together to map different dimensions of the problem and to learn what approaches have and have not worked for others. This report analyses priorities, challenges, and promising civil society strategies for advancing democratic approaches to governing AI. The report is based on conversations from a May 2023 workshop held by the National Endowment for Democracy's International Forum for Democratic Studies in Buenos Aires, Argentina, which brought together roughly 40 Latin American and global researchers and civil society practitioners.

The report distills key workshop takeaways, with the aim of providing a starting point for colleagues in the democracy community and beyond as they think through: AI's intersections with democratic norms; the conceptual and strategic challenges of this emerging field; and potential avenues for civil society engagement. Drawing on global civil society perspectives, the report surveys what stakeholders need to know about AI systems and the human relationships behind them. It delves into the obstacles - from misleading narratives to government opacity to gaps in technical expertise - that hinder democratic engagement on AI governance, and it explores how new thinking, new institutions, and new collaborations can better equip societies to set democratic ground rules for AI technologies.

Here is an overview of eight key challenges to and opportunities for the democratic governance of AI.

  1. The wide range of technologies described by the term "AI" is shaped by human choices about design and deployment, as well as the social and political contexts that feed into training data (the set of labeled examples used to train machine learning models). Like all human products, these systems must be open to challenge by democratic activists and institutions.
  2. The risks and harms associated with AI challenge traditional assumptions. These impacts can arise at all stages of the AI pipeline, from development to procurement to use, and they may demand new ways of thinking about issues like data protection.
  3. AI systems - from surveillance cameras to social media algorithms - already work in the background of our daily lives, and the institutions that deploy them often prefer not to share the details. This reluctance, as well as the inherent complexity of AI systems, can make it hard to map the impacts of these tools and can hinder democratic engagement.
  4. Addressing AI impacts will require more than just technical expertise. Because AI risks and harms have social and political roots, they will also require social and political responses. For instance, AI models reflect the histories of the societies that produce their training data, the inequalities that shape who is or is not represented in datasets, and the choices or assumptions of developers who optimise for certain priorities and not others. One Latin American participant in the May 2023 gathering warned others in the group to "beware of datasets designed in the global north". Moreover, responses may sometimes demand trade-offs between competing democratic values. For instance, privacy benefits when systems collect only the minimum data required ("data minimisation"), but equity may be better served by collecting sensitive demographic data in order to be able to test for bias.
  5. Technical expertise on AI is concentrated in the private sector, which places democracies and their publics at a disadvantage in key decision-making processes - many of which exclude civil society and marginalised communities. Democracies must close institutional gaps and widen participation in AI governance.
  6. Democratic governance of AI may require building specialised institutions, but it also hinges on finding ways to apply existing democratic laws and principles effectively when AI tools enter the picture.
  7. Tech expertise within civil society can help influence the trajectory of AI technologies. Civil society groups are leveraging their technical skills to: pinpoint government or corporate systems' vulnerabilities; model more inclusive, representative, and responsible approaches to design; and develop AI tools to support civic accountability activities. However, CSOs seeking to develop their own AI tools have thus far faced an uphill struggle: As with many civic tech projects, resource constraints make it difficult to ensure sustainability, and there is a lack of high-quality datasets relevant to the geographic and thematic contexts where activists work.
  8. The complexity of AI governance makes cross-sectoral collaboration crucial. AI governance challenges cut across traditional sectoral boundaries. New partnerships and knowledge-sharing initiatives that bring together digital rights groups, traditional human rights groups, journalists, trade unions, teachers, and others can enable CSOs to address these issues more effectively.

One chapter of the report explores this question: What strategies for networking, communication, and engagement in digital design hold promise for bringing democratic principles closer to the centre of AI development? As reported here, potential avenues of engagement for CSOs include raising awareness, pursuing strategic litigation, engaging with government institutions on laws and policies, and promoting responsible approaches to development. Strategic communication can help convey the importance of democratic AI norms to government interlocutors and the wider public. A few specific strategies discussed here include:

  • Since it can be challenging to communicate to the public about diffuse impacts such as erosions of privacy or unaccountable decision making, advocates found it useful to leverage specific events of concern in the headlines, such as celebrity data leaks or legal cases, to spark broader conversations.
  • One participant argued that to avoid having their messages dismissed, activists need to meet people where they are and communicate in a balanced way about harms, recognising that some people and governments are enthusiastic about AI. Formulating a positive, democratic digital agenda (beyond just saying "no") is one component of this approach.
  • Some corporations have conducted initiatives for focused discussion across civil society, academia, and the public and private sectors about AI impacts. Additional research, potentially drawing on ideas from within the tech community and self-regulation models, can help to make ideas like privacy-by-design and algorithmic transparency more concrete.
  • Open data activists are considering how datasets specially curated for civic purposes in the global South might result in tools that better serve democratic institutions. From Hungary to Brazil and Peru, CSOs working for accountability have designed AI tools to help citizens make sense of public information or identify indicators of corruption. Such projects can even help to counter information asymmetries around AI itself.
  • Impact assessments can be linked to direct engagement with affected communities and their lived experience: Even where it's not possible to explain exactly how specific AI systems work, decision makers can benefit from hearing what people have to say about how systems affect their lives. Finally, it is important to ensure that these assessments reach the people who are actually in a position to make decisions about projects.

In conclusion: "As AI use grows more pervasive, our expectations of privacy, access to public goods, and opportunities to challenge injustice from the courtroom to the workplace are likely to increasingly depend on the rules and norms we establish for AI systems. At the same time, the ways in which AI impacts us will depend on how well democratic mechanisms are working to uphold government transparency, support deliberation, and engage affected communities in decision making. AI's trajectory depends in part on the health of democratic institutions, and the health of democracy will be affected by our choices around AI....[T]his report might provide an initial series of guideposts to promising opportunities for civil society engagement in AI, as well as the enduring relevance of democratic principles in this space."

Editor's note: On October 24, 2023, the International Forum for Democratic Studies held a virtual event launching the report. The discussion shares key findings and further insights on opportunities for promoting democratic approaches to AI. The video of the event may be viewed below.

Source

Report release, sent from the International Forum for Democratic Studies to The Communication Initiative on October 25, 2023. Image credit: LuckyStep/Shutterstock
