Safeguarding Trust in Elections in the Age of AI

By Rachel Greenspan

Rachel Greenspan leads the Trust in Media (TIM) NextGen Initiative and co-leads the TIM/Harvard Belfer Center “Trusted Election Analysis” working group. She has conducted research and analysis on the impact of false and misleading information throughout her master’s program at Johns Hopkins University and in her role as Co-Founder and Chief Media Officer for The Disinformation Project.

By Tiffany Saade

Tiffany Saade is a first-year master’s candidate in the Stanford Ford Dorsey Program in International Policy, specializing in Cyber Policy and Security. She is co-leading the “Trusted Election Analysis” working group’s efforts to develop and test information quality standards and measurements.

SPONSORED CONTENT — Given the rise of generative AI, access to trustworthy information is critical to the integrity of our elections. We must democratize access to quality information to restore trust in our electoral process and institutions. Information Quality (InQ) is defined as the degree to which data or information is accurate, reliable, relevant, complete, and timely. It is at the heart of our institutions’ integrity and of our platforms’ capacity to guide our decision-making processes, and it ultimately affects the very fabric of representation, trust, and electoral outcomes.

It is also at the heart of the multi-sectoral “Trusted Election Analysis” working group that the Trust in Media Cooperative (TIM) has convened in collaboration with the Harvard University Belfer Center for Science and International Affairs. The working group brings together experts from industry and academia with the aim of increasing the demand for quality information, rather than combating false and misleading narratives. The working group is shaping the dialogue around election integrity and information quality by crafting a set of research-backed, scientifically grounded standards and measurements that average voters, institutions, and the public can turn to in their quest for reliable, transparent, explainable, and credible election-related information.

We chose elections as our first use-case because more than half the world’s population will be casting their ballots in 2024. This year is expected to be one of the biggest election years in history, at a time when geopolitical fragmentation, multi-sectoral crises, social disparities, AI-enhanced information poisoning campaigns, and cyberattacks have reached new heights.

The objective of the TIM-Belfer working group is to leverage the silver linings of AI to enable and encourage the consumption and amplification of high-quality content about upcoming elections. We believe that renewing trust in our institutions hinges on ensuring the availability of quality information, and we are creating the standards and measurements to do just that.

Design Process 

The Trusted Election Analysis working group is focused on developing and institutionalizing standards and measurements to align AI capabilities with information quality, and on supplying the public with reliable information for secure and trusted elections. The working group, led by the Hon. Ellen McCarthy, Chairwoman and CEO of the Trust in Media Cooperative and senior fellow at the Belfer Center at Harvard University, kicked off in October 2023 and continues to meet on a biweekly basis. Our efforts have been organized into five phases:

  1. Development of key concepts to characterize information and sources of information
  2. Alignment of key characteristics with specific standards and gaining consensus on formal definitions for InQ standards
  3. Exploration and evaluation of proven and existing indicators and measurements for InQ standards
  4. Identification of plausible, reliable, and diverse data sets to inform measurements and determine whether specific information meets TIM’s InQ standards
  5. Gaining consensus and finalizing essential InQ standards based on a comprehensive and collaborative review, assessment, and deliberation

Research Process

The research process has involved the continuous design and development of research questions to test the applications of our proposed standards and measurements for information quality. The working group began by identifying a list of characteristics—with working definitions—that potentially inform whether information is or is not “of quality.” Identified characteristics include, but are not limited to:

  • Information provenance: the degree to which information’s stated origin matches its actual source, along with the history of changes and ownership it has undergone over time.
  • Timeliness: the degree to which information is consistent with the stated time frame; for instance, whether an image and its subtitle match the indicated date.
  • Source credibility and reliability: the degree to which a source provides a thorough, well-reasoned theory, argument, or discussion based on strong evidence.
  • Sentiment neutrality: the degree to which information is void of emotional appeal. 
  • Consistency: the degree to which content sections are congruent such as between headline and body text and between image and body text.
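To make the characteristics above concrete, they could be combined into a single quality score. The sketch below is purely illustrative: the working group has not published any scoring formula, and the characteristic names, weights, and threshold here are all assumptions for the sake of example.

```python
from dataclasses import dataclass

@dataclass
class InQScores:
    """Per-characteristic scores on a hypothetical 0.0-1.0 scale."""
    provenance: float
    timeliness: float
    source_credibility: float
    sentiment_neutrality: float
    consistency: float

# Illustrative weights (summing to 1.0); any real weighting would come
# from the working group's research, not from this sketch.
WEIGHTS = {
    "provenance": 0.25,
    "timeliness": 0.15,
    "source_credibility": 0.30,
    "sentiment_neutrality": 0.10,
    "consistency": 0.20,
}

def aggregate_inq(scores: InQScores) -> float:
    """Weighted average of the per-characteristic scores."""
    return sum(weight * getattr(scores, name) for name, weight in WEIGHTS.items())

def meets_standard(scores: InQScores, threshold: float = 0.7) -> bool:
    """True if the aggregate score clears an (arbitrary) quality threshold."""
    return aggregate_inq(scores) >= threshold
```

A dashboard like the one described later in this piece could surface both the aggregate score and the individual characteristic scores, letting users weight the standards they care most about.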

Building on the insights and obstacles raised throughout biweekly working group sessions, we conducted a comprehensive literature review to document indicators that support and validate a few prioritized standards. We then compared these proposed standards and measurements with other open-source efforts that measure the quality of information. For example, different sources and tools may measure reliability and credibility based on curated ratings from media quality indexes, disclosures of source ownership, or their own unique standards, such as C2PA and W3C verifiable credentials or the Trust Project’s 8 Trust Indicators. Our approach also carefully refrains from content moderation or calling out cases of false or misleading information, as research shows this is often counterproductive to the goal of users engaging more readily with quality information. Instead, the working group is focused on presenting users with a means to determine whether information is considered to be “of quality” based on our proposed standards and measurements. This context will then inform how one chooses to interpret the information presented.

The working group’s ongoing research aims to test three hypotheses: First, when presented with simple quality standards, voters’ likelihood of consuming high-quality election-related information increases. Second, when quality information is widely consumed, amplification of and engagement with false or misleading content declines. Third, by promoting the consumption of and demand for quality information, we can improve trust in election procedures and outcomes.

Putting Information Quality Standards into Action

These standards and measurements will later be integrated into a publicly available AI-powered dashboard that allows users to access and verify reliable and accurate election information, based on the standards they care most about. The dashboard will provide users with visual and quantitative indicators of the quality, provenance, and accuracy of election-related information. It will also reduce asymmetries around election-related news and the “information deserts” that malicious threat actors often weaponize and poison. These risks can shape the type of information users are exposed to and ultimately shatter user trust in election-related information.

Challenges and Key Considerations 

The working group has systematically discussed various challenges in aligning specific metrics with InQ standards. These challenges include: determining how the application and assignment of different standards and measurements will vary across different types of content (e.g., long-form articles versus shorter social media posts); whether the use of AI to generate information affects the authenticity and quality of the information outputs; whether measurements of virality can help determine the provenance of information; and accounting for the temporal elements of certain measurements including how the credibility of an information source may vary over time.

Harnessing Youth Perspectives 

Our efforts to shape standards on information quality are cross-sectoral and include the perspectives of those from different cultures and demographics. The working group is capitalizing on the ideas and experiences of young leaders through the TIM NextGen Initiative. The NextGen Initiative aims to create a pipeline for young voices to engage in conversations about preserving information quality, and welcomes their perspectives on multi-faceted approaches to achieving this aim, especially as it relates to election integrity (our current use-case). The ultimate goal is to amplify youth perspectives and bridge the gap between “digital natives” and the older generations shaping the technologies and policies governing the information space and its downstream effects on users.

We recently hosted a Trust in Media NextGen Information Quality (InQ) Virtual Conference, which featured presentations from young entrepreneurs working to develop technical solutions that empower users to stay well-informed online, as well as a moderated discussion with youth participants on the development of standards and measurements for quality information. This conversation gave us a better understanding of how young people think about information consumption in general and as it relates to election-related content. Participants shared their belief that there is no role for incitement in election-related information, stressed the importance of holding media and news platforms accountable for the information they share, and emphasized that reliable information needs to be presented in ways that are “fast, easy, and fun” in order for it to reach young people.

Conclusion

In our current era of information poisoning and fragile trust in institutions, it is imperative to create and expand the demand for quality information. This effort begins with developing and sharing scientifically tested standards and measurements that can promote quality information, thereby reducing the attention that misleading content attracts. We will also use these standards and measurements to test and examine how AI models generate election-related content. Information quality is a democratic necessity in the age of AI. Our standards work is building the foundation to restore trust in the political process. 

Our mission is to ensure access to information quality based on agreed-upon standards and tools created by a trusted, nonpartisan forum of experts. Our vision is to provide people with the tools needed to make their own choices about the election-related information they consume.

