From Polarization to Pluralism: AI in Support of Human Flourishing
TWCF Number
32585
Project Duration
November 15, 2024 - November 14, 2027
Core Funding Area
Big Questions
Region
North America
Amount Awarded
$1,505,196


Director
Sydney Levine
Institution
Allen Institute for Artificial Intelligence

Co-Director
Jay Van Bavel
Institution
New York University

Artificial intelligence (AI) has the potential to exacerbate or solve some of the world’s greatest social problems. AI has already encouraged polarization via social media – stoking outgroup hostility and solidifying filter bubbles. Deployed unreflectively, AI contributes to an increasingly divided society. As an antidote, a project team directed by Sydney Levine and co-directed by Jay Van Bavel seeks to use AI systems to identify and distill the plurality of values that humans hold; to develop mechanisms for incorporating value pluralism into AI decision-making systems; and to test a central use-case to combat polarization by encouraging depolarizing dialogue across the ideological spectrum.

The project team hypothesizes that:

  • The building blocks of human pluralistic values can be distilled from Large Language Models, using a wide range of theoretical foundations. This distillation process will be incomplete given the limited nature of the models’ training data and can be augmented by gathering data from diverse and representative human samples.
  • Consensus positions on value-laden decisions can be computed algorithmically, where compromises generated via an agreement-based model will be judged by human raters to be better than those generated by competing models.
  • Exposing participants to a moral consensus discovered by the team’s algorithm will reduce polarization and partisan animosity.

To test these hypotheses, three stages of work are planned. First, the team will use a previously developed method to distill values from multiple theoretical perspectives, developing and evaluating new models and datasets and augmenting them with data from human participants. Next, they will develop an algorithm that serves as a “moral parliament” to compute compromises between competing values. Finally, they will pilot methods for using AI to combat polarization.

Their goal is to build an agreement-based process for incorporating diverse values into AI systems, one that produces decisions judged to be fair by those they affect. The hope is that AI can be used to bring people together rather than drive them further apart.

Disclaimer
Opinions expressed on this page, or any media linked to it, do not necessarily reflect the views of Templeton World Charity Foundation, Inc. Templeton World Charity Foundation, Inc. does not control the content of external links.