Language Model Agents and Society
TWCF Number
32589
Project Duration
November 15, 2024 - November 14, 2027
Core Funding Area
Big Questions
Priority
Discovery
Region
Oceania
Amount Awarded
$999,986


Director
Seth Lazar
Institution
Australian National University

Advances in Artificial Intelligence (AI), particularly the advent of large language models (LLMs), have the potential to affect society rapidly and profoundly. This project, from a team led by Seth Lazar at The Australian National University, explores the social implications of these advances from a foundational perspective. Building on the capabilities of LLMs, the project focuses on Generative Agents: AI systems that act autonomously in the world, calling on external resources to realize users' goals.

Titled The Generative Agents and Society Project (GASP), this project seeks to understand whether and how Generative Agents can be designed to scaffold and enhance, rather than supplant, human agency. The project team proposes to achieve this through three phases: 1) anticipating and evaluating Generative Agents’ societal impacts, 2) articulating norms to guide design and regulation of virtuous Generative Agents that enhance human agency, and 3) operationalizing those norms through concrete design and regulatory proposals.

Research questions may include: Which values should guide the design of Generative Agents? What are the potential systemic societal harms of this technology? What should Generative Agents be permitted to do? How should Generative Agents interact with each other? Can Generative Agents enhance democracy or threaten it?

With the intent of informing design and regulation, GASP will investigate the prospects and limitations of learning human values through human or AI feedback on model responses, with particular interest in Constitutional AI and Reinforcement Learning from AI Feedback (RLAIF). The team will develop ways to systematically evaluate LLMs' ethical sensitivity, with the goal of devising more effective strategies for aligning Generative Agents with societal norms.

Working with a network of collaborators and an advisory board from across academia, the AI industry and civil society, the project also aims to broach key regulatory questions for the future of AI.

Disclaimer
Opinions expressed on this page, or any media linked to it, do not necessarily reflect the views of Templeton World Charity Foundation, Inc. Templeton World Charity Foundation, Inc. does not control the content of external links.