Reverse-Engineering the Moral Mind
TWCF Number
0319
Project Duration
July 25, 2018 - July 24, 2021
Core Funding Area
Big Questions
Region
North America
Amount Awarded
$228,251
Grant DOI*

* A Grant DOI (digital object identifier) is a unique, open, global, persistent and machine-actionable identifier for a grant.

Director
Joshua Tenenbaum
Institution
Massachusetts Institute of Technology

Can approaches from engineering help us understand how moral reasoning works? Joshua Tenenbaum, Max Kleiman-Weiner, and Sydney Levine apply cutting-edge tools from computer science to understand how humans make decisions of right and wrong.

This project defines moral intelligence as the distinctively human ability to learn moral knowledge rapidly and robustly and to generalize it across countless situations and people. Throughout the ages, moral philosophers have proposed distinct accounts of moral judgment. Their theories—e.g., deontology, consequentialism, and contractualism—often seem at odds with each other.

This project hypothesizes that, combined in a unified computational model, aspects of these three theories can serve as the foundational principles for a reverse-engineering account of human moral thinking that is both predictive and interpretable. To test this hypothesis, the team will build formal models of moral reasoning that combine recent advances in artificial intelligence with computational cognitive modeling, then test the models’ fine-grained predictions in large-scale online experiments. These models point toward ways people and AI can communicate with each other and work together in morally charged scenarios.

Outputs will include workshops and publications aimed at bi-directional impact between the scientific study of human morality and industrial applications. Key outcomes of the research will be twofold:

  1. Increasing the philosophical sophistication of AI researchers and the prevalence of a computational approach among moral psychologists and philosophers.
  2. Expanding our ethical capacity by enabling machines to interact with us as partners in thought, thereby strengthening the role of human judgment in industrial deployments of AI technology.

Together, this work will help ensure that the power and flexibility of human morality remain robust even in the machine age.

Project Resources

This is a computational study of spatial planning under uncertainty using a novel Maze Search Task (MST), in which people search mazes for probabilistically hidden rewards. The MST is designed to resemble real-life spatial planning, where costs and rewards are physically situated as distances and locations. The researchers found that people’s spatial planning is best explained by planners with a limited planning horizon, as opposed to either myopic heuristics or optimal expected-utility maximization, showing that a limited planning horizon generalizes to spatial planning tasks. They also note that their results do not exclude the possibility that, in human planning, action values may be affected by cognitive mechanisms of numerosity and probability perception.
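To make the idea of a limited planning horizon concrete, here is a minimal illustrative sketch (not the project's actual model or task): a planner on a toy grid that enumerates action sequences only up to a fixed depth and picks the first move of the best one, trading off step costs against probabilistically hidden rewards. All names and parameters here are invented for illustration.

```python
from itertools import product

MOVES = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def plan(start, reward_prob, reward_value, size, horizon, step_cost=1.0):
    """Return the first move of the best action sequence up to `horizon` steps.

    Hypothetical depth-limited expected-utility planner: each cell may hide a
    reward with probability `reward_prob[(x, y)]`; every attempted step costs
    `step_cost`. A small `horizon` makes the planner blind to distant rewards.
    """
    best_value, best_first = float("-inf"), None
    for seq in product(MOVES, repeat=horizon):
        x, y = start
        value, seen = 0.0, set()
        for move in seq:
            dx, dy = MOVES[move]
            nx, ny = x + dx, y + dy
            value -= step_cost  # every attempted step costs time
            if 0 <= nx < size and 0 <= ny < size:
                x, y = nx, ny
                if (x, y) not in seen:  # count each cell's reward once
                    seen.add((x, y))
                    value += reward_prob.get((x, y), 0.0) * reward_value
        if value > best_value:
            best_value, best_first = value, seq[0]
    return best_first

# A 3x3 maze with a likely reward two steps to the right of the start:
# a horizon-2 planner can "see" it, while a horizon-1 (myopic) planner cannot.
first = plan(start=(0, 0), reward_prob={(2, 0): 0.9}, reward_value=10.0,
             size=3, horizon=2)
print(first)  # prints: right
```

With `horizon=1` the reward at distance two contributes nothing to any one-step sequence, so the planner no longer prefers moving right, which is the qualitative contrast between myopic and horizon-limited planning that the study examines.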

Disclaimer
Opinions expressed on this page, or any media linked to it, do not necessarily reflect the views of Templeton World Charity Foundation, Inc. Templeton World Charity Foundation, Inc. does not control the content of external links.