Can approaches from engineering help us understand how moral reasoning works? Joshua Tenenbaum, Max Kleiman-Weiner and Sydney Levine apply cutting-edge tools from computer science to understand how we make decisions of right and wrong.
This project defines moral intelligence as the uniquely human ability to rapidly and robustly learn moral knowledge and to generalize it across an effectively unbounded range of situations and people. Throughout the ages, moral philosophers have proposed distinct accounts of moral judgment. Their theories, such as deontology, consequentialism, and contractualism, often seem at odds with one another.
This project hypothesizes that aspects of these three theories, combined in a unified computational model, can serve as the foundational principles for a reverse-engineering account of human moral thinking that is both predictive and interpretable. To test this hypothesis, the team builds formal models of moral reasoning that integrate recent advances in artificial intelligence with computational cognitive modeling, and tests the models' fine-grained predictions in large-scale online experiments. These models also point toward ways people and AI systems can communicate and work together in morally charged scenarios.
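As a purely illustrative sketch of what "combining aspects of these three theories" in one model might look like, consider a weighted blend of consequentialist, deontological, and contractualist value functions. Every function, feature, and weight below is a hypothetical placeholder for exposition, not the project's actual model.

```python
# Illustrative only: a toy "unified" moral utility blending the three
# theories named above. All names, features, and weights are assumptions
# for exposition, not the authors' actual model.

def consequentialist_value(action):
    """Expected aggregate welfare produced by the action (stub)."""
    return action.get("welfare", 0.0)

def deontological_value(action):
    """1.0 if the action violates no rule, else 0.0 (stub)."""
    return 0.0 if action.get("violates_rule") else 1.0

def contractualist_value(action):
    """Fraction of affected parties who could reasonably endorse it (stub)."""
    return action.get("agreement", 0.0)

def moral_utility(action, weights=(0.4, 0.4, 0.2)):
    """Unified score: a weighted blend of the three theory-specific values."""
    wc, wd, wk = weights
    return (wc * consequentialist_value(action)
            + wd * deontological_value(action)
            + wk * contractualist_value(action))

# Toy comparison: a comforting lie vs. an uncomfortable truth.
lie = {"welfare": 0.6, "violates_rule": True, "agreement": 0.2}
truth = {"welfare": 0.4, "violates_rule": False, "agreement": 0.9}
print(max([lie, truth], key=moral_utility))  # -> the "truth" action
```

One appeal of such a blended formulation is interpretability: each theory contributes a named, inspectable term, so disagreements between the theories show up as explicit trade-offs rather than a black-box score.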
Outputs will include workshops and publications aimed at bi-directional impact between the scientific study of human morality and industrial applications. Key outcomes of the research will be twofold:
Together, this work will ensure that the power and flexibility of human morality remain robust even in the machine age.
This is a computational study of spatial planning under uncertainty using a novel Maze Search Task (MST), in which people search mazes for probabilistically hidden rewards. The MST is designed to resemble real-life spatial planning, where costs and rewards are physically situated as distances and locations. The researchers found that people's spatial planning is best explained by planners with a limited planning horizon, rather than by either myopic heuristics or optimal expected-utility maximization, showing that limited planning horizons generalize to spatial planning tasks. Their results also do not exclude the possibility that, in human planning, action values are shaped by cognitive mechanisms of numerosity and probability perception.
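To make the comparison concrete, here is a minimal sketch of a bounded-horizon planner on a toy maze with probabilistically hidden rewards. The maze layout, reward probabilities, step cost, and function names are all assumptions for exposition, not the actual MST or the authors' model. With a horizon of 1 the planner reduces to a myopic heuristic; as the horizon grows, it approaches full expected-utility planning.

```python
# Toy grid maze: 0 = open cell, 1 = wall. Some open cells may hide a
# reward with a known probability. All values here are illustrative.
MAZE = [
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
]
REWARD_PROBS = {(0, 0): 0.1, (2, 0): 0.8, (0, 2): 0.3}
STEP_COST = 0.05
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def neighbors(pos):
    """Open cells reachable in one step from pos."""
    r, c = pos
    for dr, dc in MOVES:
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(MAZE) and 0 <= nc < len(MAZE[0]) and MAZE[nr][nc] == 0:
            yield (nr, nc)

def expected_value(path):
    """Expected utility of a path: each reward cell counts once
    (on first visit), minus a fixed cost per step taken."""
    collected, value = set(), 0.0
    for pos in path[1:]:
        value -= STEP_COST
        if pos in REWARD_PROBS and pos not in collected:
            value += REWARD_PROBS[pos]
            collected.add(pos)
    return value

def plan(start, horizon):
    """Enumerate all paths of up to `horizon` steps and return the first
    move of the best one. horizon=1 is a myopic heuristic; a large
    horizon approximates optimal expected-utility planning."""
    best_value, best_first = float("-inf"), None
    frontier = [[start]]
    for _ in range(horizon):
        frontier = [p + [n] for p in frontier for n in neighbors(p[-1])]
        for path in frontier:
            v = expected_value(path)
            if v > best_value:
                best_value, best_first = v, path[1]
    return best_first

print(plan(start=(0, 0), horizon=1))  # myopic choice
print(plan(start=(0, 0), horizon=4))  # bounded-horizon choice
```

In a sketch like this, the horizon parameter is the quantity of interest: fitting it to behavior distinguishes a bounded planner from both the myopic and the fully optimal extremes.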