Aligning Artificial Intelligence with Human Values
Oct 28, 2023

Aligning Artificial Intelligence with Human Values with Brian Christian (podcast)

Ethical and existential risks arise when the AI systems we build and strive to teach will not do what we want or expect. Researchers call this the alignment problem.

By Templeton Staff

Ethical and existential risks arise when the artificial intelligence (AI) systems we build and strive to teach will not do what we want or expect. Researchers call this the "alignment problem." Author and researcher Brian Christian's work explores the human implications of computer science. "We’re living through this incredible boom in machine learning both in the capability of what these models can do, and also in how ubiquitous and pervasive they are," says Christian in conversation with Kensy Cooperrider. "Now that we live in this world full of these systems that are essentially trained implicitly by example rather than being explicitly programmed, the question is, 'Are they learning what we want them to be learning? Are they generalizing the way we want them to be generalizing? Are they going to do what we want them to do?' This is a fear that has gone all the way back to the beginning of computer science."

Christian's latest book examines the challenges confronting the field of AI and highlights the community of researchers working to address them. In this episode of the Many Minds podcast, he offers insights into the evolving landscape of AI research, its connections with other disciplines, and the ethical considerations involved in AI development.

Many Minds podcast host, cognitive scientist, and writer Kensy Cooperrider introduces the episode:

"My guest is the writer, Brian Christian. Brian is a visiting scholar at the University of California Berkeley and the author of three widely acclaimed books: The Most Human Human, published in 2011; Algorithms To Live By, co-authored with Tom Griffiths and published in 2016; and most recently, The Alignment Problem, the focus of our conversation in this episode.

The alignment problem, put simply, is the problem of building artificial intelligences — machine learning systems, for instance — that do what we want them to do, that both reflect and further our values. This is harder to do than you might think, and it’s more important than ever.

As Brian and I discuss, machine learning is becoming increasingly pervasive in everyday life — though it’s sometimes invisible. It’s working in the background every time we snap a photo or hop on Facebook. Companies are using it to sift resumes; courts are using it to make parole decisions. We are already trusting these systems with a bunch of important tasks, in other words. And as we rely on them in more and more domains, the alignment problem will only become that much more pressing.

In the course of laying out this problem, Brian's book also offers a captivating history of machine learning and AI. Since their very beginnings, these fields have been shaped through interaction with philosophy, psychology, mathematics, and neuroscience. Brian traces these interactions in fascinating detail — and brings them right up to the present moment. As he describes, machine learning today is not only informed by the latest advances in the cognitive sciences, it's also propelling those advances."

The conversation also touches upon the relationship between Brian's background in poetry and his work in computer science, highlighting the role of creativity and interdisciplinary thinking in AI and related fields.


Learn more about Templeton World Charity Foundation's Diverse Intelligences priority.

Templeton World Charity Foundation's Diverse Intelligences is a multiyear, global effort to understand a world alive with brilliance in many forms. Its mission is to promote open-minded, forward-looking inquiry in animal, human, and machine intelligences. The program collaborates with leading experts and emerging scholars from around the globe, developing high-caliber projects that advance our understanding of the constellation of intelligences.

Many Minds is a project of the Diverse Intelligences Summer Institute (DISI), made possible through a grant from TWCF to the University of California, Los Angeles (UCLA). The Many Minds podcast is hosted and produced by Kensy Cooperrider, with help from Assistant Producer Urte Laukaityte. Creative support is provided by DISI Directors Erica Cartmill and Jacob Foster. Artwork featured as the podcast badge is by Ben Oldroyd.