Propaganda Bias in Generative AI Large Language Models
TWCF Number
33011
Project Duration
October 4, 2024 - April 4, 2026
Core Funding Area
Big Questions
Region
North America
Amount Awarded
$249,550


Director
Joshua Tucker
Institution
New York University

There is growing recognition of bias in generative artificial intelligence (AI) and large language models (LLMs). There are documented cases of ChatGPT incorrectly parsing prompts because it fails to recognize that women can be doctors; proposing an algorithm to determine whether someone is a good scientist based solely on whether they are white and male; and suggesting country of origin as a criterion for whether people should be tortured. Bias in this framework is the tendency to reflect and promote facts, ideologies, and beliefs that either discriminate against protected identities or inaccurately reflect the true underlying distribution of those facts, ideologies, and beliefs. This is concerning because generative AI has the capacity to translate that biased information into seemingly objective text.

Governments with control over the information environment can produce and disseminate propagandist texts through print and digital news, which are then used as training data for LLM development. Discerning propaganda from non-propaganda content at scale remains challenging, and language models may therefore fail to remove such biased content from their training data and predictions.

A project from Joshua Tucker’s lab at NYU will investigate propaganda bias in AI. The team will test the claim that when strategic actors influence global discourse online, they also shape the training data for LLMs, which in turn can shape the models’ output in ways favorable to the actor’s cause. China will serve as the primary case study, building on the lab’s preliminary work there.

In addition to the full analysis of propaganda bias in the Chinese context, the lab will also audit Russian propaganda bias for terms related to Russia and Russia’s invasion of Ukraine; Arabic propaganda bias for terms related to Saudi Arabia; Spanish language propaganda bias for terms related to Venezuela; and Vietnamese propaganda bias for terms related to the Communist Party of Vietnam and its leaders.

Disclaimer
Opinions expressed on this page, or any media linked to it, do not necessarily reflect the views of Templeton World Charity Foundation, Inc. Templeton World Charity Foundation, Inc. does not control the content of external links.