Research on the impact of social media on polarization has focused primarily on the US and UK. However, recent evidence suggests that social media's effects on polarization may differ significantly in other countries. To help build a model of the mechanisms driving polarization around the world, more research is needed on how online and offline social networks interact with cultural and political contexts to shape polarization across countries.
A new project led by Jay Van Bavel and co-directed by Joshua Tucker at New York University aims to address this gap. The project team will collaborate with hundreds of researchers across at least 20 (and up to 50) countries to conduct a cross-cultural field experiment in which participants are incentivized to temporarily deactivate their Facebook accounts for two weeks.
The project will also examine individual differences, offline social networks, and country-level factors to assess how well each predicts polarization in a global context. These findings will be compared with predictions from social media researchers and laypeople to evaluate existing models of polarization. The resulting dataset, which will include a comprehensive set of translated measures of affective polarization, will be shared on the Open Science Framework so that other researchers can study polarization across many cultural contexts.
Many fields—including psychology, sociology, communications, political science, and computer science—use computational methods to analyze text data. However, existing text analysis methods have notable shortcomings. Dictionary methods, while easy to use, are often less accurate than more recent approaches. Machine learning models, while more accurate, can be difficult to train and use. This research demonstrates that the large language model GPT can accurately detect various psychological constructs (as judged by human annotators) in text across 12 languages, using simple prompts and no additional training data, thereby overcoming the limitations of existing methods. GPT is also effective in several less widely spoken languages, which could facilitate text analysis research in understudied contexts.
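To make the contrast concrete, the sketch below illustrates the two approaches the paragraph compares: a dictionary method that scores text by counting matches against a hand-built word list, and a zero-shot prompt of the kind a large language model could be given instead. The construct ("anger"), the word list, and the prompt wording are invented for illustration and are not taken from the study itself.

```python
import re

# Hypothetical mini-dictionary for a single construct ("anger").
# Real dictionaries (e.g. LIWC) contain thousands of curated entries.
ANGER_WORDS = {"angry", "furious", "outraged", "hate", "rage"}

def dictionary_score(text: str, lexicon: set[str]) -> float:
    """Fraction of tokens that appear in the lexicon (0.0 for empty text)."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    return sum(token in lexicon for token in tokens) / len(tokens)

def build_zero_shot_prompt(construct: str, text: str) -> str:
    """A simple prompt an LLM could answer with no training data.
    The exact wording is illustrative, not the study's prompt."""
    return (
        f"Does the following text express {construct}? "
        f"Answer yes or no.\n\nText: {text}"
    )
```

The dictionary scorer shows why such methods are easy to use but brittle: "I'm not angry at all" would still score as angry, whereas a prompted model can weigh negation and context.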