Political Psychology in the Digital (Mis)Information Age


Kimberly Doell, New York University
Elizabeth A. Harris, New York University
Philip Pärnamets, New York University and Karolinska Institute
Steve Rathje, University of Cambridge
Joshua A. Tucker, New York University
Jay J. Van Bavel, New York University

The spread of misinformation, including fake news and conspiracy theories, poses a serious threat to societies around the world. Thanks in part to the rapid growth of social media, it has become easier than ever to create and spread misinformation, negatively impacting beliefs, behaviors, and policy. Misinformation has proven deadly during the global coronavirus pandemic and contributed to the recent Capitol insurrection. It also fuels conspiracy theories such as QAnon and contributes to vaccine hesitancy. Further, misinformation threatens democracy because it makes it harder for citizens to make informed choices, fosters social conflict, and undercuts trust in important institutions. There is therefore an urgent need to understand what drives the belief in and spread of misinformation.

In our recent review paper, we summarize the latest research on misinformation and present an integrative model to help explain the risk factors that underlie its spread. Multiple pathways lead to the spread of misinformation, and multiple risk factors can amplify their impact. These include, for example, the growing number of bad actors and trolls on social media (Path A), cognitive risk factors such as memory and aging (Path B), and the motivation to share misinformation because it derogates political opponents (Path C). The model thus integrates findings from several theories and helps explain how these processes contribute and interact.

We also discuss potential interventions designed to stem the flow of misinformation, including fact-checking, equipping people with the psychological resources to spot fake news (e.g., fake news “inoculation”), removing bad actors from social media platforms, and fixing social media incentive structures. Each type of intervention targets only one or two pathways in our model; unfortunately, there is no “one-size-fits-all” solution. For example, fact-checking may reduce false beliefs, but it leaves the path from exposure to sharing unaddressed. Some people intentionally create and spread information they know is false because they stand to gain something (e.g., former President Trump did not want to lose the election; people posting about QAnon gain likes, shares, and followers), and fact-checking is unlikely to dissuade them. Thus, a multifaceted approach that combines aspects of multiple interventions is likely to have the largest impact on reducing misinformation.

While our paper provides a framework for understanding how different factors affect the spread of misinformation, many open questions remain. We provide a roadmap to guide future research and policy. Research needs to take a more interdisciplinary approach that combines multiple fields of study, investigates the pathways from sharing to belief and exposure, uses datasets from multiple social media platforms, and causally tests competing hypotheses. While it is impossible to anticipate the role misinformation will play in the coming years, it seems certain that this issue will continue to grow and evolve. We believe more academic and applied work is critically needed (and should be a priority for funding agencies) to understand and prevent the spread of misinformation.
