Research statement
The best currently available evidence suggests that most public communication campaigns could substantially improve their impact. For example, large meta-studies of climate change communication, flu vaccination encouragement, and political advertising all find considerable variation in the impact of different messages, with the most persuasive messages having roughly twice the impact of the average message. If campaigns can reliably identify such messages, they should expect to double their impact; if they additionally tailor their messaging to different audiences, they can potentially triple it.
The challenge is that the “space” of messages a campaign could choose from is enormous: there are many things a campaign could say and many different ways to say each of them. Unfortunately, research shows that relying on theory and expert guidance about “what works” when designing campaign messages is unlikely to be effective by itself, because “what works” is difficult to predict and can change dramatically across contexts (e.g. see [1], [2], [3], [4], [5]).
Our central premise is that, to overcome this challenge, campaigns need new methods that reduce their reliance on expert-designed messaging and make them more responsive to public opinion. In particular, we focus on two research directions we believe are at the forefront of modern message development:
- Efficient message search. We design research pipelines that allow campaigns to explore the large space of potential messages more efficiently and to quickly zero in on the most impactful messaging strategies. Our core methodology involves conducting large-scale adaptive online survey RCTs, using a combination of large language models, surrogate metrics, and Bayesian statistics (an illustrative sketch of the adaptive allocation step appears after this list).
- Community involvement. For campaigns interested in communicating with a particular group or community, we design scalable methods that involve community members themselves directly in the message development process. This provides intrinsic value, in the form of representation, as well as instrumental value: recent research suggests that ordinary people can often be far more effective than experts at predicting which messages will best resonate with others in their community.¹
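
To make the adaptive step in the first direction concrete, the sketch below shows one standard way such a survey experiment could be run adaptively: Thompson sampling over a fixed set of candidate messages, with a Beta-Bernoulli posterior for each message's persuasion rate. This is a minimal illustration under assumed simplifications (a binary persuasion outcome, a fixed candidate set, no LLM generation or surrogate metrics); the function names and batch structure are hypothetical rather than a description of our actual pipeline.

```python
# Minimal sketch of adaptive message allocation via Thompson sampling.
# Assumes a binary persuasion outcome (1 = respondent moved toward the
# campaign's position, 0 = not) and a Beta-Bernoulli model per message.
# All names here are illustrative, not our pipeline's API.
import numpy as np

rng = np.random.default_rng(0)

def allocate_batch(successes, failures, batch_size):
    """Assign the next wave of survey respondents to message arms.

    For each respondent, draw one sample from every arm's Beta posterior
    and assign them to the arm with the highest sampled persuasion rate.
    """
    draws = rng.beta(successes + 1, failures + 1,
                     size=(batch_size, len(successes)))
    return draws.argmax(axis=1)  # one arm index per respondent

def update(successes, failures, assignments, outcomes):
    """Fold a wave of observed binary outcomes back into the posterior counts."""
    for arm, outcome in zip(assignments, outcomes):
        if outcome:
            successes[arm] += 1
        else:
            failures[arm] += 1

# Example: 5 candidate messages, waves of 200 respondents.
successes = np.zeros(5)
failures = np.zeros(5)
assignments = allocate_batch(successes, failures, batch_size=200)
# ...field the survey wave, record each respondent's outcome, then:
# update(successes, failures, assignments, observed_outcomes)
```

In a real deployment the outcome model would be richer (for example, hierarchical by audience segment to support tailoring), but the adaptive loop of allocating respondents, observing outcomes, and updating posteriors is the same.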
Core to our research process are ethical checks and high standards of accuracy. We do not develop or test messages that are false or misleading, or that incite exclusionary attitudes or violence; we work only with campaigns whose goals are clearly aligned with the public good; and we never share any data that could personally identify our research participants.
¹ This may be particularly important for reaching groups who are highly under-represented among those conducting the research. For example, Milkman et al. (2022) asked people to predict the effects of 22 different interventions designed to encourage flu shots among unvaccinated people. They found that the average predictions of behavioral scientists (96% of whom were themselves already vaccinated) tracked the interventions' real-world impacts far less well than those of laypeople.