Artificial Intelligence researcher and computational social scientist
- I’m co-founder and director of Rhetorical Impact Lab, a research organization that uses randomized controlled trials (RCTs) and machine learning to help public communication campaigns improve the impact of their messaging.
- I’m a Senior Research Fellow at the Stanford Polarization and Social Change Lab, where I study the capacity of large language models to predict treatment effects in the social and behavioral sciences.
- I’m co-PI of the SSRC Mercury Project team on “Combatting health misinformation with community-crafted messaging.”
- My PhD was advised by Josh Tenenbaum (MIT); I primarily developed scalable Bayesian methods for explainable AI. For my thesis I also conducted the largest RCT meta-analysis of political advertisements to date, working with David Broockman, Alex Coppock, and Ben Tappin.
- I worked on research methods at Swayable, designing their 2020 national polling methodology, which predicted Biden’s vote share to within 0.5pp (versus FiveThirtyEight’s bias of 2pp).
Academic research by topic
Political campaign advertising • How experiments help campaigns persuade voters: evidence from a large archive of campaigns’ own experiments (Hewitt et al. 2023)
Targeted messaging • Quantifying the persuasive returns to political microtargeting (Tappin et al. 2022) • Rank-heterogeneous effects of political messages: Evidence from randomized survey experiments testing 59 video treatments (Hewitt et al. 2022)
Persistence • Estimating the Persistence of Party Cue Influence in a Panel Survey Experiment (Tappin et al. 2021)
Structured generative models • Hybrid memoised wake-sleep: Approximate inference at the discrete-continuous interface (Le et al. 2022) • Learning to learn generative programs with memoised wake-sleep (Hewitt et al. 2020)
Deep generative models • The Variational Homoencoder: Learning to learn high capacity generative models from few examples (Hewitt et al. 2018)
Emotion • Emotion prediction as computation over a generative Theory of Mind (Houlihan et al. 2023)
Perception • Bayesian auditory scene synthesis explains human perception of illusions and everyday sounds (Cusimano et al. 2023) • Auditory scene analysis as Bayesian inference in sound source models (Cusimano et al. 2017)
Concept learning • Inferring structured visual concepts from minimal data (Qian et al. 2019)