Luke Hewitt
I work on computational and experimental tools for measuring what changes people’s beliefs and attitudes, with applications to effective advocacy, public health communication, AI safety, and social science methodology. My research combines RCTs, LLMs, expert forecasting, and hierarchical Bayesian models.
Currently:
- I’m co-founder/director of Rhetorical Labs, a research collective that uses randomized experiments and machine learning to help public communication campaigns improve the impact of their messaging.
- I’m a Senior Research Fellow at Stanford PACS, where I study the capacity of Large Language Models to predict treatment effects in the social/behavioral sciences.
- I’m a member of the South Park Commons technical community (SF Bay hub).
- I’m co-PI for the SSRC Mercury Project team on Combatting health misinformation with community-crafted messaging.
Previously:
- AI safety consulting, OpenAI (GPT-4o persuasion evaluation)
- Research data scientist, Swayable (persuasion measurement, experiment design & analysis)
- PhD in AI / Cognitive Science, MIT (advisor: Josh Tenenbaum)
- MEng in Mathematical Computation, UCL
Research
→ Quantifying the returns to persuasive message-targeting using a large archive of campaigns’ own experiments Tappin, Hewitt, Coppock (APSA 2024)
→ How will advanced AI systems impact democracy? Summerfield et al. (in review)
→ Leveraging Large Language Models to Predict Results of Experiments in the Social Sciences Hewitt*, Ashokkumar* et al. (in review)
→ GPT-4o System Card: Persuasion OpenAI (2024)
→ How experiments help campaigns persuade voters: evidence from a large archive of campaigns’ own experiments Hewitt et al. (APSR, 2024)
→ Using survey experiment pre-testing to support future pandemic response Tappin and Hewitt (PNAS Nexus, 2024)
→ Listening with generative models Cusimano et al. (Cognition, 2024)
→ Quantifying the persuasive returns to political microtargeting Tappin et al. (PNAS, 2023)
→ Emotion prediction as computation over a generative Theory of Mind Houlihan et al. (Phil. Trans. A, 2023)
→ DreamCoder: growing generalizable, interpretable knowledge with wake-sleep Bayesian program learning Ellis et al. (Phil. Trans. A, 2023)
→ Rank-heterogeneous effects of political messages: Evidence from randomized survey experiments testing 59 video treatments Hewitt et al. (working paper)
→ Hybrid memoised wake-sleep: Approximate inference at the discrete-continuous interface Le et al. (ICLR, 2022)
→ DreamCoder: Bootstrapping Inductive Program Synthesis with Wake-Sleep Library Learning Ellis et al. (PLDI, 2021)
→ Estimating the Persistence of Party Cue Influence in a Panel Survey Experiment Tappin et al. (JEPS, 2021)
→ Learning to learn generative programs with memoised wake-sleep Hewitt et al. (UAI, 2020)
→ Inferring structured visual concepts from minimal data Qian et al. (CogSci, 2019)
→ Learning to infer program sketches Nye et al. (ICML, 2019)
→ The Variational Homoencoder: Learning to learn high capacity generative models from few examples Hewitt et al. (UAI, 2018)
→ Auditory scene analysis as Bayesian inference in sound source models Cusimano et al. (CogSci, 2017)