Hey, I’m a PhD Candidate at MIT. My research focuses on the intersection of AI and policy: responsibly training, evaluating, and governing general-purpose AI systems. I lead the Data Provenance Initiative, organized the Open Letter on A Safe Harbor for Independent AI Evaluation & Red Teaming, and have contributed to training models such as BLOOM, Aya, and Flan-T5/PaLM. I’m grateful for the recognition my research has received: a Best Paper Award at ACL 2024, an Outstanding Paper Award at NAACL 2024, and coverage by the NYT, Washington Post, The Atlantic, 404 Media, Vox, and MIT Tech Review.
See my full resume here, and full list of publications here.
2025.01: Multimodal Data Provenance accepted to ICLR 2025. Covered by MIT Tech Review.
2025.01: Core writing team for the International AI Safety Report.
2024.12: Lead organizer for The Future of Third-Party AI Evaluation Workshop, recorded here.
2024.08: Aya Model wins Best Paper Award at ACL 2024.
2024.08: Consent in Crisis accepted to NeurIPS 2024. Covered by the NYT, 404 Media, Vox, and Yahoo! Finance.
2024.07: A Pretrainer’s Guide to Training Data wins Outstanding Paper Award at NAACL 2024.
2024.07: Three Oral papers and one Spotlight paper accepted to ICML 2024: (1) Safe Harbor, (2) Societal Impact of Open Foundation Models, (3) AI Autonomous Weapons Risk Geopolitical Instability, and (4) Data Authenticity, Consent, and Provenance for AI Are All Broken: What Will It Take to Fix Them?.
2024.06: The Data Provenance Initiative was awarded the Mozilla Data Futures Lab grant and won the MIT Generative AI Impact Award, funded for $70,000. Presented at MozFest 2024.