David Bau, Ph.D.

Northeastern University
Khoury College of Computer Sciences
177 Huntington Avenue, Floor 22
Boston, MA 02115
Email:  davidbau@northeastern.edu
Phone:  +1-781-296-9825
Website:  https://baulab.info
ORCID iD:  0000-0003-1744-6765

Research Areas

Interpretable Machine Learning, Natural Language Processing, Computer Vision.

Research Projects

National Deep Inference Fabric, ndif.us. A nationwide research infrastructure for large-scale AI model inference. Studying very large AI models has become prohibitively expensive because of the models' size and the lack of systems for sharing running models among researchers. NDIF is an NSF-funded project to lower these costs and unlock the next generation of large-scale AI research by building the necessary shared infrastructure.
Model Editing, rome.baulab.info. A research program to understand the mechanisms of large models well enough that a user can directly change the parameters of a deep model according to their own intentions, rather than retraining on a data set. For example, using closed-form updates we are able to locate and edit factual knowledge within a large language model; a toy sketch of this style of closed-form edit appears after this list.
GAN Paint, gandissect.csail.mit.edu. An analysis method that reveals emergent object concepts represented in the middle layers of a GAN trained without label supervision. The encoding of objects is simple enough that objects can be added to or removed from a scene by activating or silencing units in the GAN directly (a sketch of this kind of unit-level intervention appears after this list). We apply this technique to semantic photo manipulation in GAN Paint, ganpaint.io.
Network Dissection, dissect.csail.mit.edu. A system that quantifies human-interpretable concept detectors within representations of deep networks for vision. This work is used to identify emergent semantics in a range of settings, and to quantify the disentanglement of meaningful individual units in vision networks.
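
The closed-form updates mentioned under Model Editing can be illustrated with a toy PyTorch sketch. The sketch below is hypothetical and is not the ROME method itself: the layer, key vector, and value vector are placeholders, and it only shows the general form of a rank-one edit that remaps a single input direction of a linear layer to a chosen output.

    # Hypothetical, minimal sketch (not the ROME algorithm): a closed-form,
    # rank-one edit to a single linear layer so that a chosen input
    # direction k maps exactly to a chosen output v_target.
    import torch

    d_in, d_out = 64, 64
    layer = torch.nn.Linear(d_in, d_out, bias=False)

    k = torch.randn(d_in)          # "key": the input direction to remap
    v_target = torch.randn(d_out)  # "value": the desired output for k

    with torch.no_grad():
        v_current = layer.weight @ k   # the layer's current response to k
        # W <- W + (v_target - v_current) k^T / (k^T k): changes the response
        # to k while leaving directions orthogonal to k untouched.
        layer.weight += torch.outer(v_target - v_current, k) / (k @ k)
        assert torch.allclose(layer.weight @ k, v_target, atol=1e-4)

The published ROME and MEMIT methods derive the key and value from model activations and weight the update by an activation covariance statistic, but the edits they apply are likewise low-rank closed-form updates.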
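
A hypothetical sketch of the unit-level interventions described under GAN Paint: a forward hook that silences one channel of an intermediate layer, the kind of intervention used to remove an object class from generated scenes. The tiny network and the unit index below are placeholders, not the generators studied in the papers.

    # Hypothetical sketch: silencing one unit (channel) of an intermediate
    # layer with a forward hook; the network and unit index are placeholders.
    import torch

    net = torch.nn.Sequential(
        torch.nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1),
        torch.nn.ReLU(),
        torch.nn.ConvTranspose2d(8, 3, 4, stride=2, padding=1),
    )

    def silence_unit(unit):
        def hook(module, inputs, output):
            output = output.clone()
            output[:, unit] = 0.0    # zero the chosen channel at every position
            return output            # the returned tensor replaces the output
        return hook

    handle = net[0].register_forward_hook(silence_unit(unit=3))
    z = torch.randn(1, 16, 4, 4)     # toy latent input
    img = net(z)                     # forward pass with unit 3 silenced
    handle.remove()

Activating a unit instead (for example, setting its feature map to a large positive value inside a spatial mask) follows the same pattern.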

Education

2015-2021
Massachusetts Institute of Technology, Cambridge, MA
Ph.D. in Electrical Engineering and Computer Science
Thesis: Dissection of Deep Neural Networks
Advisor: Antonio Torralba
1992-1994
Cornell University, Ithaca, NY
M.S. in Computer Science
Book coauthored: Numerical Linear Algebra
Advisor: Lloyd N. Trefethen
1988-1992
Harvard College, Cambridge, MA
A.B. in Mathematics

Employment

2022-current
Assistant Professor. Northeastern University Khoury College of Computer Sciences.
2024-current
National Deep Inference Fabric. ndif.us. Principal Investigator and Director.
2021-2022
Postdoctoral Fellow. Martin Wattenberg lab, Harvard University.
2015-2021
Research Assistant. Antonio Torralba lab, MIT CSAIL.
2013-current
Pencil Code. pencilcode.net. Director.
2009-2014
Google Image Search. images.google.com. Staff software engineer.
2007-2008
Google Search. www.google.com. Staff software engineer.
2004-2007
Google Talk. talk.google.com (later Google Hangouts). Staff software engineer.
2003
XML Beans. xmlbeans.apache.org. Contributor to the Apache Software Foundation.
2000-2003
Weblogic Workshop. Crossgain and BEA Systems.
1993-2000
Microsoft. Several projects.

Awards

Ruth and Joel Spira Award for Excellence in Teaching, 2024
MIT EECS Great Educators Fellowship, 2015
NSF Graduate Research Fellowship, 1992

Peer-Reviewed Publications

Journals

Grace W. Lindsay and David Bau. Testing methods of neural systems understanding. Cognitive Systems Research (2023): 101156.
David Bau, Jun-Yan Zhu, Hendrik Strobelt, Agata Lapedriza, Bolei Zhou, and Antonio Torralba. Understanding the role of individual units in a deep neural network. Proceedings of the National Academy of Sciences (PNAS), Volume 117, no. 48, December 1, 2020, pp. 30071-30078.
David Bau, Bolei Zhou, Aude Oliva, Antonio Torralba: Interpreting Deep Visual Representations via Network Dissection. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) Volume 41 Issue 9, September 2019, pp. 2131-2145.
David Bau, Jeff Gray, Caitlin Kelleher, Josh Sheldon, Franklyn Turbak. Learnable Programming: Blocks and Beyond. Communications of the ACM (CACM) Volume 60 Issue 6, June 2017. pp. 72-80.

Conference papers

Sheridan Feucht, David Atkinson, Byron Wallace, David Bau. Token Erasure as a Footprint of Implicit Vocabulary Items in LLMs. Findings of the Association for Computational Linguistics. (EMNLP 2024)
Arnab Sen Sharma, David Atkinson, David Bau. Locating and Editing Factual Associations in Mamba. Proceedings of the 2024 Conference on Language Modeling. (COLM 2024)
Kenneth Li, Tianle Liu, Naomi Bashkansky, David Bau, Fernanda Viégas, Hanspeter Pfister, Martin Wattenberg. Measuring and Controlling Instruction Instability in Language Model Dialogs. Proceedings of the 2024 Conference on Language Modeling. (COLM 2024)
Stephen Casper, Carson Ezell, Charlotte Siegmann, Noam Kolt, Taylor Lynn Curtis, Benjamin Bucknall, Andreas Haupt, Kevin Wei, Jeremy Scheurer, Marius Hobbhahn, Lee Sharkey, Satyapriya Krishna, Marvin Von Hagen, Silas Alberti, Alan Chan, Qinyi Sun, Michael Gerovitch, David Bau, Max Tegmark, David Krueger, Dylan Hadfield-Menell. Black-box Access is Insufficient for Rigorous AI Audits. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT 2024)
Evan Hernandez, Arnab Sen Sharma, Tal Haklay, Kevin Meng, Martin Wattenberg, Jacob Andreas, Yonatan Belinkov, David Bau. Linearity of Relation Decoding in Transformer Language Models. Proceedings of the 2024 International Conference on Learning Representations. (ICLR 2024 spotlight)
Eric Todd, Millicent Li, Arnab Sen Sharma, Aaron Mueller, Byron C Wallace, David Bau. Function Vectors in Large Language Models. Proceedings of the 2024 International Conference on Learning Representations. (ICLR 2024)
Nikhil Prakash, Tamar Rott Shaham, Tal Haklay, Yonatan Belinkov, David Bau. Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity Tracking. Proceedings of the 2024 International Conference on Learning Representations. (ICLR 2024)
Rohit Gandikota, Joanna Materzyńska, Tingrui Zhou, Antonio Torralba, David Bau. Concept Sliders: LoRA adaptors for precise control in diffusion models. Proceedings of the European Conference on Computer Vision (ECCV 2024)
Rohit Gandikota, Hadas Orgad, Yonatan Belinkov, Joanna Materzyńska, David Bau. Unified Concept Editing in Diffusion Models. Proceedings of the 2024 IEEE/CVF Winter Conference on Applications of Computer Vision. (WACV 2024)
Koyena Pal, Jiuding Sun, Andrew Yuan, Byron C. Wallace, and David Bau. Future Lens: Anticipating Subsequent Tokens from a Single Hidden State. SIGNLL Conference on Computational Natural Language Learning. (CoNLL 2023)
Sarah Schwettmann, Tamar Rott Shaham, Joanna Materzyńska, Neil Chowdhury, Shuang Li, Jacob Andreas, David Bau, and Antonio Torralba. A Function Interpretation Benchmark for Evaluating Interpretability Methods. Advances in Neural Information Processing Systems 36. (NeurIPS 2023).
Rohit Gandikota, Joanna Materzyńska, Jaden Fiotto-Kaufman, David Bau. Erasing Concepts from Diffusion Models. Proceedings of the 2023 IEEE International Conference on Computer Vision (ICCV 2023).
Kevin Meng, Arnab Sen Sharma, Alex Andonian, Yonatan Belinkov, David Bau. Mass-Editing Memory in a Transformer. Eleventh International Conference on Learning Representations. (ICLR 2023 spotlight).
Kenneth Li, Aspen K Hopkins, David Bau, Fernanda Viégas, Hanspeter Pfister, Martin Wattenberg. Emergent world representations: Exploring a sequence model trained on a synthetic task. Eleventh International Conference on Learning Representations. (ICLR 2023 oral).
Kevin Meng, David Bau, Alex Andonian, Yonatan Belinkov. Locating and Editing Factual Associations in GPT. Advances in Neural Information Processing Systems 35. (NeurIPS 2022).
Sheng-Yu Wang, David Bau, Jun-Yan Zhu. Rewriting Geometric Rules of a GAN. ACM Transactions on Graphics (TOG). (SIGGRAPH 2022)
Joanna Materzyńska, Antonio Torralba, David Bau. Disentangling Visual and Written Concepts in CLIP. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (CVPR 2022 oral)
Evan Hernandez, Sarah Schwettmann, David Bau, Teona Bagashvilli, Antonio Torralba, Jacob Andreas. Natural Language Descriptions of Deep Visual Features. Proceedings of the International Conference on Learning Representations. (ICLR 2022)
Shibani Santurkar, Dimitris Tsipras, Mahalaxmi Elango, David Bau, Antonio Torralba, and Aleksander Madry. Editing a classifier by rewriting its prediction rules. Advances in Neural Information Processing Systems 34. (NeurIPS 2021)
Emma Andrews, David Bau, and Jeremiah Blanchard. From Droplet to Lilypad: Present and Future of Dual-Modality Environments. 2021 IEEE Symposium on Visual Languages and Human-Centric Computing. (VL/HCC 2021)
Sarah Schwettmann, Evan Hernandez, David Bau, Samuel Klein, Jacob Andreas, Antonio Torralba. Toward a Visual Concept Vocabulary for GAN Latent Space. Proceedings of the IEEE/CVF International Conference on Computer Vision. (ICCV 2021)
Sheng-Yu Wang, David Bau, and Jun-Yan Zhu. Sketch Your Own GAN. Proceedings of the IEEE/CVF International Conference on Computer Vision. (ICCV 2021)
David Bau, Steven Liu, Tongzhou Wang, Jun-Yan Zhu, and Antonio Torralba. Rewriting a Deep Generative Model. Proceedings of the European Conference on Computer Vision. (ECCV 2020 oral)
Lucy Chai, David Bau, Ser-Nam Lim, and Phillip Isola. What makes fake images detectable? Understanding properties that generalize. Proceedings of the European Conference on Computer Vision. (ECCV 2020)
Steven Liu, Tongzhou Wang, David Bau, Jun-Yan Zhu, and Antonio Torralba. Diverse Image Generation via Self-Conditioned GANs. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (CVPR 2020)
David Bau, Jun-Yan Zhu, Jonas Wulff, William Peebles, Hendrik Strobelt, Bolei Zhou, and Antonio Torralba. Seeing What a GAN Cannot Generate. Proceedings of the IEEE International Conference on Computer Vision, pp. 4502-4511. (ICCV 2019 oral presentation)
David Bau, Hendrik Strobelt, William Peebles, Jonas Wulff, Bolei Zhou, Jun-Yan Zhu, and Antonio Torralba. Semantic Photo Manipulation with a Generative Image Prior. ACM Transactions on Graphics (TOG) 38, no. 4. (SIGGRAPH 2019)
Didac Suris, Adria Recasens, David Bau, David Harwath, James Glass, and Antonio Torralba. Learning words by drawing images. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (CVPR 2019)
David Bau, Jun-Yan Zhu, Hendrik Strobelt, Bolei Zhou, Joshua B. Tenenbaum, William T. Freeman, and Antonio Torralba. GAN Dissection: Visualizing and Understanding Generative Adversarial Networks. Proceedings of the Seventh International Conference on Learning Representations. (ICLR 2019)
David Weintrop, David Bau, and Uri Wilensky. The cloud is the limit: A case study of programming on the web, with the web. International Journal of Child-Computer Interaction 20. (IJCCI 2019)
Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael Specter, Lalana Kagal. Explaining Explanations: An Overview of Interpretability of Machine Learning. Proceedings of the IEEE 5th International Conference on Data Science and Advanced Analytics. (DSAA 2018)
Bolei Zhou, Yiyou Sun, David Bau, and Antonio Torralba. Interpretable Basis Decomposition for Visual Explanation. Proceedings of the European Conference on Computer Vision. (ECCV 2018)
David Bau, Bolei Zhou, Aditya Khosla, Aude Oliva, Antonio Torralba. Network Dissection: Quantifying Interpretability of Deep Visual Representations. Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017 oral presentation)
David Bau, Matt Dawson, Anthony Bau, C.S. Pickens. Pencil Code: Block Code for a Text World. Proceedings of the 14th International Conference on Interaction Design and Children. pp 445-448. (IDC 2015)
Ming Zhao, Jay Yagnik, Hartwig Adam, David Bau. Large Scale Learning and Recognition of Faces in Web Videos. 8th IEEE International Conference on Automatic Face and Gesture Recognition. (FG 2008)
David Bau, Induprakas Kodukula, Vladimir Kotlyar, Keshav Pingali, Paul Stodghill. Solving Alignment Using Elementary Linear Algebra. Languages and Compilers for Parallel Computing, Lecture Notes in Computer Science Volume 892, pp 46-60. (LCPC 1994)

Workshop papers

Sheridan Feucht, Byron C Wallace, David Bau. Inducing Induction in Llama via Linear Probe Interventions. The 7th BlackboxNLP Workshop at EMNLP (BlackboxNLP 2024).
Nicholas Vincent, David Bau, Sarah Schwettmann, Joshua Tan. An Alternative to Regulation: The Case for Public AI. Regulatable ML Workshop at NeurIPS (RegML 2023, NeurIPS 2023 Workshop).
Silen Naihin, David Atkinson, Marc Green, Merwane Hamadi, Craig Swift, Douglas Schonholtz, Adam Tauman Kalai, David Bau. Testing Language Model Agents Safely in the Wild. Socially Responsible Language Modelling Research workshop at NeurIPS (SoLaR 2023).
Xander Davies, Max Nadeau, Nikhil Prakash, Tamar Rott Shaham, David Bau. Discovering Variable Binding Circuitry with Desiderata. Workshop on Challenges in Deployable Generative AI (ICML 2023 Workshop)
David Bau, Steven Liu, Tongzhou Wang, Jun-Yan Zhu, Antonio Torralba. Horses With Blue Jeans - Creating New Worlds by Rewriting a GAN. 4th Workshop on Machine Learning for Creativity and Design (NeurIPS 2020 Workshop)
David Bau, Jun-Yan Zhu, Jonas Wulff, William Peebles, Hendrik Strobelt, Bolei Zhou, and Antonio Torralba. Inverting Layers of a Large Generator. ICLR Debugging Machine Learning Models Workshop. (ICLR 2019 workshop)
Jonathan Frankle, David Bau. Dissecting Pruned Neural Networks. ICLR Debugging Machine Learning Models Workshop. (ICLR 2019 workshop)
Saksham Aggarwal, David Anthony Bau, David Bau. A blocks-based editor for HTML code. IEEE Blocks and Beyond Workshop, pp. 83-85. (VL/HCC 2015 workshop)
David Bau, Anthony Bau. A Preview of Pencil Code: A Tool for Developing Mastery of Programming. Proceedings of the 2nd Workshop on Programming for Mobile & Touch. (PROMOTO 2014)

Book

Lloyd N. Trefethen, David Bau. Numerical Linear Algebra. (373pp.) Society for Industrial and Applied Mathematics. (1997)

Preprints

Jaden Fiotto-Kaufman, Alexander R Loftus, Eric Todd, Jannik Brinkmann, Caden Juang, Koyena Pal, Can Rager, Aaron Mueller, Samuel Marks, Arnab Sen Sharma, Francesca Lucchetti, Michael Ripa, Adam Belfki, Nikhil Prakash, Sumeet Multani, Carla Brodley, Arjun Guha, Jonathan Bell, Byron Wallace, David Bau. NNsight and NDIF: Democratizing Access to Foundation Model Internals. arxiv.org/abs/2407.14561 (2024)
Samuel Marks, Can Rager, Eric J Michaud, Yonatan Belinkov, David Bau, Aaron Mueller. Sparse feature circuits: Discovering and editing interpretable causal graphs in language models. arxiv.org/abs/2403.19647 (2024)
Aaron Mueller, Jannik Brinkmann, Millicent Li, Samuel Marks, Koyena Pal, Nikhil Prakash, Can Rager, Aruna Sankaranarayanan, Arnab Sen Sharma, Jiuding Sun, Eric Todd, David Bau, Yonatan Belinkov. The Quest for the Right Mediator: A History, Survey, and Theoretical Grounding of Causal Interpretability. arxiv.org/abs/2408.01416 (2024)
Adam Karvonen, Benjamin Wright, Can Rager, Rico Angell, Jannik Brinkmann, Logan Smith, Claudio Mayrink Verdun, David Bau, Samuel Marks. Measuring progress in dictionary learning for language model interpretability with board game models. arxiv.org/abs/2408.00113 (2024)
Koyena Pal, David Bau, Renée J Miller. Model Lakes. arxiv.org/abs/2403.02327 (2024)
Alex Andonian, Sabrina Osmany, Audrey Cui, Yeon-Hwan Park, Ali Jahanian, Antonio Torralba, David Bau. Paint by Word. arxiv.org/abs/2103.10951 (2021)

Selected Patents

David Bau, Google. Predictive hover triggering. US Patent 8621395. (2011)
David Bau, Gunes Erkan, O.A. Osman, Scott Safier, Conrad Lo, Google. Providing Images of Named Resources in Response to a Search Query. US Patent 8538943. (2008)
David Bau, Google. Determining Advertisements Using User Behavior Information Such as Past Navigation Information. WO Patent 2006039393. (2005)
David Bau. Method and System for Anonymous Login for Real Time Communications. US Patent 8725810. (2005)
David Bau, John Perlow, Google. Presenting Quick List of Contacts to Communication Application User. US Patent 8392836. (2005)
Rod Chavez, David Bau, Gary Burd, Google. Method and System for Managing Real-time Communications in an Email Inbox. US Patent 8577967. (2005)
Reza Behforooz, Gary Burd, David Bau, John Perlow, Google. Managing Presence Subscriptions for Messaging Services. US Patent 8751582. (2005)
David Bau, Google. User-Friendly Features for Real-Time Communications. US Patent 8095665. (2005)
Kyle Marvin, David Remy, David Bau, Rod Chavez, David Read, BEA Systems. Systems and Methods for Creating Network-Based Software Services Using Source Code Annotations. US Patent 7707564. (2004)
David Bau, BEA Systems. XML Types in Java. US Patent 7650591. (2004)
David Bau, Adam Bosworth, Gary Burd, Rod Chavez, Kyle Marvin, BEA Systems. Annotation Based Development Platform for Asynchronous Web Services. US Patent 7356803. (2002)
Andrei C, Adam Bosworth, David Bau, BEA Systems. Declarative Specification and Engine for Non-Isomorphic Data Mapping. US Patent 6859810. (2001)
Adam Bosworth, David Bau, K. Eric Vasilik, Oracle. Multi-Language Execution Method. US Patent 7266814. (2001)
Adam Bosworth, David Bau, K. Eric Vasilik, Oracle. Cell Based Data Processing. US Patent 8312429. (2000)

Invited Talks

Opening Keynote: Resilience and Human Understanding in AI. 7th Workshop on Visualization for AI Explainability. St. Pete Beach, Florida. October 2024.
Interpretability and Responsibility in AI. ECCV Workshop on Responsibly Building Generative Models. Milan, Italy. September 2024.
Knowledge as Association. ECCV Workshop on Knowledge in Generative Models. Milan, Italy. September 2024.
Three Perspectives on Unlearning. Workshop on Unlearning and Model Editing at ECCV. Milan, Italy. September 2024.
Closing Keynote: A Third Copernican Revolution. New England Mechanistic Interpretability Workshop (NEMI 2024). Boston, MA. August 2024.
The Fundamental Duality of Interpretability in ML. Mechanistic Interpretability Workshop at ICML 2024. Vienna, Austria. July 2024.
Resilience and Interpretability. FAR.AI Alignment Workshop. Vienna, Austria. July 2024.
Locating and Editing Functionality in Deep Networks. 8th Annual Center for Human-Compatible AI Workshop. Asilomar, CA. June 2024.
Functions, Facts, and a Fabric. Stanford NLP Seminar Series. Palo Alto, CA. June 2024.
Unlearning. GenLaw DC Workshop. Washington, DC. April 2024.
Breaking Open the Black Box of AI. AI in Action, Northeastern University, Boston, MA. April 2024.
A pragmatic approach to open standards for AI. AI Safety Institute, Department for Science, Innovation and Technology, London, UK. March 2024.
Flying Blind: making models we do not understand and what to do instead. Invited Talk at the Harvard AI Safety Offsite. Essex, MA. March 2024.
Direct Model Editing and Large Model Interpretability. Invited Talk at UC Berkeley CS 294-267, Understanding Large Language Models. Berkeley, CA. November 2023.
A Pivotal Moment in Machine Learning. Invited Talk at the Harvard AI Safety Offsite. Essex, MA. November 2023.
Causal Explanations, Direct Model Editing, and Real-World Impact. Invited Talk at the Toyota Technological Institute at Chicago. Chicago, IL. October 2023.
Three Ideas on Interpretability of Big ML Models. Invited Talk at the Machine Learning Department Seminar, Carnegie Mellon University. Pittsburgh, PA. October 2023.
Big Questions and Terrifying Tea. Invited Talk at the MIT CSAIL 20/60 Anniversary Celebration. Cambridge, MA. July 2023.
How Can we Avoid a Decades-Long Delay in Interpretable AI? Talk at the Workshop on Human-Level AI. Waltham, MA. June 2023.
Locating and Editing the Facts in a Big Network. Invited Talk at the Center for Human-Compatible AI Workshop. Asilomar, CA. June 2023.
AI Safety and the Need for National Deep Inference infrastructure. Invited Talk at the MIT/Harvard AI Safety Retreat. Essex, MA. March 2023.
The ROME and MEMIT methods. Invited Talk at the Google ML Research Seminar. Mountain View, CA. January 2023.
AI Alignment, Model Interpretation, and Direct Model Editing. Invited Talk at the Harvard AI Alignment Team. Cambridge, MA. January 2023.
The ROME method and Direct Model Editing. Invited Talk at the AstraZeneca ML Research Seminar. January 2023.
Machine Learning Interpretability and Direct Model Editing. Invited Talk at the MIT ML Interpretability Seminar. Cambridge, MA. January 2023.
Direct Model Editing and Mechanistic Interpretability. Keynote for BlackboxNLP at EMNLP. December 2022.
Direct Model Editing to Understand Model Knowledge. Keynote for Machine Learning Safety Workshop, at NeurIPS. New Orleans, LA. December 2022.
Causal Tracing in Vision and Language Models. Machine Learning Interpretability Research Group, University of California, Berkeley. November 2022.
Tracing and Editing Large Models. Keynote for Workshop on Trustworthy Machine Learning, UOM Sri Lanka, July 2022.
Direct Model Editing. Keynote for AI for Content Creation Workshop at CVPR June 2022.
Controlling Light in Generative Image Synthesis. AI Research Summit, Signify Research. Boston, MA. May 2022.
Advances in Generative Adversarial Networks. Invited Lecture, Northeastern University. April 2022.
Interpretable Deep Learning. Invited Lecture, Brown University Department of Computer Science. December 2021.
Mathematical Puzzles in Interpretable Deep Learning. Computational Maths and Applications Seminar, University of Oxford. October 2021.
Opening Up AI For Human Insight and Creativity. Keynote for Workshop on Measurements of Machine Creativity, at CVPR June 2021.
Cracking Open AI for New Insights. Keynote for Workshop on Analysis and Modeling of Faces, at CVPR June 2021.
Painting with the Neurons of a GAN. Invited lecture at MIT 6.S192: Deep Learning for Art. Cambridge, MA. January 2021.
Analyzing the Role of Neurons in an Artificial Neural Network. Kanwisher Lab Meeting, MIT Dept of Brain and Cognitive Sciences. Cambridge, MA. September 2020.
Cracking Open the Black Box. MIT-IBM Seminar Series. Cambridge, MA. September 2020.
Human Agency and Network Rules: Rewriting a Generative Network. Google Magenta Group Meeting. Mountain View, CA. September 2020.
GAN Paint and GAN Rewriting. Boston University Computer Vision Seminar. Boston, MA. September 2020.
Interacting with the Structure of a Deep Net: Rewriting the Rules of a GAN. Adobe Research. San Jose, CA. August 2020.
Reflected Light and Doors in the Sky: Rewriting GANs. Advances in Image Manipulation Workshop, ETH Zurich. Zurich, Switzerland. August 2020.
Dissecting and Modifying the Rules Inside a GAN. Computer Vision Seminar, Berkeley. Berkeley, CA. August 2020.
Creativity, Human Agency and Rewriting Deep Generative Models. Computer Graphics Seminar, Stanford University. Palo Alto, CA. August 2020.
Semantic Photo Manipulation using a GAN. RealTime Conference at SIGGRAPH, June 2020.
Explaining the Units of Classifiers and Generators in Vision. Computer Vision Seminar, Brown University. Providence, RI. April 2020.
Dissecting the Semantic Structure of Deep Networks for Vision. Explainable AI for Vision Workshop. Seoul, Korea. November 2019.
Dissecting and Manipulating Generative Adversarial Networks. Image Synthesis Workshop. Seoul, Korea. October 2019.
Exploring a Generator with GANDissect. GANocracy Workshop. Cambridge, MA. May 2019.
Understanding the Internal Structure of a GAN. Re-Work Deep Learning Summit. Boston, MA. May 2019.
Dissecting Artificial Neural Networks for Vision. Martinos Center for Biomedical Imaging. Boston, MA. April 2019.
Semantic Paint using a Generative Adversarial Network. Samsung/MIT Design Workshop. Cambridge, MA. April 2019.
Dissecting What a Generative Network Can Learn Unsupervised. DARPA XAI PI Meeting. Berkeley, CA. February 2019.
Interpretation of Deep Networks for Vision. Trustworthy and Robust AI Initiative. Cambridge, MA. February 2019.
On the Units of Generative Adversarial Networks. AAAI Workshop on Network Interpretability. Honolulu, HI. January 2019.
Explaining Explanations: Interpretation of Deep Neural Networks. Trust.ML Workshop on Public Policy Aspects of ML. Cambridge, MA. June 2018.

Organized Workshops

New England Mechanistic Interpretability Workshop.
Boston, MA. August 2024.
3rd Workshop on Human-AI Co-Creation with Generative Models.
Virtual, from Helsinki, Finland. March 2022.
Structure and Interpretation of Deep Networks, Workshop organizer.
Cambridge, MA. January 2020.
Explainable AI for Vision Workshop, Workshop organizer.
Seoul, Korea. November 2019.
GANocracy Workshop on the Theory, Practice, and Artistry of Deep Generative Modeling, Workshop organizer.
Cambridge, MA. May 2019.
Robust and Interpretable Deep Learning Symposium, Workshop organizer.
Cambridge, MA. November 2018.
Blocks and Beyond, Workshop organizer.
Memphis, TN. July 2017.
Teaching with Pencil Code, Workshop organizer.
Cambridge, MA. February 2014.

Students Supervised

Masters Theses

Christine You. Contrasting Contrastive and Supervised Model Representations.
Mahi Elango. Rewriting a Classification Model.
Brian Shimanuki. Joint GAN generation of text and images.
Richard Yip. Understanding What a Captioning Network Doesn't Know.

Undergraduate Research

Kevin Meng. Rewriting Facts in an Autoregressive Transformer Language Model.
Audrey Cui. Steerable GAN Paint for relighting a scene.
Brian Park. A synthetic data set for lighting control.
Sam Boshar. Interactive saliency maps.
Ben Gardner. Detecting novelty using calibrated uncertainty.
Tony Peng. Segmenting lighting in a scene.
Steven Liu. Self-conditioned Generative Adversarial Networks.
William Peebles. Semantic manipulation of a user-provided photo.
Kaveri Nadhamuni. A search for bug-causing neurons in classifiers.
Wendy Wei. Visualization of semantic clusters in a population of networks.
James Gilles. Analysis of representation similarity across vision networks.

Other Activities

Lincoln Middle School Math Team Coach. 2009-2015.
Lincoln Gear Ticks FLL Robotics Coach. 2012-2013.