David Bau

Interpretable Neural Networks

Northeastern University Khoury College of Computer Sciences

Why we study deep network internals. David Bau (narrates), with Antonio Torralba, Jun-Yan Zhu, Hendrik Strobelt, Jonas Wulff, and William Peebles. Video by Lillie Paquette, MIT School of Engineering.

When we study artificial neural networks, we have the luxury of looking at everything that is going on. This presents us with a new opportunity to tackle the grand and fundamental research problem of understanding how cognition works.

This video interview dates from my dissertation work on GANs, but as neural networks have grown more powerful, so has the urgency of understanding, anticipating, and controlling these black-box systems. Our lab currently focuses on understanding the structure of large generative models, including LLMs and diffusion models.

Want to come to Boston to work on deep learning with me? Apply to Khoury here, and contact me if you are interested in joining as a graduate student or postdoc.
