Some talks.

A recent talk I gave at MIT, outlining the phenomenon of double descent and discussing interpolation as a new paradigm for machine learning.
Another talk, at the DeepMath 2020 conference, on optimization in over-parameterized systems and the transition to linearity.

An older talk at the Simons Institute discussing kernel learning, efficient algorithms, and suggesting kernels as a model for deep learning.

Some talks, including tutorials on manifold learning and semi-supervised learning given with Partha Niyogi.

Slides from my old talk at the NIPS 2002 workshop on Spectral Methods in Dimensionality Reduction, Clustering and Classification. Graph regularization was first introduced in that talk.

A short talk at COLT 2018 on the tension between approximation and concentration in analyzing spectral properties of kernels.