Some representative talks.

A talk from Fall 2023 on feature learning in neural networks, "Deep Learning Ansatz and Recursive Feature Machines" (at the Berkeley CLIMB/Neyman seminar).

A talk I gave at the MIT CBMM seminar in 2019, outlining the phenomenon of double descent, the issues with classical generalization bounds, and discussing interpolation as a new paradigm for machine learning. A complementary talk at the DeepMath 2020 conference concentrated on optimization in over-parameterized systems (through the PL condition) and the transition to linearity.

An older talk (from 2017) at the Simons Institute discussing kernel learning, efficient algorithms, and suggesting kernels as a model for deep learning.

Some older talks, including tutorials on manifold learning and semi-supervised learning given jointly with Partha Niyogi.

A short talk at COLT 2018 on the interesting tension between approximation and concentration for analyzing spectral properties of kernels.

Slides from my old talk at the NIPS 2002 workshop on Spectral Methods in Dimensionality Reduction, Clustering and Classification. Graph regularization was first introduced in that talk.