Mikhail (Misha) Belkin

HDSI Endowed Chair Professor in AI
Halicioglu Data Science Institute
Computer Science and Engineering (affiliated)
University of California San Diego

email: mbelkin at ucsd.edu    office: HDSI 424

In the past I have worked on a range of topics in manifold and semi-supervised learning. This includes Laplacian Eigenmaps, a method for dimensionality reduction, data representation, and visualization based on the geometry of the heat equation, as well as Graph Regularization and Manifold Regularization for semi-supervised learning. Other work includes spectral clustering, learning Gaussian mixture models and Stochastic Block Models, and generalizations of Independent Component Analysis.
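The core of Laplacian Eigenmaps can be sketched in a few lines: build a nearest-neighbor graph with heat-kernel weights, form the graph Laplacian, and embed the data using the bottom generalized eigenvectors. This is a minimal sketch for intuition (the neighborhood size and kernel bandwidth are arbitrary choices here), not the reference implementation:

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

def laplacian_eigenmaps(X, n_components=2, n_neighbors=5, t=1.0):
    """Embed points X of shape (n, d) into n_components dimensions."""
    n = X.shape[0]
    D2 = cdist(X, X, 'sqeuclidean')
    # k-nearest-neighbor adjacency with heat-kernel weights exp(-||x_i - x_j||^2 / t)
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(D2[i])[1:n_neighbors + 1]  # skip the point itself
        W[i, idx] = np.exp(-D2[i, idx] / t)
    W = np.maximum(W, W.T)                          # symmetrize the graph
    D = np.diag(W.sum(axis=1))                      # degree matrix
    L = D - W                                       # graph Laplacian
    # generalized eigenproblem L f = lambda D f; drop the constant eigenvector
    vals, vecs = eigh(L, D)
    return vecs[:, 1:n_components + 1]
```

The embedding coordinates are the eigenvectors with the smallest nonzero eigenvalues, which are the smoothest functions on the graph and so preserve local geometry.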

More recently, I have worked on remarkable phenomena in deep learning, including overparameterization and interpolation. We introduced the double descent phenomenon, which reconciles the classical bias-variance trade-off with the modern overparameterized regime, where models that interpolate the training data can still generalize well. A review of these and related results appeared in my paper in Acta Numerica, 2021 [arxiv].
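Double descent can be reproduced in a toy setting: as the number of random features in a minimum-norm least-squares fit grows past the number of training points, test error typically spikes near the interpolation threshold and then decreases again. The following is an illustrative sketch (the target function, noise level, and widths are arbitrary choices), not an experiment from the papers:

```python
import numpy as np

def test_error_vs_width(n_train=40, n_test=200, d=1,
                        widths=(5, 20, 40, 80, 400), seed=0):
    """Minimum-norm least squares on random ReLU features; returns {width: test MSE}."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1, 1, (n_train, d))
    Xt = rng.uniform(-1, 1, (n_test, d))
    y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(n_train)  # noisy targets
    yt = np.sin(3 * Xt[:, 0])                                     # clean test targets
    errs = {}
    for p in widths:
        W = rng.standard_normal((d, p))
        b = rng.uniform(-np.pi, np.pi, p)
        F = np.maximum(X @ W + b, 0)    # random ReLU features, train
        Ft = np.maximum(Xt @ W + b, 0)  # random ReLU features, test
        # pinv gives the minimum-norm least-squares solution,
        # which interpolates the training data once p is large enough
        w = np.linalg.pinv(F) @ y
        errs[p] = float(np.mean((Ft @ w - yt) ** 2))
    return errs
```

With the number of training points at 40, the width-40 model sits at the interpolation threshold, where the test error is typically worst; the heavily overparameterized width-400 model usually does better again.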

In the last few years I have concentrated on understanding feature learning in machine learning models. In particular, we have developed a framework for understanding feature learning in neural networks, as well as Recursive Feature Machines (RFMs), a feature-learning variant of classical kernel methods. We have been exploring their implications for the representation of concepts in AI models and for steering model output, in connection with the linear representation hypothesis in deep learning.
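The basic RFM loop alternates between fitting a kernel predictor with a Mahalanobis-type kernel and updating a feature matrix M by the average gradient outer product (AGOP) of that predictor. A minimal sketch, using a Gaussian kernel and kernel ridge regression for simplicity (the published method uses a Laplace kernel; all parameter names here are illustrative):

```python
import numpy as np

def rfm_feature_matrix(X, y, n_iters=3, reg=1e-3, bandwidth=2.0):
    """Sketch of the RFM loop: kernel ridge regression with a Mahalanobis
    Gaussian kernel, alternated with an AGOP update of the feature matrix M."""
    n, d = X.shape
    M = np.eye(d)
    for _ in range(n_iters):
        diffs = X[:, None, :] - X[None, :, :]                 # (n, n, d) pairwise differences
        sq = np.einsum('ijk,kl,ijl->ij', diffs, M, diffs)     # (x_i - x_j)^T M (x_i - x_j)
        K = np.exp(-sq / (2 * bandwidth ** 2))                # Mahalanobis Gaussian kernel
        alpha = np.linalg.solve(K + reg * np.eye(n), y)       # kernel ridge coefficients
        # gradient of f(x) = sum_j alpha_j K(x, x_j), evaluated at each training point x_i
        G = -np.einsum('ij,ijk,j->ik', K, diffs @ M, alpha) / bandwidth ** 2
        M = (G.T @ G) / n                                     # average gradient outer product
        M /= np.trace(M) + 1e-12                              # fix the overall scale
    return M
```

The learned M reweights input directions by how much the predictor depends on them, playing a role analogous to the features learned by a neural network's first layer; directions irrelevant to the target are suppressed in subsequent rounds of fitting.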

Key Scientific Papers and Results  [All papers in Google Scholar]

Implications and thoughts on AI

Short pieces for broader audiences.

From the blog Data, Machine Learning and AI