Tutorials

  • Lecture notes with Andrea Montanari: Six lectures on linearized neural networks. Based on lectures given by A. Montanari at the “Deep Learning Theory Summer School”, Princeton (2021), and at the summer school “Statistical Physics & Machine Learning”, Les Houches School of Physics (2022).

  • Simons Institute (Fall 2021): organized a reading group on the mean-field description of neural networks (link). (Handwritten notes).

  • NSF collaboration reading group: handwritten notes on breaking the curse of dimensionality with neural networks.

Talks, Workshops and Conferences

  • JSM, Nashville (August 2025).

  • Statistics Seminar, Toulouse School of Economics (June 2025).

  • Random matrices and high-dimensional learning dynamics, CRM, Montreal (June 2025).

  • Lausanne Event on Machine Learning and Neural Network Theory (LEMAN-TH 2025), EPFL (May 2025).

  • Physics of AI algorithms, Les Houches (January 2025).

  • Statistics Seminar, Cornell (November 2024).

  • Wilks Memorial Seminar, Princeton (October 2024).

  • INFORMS, Seattle (October 2024).

  • Big Data and Artificial Intelligence in Econometrics, Finance, and Statistics, Stevanovich Center, UChicago (October 2024).

  • PDE Methods in Machine Learning: from Continuum Dynamics to Algorithms, Granada, Spain (June 2024).

  • Workshop on Statistical Inference and Learning Dynamics, IDEAL, Northwestern (May 2024).

  • MoDL meeting, San Diego (May 2024).

  • Random Structures, Computation, and Statistical Inference, AMS sectional meeting, San Francisco (May 2024).

  • Research at TTIC seminar, TTIC (April 2024).

  • NSF TRIPODS II Meeting (April 2024).

  • Statistics seminar, Yale University (April 2023).

  • New statistical and computational phenomena from Deep Learning, TTIC, Chicago (January 2023).

  • Statistics seminar, University of Chicago (January 2023).

  • Learning with convolution and pooling operations in kernel methods, NeurIPS 2022 (December 2022).

  • ML theory seminar (alg-ml) at Princeton (November 2022).

  • Learning sparse functions with SGD on neural networks, INFORMS 2022, Topics in Theory of Neural Networks (October 2022).

  • Computational aspects of learning sparse functions with neural networks, Information Systems Laboratory (ISL) Colloquium, Stanford (October 2022).

  • The merged-staircase property: a necessary and nearly sufficient condition for SGD learning of sparse functions on two-layer neural networks, COLT 2022 (July 2022). With the amazing Enric Boix-Adsera. Slides.

  • Learning sparse functions in the mean-field regime, Learning: Optimization and Stochastics (Summer Research Institute 2022), EPFL (June 2022). Slides.

  • When do neural networks outperform kernel methods? Are kernel methods doomed?, ELLIS reading group on Mathematics of Deep Learning (June 2022). Slides.

  • Tutorial: Benign Overfitting, Double Descent and RKHS ridge regression made easy!, New Interactions Between Statistics and Optimization workshop, BIRS, Banff (May 2022). Slides.

  • Learning staircases: deep learning takes a staircase to heaven, Deep Learning Theory Symposium workshop, Simons Institute, Berkeley (December 2021). Video.

  • Learning with invariances in random features and kernel models. Conference on Learning Theory (August 2021). Video. Slides. Poster.

  • Minimum Complexity Interpolation in Random Features Models. Youth in High Dimensions, Trieste (June 2021). Video. Slides.

  • Learning structured data with random features and kernel methods. ML lunch, Stanford (April 2021). Slides.

  • Learning structured data with random features and kernel methods. NSF-Simons Journal Club (April 2021).

  • When Do Neural Networks Outperform Kernel Methods? G-Research (February 2021).

  • When Do Neural Networks Outperform Kernel Methods? NeurIPS 2020. Video. Poster.

  • ML Foundations seminar, Microsoft Research (November 2020).

  • MoDL (NSF collaboration) (October 2020).

  • Limitations of Lazy Training in two-layer neural networks. NeurIPS 2019. Slides. Poster.