Teaching

  • Simons Institute (Fall 2021): organized a reading group on the mean-field description of neural networks (link). (Handwritten notes).

  • Teaching assistant at the Deep Learning Theory Summer School, Princeton, Summer 2021 (link).

  • NSF collaboration reading group: handwritten notes on breaking the curse of dimensionality with neural networks.

  • Teaching assistant in the Statistics Department, Stanford (2017-2022):

    • STATS 310A: Theory of Probability I (Fall 2021).

    • MATH 20: Introduction to Calculus (Summer 2021).

    • STATS 375: Mathematical Problems in Machine Learning (Spring 2021).

    • STATS 300C: Theory of Statistics III (Spring 2021).

    • STATS 216: Introduction to Statistical Learning (Winter 2021).

    • STATS 200: Introduction to Statistical Inference (Fall 2020, Winter 2018).

    • STATS 116: Theory of Probability (Spring 2020, Fall 2018).

    • STATS 221: Random Processes on Graphs and Lattices (Winter 2020).

    • STATS 110: Statistical Methods in Engineering and the Physical Sciences (Fall 2019).

    • STATS 310C: Theory of Probability III (Spring 2019).

    • STATS 202: Data Mining and Analysis (Summer 2018).

    • STATS 101: Data Science 101 (Fall 2017).

Talks, Workshops, and Conferences

  • PDE Methods in Machine Learning: from Continuum Dynamics to Algorithms, Granada, Spain (June 2024).

  • New statistical and computational phenomena from Deep Learning, TTIC, Chicago (January 2023).

  • Learning with convolution and pooling operations in kernel methods, NeurIPS 2022 (December 2022).

  • ML theory seminar (alg-ml) at Princeton (November 2022).

  • Learning sparse functions with SGD on neural networks, INFORMS 2022, Topics in Theory of Neural Networks (October 2022).

  • Computational aspects of learning sparse functions with neural networks, Information Systems Laboratory (ISL) Colloquium, Stanford (October 2022).

  • The merged-staircase property: a necessary and nearly sufficient condition for SGD learning of sparse functions on two-layer neural networks, COLT 2022 (July 2022). With the amazing Enric Boix-Adsera. Slides.

  • Learning sparse functions in the mean-field regime, Learning: Optimization and Stochastics (Summer Research Institute 2022), EPFL (June 2022). Slides.

  • When do neural networks outperform kernel methods? Are kernel methods doomed?, ELLIS reading group on Mathematics of Deep Learning (June 2022). Slides.

  • Tutorial: Benign Overfitting, Double Descent and RKHS ridge regression made easy!, New Interactions Between Statistics and Optimization workshop, BIRS, Banff (May 2022). Slides.

  • Learning staircases: deep learning takes a staircase to heaven, Deep Learning Theory Symposium workshop, Simons Institute, Berkeley (December 2021). Video.

  • Learning with invariances in random features and kernel models. Conference on Learning Theory (August 2021). Video. Slides. Poster.

  • Minimum Complexity Interpolation in Random Features Models. Youth in High Dimensions, Trieste (June 2021). Video. Slides.

  • Learning structured data with random features and kernel methods. ML lunch, Stanford (April 2021). Slides.

  • Learning structured data with random features and kernel methods. NSF-Simons Journal Club (April 2021).

  • When Do Neural Networks Outperform Kernel Methods? G-Research (February 2021).

  • When Do Neural Networks Outperform Kernel Methods? NeurIPS 2020. Video. Poster.

  • ML Foundations seminar, Microsoft Research (November 2020).

  • MoDL (NSF collaboration) (October 2020).

  • Limitations of lazy training in two-layer neural networks. NeurIPS 2019. Slides. Poster.