I am a fourth-year Ph.D. candidate in Statistics at Stanford University, working with Andrea Montanari. Before my Ph.D., I completed my undergraduate studies in France at the École Normale Supérieure de Paris, with a focus on pure mathematics and physics. I received a B.Sc. in Mathematics and a B.Sc. in Physics in 2014, and an M.Sc. in Theoretical Physics at ICFP in 2016.
My interests lie broadly at the intersection of information theory, statistical physics, statistics, and probability. Lately, I have been focusing on the theory of deep learning. Specifically, I am interested in the mean-field description of neural networks and the connection, in certain regimes, between neural networks trained by gradient descent and kernel methods.
Previous research projects led me to work on high-dimensional statistics (the inverse Ising problem), non-convex optimization (max-cut, community detection, synchronization), LDPC codes, the CMS experiment at the LHC (detection of the Higgs boson in the 2015 data), the D-Wave 2X quantum annealer, and massive gravity.
Here is a link to my Google Scholar profile.
- 02/2021: we posted our pre-print Learning with invariances in random features and kernel models on arXiv.
- 12/2020: our discussion of "Nonparametric regression using deep neural networks with ReLU activation function" is out!
- 12/2020: we presented When Do Neural Networks Outperform Kernel Methods? at NeurIPS 2020. Here is the poster summarizing the paper.
- 09/2019: our pre-print on Limitations of Lazy Training of Two-layers Neural Networks was accepted at NeurIPS 2019. Here are the slides presented at the spotlight. Here is the poster summarizing the paper.