Theodor Misiakiewicz
About me
I am a sixth-year Ph.D. candidate in Statistics at Stanford University working with Prof. Andrea Montanari. Before my Ph.D., I completed my undergraduate studies in France at Ecole Normale Superieure de Paris with a focus on pure mathematics and physics. I received a B.Sc. in Mathematics and a B.Sc. in Physics in 2014, and an M.Sc. in Theoretical Physics at ICFP in 2016.
My interests lie broadly at the intersection of statistics, machine learning, probability, and computer science. Lately, I have been focusing on the statistical and computational aspects of deep learning, and on the performance of kernel and random feature methods in high dimension. Some of the questions I am currently interested in: When can we expect neural networks to outperform kernel methods? When can neural networks beat the curse of dimensionality? Conversely, what are the computational limits of gradient-trained neural networks? What structures in real data allow for efficient learning? When is overfitting benign? How much overparametrization is optimal? When can we expect universal or non-universal behavior in empirical risk minimization?
Previous research projects led me to work on high-dimensional statistics (the inverse Ising problem), non-convex optimization (max-cut, community detection, synchronization), LDPC codes, the CMS experiment at the LHC (detection of the Higgs boson in the 2015 data), the D-Wave 2X quantum annealer, and massive gravity.
Here is a link to my Google Scholar profile.
Research
My research interests include:
Theory of deep learning: mean-field description and the neural tangent kernel
Kernel and random feature methods in high dimension (benign overfitting, multiple descent, structured kernels)
Non-convex optimization, implicit regularization, landscape analysis
Computational limits of learning with neural networks
Random matrix theory, high-dimensional probability
Full list of publications.
CV (last updated 2022).