Theodor Misiakiewicz
About me
I am a sixth-year Ph.D. candidate in Statistics at Stanford University, working with Prof. Andrea Montanari. Before my Ph.D., I completed my undergraduate studies in France at École Normale Supérieure de Paris, with a focus on pure mathematics and physics. I received a B.Sc. in Mathematics and a B.Sc. in Physics in 2014, and an M.Sc. in Theoretical Physics at ICFP in 2016.
My interests lie broadly at the intersection of statistics, machine learning, probability, and computer science. Lately, I have been focusing on the statistical and computational aspects of deep learning, and on the performance of kernel and random feature methods in high dimension. Some of the questions I am currently interested in: When can we expect neural networks to outperform kernel methods? When can neural networks beat the curse of dimensionality? Conversely, what are the computational limits of gradient-trained neural networks? What structures in real data allow for efficient learning? When is overfitting benign? How much overparametrization is optimal? When can we expect universal or non-universal behavior in empirical risk minimization?
Previous research projects led me to work on high-dimensional statistics (the inverse Ising problem), non-convex optimization (max-cut, community detection, synchronization), LDPC codes, the CMS experiment at the LHC (detection of the Higgs boson in the 2015 data), the D-Wave 2X (a quantum annealer), and massive gravity.
Here is a link to my Google Scholar account.
Research
My research interests include
Theory of deep learning: mean-field description and the neural tangent kernel
Kernel and random feature methods in high dimension (benign overfitting, multiple descent, structured kernels)
Non-convex optimization, implicit regularization, landscape analysis
Computational limits of learning with neural networks
Random matrix theory, high-dimensional probability
Full list of publications.
CV (last updated 2022).
