This page is updated irregularly! Here is a link to my Google Scholar profile.
Preprints:
Dimension-free deterministic equivalents for random feature regression. L. Defilippis, B. Loureiro, T. Misiakiewicz. Preprint, 2024.
A non-asymptotic theory of Kernel Ridge Regression: deterministic equivalents, test error, and GCV estimator. T. Misiakiewicz, B. Saeed. Preprint, 2024.
Asymptotics of random feature regression beyond the linear scaling regime. H. Hu, Y. M. Lu, T. Misiakiewicz. Preprint, 2024.
Six lectures on linearized neural networks. T. Misiakiewicz, A. Montanari. Preprint, 2023.
Spectrum of inner-product kernel matrices in the polynomial regime and multiple descent phenomenon in kernel ridge regression. T. Misiakiewicz. Preprint, 2022.
Minimum complexity interpolation in random features models. M. Celentano, T. Misiakiewicz, A. Montanari. Preprint, 2021.
Conferences:
SGD learning on neural networks: leap complexity and saddle-to-saddle dynamics. E. Abbe, E. Boix-Adsera, T. Misiakiewicz. Conference on Learning Theory (COLT), 2023.
Precise Learning Curves and Higher-Order Scalings for Dot-product Kernel Regression. L. Xiao, J. Pennington, T. Misiakiewicz, H. Hu, Y. M. Lu. NeurIPS, 2022. (Merged with paper)
Learning with convolution and pooling operations in kernel methods. T. Misiakiewicz, S. Mei. NeurIPS, 2022.
The merged-staircase property: a necessary and nearly sufficient condition for SGD learning of sparse functions on two-layer neural networks. E. Abbe, E. Boix-Adsera, T. Misiakiewicz. Conference on Learning Theory (COLT), 2022.
Learning with invariances in random features and kernel models. S. Mei, T. Misiakiewicz, A. Montanari. Conference on Learning Theory (COLT), 2021.
When do Neural Networks Outperform Kernel Methods? B. Ghorbani, S. Mei, T. Misiakiewicz, A. Montanari. NeurIPS, 2020.
Limitations of Lazy Training of Two-layers Neural Networks. B. Ghorbani, S. Mei, T. Misiakiewicz, A. Montanari. NeurIPS, 2019.
Mean-field theory of two-layers neural networks: dimension-free bounds and kernel limit. S. Mei, T. Misiakiewicz, A. Montanari. Conference on Learning Theory (COLT), 2019.
Solving SDPs for synchronization and MaxCut problems via the Grothendieck inequality. S. Mei, T. Misiakiewicz, A. Montanari. Conference on Learning Theory (COLT), 2017.
Concentration to zero bit-error probability for regular LDPC codes on the binary symmetric channel: Proof by loop calculus. M. Vuffray, T. Misiakiewicz. Proceedings of the 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton, 2015).
Journals:
Generalization error of random features and kernel methods: hypercontractivity and kernel matrix concentration. S. Mei, T. Misiakiewicz, A. Montanari. Applied and Computational Harmonic Analysis, Vol 59, 3-84, Special Issue on Harmonic Analysis and Machine Learning, 2022. (arXiv)
When do Neural Networks Outperform Kernel Methods? B. Ghorbani, S. Mei, T. Misiakiewicz, A. Montanari. Journal of Statistical Mechanics: Theory and Experiment, December 2021.
Linearized two-layers neural networks in high dimension. B. Ghorbani, S. Mei, T. Misiakiewicz, A. Montanari. Annals of Statistics, Vol 49, 1029-1054, April 2021. (arXiv)
Discussion of “Nonparametric regression using deep neural networks with ReLU activation function”. B. Ghorbani, S. Mei, T. Misiakiewicz, A. Montanari. Annals of Statistics, Vol 48, 1898-1901, 2020.
Undergraduate and graduate work:
Learning with neural networks in high dimensions. T. Misiakiewicz. PhD dissertation, Department of Statistics, Stanford University, 2023. This work received the Theodore W. Anderson Theory of Statistics Award, Stanford University.
Efficient reconstruction of transmission probabilities in a spreading process from partial observations. A. Lokhov, T. Misiakiewicz. Preprint, 2015.
Estimation of the reducible background with the SS method in the Higgs boson decay channel to four leptons. T. Misiakiewicz. Master's thesis, 2016. Laboratoire Leprince-Ringuet (LLR), CMS experiment at the Large Hadron Collider (LHC), Geneva.
Application of Graphical Models to Decoding and Machine Learning. T. Misiakiewicz. Work done at the Center for Nonlinear Studies (CNLS), Los Alamos National Laboratory (LANL), New Mexico, 2015.
Gravitational waves in massive gravity theory. T. Misiakiewicz. Astroparticle and Cosmology Laboratory (APC), Cosmology and Gravitation theory group. Advisor: Prof. Daniele Steer (2014).