* indicates equal contribution or alphabetical author order.

The Local Landscape of Phase Retrieval Under Limited Samples. Kaizhao Liu, Zihao Wang, Lei Wu. arXiv preprint, 2023.

Achieving Margin Maximization Exponentially Fast via Progressive Norm Rescaling. Mingze Wang, Zeping Min, Lei Wu. arXiv preprint, 2023.

The noise geometry of stochastic gradient descent: A quantitative and analytical characterization. Mingze Wang, Lei Wu. arXiv preprint, 2023.

The L^∞ Learnability of Reproducing Kernel Hilbert Spaces. Hongrui Chen*, Jihao Long*, Lei Wu*. arXiv preprint, 2023.

Embedding Inequalities for Barron-type Spaces. Lei Wu. Journal of Machine Learning, 2023.

A duality framework for generalization analysis of random feature models and two-layer neural networks. Hongrui Chen*, Jihao Long*, Lei Wu*. arXiv preprint, 2023.

Theoretical Analysis of Inductive Biases in Deep Convolutional Networks. Zihao Wang*, Lei Wu*. NeurIPS 2023.

The Implicit Regularization of Dynamical Stability in Stochastic Gradient Descent. Lei Wu, Weijie J. Su. ICML 2023.

The alignment property of SGD noise and how it helps select flat minima: A stability analysis. Lei Wu, Mingze Wang, Weijie J. Su. NeurIPS 2022.

Beyond the quadratic approximation: The multiscale structure of neural network loss landscapes. Chao Ma, Daniel Kunin, Lei Wu, Lexing Ying. Journal of Machine Learning, 2022.

A spectral-based analysis of the separation between two-layer neural networks and linear methods. Lei Wu, Jihao Long. Journal of Machine Learning Research, 2022.

Learning a single neuron for non-monotonic activation functions. Lei Wu. AISTATS 2022.

A qualitative study of the dynamic behavior of adaptive gradient algorithms. Chao Ma*, Lei Wu*, Weinan E. Mathematical and Scientific Machine Learning (MSML), 2021.

Towards a mathematical understanding of neural network-based machine learning: What we know and what we don't. Weinan E*, Chao Ma*, Stephan Wojtowytsch*, Lei Wu*. CSIAM Trans. Appl. Math., 2020.

The quenching-activation behavior of the gradient descent dynamics for two-layer neural network models. Chao Ma*, Lei Wu*, Weinan E. arXiv preprint, 2020.

Machine learning based non-Newtonian fluid model with molecular fidelity. Huan Lei, Lei Wu, Weinan E. Physical Review E, 2020.

Machine learning from a continuous viewpoint, I. Weinan E*, Chao Ma*, Lei Wu*. Science China Mathematics, 2020.

The slow deterioration of the generalization error of the random feature model. Chao Ma*, Lei Wu*, Weinan E. Mathematical and Scientific Machine Learning (MSML), 2020.

The Barron space and flow-induced function spaces for neural network models. Weinan E*, Chao Ma*, Lei Wu*. Constructive Approximation, 2021.

Complexity measures for neural networks with general activation functions using path-based norms. Zhong Li, Chao Ma, Lei Wu. arXiv preprint, 2020.

A comparative analysis of the optimization and generalization property of two-layer neural network and random feature models under gradient descent dynamics. Weinan E*, Chao Ma*, Lei Wu*. Science China Mathematics, 2020.

Analysis of the gradient descent algorithm for a deep neural network model with skip-connections. Weinan E*, Chao Ma*, Lei Wu*. arXiv preprint, 2019.

The generalization error of minimum-norm solutions for over-parameterized neural networks. Weinan E*, Chao Ma*, Lei Wu*. Journal of Pure and Applied Functional Analysis, 2020.

Global convergence of gradient descent for deep linear residual networks. Lei Wu*, Qingcan Wang*, Chao Ma. NeurIPS 2019.

A priori estimates of the population risk for two-layer neural networks. Weinan E*, Chao Ma*, Lei Wu*. Communications in Mathematical Sciences, 2019.

The anisotropic noise in stochastic gradient descent: Its behavior of escaping from minima and regularization effects. Zhanxing Zhu, Jingfeng Wu, Bing Yu, Lei Wu, Jinwen Ma. ICML 2019.

How SGD selects the global minima in over-parameterized learning: A dynamical stability perspective. Lei Wu, Chao Ma, Weinan E. NeurIPS 2018.

Towards understanding and improving the transferability of adversarial examples in deep neural networks. Lei Wu, Zhanxing Zhu. Asian Conference on Machine Learning (ACML), 2020 [arXiv version].

Irreversible samplers from jump and continuous Markov processes. Yi-An Ma, Emily B. Fox, Tianqi Chen, Lei Wu. Statistics and Computing, 2018.

Towards understanding generalization of deep learning: Perspective of loss landscapes. Lei Wu, Zhanxing Zhu, Weinan E. Workshop on Principled Approaches to Deep Learning, ICML 2017.

Smoothed dissipative particle dynamics model for mesoscopic multiphase flows in the presence of thermal fluctuations. Huan Lei, Nathan A. Baker, Lei Wu, Gregory K. Schenter, Christopher J. Mundy, Alexandre M. Tartakovsky. Physical Review E, 2016.