Applied Math Seminar Series: Zecheng Zhang, Florida State University
Operator Learning: Neural Scaling and Distributed Applications
In this talk, we will focus on operator learning, a framework for approximating mappings between function spaces with broad applications in PDE-related problems. We will begin by discussing the mathematical foundations of operator approximation, which inform the design of neural network architectures and provide a basis for analyzing the performance of trained models on test samples. Specifically, we will introduce the neural scaling law, which characterizes error convergence in relation to network size and generalization error in relation to training dataset size. Building on these theoretical insights, we will present a distributed learning algorithm based on the theoretical architectures to address two key computational challenges: (1) efficiently handling heterogeneous problems, where input functions exhibit vastly different properties, and (2) multi-operator learning, where a single network model approximates multiple operators simultaneously so that the model can extrapolate and rapidly adapt to new problems.
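To make the setting concrete, here is a minimal sketch of one common operator-learning architecture, a DeepONet-style branch-trunk network: a branch net encodes the input function from its values at fixed sensor points, a trunk net encodes a query location, and their inner product approximates the operator output G(u)(y). This is an illustrative, untrained toy (the talk does not commit to this architecture); all layer sizes, variable names, and the choice of NumPy are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: m sensor points, p branch/trunk features, one hidden layer.
m, p, hidden = 32, 16, 64

def mlp_params(sizes):
    """Random (untrained) weights for a small fully connected network."""
    return [(rng.standard_normal((a, b)) / np.sqrt(a), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    """Forward pass with tanh activations on all but the last layer."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

branch = mlp_params([m, hidden, p])   # encodes u sampled at x_1, ..., x_m
trunk = mlp_params([1, hidden, p])    # encodes the query location y

def G(u_sensors, y):
    """Approximate G(u)(y) as the inner product of branch and trunk features."""
    return float(mlp(branch, u_sensors) @ mlp(trunk, np.array([y])))

# Example: evaluate the (untrained) surrogate on u(x) = sin(2*pi*x) at y = 0.5.
xs = np.linspace(0.0, 1.0, m)
out = G(np.sin(2 * np.pi * xs), 0.5)
print(out)
```

In the multi-operator setting discussed in the talk, one such shared model would be conditioned on (or distributed across) several operators rather than trained for a single one.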
A short bio:
Dr. Zecheng Zhang received his BSc from the Department of Mathematics at Hong Kong Baptist University. He then received full sponsorship to pursue an MSc in Mathematics (supervisors: Professor Yaushu Wong and Professor Peter Minev) at the Department of Mathematics at the University of Alberta. He completed his Ph.D. in Mathematics under the supervision of Professor Yalchin Efendiev and Professor Eric Chung at Texas A&M University, graduating in 2021. Following graduation, he joined the Department of Mathematics at Purdue as a visiting assistant professor under the supervision of Professor Guang Lin (Purdue). Subsequently, he moved to the Department of Mathematics at Carnegie Mellon University as a postdoc with Professor Hayden Schaeffer (UCLA). In August 2023, he joined the Department of Mathematics at Florida State University as an assistant professor.