Mechanistic Machine Learning: Theory, Methods, and Applications
2020-07-01 · Paris Perdikaris, Shaoqiang Tang
Recent advances in machine learning are influencing the way we gather data, recognize patterns, and build predictive models across a wide range of scientific disciplines. Notable successes include solutions in image and voice recognition that have already become part of our everyday lives, enabled mainly by algorithmic developments, hardware advances, and, of course, the availability of massive datasets. Many of these predictive tasks are currently tackled using over-parameterized, black-box discriminative models such as deep neural networks, in which theoretical rigor, interpretability, and adherence to first physical principles are often sacrificed in favor of flexibility in representation and scalability in computation.
While there is currently a lot of enthusiasm about big data, useful and reliable data are typically small, of variable fidelity, and expensive to acquire. We need new mathematical and computational paradigms, as well as broad and flexible frameworks, that can produce probabilistic predictions from the minimum amount of information, can be processed expeditiously, and are sufficiently accurate for decision making under uncertainty. The emerging area of "mechanistic" machine learning aims to address this capability gap by establishing a new interface between machine learning and computational mechanics that seeks to answer the following questions. (i) Can we discover parsimonious mechanistic models from data? (ii) Can we develop robust, efficient, and interpretable machine learning algorithms that are constrained by first physical principles? (iii) Can we construct predictive models that generalize/extrapolate accurately away from the observed data? (iv) Can we develop accurate surrogate and reduced-order models for complex multiscale and multi-physics systems? (v) Can we accelerate large-scale computational models and endow their predictions with reliable uncertainty estimates? (vi) Can we accelerate the exploration of large design spaces? Answering these questions defines a marriage between machine learning and computational mechanics and gives rise to new research directions that have the potential to accelerate the convergence of model-driven and data-driven discovery.
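To make question (ii) concrete, the sketch below illustrates one common way of constraining a neural network by first physical principles: a loss that combines a data-misfit term over a handful of observations with the residual of a governing equation, evaluated at collocation points via automatic differentiation. This is an illustrative sketch only (here for a one-dimensional Poisson problem), not drawn from any paper in this issue; the network architecture, point sets, and training parameters are hypothetical choices.

```python
# Illustrative sketch: physics-constrained training of a neural network.
# Target problem (hypothetical): u''(x) + pi^2 sin(pi x) = 0 on [0, 1],
# whose exact solution is u(x) = sin(pi x).
import torch

torch.manual_seed(0)

# Small fully connected network approximating u(x)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def pde_residual(x):
    """Residual of u'' + pi^2 sin(pi x) = 0, computed by automatic differentiation."""
    x = x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    return d2u + (torch.pi ** 2) * torch.sin(torch.pi * x)

# Scarce observations of u, plus collocation points where the physics is enforced
x_data = torch.tensor([[0.0], [0.5], [1.0]])
u_data = torch.sin(torch.pi * x_data)              # exact solution sampled at the data points
x_coll = torch.linspace(0.0, 1.0, 50).reshape(-1, 1)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss_data = torch.mean((net(x_data) - u_data) ** 2)   # fit the (small) data set
    loss_pde = torch.mean(pde_residual(x_coll) ** 2)      # penalize violation of the PDE
    loss = loss_data + loss_pde
    loss.backward()
    opt.step()
```

The physics term acts as a regularizer grounded in the governing equation, which is one mechanism by which such models can remain accurate even when the observed data alone would be far too scarce for a purely black-box fit.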
This special issue features methodological and applied contributions that put forth recent advances towards addressing the aforementioned challenges and propose new directions for synergistically combining prior knowledge from computational mechanics with modern tools from machine learning. The twelve featured research papers are written by experts in engineering mechanics and data-driven modeling, and span the following topics: (i) mechanistic model discovery from data; (ii) data-driven modeling and simulation of physical systems; (iii) model reduction and coarse-graining of multiscale systems; (iv) parameter identification and inverse problems; (v) uncertainty quantification; (vi) experimental design and active learning. These papers aim to help readers gain a basic understanding of the current capabilities and state of the art in mechanistic machine learning, and to expose them to a diverse collection of applications in engineering mechanics [1–11]. We would like to thank all the authors for contributing their work to this special issue.
Paris Perdikaris is an Assistant Professor in the Department of Mechanical Engineering and Applied Mechanics at the University of Pennsylvania. He received his PhD in Applied Mathematics from Brown University in 2015, working under the supervision of George Em Karniadakis. Prior to joining Penn in 2018, he was a post-doctoral researcher in the Department of Mechanical Engineering at the Massachusetts Institute of Technology, working on physics-informed machine learning and design optimization under uncertainty. His work spans a wide range of areas in computational science and engineering, with a particular focus on the analysis and design of complex physical and biological systems using machine learning, stochastic modeling, computational mechanics, and high-performance computing. Current research thrusts include physics-informed machine learning, uncertainty quantification in deep learning, engineering design optimization, and data-driven non-invasive medical diagnostics. His work and service have received several distinctions, including the DOE Early Career Award (2018), the AFOSR Young Investigator Award (2019), and the Ford Motor Company Award for Faculty Advising (2020).
Shaoqiang Tang graduated from the Hong Kong University of Science and Technology in 1995. He has been a faculty member in the Department of Mechanics and Engineering Science at Peking University since 1997, and its chair since 2018. His recent research focuses on the design and analysis of multiscale algorithms, including accurate artificial boundary treatments for concurrent multiscale simulations of crystalline solids, and data-driven clustering analysis for material homogenization. He has published more than 80 journal papers in computational mechanics and applied mathematics, as well as three textbooks. A devoted teacher, he was twice voted one of the Ten Best Teachers of Peking University by undergraduate students, and has received other teaching awards from Peking University and the Beijing municipality. Professor Tang serves on the editorial boards of several international journals, including Computational Mechanics, and as an Associate Editor-in-Chief of Mechanics in Engineering (in Chinese). He serves on several committees of the Chinese Society of Theoretical and Applied Mechanics, and was the founding director of the Ministry of Education Key Lab of High Energy Density Physics Simulations (2010-2017).