TY - JOUR
T1 - Learning-based local weighted least squares for algebraic multigrid method
AU - Wang, Fan
AU - Gu, Xiang
AU - Sun, Jian
AU - Xu, Zongben
N1 - Publisher Copyright:
© 2023 Elsevier Inc.
PY - 2023/11/15
Y1 - 2023/11/15
N2 - Algebraic multigrid (AMG) is an effective iterative algorithm for solving large-scale linear systems. One challenge in constructing an AMG algorithm is determining the prolongation operator, which affects the convergence rate of AMG and is problem-dependent. In this paper, we propose a new Learning-based Local Weighted Least Squares (L-LWLS) method to construct the prolongation operator of AMG. Specifically, we construct the prolongation operator by solving the LWLS model with learned spatially varying weights. We optimize the model using gradient descent with a learned initialization of the solution. The constructed prolongation operator is then further corrected by a learned correction function to improve the convergence rate of AMG. We conduct experiments on solving graph Laplacian linear systems, diffusion partial differential equations, and Helmholtz equations. The experiments show that the proposed method constructs a better prolongation operator, leading to a faster convergence rate than the compared methods, including classical AMG, smoothed aggregation AMG, bootstrap AMG, and a learning-based AMG method. The results also show that the proposed method generalizes well across different parameter distributions and problem sizes, i.e., the number of variables in the linear system.
AB - Algebraic multigrid (AMG) is an effective iterative algorithm for solving large-scale linear systems. One challenge in constructing an AMG algorithm is determining the prolongation operator, which affects the convergence rate of AMG and is problem-dependent. In this paper, we propose a new Learning-based Local Weighted Least Squares (L-LWLS) method to construct the prolongation operator of AMG. Specifically, we construct the prolongation operator by solving the LWLS model with learned spatially varying weights. We optimize the model using gradient descent with a learned initialization of the solution. The constructed prolongation operator is then further corrected by a learned correction function to improve the convergence rate of AMG. We conduct experiments on solving graph Laplacian linear systems, diffusion partial differential equations, and Helmholtz equations. The experiments show that the proposed method constructs a better prolongation operator, leading to a faster convergence rate than the compared methods, including classical AMG, smoothed aggregation AMG, bootstrap AMG, and a learning-based AMG method. The results also show that the proposed method generalizes well across different parameter distributions and problem sizes, i.e., the number of variables in the linear system.
KW - Algebraic multigrid method
KW - Deep learning
KW - Local weighted least squares model
KW - Prolongation operator
UR - https://www.scopus.com/pages/publications/85170285817
U2 - 10.1016/j.jcp.2023.112437
DO - 10.1016/j.jcp.2023.112437
M3 - Article
AN - SCOPUS:85170285817
SN - 0021-9991
VL - 493
JO - Journal of Computational Physics
JF - Journal of Computational Physics
M1 - 112437
ER -