Authors:
(1) Vladislav Trifonov, Skoltech ([email protected]);
(2) Alexander Rudikov, AIRI, Skoltech;
(3) Oleg Iliev, Fraunhofer ITWM;
(4) Ivan Oseledets, AIRI, Skoltech;
(5) Ekaterina Muravleva, Skoltech.
Table of Links
2 Neural design of preconditioner
3 Learn correction for ILU and 3.1 Graph neural network with preserving sparsity pattern
5.1 Experiment environment and 5.2 Comparison with classical preconditioners
5.4 Generalization to different grids and datasets
7 Conclusion and further work, and References
3 Learn correction for ILU
Our main goal is to construct preconditioners that reduce the condition number of an SPD matrix more than classical preconditioners with the same sparsity pattern. Since we work with SPD matrices, ILU, ILU(p) and ILUt(p) reduce to the incomplete Cholesky factorizations IC, IC(p) and ICt(p).
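As a point of reference for the baseline, the sketch below (not from the paper; the test matrix, drop_tol and fill_factor values are illustrative assumptions) shows how a classical threshold-based incomplete factorization reduces the condition number of the preconditioned system compared with the original SPD matrix.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Stand-in SPD test problem: 2D Poisson matrix (assumption, not a dataset from the paper).
n = 20
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
I = sp.identity(n)
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()          # 400 x 400, SPD

# Threshold-based incomplete factorization M = L U ~ A (ILUt-like);
# drop_tol / fill_factor are illustrative choices.
ilu = spla.spilu(A, drop_tol=1e-2, fill_factor=1)

# Compare condition numbers of A and of the preconditioned matrix M^{-1} A.
A_dense = A.toarray()
M_inv_A = np.column_stack([ilu.solve(col) for col in A_dense.T])
print("cond(A)      :", np.linalg.cond(A_dense))
print("cond(M^-1 A) :", np.linalg.cond(M_inv_A))
```

The learned preconditioner targets the same setting: keep the sparsity pattern of the incomplete factor, but choose its entries so that the resulting condition number is lower than what the classical factorization achieves.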
3.1 Graph neural network with preserving sparsity pattern
Following the idea from Li et al. [2023], we use a GNN architecture Zhou et al. [2020] to preserve the sparsity pattern and predict the lower triangular matrix, creating a preconditioner in the form of an IC decomposition.
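A minimal message-passing sketch of this idea is given below. It assumes PyTorch; the class name, layer sizes, and update rule are illustrative and do not reproduce the exact architecture of Li et al. [2023]. The graph is defined by the lower-triangular sparsity pattern of A, and the network outputs one value per retained nonzero of L, so the predicted factor has exactly the IC sparsity pattern.

```python
import torch
import torch.nn as nn

class SparsityPreservingGNN(nn.Module):
    """Illustrative sketch: message passing over the graph induced by the
    lower-triangular sparsity pattern of A, predicting one value per nonzero of L."""

    def __init__(self, hidden: int = 32, rounds: int = 3):
        super().__init__()
        self.node_enc = nn.Linear(1, hidden)     # encodes diag(A)
        self.edge_enc = nn.Linear(1, hidden)     # encodes A[i, j]
        self.msg = nn.Linear(3 * hidden, hidden)
        self.upd = nn.GRUCell(hidden, hidden)
        self.edge_out = nn.Linear(3 * hidden, 1)
        self.rounds = rounds

    def forward(self, edge_index, edge_attr, node_attr):
        # edge_index: (2, nnz) row/col indices of tril(A), e.g. scipy.sparse.tril(A).nonzero()
        # edge_attr:  (nnz, 1) values A[i, j];  node_attr: (n, 1) diagonal of A
        src, dst = edge_index
        h = self.node_enc(node_attr)
        e = self.edge_enc(edge_attr)
        for _ in range(self.rounds):
            m = torch.relu(self.msg(torch.cat([h[src], h[dst], e], dim=-1)))
            agg = torch.zeros_like(h).index_add_(0, dst, m)   # sum incoming messages per node
            h = self.upd(agg, h)
        # One output per nonzero of tril(A): scattering these values back into that
        # pattern gives L, and P = L @ L.T serves as the learned IC-type preconditioner.
        return self.edge_out(torch.cat([h[src], h[dst], e], dim=-1)).squeeze(-1)
```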