
Experiment environment


Authors:

(1) Vladislav Trifonov, Skoltech ([email protected]);

(2) Alexander Rudikov, AIRI, Skoltech;

(3) Oleg Iliev, Fraunhofer ITWM;

(4) Ivan Oseledets, AIRI, Skoltech;

(5) Ekaterina Muravleva, Skoltech.

Table of Links

Abstract and 1 Introduction

2 Neural design of preconditioner

3 Learn correction for ILU and 3.1 Graph neural network with preserving sparsity pattern

3.2 PreCorrector

4 Dataset

5 Experiments

5.1 Experiment environment and 5.2 Comparison with classical preconditioners

5.3 Loss function

5.4 Generalization to different grids and datasets

6 Related work

7 Conclusion and further work, and References

Appendix

5.1 Experiment environment

5.2 Comparison with classical preconditioners

The proposed approach constructs better preconditioners as the complexity of the linear systems increases (Table 1). As the variance and/or grid size of the dataset grows, the PreCorrector IC(0) preconditioner, built on the IC(0) sparsity pattern, outperforms the corresponding vanilla preconditioner by up to a factor of 3. This effect is particularly important for the memory/efficiency trade-off. If one can afford the extra memory, the PreCorrector ICt(1) preconditioner yields a speed-up of up to a factor of 2 compared to ICt(1).


While it is not entirely fair to compare preconditioners with different densities, we observed that PreCorrector can outperform classical preconditioners with greater nnz values. The PreCorrector IC(0) outperforms ICt(1) by a factor of 1.2 − 1.5, meaning we achieve a better approximation P ≈ A with a smaller nnz value. Moreover, the effect of the PreCorrector ICt(1) preconditioner is comparable to that of the ICt(5) preconditioner, whose nnz value is 1.5 times larger than that of the initial matrix A (Appendix A.2).


The architecture of the PreCorrector makes the value of the correction coefficient α in (6) open to interpretation. In our experiments, the value of α is always negative, and its values cluster in the intervals [−0.135, −0.095] and [−0.08, −0.04]. More details on the values of the coefficient α can be found in Appendix A.3.
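Since equation (6) is not reproduced in this excerpt, the following is only a schematic of how such a scalar coefficient might enter a corrected factor: a hypothetical additive form L_corr = L + α·E(L), where E stands in for the learned correction (it is NOT the paper's GNN, just a random placeholder):

```python
import numpy as np

# Schematic of a corrected factor L_corr = L + alpha * E(L).
# E is a hypothetical stand-in for the learned correction; the exact
# form of eq. (6) is not shown in this excerpt.
rng = np.random.default_rng(0)
L = np.tril(rng.normal(size=(5, 5))) + 5 * np.eye(5)  # toy lower-tri factor
E = np.tril(rng.normal(scale=0.1, size=(5, 5)))       # placeholder correction
alpha = -0.1                                           # within the reported range
L_corr = L + alpha * E                                 # corrected factor
P = L_corr @ L_corr.T                                  # SPD preconditioner P ≈ A
```

A negative α means the learned term is subtracted with a small weight, i.e. the correction gently perturbs the classical factor rather than replacing it.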



This paper is available on arxiv under CC by 4.0 Deed (Attribution 4.0 International) license.

