
Loss function


Authors:

(1) Vladislav Trifonov, Skoltech ([email protected]);

(2) Alexander Rudikov, AIRI, Skoltech;

(3) Oleg Iliev, Fraunhofer ITWM;

(4) Ivan Oseledets, AIRI, Skoltech;

(5) Ekaterina Muravleva, Skoltech.

Table of Links

Abstract and 1 Introduction

2 Neural design of preconditioner

3 Learn correction for ILU and 3.1 Graph neural network with preserving sparsity pattern

3.2 PreCorrector

4 Dataset

5 Experiments

5.1 Experiment environment and 5.2 Comparison with classical preconditioners

5.3 Loss function

5.4 Generalization to different grids and datasets

6 Related work

7 Conclusion and further work, and References

Appendix

5.3 Loss function


As stated in Section 2, one should focus on approximating the low-frequency components. Table 2 shows that the proposed loss does indeed reduce the distance between the extreme eigenvalues compared to IC(0). Moreover, the gap between the extreme eigenvalues is closed by an increase in the minimum eigenvalue, which supports the hypothesis of low-frequency cancellation. The maximum eigenvalue also grows, but by a much smaller order of magnitude.
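The extreme eigenvalues reported in Table 2 can be estimated for any factorization with standard numerical tools. Below is a minimal sketch, assuming a symmetric positive definite sparse matrix A and a lower-triangular factor L (e.g., from IC(0) or from the learned correction); the helper name extreme_eigenvalues and the crude diagonal factor in the usage example are illustrative assumptions, not code from the paper.

```python
import numpy as np
import scipy.sparse as sp
from scipy.linalg import solve_triangular

def extreme_eigenvalues(A, L):
    """Extreme eigenvalues of L^{-1} A L^{-T}, which shares its spectrum
    with the preconditioned system (L L^T)^{-1} A. Dense computation,
    intended only for small diagnostic problems."""
    A_d = A.toarray() if sp.issparse(A) else np.asarray(A)
    L_d = L.toarray() if sp.issparse(L) else np.asarray(L)
    # Form M = L^{-1} A L^{-T} with two triangular solves (no explicit inverses).
    M = solve_triangular(L_d, A_d, lower=True)       # L^{-1} A
    M = solve_triangular(L_d, M.T, lower=True).T     # ... then apply L^{-T}
    eigvals = np.linalg.eigvalsh(M)                  # ascending, symmetric solver
    return eigvals[0], eigvals[-1]

# Usage: 1D Poisson matrix with a crude diagonal factor L = sqrt(diag(A)),
# standing in for the IC(0) / learned factors (both hypothetical here).
n = 100
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
L = sp.diags(np.sqrt(A.diagonal())).tocsr()
lam_min, lam_max = extreme_eigenvalues(A, L)
print(lam_min, lam_max)   # tiny lam_min, lam_max close to 2
```

For this crude diagonal factor the preconditioned matrix is simply A/2, so the minimum eigenvalue stays tiny; the behavior reported in Table 2 is the opposite, with the proposed loss raising the minimum eigenvalue toward the maximum and thus shrinking the condition number.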



This paper is available on arXiv under the CC BY 4.0 Deed (Attribution 4.0 International) license.

