Dataset


Authors:

(1) Vladislav Trifonov, Skoltech ([email protected]);

(2) Alexander Rudikov, AIRI, Skoltech;

(3) Oleg Iliev, Fraunhofer ITWM;

(4) Ivan Oseledets, AIRI, Skoltech;

(5) Ekaterina Muravleva, Skoltech.

Table of Links

Abstract and 1 Introduction

2 Neural design of preconditioner

3 Learn correction for ILU and 3.1 Graph neural network with preserving sparsity pattern

3.2 PreCorrector

4 Dataset

5 Experiments

5.1 Experiment environment and 5.2 Comparison with classical preconditioners

5.3 Loss function

5.4 Generalization to different grids and datasets

6 Related work

7 Conclusion and further work, and References

Appendix

4 Dataset

We want to validate our approach on data that addresses real-world problems. We consider the 2D diffusion equation:

−∇ · (k(x) ∇u(x)) = f(x),

where k(x) is the diffusion coefficient, u(x) is the solution and f(x) is a forcing term.


The diffusion equation is chosen because of its frequent appearance in many engineering applications, such as composite modeling Carr and Turner [2016], geophysical surveys Oristaglio and Hohmann [1984], and fluid flow modeling Muravleva et al. [2021]. In these cases, the coefficient functions are discontinuous, i.e., they change rapidly between neighbouring cells. An example is the flow of fluids with different viscosities.


We propose to measure the complexity of the dataset by the contrast of the coefficient function:

contrast(k) = max_x k(x) / min_x k(x).    (8)

Figure 2: The Gaussian random field coefficient k(x) for a 128 × 128 grid and variance 0.7.


The higher the contrast (8), the more CG iterations are required to reach the desired tolerance, and the more complex the dataset. The condition number of the resulting linear system depends on both the grid and the contrast, but high contrast is usually not taken into account.
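To make the contrast measure concrete, here is a minimal sketch, assuming the usual max/min ratio as the definition of contrast and counting CG iterations with SciPy; the helper names are ours and purely illustrative, not the authors' code.

```python
# Minimal sketch: contrast of a coefficient field and CG iteration counting.
# Assumes contrast = max(k) / min(k); helper names are illustrative only.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg


def contrast(k: np.ndarray) -> float:
    """Contrast of a strictly positive coefficient field."""
    return float(k.max() / k.min())


def cg_iterations(A, b, tol: float = 1e-8) -> int:
    """Number of CG iterations needed to reach the given relative tolerance."""
    count = 0

    def callback(_xk):
        nonlocal count
        count += 1

    # `rtol` requires SciPy >= 1.12; older versions use `tol=` instead.
    _, info = cg(A, b, rtol=tol, callback=callback)
    assert info == 0, "CG did not converge"
    return count


# Tiny usage example on a 1D Laplacian, just to exercise the helpers.
L = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(100, 100), format="csr")
print(cg_iterations(L, np.ones(100)))
```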


As the coefficient function in the diffusion equation we use a Gaussian random field (GRF), generated with the efficient implementation in the parafields library[1] (Figure 2). The forcing term f is sampled from the standard normal distribution, and each PDE is discretized with the 5-point finite difference method.
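A minimal sketch of how one sample could be produced is given below. It assumes a unit square with zero Dirichlet boundary and harmonic averaging of k at cell faces (details the text does not fix), and it replaces parafields with a simple NumPy stand-in GRF, since we do not reproduce that library's API here.

```python
# Sketch of one dataset sample. Assumptions (not fixed by the text): unit
# square domain, zero Dirichlet boundary, harmonic averaging of k at cell
# faces, and a NumPy stand-in for the parafields GRF generator.
import numpy as np
import scipy.sparse as sp


def grf_coefficient(n: int, variance: float, length_scale: float = 0.1,
                    seed: int = 0) -> np.ndarray:
    """Stand-in GRF: spectrally smoothed Gaussian noise, exponentiated so
    that k(x) > 0. The paper itself uses the parafields library."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal((n, n))
    fx = np.fft.fftfreq(n)[:, None]
    fy = np.fft.fftfreq(n)[None, :]
    kernel = np.exp(-0.5 * (fx**2 + fy**2) * (length_scale * n) ** 2)
    field = np.fft.ifft2(np.fft.fft2(white) * kernel).real
    field *= np.sqrt(variance) / field.std()
    return np.exp(field)


def assemble_5pt(k: np.ndarray) -> sp.csr_matrix:
    """5-point FD matrix for -div(k grad u) with zero Dirichlet BC."""
    n = k.shape[0]
    h2 = (1.0 / (n + 1)) ** 2
    rows, cols, vals = [], [], []
    for i in range(n):
        for j in range(n):
            row, diag = i * n + j, 0.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < n and 0 <= jj < n:
                    # harmonic mean of k across the shared cell face
                    kf = 2.0 * k[i, j] * k[ii, jj] / (k[i, j] + k[ii, jj])
                    rows.append(row)
                    cols.append(ii * n + jj)
                    vals.append(-kf / h2)
                else:
                    kf = k[i, j]  # boundary face: fall back to the cell value
                diag += kf / h2
            rows.append(row)
            cols.append(row)
            vals.append(diag)
    return sp.csr_matrix((vals, (rows, cols)), shape=(n * n, n * n))


n = 128
k = grf_coefficient(n, variance=0.7)
A = assemble_5pt(k)                                    # SPD system matrix
f = np.random.default_rng(1).standard_normal(n * n)    # forcing term ~ N(0, 1)
```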


We generate four datasets of different complexity for each grid size in {32, 64, 128}. The contrast of a dataset is controlled by the variance of the GRF coefficient function, which takes values in {0.1, 0.5, 0.7}. The datasets are discretized with the five-point finite difference stencil. Further details about the datasets can be found in Appendix A.1.
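A generation loop consistent with this description might look as follows, reusing grf_coefficient and assemble_5pt from the sketch above. The number of samples per dataset is an assumption, and only the three listed GRF variances are enumerated; the fourth dataset per grid mentioned above is not specified here.

```python
# Hypothetical generation loop over the grids and GRF variances listed above,
# reusing grf_coefficient and assemble_5pt from the previous sketch.
# The number of samples per dataset (100) is an assumption, not from the paper.
import numpy as np

datasets = {}
for n in (32, 64, 128):
    for variance in (0.1, 0.5, 0.7):
        samples = []
        for s in range(100):
            k = grf_coefficient(n, variance, seed=s)
            A = assemble_5pt(k)                                   # system matrix
            f = np.random.default_rng(s).standard_normal(n * n)   # right-hand side
            samples.append((A, f))
        datasets[(n, variance)] = samples
```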


This paper is available on arXiv under the CC BY 4.0 DEED (Attribution 4.0 International) license.


[1] https://github.com/parafields/parafield
