This paper is available on arXiv under a CC 4.0 license.
Authors:
(1) Amir Noorizadegan, Department of Civil Engineering, National Taiwan University;
(2) D.L. Young, Core Tech System Co. Ltd, Moldex3D, Department of Civil Engineering, National Taiwan University & [email protected];
(3) Y.C. Hon, Department of Mathematics, City University of Hong Kong;
(4) C.S. Chen, Department of Civil Engineering, National Taiwan University & [email protected].
In this study, we propose a power-enhanced variant of the ResNet that skips every other layer, denoted as the “Power-Enhanced SkipResNet.” The modification involves altering the recursive definition in (9) as follows:
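The altered recursion itself did not survive extraction. A plausible reconstruction, taking (9) to be the standard ResNet update $y_{i+1} = \sigma(W_{i+1} y_i + b_{i+1}) + y_i$ and applying the description above (a skip connection after every second hidden layer, with the carried state raised to a power $p$), is:

$$
\begin{aligned}
y_{i+1} &= \sigma(W_{i+1}\, y_i + b_{i+1}),\\
y_{i+2} &= \sigma(W_{i+2}\, y_{i+1} + b_{i+2}) + y_i^{\,p}, \qquad i = 0, 2, 4, \ldots,
\end{aligned}
$$

where $p = 2$ yields the SQR-SkipResNet used in the comparisons below. The exact indexing in the paper's equation may differ; this is a sketch based on the surrounding definitions.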
For comparison among the Plain NN, ResNet, and SQR-SkipResNet (Figs. 2(a)-(c), respectively), we evaluate the output of the third hidden layer with respect to the input y_0 = X. For the plain neural network, this output is as follows:
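The expression itself was lost in extraction; for the plain network it is the straightforward three-fold composition (the weight and bias indexing below follows the recursion sketched above and is an assumption):

$$
y_3 = \sigma\big(W_3\,\sigma(W_2\,\sigma(W_1\, y_0 + b_1) + b_2) + b_3\big).
$$

The ResNet and SQR-SkipResNet counterparts additionally carry identity skip terms ($+\,y_i$) at every layer and squared skip terms ($+\,y_i^{2}$) at every other layer, respectively.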
Figure 2(d) visually represents the “expression tree” for the case p = 2, illustrating the flow of data from input to output. The graph shows that the data can traverse multiple paths, each representing a distinct configuration that determines which residual modules are entered and which are skipped.
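For readers who want to trace this data flow in code, below is a minimal PyTorch sketch of such a forward pass. The class name, layer widths, and tanh activation are our assumptions for illustration, not the paper's reference implementation:

```python
import torch
import torch.nn as nn


class PowerSkipResNet(nn.Module):
    """Sketch of a power-enhanced skip network (SQR-SkipResNet for p=2):
    the carried state is added back after every second hidden layer,
    raised elementwise to the power p."""

    def __init__(self, dim_in=1, width=20, n_hidden=6, dim_out=1, p=2):
        super().__init__()
        assert n_hidden % 2 == 0, "layers are consumed in pairs between skips"
        self.inp = nn.Linear(dim_in, width)
        self.hidden = nn.ModuleList(
            nn.Linear(width, width) for _ in range(n_hidden)
        )
        self.out = nn.Linear(width, dim_out)
        self.p = p

    def forward(self, x):
        y = torch.tanh(self.inp(x))
        # Two plain layers, then a powered skip: y <- block(y) + y**p.
        for i in range(0, len(self.hidden), 2):
            z = torch.tanh(self.hidden[i](y))
            z = torch.tanh(self.hidden[i + 1](z))
            y = z + y ** self.p
        return self.out(y)


# y0 = X: a batch of scalar inputs, as in the comparison above.
model = PowerSkipResNet(p=2)
u = model(torch.linspace(-1.0, 1.0, 64).unsqueeze(1))
```

With six hidden layers the loop contributes three powered skips, one after each pair of layers, which reproduces the every-other-layer branching of the expression tree in Figure 2(d).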
Our extensive numerical experiments support this approach, indicating that a power of 2 is effective for networks with fewer than 30 hidden layers. For deeper networks, a larger power can contribute to network stability; however, deploying such deep networks does not substantially improve accuracy and notably increases CPU time. In tasks such as interpolation and solving PDEs, a power of 2 generally suffices, and going beyond it may not justify the added complexity in terms of accuracy and efficiency.