Automatic Sparsity Detection for Nonlinear Equations: What You Need to Know

Written by linearization | Published 2025/03/27
Tech Story Tags: nonlinearsolve.jl | robust-nonlinear-solvers | julia-programming-language | gpu-accelerated-computation | sparse-matrix-computations | jacobian-free-krylov-methods | scientific-machine-learning | benchmarking-nonlinear-solvers

TLDR: Symbolic sparsity detection has a high overhead for smaller systems with well-defined sparsity patterns, so we provide an approximate algorithm to determine the Jacobian sparsity pattern in those setups: we compute the dense Jacobian for several randomly generated inputs and take a union over the non-zero elements to obtain the sparsity pattern. Exploiting the resulting sparsity lets us use sparse linear solvers that are significantly more efficient than solving the equivalent large dense linear systems.

Table of Links

Abstract and 1. Introduction

2. Mathematical Description and 2.1. Numerical Algorithms for Nonlinear Equations

2.2. Globalization Strategies

2.3. Sensitivity Analysis

2.4. Matrix Coloring & Sparse Automatic Differentiation

3. Special Capabilities

3.1. Composable Building Blocks

3.2. Smart PolyAlgorithm Defaults

3.3. Non-Allocating Static Algorithms inside GPU Kernels

3.4. Automatic Sparsity Exploitation

3.5. Generalized Jacobian-Free Nonlinear Solvers using Krylov Methods

4. Results and 4.1. Robustness on 23 Test Problems

4.2. Initializing the Doyle-Fuller-Newman (DFN) Battery Model

4.3. Large Ill-Conditioned Nonlinear Brusselator System

5. Conclusion and References

3.4. Automatic Sparsity Exploitation

Symbolic sparsity detection has a high overhead for smaller systems with well-defined sparsity patterns. We provide an approximate algorithm to determine the Jacobian sparsity pattern in those setups. We compute the dense Jacobian for 𝑛 randomly generated inputs to approximate the pattern, then take a union over the non-zero elements of these Jacobians to obtain the sparsity pattern. As evident, computing the sparsity pattern costs 𝑛 times the cost of computing the dense Jacobian, typically via automatic forward-mode differentiation.
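The idea above can be sketched in a few lines. This is a minimal illustration in Python/NumPy, not the package's Julia implementation; a finite-difference dense Jacobian stands in for forward-mode AD, and the function `f`, the tolerance, and the sample count are illustrative choices:

```python
import numpy as np

def dense_jacobian(f, x, eps=1e-8):
    """Dense Jacobian of f at x via forward finite differences
    (a stand-in for automatic forward-mode differentiation)."""
    fx = f(x)
    J = np.empty((fx.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (f(xp) - fx) / eps
    return J

def approximate_sparsity(f, dim, n_samples=3, tol=1e-10, seed=0):
    """Union of the non-zero Jacobian entries over random inputs."""
    rng = np.random.default_rng(seed)
    pattern = np.zeros((dim, dim), dtype=bool)
    for _ in range(n_samples):
        x = rng.standard_normal(dim)
        pattern |= np.abs(dense_jacobian(f, x)) > tol
    return pattern

# Toy system where f_i depends on x_{i-1}, x_i, x_{i+1},
# so the true Jacobian is tridiagonal.
def f(x):
    out = 2.0 * x
    out[1:] += x[:-1] ** 2
    out[:-1] += np.sin(x[1:])
    return out

pattern = approximate_sparsity(f, 6)
```

Taking the union over several random inputs reduces the chance that an entry that is generically non-zero happens to vanish at a single sample point.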

Approximate sparsity detection has poor scaling beyond a certain problem size, as evident from Figure 10. Similar to the shortcomings of other numerical sparsity detection software [43, 44], our method fails to accurately predict the sparsity pattern in the presence of state-dependent branches and might over-predict or under-predict sparsity due to floating point errors. Regardless, we observe in Figure 10 that approximate sparsity detection is extremely efficient for moderately sized problems. In addition to computing the Jacobian faster, sparsity detection enables us to use sparse linear solvers that are significantly more efficient than solving the equivalent large dense linear systems [Subsection 4.3].
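To see why sparse linear solvers pay off, consider a tridiagonal Jacobian, the structure produced by a 1-D discretization. A banded solve is O(𝑛), whereas a dense factorization is O(𝑛³). The sketch below (Python/NumPy, not the package's code path; the Thomas algorithm stands in for a general sparse factorization) compares the two on the same system:

```python
import numpy as np

def thomas_solve(lower, diag, upper, b):
    """Solve a tridiagonal system in O(n) via the Thomas algorithm."""
    n = diag.size
    c = upper.astype(float).copy()
    dd = diag.astype(float).copy()
    d = b.astype(float).copy()
    # Forward elimination.
    for i in range(1, n):
        w = lower[i - 1] / dd[i - 1]
        dd[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    # Back substitution.
    x = np.empty(n)
    x[-1] = d[-1] / dd[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / dd[i]
    return x

n = 1000
lower = np.full(n - 1, -1.0)
diag = np.full(n, 2.0)
upper = np.full(n - 1, -1.0)
b = np.ones(n)

# O(n) solve exploiting the band structure.
x_banded = thomas_solve(lower, diag, upper, b)

# Equivalent O(n^3) dense solve for comparison.
J = np.diag(diag) + np.diag(lower, -1) + np.diag(upper, 1)
x_dense = np.linalg.solve(J, b)
```

Both solves produce the same solution; only the cost differs, which is the gap Subsection 4.3 measures at scale.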

For moderately sized systems, approximate sparsity detection techniques will outperform other techniques. Finally, for large systems, using exact symbolic sparsity detection followed by colored AD is the most efficient.
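Colored AD recovers a sparse Jacobian in far fewer passes than one per column: structurally orthogonal columns (columns that never share a non-zero row) receive the same color and are probed together. For a tridiagonal Jacobian, 3 colors suffice regardless of 𝑛. A minimal Python/NumPy sketch of the decompression step, with a plain matrix-vector product standing in for one forward-mode JVP:

```python
import numpy as np

n = 9
rng = np.random.default_rng(1)

# Random tridiagonal Jacobian: columns j and j+3 never share a row.
J = np.zeros((n, n))
for i in range(n):
    for j in range(max(0, i - 1), min(n, i + 2)):
        J[i, j] = rng.standard_normal()

# 3-coloring: color = column index mod 3.
colors = np.arange(n) % 3

recovered = np.zeros_like(J)
for c in range(3):
    # One seed vector per color: the sum of that color's basis vectors.
    seed = (colors == c).astype(float)
    jvp = J @ seed  # stands in for a single forward-mode AD pass
    # Decompress: each row of the JVP belongs to exactly one column
    # of this color, located via the known sparsity pattern.
    for j in np.where(colors == c)[0]:
        rows = np.where(J[:, j] != 0)[0]
        recovered[rows, j] = jvp[rows]
```

Here 3 JVP evaluations reconstruct all of `J`, instead of the 𝑛 evaluations a dense column-by-column Jacobian would need.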

This paper is available on arxiv under CC BY 4.0 DEED license.

Authors:

(1) AVIK PAL, CSAIL MIT, Cambridge, MA;

(2) FLEMMING HOLTORF;

(3) AXEL LARSSON;

(4) TORKEL LOMAN;

(5) UTKARSH;

(6) FRANK SCHÄFER;

(7) QINGYU QU;

(8) ALAN EDELMAN;

(9) CHRIS RACKAUCKAS, CSAIL MIT, Cambridge, MA.


Published by HackerNoon on 2025/03/27