
Automatic Sparsity Detection for Nonlinear Equations: What You Need to Know

by Linearization TechnologyMarch 27th, 2025

Too Long; Didn't Read

Symbolic sparsity detection has a high overhead for smaller systems with well-defined sparsity patterns, so we provide an approximate algorithm to determine the Jacobian sparsity pattern in those setups: we compute the dense Jacobian for randomly generated inputs and take a union over the non-zero elements to obtain the sparsity pattern. Beyond faster Jacobian computation, sparsity detection enables sparse linear solvers that are significantly more efficient than solving the equivalent large dense linear systems.

Abstract and 1. Introduction

2. Mathematical Description and 2.1. Numerical Algorithms for Nonlinear Equations

2.2. Globalization Strategies

2.3. Sensitivity Analysis

2.4. Matrix Coloring & Sparse Automatic Differentiation

3. Special Capabilities

3.1. Composable Building Blocks

3.2. Smart PolyAlgorithm Defaults

3.3. Non-Allocating Static Algorithms inside GPU Kernels

3.4. Automatic Sparsity Exploitation

3.5. Generalized Jacobian-Free Nonlinear Solvers using Krylov Methods

4. Results and 4.1. Robustness on 23 Test Problems

4.2. Initializing the Doyle-Fuller-Newman (DFN) Battery Model

4.3. Large Ill-Conditioned Nonlinear Brusselator System

5. Conclusion and References

3.4. Automatic Sparsity Exploitation

Symbolic sparsity detection has a high overhead for smaller systems with well-defined sparsity patterns. We provide an approximate algorithm to determine the Jacobian sparsity pattern in those setups. We compute the dense Jacobian for 𝑛 randomly generated inputs and take a union over the non-zero elements of these Jacobians to obtain the sparsity pattern. Consequently, computing the sparsity pattern costs 𝑛 times the cost of computing the dense Jacobian, typically via forward-mode automatic differentiation.
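The procedure above can be sketched as follows. This is an illustrative Python sketch, not the paper's Julia implementation: it uses finite differences in place of forward-mode AD to build each dense Jacobian, and the function names are hypothetical.

```python
import numpy as np

def approximate_sparsity_pattern(f, m, dim, n_samples=3, eps=1e-8, seed=0):
    """Approximate the Jacobian sparsity pattern of f: R^dim -> R^m.

    Computes a dense (finite-difference) Jacobian at `n_samples` randomly
    generated inputs and takes the union of the non-zero patterns.
    """
    rng = np.random.default_rng(seed)
    pattern = np.zeros((m, dim), dtype=bool)
    for _ in range(n_samples):
        x = rng.standard_normal(dim)
        fx = f(x)
        for j in range(dim):  # dense Jacobian, one column at a time
            xp = x.copy()
            xp[j] += eps
            col = (f(xp) - fx) / eps
            pattern[:, j] |= col != 0.0  # union over samples
    return pattern

# Example residual with tridiagonal coupling between unknowns
def residual(x):
    r = 2.0 * x
    r[1:] += x[:-1]
    r[:-1] += x[1:]
    return r

P = approximate_sparsity_pattern(residual, 5, 5)
# P recovers the tridiagonal structure of the Jacobian
```

The cost is `n_samples` dense Jacobian evaluations, matching the 𝑛-fold overhead noted above; the random inputs make it unlikely (but not impossible) that a structurally non-zero entry evaluates to exactly zero at every sample.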


Approximate sparsity detection scales poorly beyond a certain problem size, as evident from Figure 10. Similar to other numerical sparsity detection software [43, 44], our method fails to accurately predict the sparsity pattern in the presence of state-dependent branches and might over-predict or under-predict sparsity due to floating point errors. Regardless, we observe in Figure 10 that approximate sparsity detection is extremely efficient for moderately sized problems. In addition to computing the Jacobian faster, sparsity detection enables us to use sparse linear solvers that are significantly more efficient than solving the equivalent large dense linear systems [Subsection 4.3].
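To illustrate why a known sparsity pattern pays off in the linear solve, the sketch below contrasts a sparse and a dense factorization of the same banded system. This uses SciPy as a stand-in for the Julia linear-solver stack the paper builds on; the tridiagonal matrix is a hypothetical example, chosen because its sparse LU factorization costs O(n) versus O(n³) for the dense solve.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

n = 1000
# Tridiagonal Jacobian stored in CSR format: O(n) non-zeros instead of O(n^2)
J = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x_sparse = spsolve(J, b)                    # sparse LU exploits the band structure
x_dense = np.linalg.solve(J.toarray(), b)   # equivalent dense solve for comparison
```

Both solves return the same solution, but only the sparse path avoids materializing and factorizing the full dense matrix, which is what makes the overall nonlinear solve faster once the pattern is known.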


Fig. 7: Automatic Sparsity Detection and Jacobian Computation for 2D Brusselator: We benchmark the time taken to perform sparsity detection with automatic matrix coloring (left figure) and to compute the Jacobian using the colored matrix (right figure). Sparsity detection has a high overhead for small systems, and threaded forward-mode AD shines here. Threaded forward AD is always faster than approximate sparsity detection; however, in Figure 10, we show that detecting the sparsity pattern enables the overall linear solve (and, in turn, the nonlinear solve) to be significantly faster. Additionally, we note that exact symbolic sparsity detection asymptotically scales better.


For moderately sized systems, approximate sparsity detection techniques will outperform other techniques. Finally, for large systems, using exact symbolic sparsity detection followed by colored AD is the most efficient.


This paper is available on arxiv under CC BY 4.0 DEED license.

Authors:

(1) AVIK PAL, CSAIL MIT, Cambridge, MA;

(2) FLEMMING HOLTORF;

(3) AXEL LARSSON;

(4) TORKEL LOMAN;

(5) UTKARSH;

(6) FRANK SCHÄFER;

(7) QINGYU QU;

(8) ALAN EDELMAN;

(9) CHRIS RACKAUCKAS, CSAIL MIT, Cambridge, MA.