Table of Links

Abstract and 1. Introduction
2. Mathematical Description and 2.1. Numerical Algorithms for Nonlinear Equations
2.2. Globalization Strategies
2.3. Sensitivity Analysis
2.4. Matrix Coloring & Sparse Automatic Differentiation
3. Special Capabilities
3.1. Composable Building Blocks
3.2. Smart PolyAlgorithm Defaults
3.3. Non-Allocating Static Algorithms inside GPU Kernels
3.4. Automatic Sparsity Exploitation
3.5. Generalized Jacobian-Free Nonlinear Solvers using Krylov Methods
4. Results and 4.1. Robustness on 23 Test Problems
4.2. Initializing the Doyle-Fuller-Newman (DFN) Battery Model
4.3. Large Ill-Conditioned Nonlinear Brusselator System
5. Conclusion and References

2.4. Matrix Coloring & Sparse Automatic Differentiation

The Jacobian computation and the linear solve are typical bottlenecks for numerical nonlinear equation solvers. If the sparsity pattern of the Jacobian is known, the Jacobian can be computed much more efficiently [42]. Sparsity patterns for arbitrary programs can be generated automatically using numerical techniques [43, 44] or symbolic methods [45]. Given the sparsity pattern, graph coloring algorithms [46, 47] can be used to compute the matrix colors for the sparse matrix (a sketch of this workflow is given at the end of this article).

[7] Sparse symbolic AD is the optimal way to compute sparse Jacobians in certain situations. We currently lack the capability for built-in symbolic sparse Jacobians.

[8] For wider availability of the sparse reverse-mode functionality, we make it available via SparseDiffTools.jl instead of NonlinearSolve.jl.

This paper is available on arxiv under CC BY 4.0 DEED license.

Authors:

(1) AVIK PAL, CSAIL MIT, Cambridge, MA;
(2) FLEMMING HOLTORF;
(3) AXEL LARSSON;
(4) TORKEL LOMAN;
(5) UTKARSH;
(6) FRANK SCHÄFER;
(7) QINGYU QU;
(8) ALAN EDELMAN;
(9) CHRIS RACKAUCKAS, CSAIL MIT, Cambridge, MA.
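As referenced in Section 2.4, the following is a minimal, illustrative sketch (not code from the paper) of the coloring-based sparse Jacobian workflow: detect the sparsity pattern, color the columns, and compress the forward-mode AD passes. It assumes the Symbolics.jl jacobian_sparsity and SparseDiffTools.jl matrix_colors / forwarddiff_color_jacobian! APIs, and the toy residual function f! is invented purely for illustration.

```julia
using SparseArrays, Symbolics, SparseDiffTools

# Toy in-place residual with a tridiagonal Jacobian (illustrative only).
function f!(res, u)
    n = length(u)
    res[1] = -2u[1] + u[2]
    for i in 2:(n - 1)
        res[i] = u[i - 1] - 2u[i] + u[i + 1]
    end
    res[n] = u[n - 1] - 2u[n]
    return nothing
end

u0 = rand(32)
res = similar(u0)

# 1. Detect the Jacobian sparsity pattern by (symbolically) tracing f!.
pattern = Symbolics.jacobian_sparsity(f!, res, u0)

# 2. Color the columns: structurally orthogonal columns share a color,
#    so one AD pass per color recovers all of their entries at once.
#    A tridiagonal pattern needs only 3 colors instead of 32 columns.
colors = matrix_colors(pattern)

# 3. Compute the sparse Jacobian with compressed forward-mode AD,
#    using only maximum(colors) directional derivatives.
J = Float64.(pattern)
forwarddiff_color_jacobian!(J, f!, u0; colorvec = colors, sparsity = pattern)
```

This is the kind of pipeline that the automatic sparsity exploitation described in Section 3.4 builds upon; the sketch only spells out the individual steps.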