Dual Cones & Quadratic Modules: The Geometry of Global Optimality

Written by hyperbole | Published 2026/01/16
Tech Story Tags: machine-learning | polynomial-optimization | moment-sos-hierarchy | sum-of-squares-relaxation | convex-optimization-methods | semidefinite-programming | algebraic-geometry | correlated-sparsity-patterns

TL;DR: This section provides the necessary background on the moment-sum-of-squares hierarchy. We consider a general polynomial optimization problem (POP), introduce its sparse version, and show how global solutions can be approximated with convex optimization over polynomials.

Abstract and 1. Introduction

  2. Related Works

  3. Convex Relaxation Techniques for Hyperbolic SVMs

    3.1 Preliminaries

    3.2 Original Formulation of the HSVM

    3.3 Semidefinite Formulation

    3.4 Moment-Sum-of-Squares Relaxation

  4. Experiments

    4.1 Synthetic Dataset

    4.2 Real Dataset

  5. Discussions, Acknowledgements, and References

A. Proofs

B. Solution Extraction in Relaxed Formulation

C. On Moment Sum-of-Squares Relaxation Hierarchy

D. Platt Scaling [31]

E. Detailed Experimental Results

F. Robust Hyperbolic Support Vector Machine

C On Moment Sum-of-Squares Relaxation Hierarchy

In this section, we provide the necessary background on the moment-sum-of-squares hierarchy. We start by considering a general polynomial optimization problem (POP) and then introduce its sparse version. This section borrows substantially from the course notes [4].

C.1 Polynomial Optimization and Dual Cones

A polynomial optimization problem (POP) in its most generic form can be presented as

    min_{x ∈ R^n}  p(x)
    s.t.  h_i(x) = 0,  i = 1, …, m,
          g_j(x) ≥ 0,  j = 1, …, l,

where p(x) is our polynomial objective and h_i(x), g_j(x) are our polynomial equality and inequality constraints, respectively. In general, however, solving such a POP to global optimality is NP-hard [9, 29]. To address this challenge, we leverage methods from algebraic geometry [9, 21], allowing us to approximate global solutions using convex optimization.

To start, we define sum-of-squares (SOS) polynomials as polynomials that can be expressed as a sum of squares of other polynomials, and we let Σ[x] denote the collection of SOS polynomials. More formally, we have

    Σ[x] = { σ ∈ R[x] : σ = q_1² + … + q_k² for some q_1, …, q_k ∈ R[x] },

where R[x] denotes the polynomial ring over R.

Next, we recall the definitions of a quadratic module and its dual. Given a set of polynomials g = [g_1, g_2, …, g_l], the quadratic module generated by g is defined as

    Qmodule[g] = { σ_0 + σ_1 g_1 + … + σ_l g_l : σ_0, …, σ_l ∈ Σ[x] },

and its degree-2d truncation is defined as

    Qmodule[g]_{2d} = { σ_0 + σ_1 g_1 + … + σ_l g_l : σ_i ∈ Σ[x], deg(σ_0) ≤ 2d, deg(σ_i g_i) ≤ 2d }.
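For a quick membership example (ours, for illustration): take g = [1 - x²], so the feasible set is [-1, 1]. Writing

    2 - x² = 1 + 1 · (1 - x²),

with σ_0 = 1 and σ_1 = 1 (both trivially SOS) shows 2 - x² ∈ Qmodule[g]_2, certifying that 2 - x² is nonnegative on the feasible set, since each term is nonnegative whenever 1 - x² ≥ 0.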

Similarly, given a set of polynomials h = [h_1, h_2, …, h_m], the ideal generated by h is defined as

    Ideal[h] = { λ_1 h_1 + … + λ_m h_m : λ_1, …, λ_m ∈ R[x] },

and its degree-2d truncation is defined as

    Ideal[h]_{2d} = { λ_1 h_1 + … + λ_m h_m : λ_i ∈ R[x], deg(λ_i h_i) ≤ 2d },
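For example (ours): with h = [x₁² + x₂² - 1], choosing the multiplier λ_1 = x₁ gives

    x₁³ + x₁x₂² - x₁ = x₁ · (x₁² + x₂² - 1) ∈ Ideal[h]_4,

a polynomial that necessarily vanishes on the feasible set, here the unit circle.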

where πœ†π‘– ’s are also called polynomial multipliers. Interestingly, it is shown that we can perfectly characterize the dual of the sum of ideal and quadratic module,

(Ideal[h]2𝑑 + Qmodule[g]2𝑑) βˆ— = 𝒡[β„Ž]2𝑑 ∩ β„³[𝑔]2𝑑 ,

With these notions set up, we can relax the POP above into the following SOS program for an arbitrary relaxation order κ ∈ N:

    max_{γ ∈ R}  γ
    s.t.  p(x) - γ ∈ Ideal[h]_{2κ} + Qmodule[g]_{2κ}.

Its optimal value lower-bounds the global optimum of the POP, and the bound tightens as the relaxation order κ increases.
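To make this concrete, below is a minimal sketch (ours, not the paper's implementation) of the dual, moment-side view for the toy univariate POP min_x x⁴ - 3x² + x introduced above, written with the cvxpy modeling package. The single PSD variable plays the role of the moment matrix M_2[y] with pseudo-moments y_0, …, y_4:

    import cvxpy as cp

    # Order-2 moment relaxation of the toy POP  min_x  x^4 - 3x^2 + x.
    # Y stands in for the moment matrix
    #     M_2[y] = [[y0, y1, y2],
    #               [y1, y2, y3],
    #               [y2, y3, y4]]   with  y_k ~ E[x^k].
    Y = cp.Variable((3, 3), symmetric=True)

    constraints = [
        Y >> 0,              # membership in the moment cone: M_2[y] is PSD
        Y[0, 0] == 1,        # y0 = 1: pseudo-moments of a probability measure
        Y[2, 0] == Y[1, 1],  # Hankel structure: both entries encode y2
    ]

    # Pseudo-expectation of the objective: y4 - 3*y2 + y1
    prob = cp.Problem(cp.Minimize(Y[2, 2] - 3 * Y[1, 1] + Y[0, 1]), constraints)
    prob.solve()
    print("certified lower bound on the POP:", prob.value)

Solving this SDP returns a value no larger than the true global minimum; raising the order κ grows the moment matrix (and, for constrained problems, adds localizing-matrix constraints for each g_i), tightening the bound.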

C.2 Sparse Polynomial Optimization

Definition 1 (Correlated sparsity). An objective p ∈ R[x] and its associated set of constraints exhibit correlated sparsity if the variable indices can be covered by groups I_1 ∪ … ∪ I_t = {1, …, n} such that the objective decomposes as p = p_1 + … + p_t with each p_k ∈ R[x(I_k)], and every constraint polynomial involves only the variables x(I_k) of a single group.
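A minimal instance (ours, for illustration): p(x) = x₁x₂ + x₂x₃ with one constraint in R[x₁, x₂] and another in R[x₂, x₃] exhibits correlated sparsity with groups I₁ = {1, 2} and I₂ = {2, 3}. Note that groups may overlap; convergence of the resulting sparse hierarchy additionally relies on the groups satisfying a running intersection property, see Nie [9].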

With correlated sparsity and data regularity (Putinar's Positivstellensatz, outlined in Nie [9]), we are able to decompose the quadratic module generated by the entire set of decision variables into the Minkowski sum of the quadratic modules generated by each sparsity group of variables, effectively reducing the number of decision variables in the implementation. For a problem with only inequality constraints, which is our case for the HSVM, the sparse SOS program at order κ for our problem reads as

    max_{γ ∈ R}  γ
    s.t.  p(x) - γ ∈ Qmodule[g(I_1)]_{2κ} + … + Qmodule[g(I_t)]_{2κ},

where g(I_k) collects the constraints involving only the variables x(I_k), and we can derive its dual accordingly and present the SDP form for implementation in Equation (14).
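To get a feel for the savings (our back-of-the-envelope arithmetic, not a figure from the paper): at relaxation order κ, the dense moment matrix is indexed by all monomials of degree at most κ in n variables and thus has side length C(n + κ, κ). For n = 30 and κ = 2 that is C(32, 2) = 496, whereas splitting the variables into ten groups of size 3 produces ten blocks of side length C(5, 2) = 10 each, a far smaller SDP.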

Authors:

(1) Sheng Yang, John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA ([email protected]);

(2) Peihan Liu, John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA ([email protected]);

(3) Cengiz Pehlevan, John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, Center for Brain Science, Harvard University, Cambridge, MA, and Kempner Institute for the Study of Natural and Artificial Intelligence, Harvard University, Cambridge, MA ([email protected]).


This paper is available on arXiv under the CC BY-SA 4.0 (Attribution-ShareAlike 4.0 International) license.

[4] Chapter 5 Moment Relaxation: https://hankyang.seas.harvard.edu/Semidefinite/Moment.html

