Beyond Adversarial Training: A Robust Counterpart Approach to HSVM

Written by hyperbole | Published 2026/01/18
Tech Story Tags: deep-learning | robust-hyperbolic-svm | data-feature-uncertainty | counterpart-optimization | hsvm-sdp-relaxation | sparse-moment-relaxation | non-convex-qcqp-solving | minkowski-product-robustness

TL;DR: The robust HSVM handles structured data-feature uncertainty through a robust counterpart formulation, with an SDP relaxation making the resulting non-convex optimization tractable.

Abstract and 1. Introduction

  2. Related Works

  3. Convex Relaxation Techniques for Hyperbolic SVMs

    3.1 Preliminaries

    3.2 Original Formulation of the HSVM

    3.3 Semidefinite Formulation

    3.4 Moment-Sum-of-Squares Relaxation

  4. Experiments

    4.1 Synthetic Dataset

    4.2 Real Dataset

  5. Discussions, Acknowledgements, and References

A. Proofs

B. Solution Extraction in Relaxed Formulation

C. On Moment Sum-of-Squares Relaxation Hierarchy

D. Platt Scaling [31]

E. Detailed Experimental Results

F. Robust Hyperbolic Support Vector Machine

F Robust Hyperbolic Support Vector Machine

In this section, we propose a robust version of the hyperbolic support vector machine, without providing an implementation. This differs from the adversarial-training practice common in the machine learning community, which searches for adversarial samples on the fly, as in Weber et al. [7]. Rather, we predefine an uncertainty structure for the data features and write down the corresponding optimization formulation, called the robust counterpart, as described in [42, 43].
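As a concrete illustration (the notation here is our own choice, not fixed by the text): each feature vector 𝑥𝑖 is assumed to be known only up to a bounded perturbation,

$$\mathcal{U}_i = \{\, x_i + \delta_i \;:\; \lVert \delta_i \rVert \le \rho \,\},$$

and the classifier must satisfy its margin constraint for every realization in 𝒰𝑖; the 𝑙∞ ball is the case treated below.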

Then, by adding the uncertainty set to the constraints, we have
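A hedged reconstruction of this step: assume the Minkowski product 𝑤 ∘ 𝑥 = 𝑤⊤𝐺𝑥 with 𝐺 = diag(−1, 1, …, 1), a nominal margin constraint 𝑦𝑖(𝑤 ∘ 𝑥𝑖) ≥ 1 − 𝜉𝑖, and the 𝑙∞ uncertainty set above. Requiring the constraint for every admissible perturbation and passing to the worst case gives

$$y_i\bigl(w \circ (x_i + \delta_i)\bigr) \ge 1 - \xi_i \;\;\forall\, \lVert \delta_i \rVert_\infty \le \rho \;\Longleftrightarrow\; y_i\,(w \circ x_i) + \min_{\lVert \delta_i \rVert_\infty \le \rho} y_i\,(G w)^{\top} \delta_i \ge 1 - \xi_i \;\Longleftrightarrow\; y_i\,(w \circ x_i) - \rho\,\lVert y_i w \rVert_1 \ge 1 - \xi_i,$$

using that the minimum of a linear form over the 𝑙∞ ball of radius 𝜌 is −𝜌 times the 𝑙1 norm of its coefficient vector, and that ∥𝑦𝑖𝐺𝑤∥₁ = ∥𝑦𝑖𝑤∥₁ since 𝐺 only flips one sign.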

where the last step rewrites the semi-infinite constraint into its finite, deterministic robust counterpart (RC). We present the 𝑙∞-norm-bounded robust HSVM as follows.
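A plausible form under the assumptions stated above (one common convention writes the HSVM objective as the indefinite quadratic −½(𝑤 ∘ 𝑤); the original's sign conventions may differ):

$$\min_{w,\ \xi \ge 0} \; -\tfrac{1}{2}\,(w \circ w) + C \sum_{i=1}^{m} \xi_i \qquad \text{s.t.} \quad y_i\,(w \circ x_i) - \rho\,\lVert y_i w \rVert_1 \ge 1 - \xi_i, \quad i = 1, \dots, m.$$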

Note that since 𝑦𝑖 ∈ {−1, 1}, we may drop the 𝑦𝑖 term in the norm. We can then write down the SDP relaxation of this non-convex QCQP and solve it efficiently.
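A minimal sketch of that relaxation, assuming the formulation above: lift 𝑊 = 𝑤𝑤⊤, so the indefinite objective becomes linear in 𝑊, and relax the rank-one condition to a positive-semidefinite Schur complement,

$$\min_{w,\,W,\,\xi \ge 0} \; -\tfrac{1}{2}\,\operatorname{tr}(G W) + C \sum_{i=1}^{m} \xi_i \qquad \text{s.t.} \quad y_i\,(G x_i)^{\top} w - \rho\,\lVert w \rVert_1 \ge 1 - \xi_i, \qquad \begin{pmatrix} 1 & w^{\top} \\ w & W \end{pmatrix} \succeq 0.$$

Every remaining constraint is convex (the 𝑙1 term in particular), so the relaxed problem is an SDP.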

For the implementation in MOSEK, we linearize the 𝑙1 norm term by introducing extra auxiliary variables, which we do not show here. The moment relaxation can be implemented likewise: the uncertainty enters constraint-wise, so the sparsity pattern is preserved and the same sparse moment relaxation applies.
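As a minimal sketch of that 𝑙1 linearization (our own illustration, not the authors' code: CVXPY with the MOSEK backend, hypothetical variable names, the reconstructed RC constraint from above, and a convex surrogate objective standing in for the indefinite HSVM objective):

```python
import cvxpy as cp
import numpy as np

# Toy data: m points in (n+1)-dimensional Lorentz (ambient) coordinates.
m, n = 20, 2
rng = np.random.default_rng(0)
X = rng.standard_normal((m, n + 1))      # placeholder features
y = rng.choice([-1.0, 1.0], size=m)      # labels in {-1, +1}
rho, C = 0.1, 1.0                        # uncertainty radius, slack penalty

G = np.diag([-1.0] + [1.0] * n)          # Minkowski metric diag(-1, 1, ..., 1)

w = cp.Variable(n + 1)                   # separating direction
t = cp.Variable(n + 1)                   # auxiliary variables modeling |w_j|
xi = cp.Variable(m, nonneg=True)         # slack variables

constraints = [
    t >= w, t >= -w,                     # componentwise: t_j >= |w_j|
    # Robust margin constraints: y_i (w . G x_i) - rho * ||w||_1 >= 1 - xi_i,
    # with ||w||_1 replaced by sum(t) via the auxiliary variables.
    cp.multiply(y, (X @ G) @ w) - rho * cp.sum(t) >= 1 - xi,
]

# Convex surrogate objective for illustration only; the indefinite HSVM
# objective -(1/2) w' G w is what the SDP/moment relaxations above handle.
objective = cp.Minimize(0.5 * cp.sum_squares(w) + C * cp.sum(xi))

prob = cp.Problem(objective, constraints)
prob.solve(solver=cp.MOSEK)              # requires a MOSEK license
print("status:", prob.status, "| w =", w.value)
```

The same auxiliary-variable trick (t ≥ w, t ≥ −w, penalize or bound the sum of t) is how the 𝑙1 term would be passed to MOSEK's conic interface directly.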

Authors:

(1) Sheng Yang, John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA ([email protected]);

(2) Peihan Liu, John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA ([email protected]);

(3) Cengiz Pehlevan, John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, Center for Brain Science, Harvard University, Cambridge, MA, and Kempner Institute for the Study of Natural and Artificial Intelligence, Harvard University, Cambridge, MA ([email protected]).


This paper is available on arXiv under a CC BY-SA 4.0 Deed (Attribution-ShareAlike 4.0 International) license.

