Graph Information Theory: The Mathematical Proofs Behind LSEnet and DSI

Written by hyperbole | Published 2026/02/19
Tech Story Tags: deep-learning | dsi | graph-entropy-theorems | graph-clustering-bounds | hierarchical-partitioning | indicator-function-lemma | lsenet-technical-appendix | h-dimensional-information

TL;DR: Access the comprehensive mathematical proofs for Differentiable Structural Information (DSI). Explore the rigorous validation of equivalence, additivity, and graph clustering bounds that power LSEnet and self-organizing networks.

Abstract and 1. Introduction

  2. Related Work

  3. Preliminaries and Notations

  4. Differentiable Structural Information

    4.1. A New Formulation

    4.2. Properties

    4.3. Differentiability & Deep Graph Clustering

  5. LSEnet

    5.1. Embedding Leaf Nodes

    5.2. Learning Parent Nodes

    5.3. Hyperbolic Partitioning Tree

  6. Experiments

    6.1. Graph Clustering

    6.2. Discussion on Structural Entropy

  7. Conclusion, Broader Impact, and References

Appendix

A. Proofs

B. Hyperbolic Space

C. Technical Details

D. Additional Results

Appendix

The appendix is organized into four sections: A. Proofs of the results on differentiable structural information; B. Hyperbolic Space, covering important notions and facts from hyperbolic geometry; C. Technical Details on algorithms, datasets, baselines, etc.; and D. Additional Results on real-world datasets.

A. Proofs

Here, we detail the lemmas and theorems on the proposed Differentiable Structural Information. In particular, we prove the equivalence, additivity, and flexibility properties, the bound, and the connection to graph clustering, and we present the supporting lemmas.
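Throughout, the proofs manipulate the H-dimensional structural information of a weighted graph G with respect to a partitioning tree T. For reference, in the standard notation of structural information theory (Li & Pan, 2016), writing λ for the root, α⁻ for the parent of a non-root tree node α, g_α for the cut weight, and V_α for the subtree volume, it reads as below; the paper's Equation 1 is assumed to coincide with this form up to notation.

```latex
% Standard H-dimensional structural information (Li & Pan, 2016);
% assumed to match the paper's Equation 1 up to notation.
\mathcal{H}^{T}(G) \;=\; -\sum_{\alpha \in T,\ \alpha \neq \lambda}
  \frac{g_{\alpha}}{\mathrm{vol}(G)}\,
  \log_{2}\frac{\mathcal{V}_{\alpha}}{\mathcal{V}_{\alpha^{-}}}
```

Here g_α is the total weight of the edges with exactly one endpoint in the vertex set of the subtree rooted at α, V_α is the volume (sum of weighted degrees) of that vertex set, and vol(G) = V_λ is the volume of the whole graph.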

A.1. Proof of Theorem 4.4

Combining the equations above completes the proof.

A.2. Proof of Theorem 4.7

Then we have the following lemma.

Lemma A.1. For a weighted graph G = (V, E) with a weight function w, and a partitioning tree T of G with height H, we can rewrite the formula of the H-dimensional structural information of G w.r.t. T as follows:

where I(·) is the indicator function.
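As an illustrative sketch of how the indicator function enters (a natural expansion in the notation recalled above, not a quotation of the paper's displayed equation), the cut weight and volume of each tree node α can be written over ordered vertex pairs and vertices:

```latex
% Illustrative indicator-function expansion of g_alpha and V_alpha;
% the edge sum runs over ordered pairs, so each crossing edge is counted once.
g_{\alpha} \;=\; \sum_{(v_i, v_j) \in E} w_{ij}\,
  I(v_i \in T_{\alpha})\, I(v_j \notin T_{\alpha}),
\qquad
\mathcal{V}_{\alpha} \;=\; \sum_{v_i \in V} d_i\, I(v_i \in T_{\alpha})
```

where d_i is the weighted degree of v_i and T_α denotes (with slight abuse of notation) the set of graph vertices under the subtree rooted at α. Substituting these expansions into the definition yields a formula indexed only by vertices, edges, and tree levels, which is the kind of rewriting the lemma states.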

Proof. We defer the proof of Lemma A.1 to Appendix A.5.

Now we start our proof of Theorem 4.7.

A.3. Proof of Lemma 4.5

A.4. Proof of Theorem 4.6

A.5. Proof of Lemma A.1

Lemma A.1. For a weighted graph G = (V, E) with a weight function w, and a partitioning tree T of G with height H, we can rewrite the formula of the H-dimensional structural information of G w.r.t. T as follows:

where I(·) is the indicator function.

Proof. From Equation 1, we can rewrite the structural information of G w.r.t. all nodes of T at height h as follows:
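In the notation recalled at the beginning of Appendix A (and writing h(α) for the height of node α and H^(h)(G; T) for the height-h contribution, symbols introduced here for illustration, with the root taken as level 0), a hedged sketch of this quantity is:

```latex
% Sketch of the height-h contribution; h(alpha) and H^{(h)} are illustrative symbols.
\mathcal{H}^{(h)}(G; T) \;=\; -\sum_{\alpha \in T,\ h(\alpha) = h}
  \frac{g_{\alpha}}{\mathrm{vol}(G)}\,
  \log_{2}\frac{\mathcal{V}_{\alpha}}{\mathcal{V}_{\alpha^{-}}}
```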

The H-dimensional structural information of G is then obtained by summing these per-height terms over h = 1, …, H.
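In the illustrative notation of the previous block, this summation reads:

```latex
% Summing the per-height contributions over all non-root levels.
\mathcal{H}^{T}(G) \;=\; \sum_{h=1}^{H} \mathcal{H}^{(h)}(G; T)
```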

Combining the equations above completes the proof.
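To make the level-wise, indicator-based computation concrete, below is a minimal NumPy sketch. It evaluates the H-dimensional structural information from hard per-level assignment matrices and parent pointers; the function name structural_information and its arguments are illustrative rather than part of the paper's code, and the formula follows the standard definition recalled at the beginning of Appendix A.

```python
import numpy as np

def structural_information(W, assignments, parents):
    """Illustrative sketch (not the paper's implementation).

    Evaluates the H-dimensional structural information of a weighted graph
    from hard, level-wise assignments to a partitioning tree.

    W           : (n, n) symmetric non-negative weight matrix.
    assignments : list of length H; assignments[h] is an (n, n_h) 0/1 matrix
                  assigning every vertex to exactly one tree node at level
                  h + 1 (the root is level 0 and is implicit); the last entry
                  is typically the identity matrix so that leaves are counted.
    parents     : list of length H; parents[h][k] is the index, among the
                  nodes of the previous level, of the parent of node k at
                  level h + 1; parents[0] is all zeros (every level-1 node
                  hangs off the root).
    """
    d = W.sum(axis=1)                    # weighted degree of each vertex
    vol_G = d.sum()                      # volume of the whole graph
    result = 0.0
    prev_vol = np.array([vol_G])         # level-0 (root) volume
    for S, par in zip(assignments, parents):
        par = np.asarray(par)
        vol = S.T @ d                    # V_alpha for every node at this level
        within = np.diag(S.T @ W @ S)    # weight of ordered pairs inside each node
        cut = vol - within               # g_alpha: weight of edges leaving each node
        result -= np.sum((cut / vol_G) * np.log2(vol / prev_vol[par]))
        prev_vol = vol
    return result
```

Replacing the hard 0/1 assignments with soft, row-stochastic matrices turns this quantity into a differentiable function of the assignments, which is the direction the paper's DSI formulation takes.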

A.6. Proof of Theorem 4.8

Authors:

(1) Li Sun, North China Electric Power University, Beijing 102206, China ([email protected]);

(2) Zhenhao Huang, North China Electric Power University, Beijing 102206, China;

(3) Hao Peng, Beihang University, Beijing 100191, China;

(4) Yujie Wang, North China Electric Power University, Beijing 102206, China;

(5) Chunyang Liu, Didi Chuxing, Beijing, China;

(6) Philip S. Yu, University of Illinois at Chicago, IL, USA.


This paper is available on arXiv under the CC BY-NC-SA 4.0 DEED (Attribution-NonCommercial-ShareAlike 4.0 International) license.

