
Machine learning models


Table of Links

Abstract and 1. Introduction

2. Related work

3. HypNF Model

  3.1 The S1/H2 model

  3.2 Assigning labels to nodes

4. HypNF benchmarking framework

5. Experiments

  5.1 Parameter Space

  5.2 Machine learning models

6. Results

7. Conclusion, Acknowledgments and Disclosure of Funding, and References


A. Empirical validation of HypNF

B. Degree distribution and clustering control in HypNF

C. Hyperparameters of the machine learning models

D. Fluctuations in the performance of machine learning models

E. Homophily in the synthetic networks

F. Exploring the parameters’ space

5.2 Machine learning models

In this work, we focus on two primary methodologies: feature-based methods, which embed nodes using only their feature vectors, and GNNs, which integrate both node features and network topology.


MLP: A vanilla neural network that transforms node feature vectors through linear layers and non-linear activations to learn embeddings in Euclidean space.


HNN [12]: A variant of MLP that operates in hyperbolic space to capture complex patterns and hierarchical structures.


GCN [18]: A pioneering model that, at each layer, averages the states of neighboring nodes (see the sketch after this list).


GAT [33]: A model that uses attention mechanisms to assign different importance to different nodes in a neighborhood.


HGCN [9]: A model that integrates hyperbolic geometry with graph convolutional networks to capture complex structures in graph data more effectively.
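The essential difference between the feature-based and GNN families above fits in a few lines. Below is a minimal PyTorch sketch, ours rather than the authors' implementation: an MLP layer ignores the adjacency matrix, a GCN-style layer averages neighbor states before the linear transform, and a Poincaré-ball distance illustrates the hyperbolic geometry that HNN and HGCN substitute for the Euclidean norm. The layer names, toy graph, and dimensions are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class MLPLayer(nn.Module):
    """Feature-based: transforms node features and ignores the graph."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj=None):
        return torch.relu(self.linear(x))

class GCNLayer(nn.Module):
    """GNN-style: each node averages its neighbors' states before the
    linear transform, so network topology enters the embedding."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # Row-normalized adjacency with self-loops implements the
        # neighborhood averaging described in the GCN entry above.
        a = adj + torch.eye(adj.size(0))
        a = a / a.sum(dim=1, keepdim=True)
        return torch.relu(self.linear(a @ x))

def poincare_distance(u, v, eps=1e-7):
    """Standard distance in the Poincaré ball: the geometry hyperbolic
    models (HNN, HGCN) use in place of the Euclidean norm."""
    sq = ((u - v) ** 2).sum(dim=-1)
    den = (1 - (u ** 2).sum(dim=-1)) * (1 - (v ** 2).sum(dim=-1))
    return torch.acosh(1 + 2 * sq / (den + eps))

# Toy example: 4 nodes with 3 features on a small path graph.
x = torch.randn(4, 3)
adj = torch.tensor([[0., 1., 0., 0.],
                    [1., 0., 1., 0.],
                    [0., 1., 0., 1.],
                    [0., 0., 1., 0.]])
print(MLPLayer(3, 2)(x).shape)        # torch.Size([4, 2])
print(GCNLayer(3, 2)(x, adj).shape)   # torch.Size([4, 2])
print(poincare_distance(torch.tensor([0.1, 0.2]),
                        torch.tensor([0.4, -0.3])))
```

In the benchmark, the full models stack several such layers and add task-specific decoders; the sketch only isolates how topology and geometry enter the embedding.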


Figure 3: Impact of each individual parameter on the performance of NC and LP. In the case of NC, we set NL = 6 and α = 10.


Table 2 in Appendix C lists the hyperparameters for training. In the LP task, links are split into training (85%), validation (5%), and test (10%) sets. For the NC task, nodes are distributed as 70% training, 15% validation, and 15% test [9]. Both tasks follow the methodology in [9], with results averaged over five test-train splits. Models were trained on an NVIDIA GeForce RTX 3080 GPU using Python 3.9, CUDA 11.7, and PyTorch 1.13.
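As a concrete illustration of these splits, here is a minimal sketch under our own assumptions; the helper random_split and the item counts are invented for this example and are not the benchmark's actual code.

```python
import numpy as np

def random_split(n_items, fractions, seed=0):
    """Shuffle indices and cut them into train/val/test blocks."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_items)
    cut1 = int(fractions[0] * n_items)
    cut2 = cut1 + int(fractions[1] * n_items)
    return idx[:cut1], idx[cut1:cut2], idx[cut2:]

# LP splits the edge list 85/5/10; NC splits the node list 70/15/15.
train_e, val_e, test_e = random_split(1000, (0.85, 0.05, 0.10))
train_n, val_n, test_n = random_split(500, (0.70, 0.15, 0.15))
print(len(train_e), len(val_e), len(test_e))  # 850 50 100
print(len(train_n), len(val_n), len(test_n))  # 350 75 75
```

Averaging over five such splits, as the paper does, amounts to repeating this with five different seeds and reporting the mean performance.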


Authors:

(1) Roya Aliakbarisani, Universitat de Barcelona & UBICS, contributed equally ([email protected]);

(2) Robert Jankowski, Universitat de Barcelona & UBICS, contributed equally ([email protected]);

(3) M. Ángeles Serrano, Universitat de Barcelona, UBICS & ICREA ([email protected]);

(4) Marián Boguñá, Universitat de Barcelona & UBICS ([email protected]).


This paper is available on arXiv under a CC BY 4.0 DEED (Attribution 4.0 International) license.

