A comprehensive benchmarking framework for graph machine learning, focusing on the performance of GNNs across varied network structures.
*Figure: (a) Representation of a model-based framework integrating …*
- Generate synthetic networks or download them from Zenodo (https://zenodo.org/records/11473505):

  ```bash
  cd src/
  python generate_synthetic_networks.py [config_file.yaml]
  ```

  An example config file is located in `config/net_test.yaml`. For each set of parameters in the config file, the script creates a folder whose name contains all of those parameters (see the sketch after this list).
- Run the benchmark on the generated datasets:

  ```bash
  cd hgcn/
  # Follow the instructions there to install the necessary libraries
  # Copy the datasets into the data/ folder
  cp -r "../src/output*" data/
  # Run experiments for a given model and downstream task
  python run_ml_models_on_S1.py nc HGCN
  ```
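For each combination of parameters in the config file, the generator writes one dataset into its own folder whose name encodes those parameters. The sketch below only illustrates that sweep-and-name pattern; the assumption that every config key maps to a list of values, the example key names in the comment, and the exact folder-naming scheme are illustrative rather than taken from `config/net_test.yaml` or `generate_synthetic_networks.py`.

```python
# Illustrative sketch of a parameter sweep driven by a YAML config.
# Assumes every key maps to a list of values, e.g.
#   n_nodes: [1000], avg_degree: [10], gamma: [2.5, 3.5], beta: [1.5]
# These key names are hypothetical; check config/net_test.yaml for the real ones.
import itertools
from pathlib import Path

import yaml

with open("config/net_test.yaml") as fh:
    cfg = yaml.safe_load(fh)

keys = sorted(cfg)
for values in itertools.product(*(cfg[k] for k in keys)):
    params = dict(zip(keys, values))
    # Encode every parameter of this combination in the folder name,
    # e.g. output_beta=1.5_gamma=2.5_n_nodes=1000
    folder = Path("output_" + "_".join(f"{k}={v}" for k, v in params.items()))
    folder.mkdir(exist_ok=True)
    # ... generate the synthetic network for `params` and save it here ...
```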
You can measure the properties of your graph data using the notebook `notebooks/extract-parameters-from-real-networks.ipynb`.
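If you only need a quick look at basic topological properties (size, average degree, clustering) outside the notebook, a minimal sketch along the following lines works; the edge-list file name is a placeholder and this is not the notebook's code.

```python
# Minimal summary of basic network properties using networkx.
import networkx as nx
import numpy as np

G = nx.read_edgelist("my_network.edgelist")  # placeholder path to your edge list

degrees = np.array([d for _, d in G.degree()])
print("nodes:             ", G.number_of_nodes())
print("edges:             ", G.number_of_edges())
print("average degree:    ", degrees.mean())
print("average clustering:", nx.average_clustering(G))
```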
If you want to generate synthetic networks within the PyTorch Geometric library, see the `hypnf.py` file, where we provide a loader for the HypNF model. For usage, please refer to the tutorial.
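Independently of that loader, a generated network can also be wrapped manually into a PyTorch Geometric `Data` object. The sketch below assumes the edge list, node features, and labels were saved as plain-text files; the paths and file layout are placeholders, and the actual interface provided by `hypnf.py` may differ, so prefer the tutorial for real usage.

```python
# Hedged sketch: build a PyG Data object from plain-text outputs of a generator.
import numpy as np
import torch
from torch_geometric.data import Data

edges = np.loadtxt("output_example/edge_list.txt", dtype=np.int64)      # shape [E, 2], placeholder path
features = np.loadtxt("output_example/features.txt", dtype=np.float32)  # shape [N, F], placeholder path
labels = np.loadtxt("output_example/labels.txt", dtype=np.int64)        # shape [N],    placeholder path

data = Data(
    x=torch.from_numpy(features),
    edge_index=torch.from_numpy(edges).t().contiguous(),  # PyG expects shape [2, E]
    y=torch.from_numpy(labels),
)
print(data)
```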
If you find our benchmark or data useful, please cite our paper:
```bibtex
@misc{aliakbarisani2024hyperbolic,
  title={Hyperbolic Benchmarking Unveils Network Topology-Feature Relationship in GNN Performance},
  author={Roya Aliakbarisani and Robert Jankowski and M. Ángeles Serrano and Marián Boguñá},
  year={2024},
  eprint={2406.02772},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```