In this study, we implemented a one-stage supervised contrastive learning framework that employs contrastive loss as a regularization term to improve the generalization of supervised learning. The loss objective of our model comprises two components: the supervised loss derived from the original graph, and the contrastive loss that serves as the regularizer.
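As a rough illustration of this two-part objective, the sketch below combines a cross-entropy-style supervised term with a supervised contrastive (SupCon-style) term weighted by a coefficient. This is a minimal numpy sketch, not the project's actual implementation: the function names (`supcon_loss`, `total_loss`), the temperature `tau`, and the weight `lam` are illustrative assumptions.

```python
import numpy as np

def supcon_loss(z, labels, tau=0.5):
    """Supervised contrastive loss over an embedding matrix z (n x d).

    Positives for each anchor are all other samples with the same label
    (SupCon-style); this is an illustrative sketch, not the project code."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize rows
    n = len(labels)
    logits = z @ z.T / tau
    logits = logits - 1e9 * np.eye(n)                  # mask self-similarity
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_prob = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    labels = np.asarray(labels)
    pos_mask = (labels[:, None] == labels[None, :]) & ~np.eye(n, dtype=bool)
    pos_counts = pos_mask.sum(axis=1)
    # mean negative log-probability over each anchor's positives
    per_anchor = -(log_prob * pos_mask).sum(axis=1) / np.maximum(pos_counts, 1)
    return per_anchor[pos_counts > 0].mean()

def total_loss(sup_loss, con_loss, lam=0.5):
    # contrastive term acts as a weighted regularizer on the supervised loss
    return sup_loss + lam * con_loss
```

Embeddings that cluster by label yield a lower contrastive term than embeddings whose positives are far apart, so minimizing `total_loss` pushes same-class representations together while still fitting the supervised objective.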
We conducted extensive experiments to identify the optimal
Experiments for
The project is built on the PyGCL library (Y. Zhu et al., An Empirical Study of Graph Contrastive Learning, arXiv:2109.01116, 2021). Specifically, we adopted the GRACE (Y. Zhu et al., Deep Graph Contrastive Representation Learning, GRL+@ICML, 2020) and SupCon (P. Khosla et al., Supervised Contrastive Learning, NeurIPS, 2020) frameworks and applied them to our graph representation learning task.
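GRACE generates two views of the input graph via topology and feature augmentation and contrasts their node embeddings. As a library-free illustration of the two standard GRACE augmentations (edge dropping and feature masking), a minimal numpy sketch might look like the following; the function names and drop rates are illustrative assumptions, and PyGCL provides its own augmentor classes for this purpose.

```python
import numpy as np

rng = np.random.default_rng(0)

def drop_edges(edge_index, p=0.2):
    """Randomly remove a fraction p of edges (GRACE-style topology augmentation).

    edge_index is a 2 x E array of (source, target) node indices."""
    keep = rng.random(edge_index.shape[1]) >= p
    return edge_index[:, keep]

def mask_features(x, p=0.3):
    """Zero out a random subset of feature dimensions across all nodes
    (GRACE-style attribute augmentation). x is an N x F feature matrix."""
    mask = rng.random(x.shape[1]) >= p
    return x * mask
```

Applying both functions twice with independent randomness yields the two correlated views whose embeddings the contrastive loss pulls together.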
Find out more about our data here: Data
View our final report here: Final Report