SPMGAE: Self-purified masked graph autoencoders release robust expression power
To tackle the scarcity of labeled graph data, graph self-supervised learning (SSL) has branched into two paradigms: generative methods and contrastive methods. Inspired by MAE in computer vision (CV) and BERT in natural language processing (NLP), masked graph autoencoders (MGAEs) are gaining popularity…
| Published in: | Neurocomputing (Amsterdam) Vol. 611; p. 128631 |
|---|---|
| Format: | Journal Article |
| Language: | English |
| Published: | Elsevier B.V., 01.01.2025 |
| ISSN: | 0925-2312 |