DyGNN: Algorithm and Architecture Support of Dynamic Pruning for Graph Neural Networks


Bibliographic Details
Published in: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp. 1201-1206
Main Authors: Chen, Cen; Li, Kenli; Zou, Xiaofeng; Li, Yangfan
Format: Conference paper
Language: English
Published: IEEE, 5 December 2021
Description
Summary: Recently, graph neural networks (GNNs) have achieved great success on graph representation learning tasks. Motivated by the observation that numerous message-passing redundancies exist in GNNs, we propose DyGNN, which speeds up GNNs by reducing these redundancies. DyGNN is supported by an algorithm and architecture co-design. The proposed algorithm can dynamically prune vertices and edges during execution without accuracy loss. An architecture is designed to support dynamic pruning and translate it into performance improvement. DyGNN opens new directions for accelerating GNNs by pruning vertices and edges. DyGNN achieves an average 2× speedup and a 4% accuracy improvement compared with state-of-the-art GNN accelerators.
DOI:10.1109/DAC18074.2021.9586298
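The record gives no implementation details of DyGNN's pruning criterion. As a rough illustration of the general idea, the sketch below shows one message-passing layer that skips ("prunes") edges at runtime when the source vertex's feature norm falls below a threshold, so those messages are never computed or aggregated. The threshold rule, function names, and data layout here are hypothetical, not DyGNN's actual algorithm.

```python
# Hypothetical sketch of dynamic edge pruning inside one GNN
# message-passing layer. Edges whose source vertex has a small
# feature magnitude are skipped at runtime, so their messages are
# never materialized. Illustrative only -- not DyGNN's algorithm.

def prune_and_aggregate(features, edges, threshold):
    """Mean-aggregate neighbor features per destination vertex,
    dynamically skipping edges whose source feature L1-norm is
    below `threshold`. Returns (aggregated features, #pruned edges)."""
    dim = len(next(iter(features.values())))
    agg = {v: [0.0] * dim for v in features}
    count = {v: 0 for v in features}
    pruned = 0
    for src, dst in edges:
        norm = sum(abs(x) for x in features[src])
        if norm < threshold:        # dynamic pruning decision
            pruned += 1
            continue                # message skipped entirely
        for i, x in enumerate(features[src]):
            agg[dst][i] += x
        count[dst] += 1
    for v in agg:                   # mean aggregation
        if count[v]:
            agg[v] = [x / count[v] for x in agg[v]]
    return agg, pruned

# Toy graph: vertex 1 carries a near-zero feature, so its outgoing
# edge is pruned and vertex 2 aggregates only vertex 0's message.
features = {0: [1.0, 2.0], 1: [0.01, 0.02], 2: [3.0, 1.0]}
edges = [(0, 2), (1, 2), (2, 0)]
out, pruned = prune_and_aggregate(features, edges, threshold=0.1)
# out[2] == [1.0, 2.0]; one edge was pruned
```

Skipping a message before it is computed is what turns pruning into actual savings: both the per-edge transform and the aggregation traffic for that edge disappear, which is the kind of work reduction the paper's architecture is designed to exploit.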