Can Short Hypervectors Drive Feature-Rich GNNs? Strengthening the Graph Representation of Hyperdimensional Computing for Memory-efficient GNNs

Bibliographic Details
Published in: 2025 62nd ACM/IEEE Design Automation Conference (DAC), pp. 1-7
Main Authors: Wang, Jihe; Han, Yuxi; Wang, Danghui
Format: Conference Proceeding
Language: English
Published: IEEE, 22 June 2025
Online Access: Full text
Description
Summary: Hyperdimensional computing (HDC) based GNNs are significantly advancing brain-like cognition in terms of mathematical rigor and computational tractability. However, research in this field seems to follow a "long vector consensus": the length of HDC hypervectors must be designed to mimic that of the cerebellar cortex, i.e., tens of thousands of bits, to express humans' feature-rich memory. To system architects, this choice presents a formidable challenge, since the combination of numerous nodes and ultra-long hypervectors can create a new memory bottleneck that undermines the operational brevity of HDC. To overcome this problem, in this work we shift our focus to rebuilding a set of more GNN-friendly HDC operations, by which short hypervectors are sufficient to encode rich features by exploiting the strong error tolerance of neural cognition. To achieve that, three behavioral incompatibilities of HDC with general GNNs, i.e., feature distortion, structural bias, and central-node vacancy, are identified and resolved for more efficient feature extraction in graphs. Taken as a whole, a memory-efficient HDC-based GNN framework, called CiliaGraph, is designed to drive one-shot graph classification tasks with only hundreds of bits in hypervector aggregation, offering 1 to 2 orders of magnitude in memory savings. The results show that, compared to SOTA GNNs, CiliaGraph reduces memory access and training latency by an average of 292× (up to 2341×) and 103× (up to 313×), respectively, while maintaining competitive accuracy.
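This record does not include CiliaGraph's concrete operations, but the trade-off the abstract describes, bundling into short hypervectors to save memory at the cost of cross-talk noise, can be illustrated with classic bipolar HDC primitives. The sketch below is a generic illustration, not the paper's method; the 512-bit dimension, the `bind`/`bundle` definitions, and the role-vector encoding of neighborhoods are all assumptions drawn from standard HDC practice.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hv(dim):
    """Random bipolar hypervector in {-1, +1}^dim."""
    return rng.choice([-1, 1], size=dim)

def bind(a, b):
    """Binding: element-wise multiplication (the XOR analogue for bipolar HVs)."""
    return a * b

def bundle(hvs):
    """Bundling: element-wise majority vote (sign of the component sums)."""
    s = np.sum(hvs, axis=0)
    return np.where(s >= 0, 1, -1)

def similarity(a, b):
    """Normalized dot product: near 0 for unrelated HVs, 1 for identical ones."""
    return float(a @ b) / len(a)

# Encode each neighbor by binding its feature HV with a role HV,
# then aggregate the neighborhood by bundling the bound pairs.
dim = 512  # "hundreds of bits", far below the conventional ~10,000
features = [random_hv(dim) for _ in range(8)]
roles = [random_hv(dim) for _ in range(8)]
neighborhood = bundle([bind(f, r) for f, r in zip(features, roles)])

# Short HVs are only approximately orthogonal, so bundling introduces
# cross-talk whose similarity noise scales roughly as 1/sqrt(dim); this
# is the kind of distortion GNN-friendly HDC operations must tolerate.
probe = bind(features[0], roles[0])
print("member similarity:    ", similarity(neighborhood, probe))
print("non-member similarity:", similarity(neighborhood, random_hv(dim)))
```

Running this shows the member similarity sitting well above the near-zero non-member baseline even at 512 dimensions, consistent with the abstract's claim that strong error tolerance lets short hypervectors carry rich features.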
DOI:10.1109/DAC63849.2025.11133067