Can Short Hypervectors Drive Feature-Rich GNNs? Strengthening the Graph Representation of Hyperdimensional Computing for Memory-efficient GNNs

Bibliographic Details
Published in: 2025 62nd ACM/IEEE Design Automation Conference (DAC), pp. 1 - 7
Main authors: Wang, Jihe; Han, Yuxi; Wang, Danghui
Format: Conference paper
Language: English
Published: IEEE, 22 June 2025
Abstract Hyperdimensional computing (HDC) based GNNs are significantly advancing brain-like cognition in terms of mathematical rigor and computational tractability. However, research in this field seems to follow a "long vector consensus": the length of HDC hypervectors must mimic that of the cerebellar cortex, i.e., tens of thousands of bits, to express humans' feature-rich memory. To system architects, this choice presents a formidable challenge: the combination of numerous nodes and ultra-long hypervectors could create a new memory bottleneck that undermines the operational brevity of HDC. To overcome this problem, in this work we shift our focus to rebuilding a set of more GNN-friendly HDC operations, with which short hypervectors are sufficient to encode rich features by exploiting the strong error tolerance of neural cognition. To achieve that, three behavioral incompatibilities of HDC with general GNNs, i.e., feature distortion, structural bias, and central-node vacancy, are identified and resolved for more efficient feature extraction in graphs. Taken as a whole, a memory-efficient HDC-based GNN framework, called CiliaGraph, is designed to drive one-shot graph classification tasks with only hundreds of bits in hypervector aggregation, offering 1 to 2 orders of magnitude in memory savings. The results show that, compared to SOTA GNNs, CiliaGraph reduces memory access and training latency by an average of 292× (up to 2341×) and 103× (up to 313×), respectively, while maintaining competitive accuracy.
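The abstract's central idea — representing and aggregating node features as short hypervectors — can be illustrated with a generic HDC sketch. This is not the paper's CiliaGraph operations; the codebook, the bundling rule, and the 256-bit length are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 256  # a "short" hypervector length (hundreds of bits, per the paper's claim)

# Random bipolar (+1/-1) codebook: one hypervector per discrete feature value.
codebook = {f: rng.choice([-1, 1], size=D) for f in ["a", "b", "c"]}

def bundle(hvs):
    """Superpose hypervectors by elementwise majority (sign of the sum)."""
    s = np.sum(hvs, axis=0)
    return np.where(s >= 0, 1, -1)

def similarity(x, y):
    """Normalized dot product in [-1, 1]; near 0 for unrelated random vectors."""
    return float(x @ y) / D

# Aggregate a node's neighborhood features into a single hypervector; the
# bundle stays measurably similar to each constituent, even at small D.
node_hv = bundle([codebook["a"], codebook["b"]])
print(similarity(node_hv, codebook["a"]), similarity(node_hv, codebook["c"]))
```

The error tolerance the abstract invokes is visible here: the bundled vector remains far closer to its constituents ("a", "b") than to an unrelated code ("c"), which is what makes aggressive shortening of hypervectors plausible in the first place.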
Author Wang, Jihe
Wang, Danghui
Han, Yuxi
Author_xml – sequence: 1
  givenname: Jihe
  surname: Wang
  fullname: Wang, Jihe
  email: wangjihe@nwpu.edu.cn
  organization: Northwestern Polytechnical University, School of Computer Science, Xi'an, China
– sequence: 2
  givenname: Yuxi
  surname: Han
  fullname: Han, Yuxi
  email: hyx956@mail.nwpu.edu.cn
  organization: Northwestern Polytechnical University, Engineering Research Center of Embedded System Integration, Ministry of Education, Xi'an, China
– sequence: 3
  givenname: Danghui
  surname: Wang
  fullname: Wang, Danghui
  email: wangdh@nwpu.edu.cn
  organization: Northwestern Polytechnical University, Engineering Research Center of Embedded System Integration, Ministry of Education, Xi'an, China
ContentType Conference Proceeding
DOI 10.1109/DAC63849.2025.11133067
EISBN 9798331503048
EndPage 7
ExternalDocumentID 11133067
Genre orig-research
GrantInformation_xml – fundername: National Natural Science Foundation of China
  funderid: 10.13039/501100001809
– fundername: Aeronautical Science Foundation of China
  funderid: 10.13039/501100004750
IsPeerReviewed false
IsScholarly true
Language English
PageCount 7
PublicationCentury 2000
PublicationDate 2025-June-22
PublicationDateYYYYMMDD 2025-06-22
PublicationDecade 2020
PublicationTitle 2025 62nd ACM/IEEE Design Automation Conference (DAC)
PublicationTitleAbbrev DAC
PublicationYear 2025
Publisher IEEE
SourceID ieee
SourceType Publisher
StartPage 1
SubjectTerms Accuracy
Cognition
Design automation
Distortion
Feature extraction
GNN
graph representation
hyperdimensional computing
memory
Memory management
Training
Vectors
Title Can Short Hypervectors Drive Feature-Rich GNNs? Strengthening the Graph Representation of Hyperdimensional Computing for Memory-efficient GNNs
URI https://ieeexplore.ieee.org/document/11133067