Attention-Aware Encoder-Decoder Neural Networks for Heterogeneous Graphs of Things

Detailed Bibliography
Published in: IEEE Transactions on Industrial Informatics, Vol. 17, No. 4, pp. 2890-2898
Main Authors: Li, Yangfan; Chen, Cen; Duan, Mingxing; Zeng, Zeng; Li, Kenli
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.04.2021
ISSN: 1551-3203, 1941-0050
Description
Summary: A recent trend focuses on using a heterogeneous graph of things (HGoT) to represent things and their relations in the Internet of Things, thereby facilitating the application of advanced learning frameworks such as deep learning (DL). Nevertheless, this is a challenging task, since existing DL models struggle to accurately express the complex semantics and attributes of the heterogeneous nodes and links in an HGoT. To address this issue, we develop attention-aware encoder-decoder graph neural networks for HGoT, termed HGAED. Specifically, we use an attention-based separate-and-merge method to improve accuracy and leverage an encoder-decoder architecture for its implementation. At the heart of HGAED, the separate-and-merge processes are encapsulated into encoding and decoding blocks. These blocks are then stacked to construct an encoder-decoder architecture that jointly and hierarchically fuses the heterogeneous structures and contents of nodes. Extensive experiments on three real-world datasets demonstrate the superior performance of HGAED over state-of-the-art baselines.
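
The abstract describes the architecture only at a high level. The following PyTorch snippet is a minimal, hypothetical sketch of what one attention-based "separate-and-merge" encoding block could look like: each node type is encoded separately, and the type-specific representations are then merged with learned attention weights. The class name SeparateAndMergeBlock, the node types, and all dimensions are illustrative assumptions based solely on the abstract, not the authors' actual implementation.

# Hypothetical sketch of an HGAED-style encoding block, inferred from the
# abstract's "separate-and-merge" description. Names and shapes are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SeparateAndMergeBlock(nn.Module):
    """Encode each node type separately, then merge the types with attention."""

    def __init__(self, node_types, in_dim, hidden_dim):
        super().__init__()
        # "Separate": one projection per heterogeneous node type.
        self.type_encoders = nn.ModuleDict(
            {t: nn.Linear(in_dim, hidden_dim) for t in node_types}
        )
        # "Merge": a learned attention score over type-specific summaries.
        self.attn = nn.Linear(hidden_dim, 1)

    def forward(self, features_by_type):
        # features_by_type: dict mapping node type -> (num_nodes, in_dim) tensor
        encoded = [
            torch.tanh(self.type_encoders[t](x)).mean(dim=0)  # per-type summary
            for t, x in features_by_type.items()
        ]
        h = torch.stack(encoded)                  # (num_types, hidden_dim)
        weights = F.softmax(self.attn(h), dim=0)  # attention over node types
        return (weights * h).sum(dim=0)           # fused representation

# Example usage with two hypothetical IoT node types:
block = SeparateAndMergeBlock(["sensor", "gateway"], in_dim=16, hidden_dim=32)
feats = {"sensor": torch.randn(10, 16), "gateway": torch.randn(4, 16)}
fused = block(feats)  # (32,) merged embedding

In the paper, such encoding blocks would be stacked with corresponding decoding blocks to form the full encoder-decoder architecture; this sketch covers only the separate-and-merge idea for a single block.
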
DOI: 10.1109/TII.2020.3025592