HGNN+: General Hypergraph Neural Networks
| Published in: | IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 45, No. 3, pp. 3181-3199 |
|---|---|
| Main Authors: | Gao, Yue; Feng, Yifan; Ji, Shuyi; Ji, Rongrong |
| Format: | Journal Article |
| Language: | English |
| Published: | United States: IEEE / The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.03.2023 |
| Subjects: | classification; Convolution; Correlation; Data correlation; Data models; Evaluation; Graph neural networks; Graph theory; Hypergraph; hypergraph convolution; Mathematical models; Modelling; Neural networks; Representation learning; Representations; Smart structures; Social networking (online); Task analysis |
| ISSN: | 0162-8828, 1939-3539, 2160-9292 |
| Abstract | Graph Neural Networks have attracted increasing attention in recent years. However, existing GNN frameworks are built upon simple graphs, which limits their ability to model the complex correlations of multi-modal/multi-type data in practice. A few hypergraph-based methods have recently been proposed to address multi-modal/multi-type data correlation by directly concatenating the hypergraphs constructed from each individual modality/type, which makes it difficult to learn an adaptive weight for each modality/type. In this paper, we extend the original conference version HGNN and introduce a general high-order multi-modal/multi-type data correlation modeling framework called HGNN+ to learn an optimal representation in a single hypergraph-based framework. This is achieved by bridging multi-modal/multi-type data and hyperedges with hyperedge groups. Specifically, in our method, hyperedge groups are first constructed to represent latent high-order correlations in each specific modality/type with explicit or implicit graph structures. An adaptive hyperedge group fusion strategy is then used to effectively fuse the correlations from different modalities/types in a unified hypergraph. After that, a new hypergraph convolution scheme performed in the spatial domain is used to learn a general data representation for various tasks. We have evaluated this framework on several popular datasets and compared it with recent state-of-the-art methods. The comprehensive evaluations indicate that the proposed HGNN+ framework consistently outperforms existing methods by a significant margin, especially when modeling implicit data correlations. We also release a toolbox called THU-DeepHypergraph for the proposed framework, which can be used for a variety of applications, such as data classification, retrieval, and recommendation. |
|---|---|
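The abstract describes the adaptive hyperedge group fusion and the spatial-domain convolution only in words. The NumPy sketch below illustrates one plausible reading of that two-stage scheme (vertex to hyperedge, then hyperedge to vertex). It is not the authors' released THU-DeepHypergraph code; the function names, the softmax-based group weighting, and the mean-style degree normalization are assumptions made for illustration.

```python
import numpy as np

def fuse_hyperedge_groups(groups, group_logits):
    """Concatenate per-modality hyperedge groups into one hypergraph.
    Each group's hyperedges share a softmax-normalized learnable weight
    (an assumed stand-in for the paper's adaptive group fusion)."""
    weights = np.exp(group_logits) / np.exp(group_logits).sum()
    H = np.concatenate(groups, axis=1)  # incidence matrix (n_vertices, total_edges)
    w = np.concatenate([np.full(g.shape[1], wk) for g, wk in zip(groups, weights)])
    return H, w

def hypergraph_conv(X, H, w, theta):
    """One spatial two-stage hypergraph convolution layer."""
    De = H.sum(axis=0)                          # hyperedge degrees
    Dv = (H * w).sum(axis=1)                    # weighted vertex degrees
    Y = (H.T @ X) / De[:, None]                 # stage 1: gather vertices into hyperedges
    Xn = (H @ (w[:, None] * Y)) / Dv[:, None]   # stage 2: scatter hyperedges back to vertices
    return np.maximum(Xn @ theta, 0.0)          # linear projection + ReLU

# Toy example: 4 vertices, a 2-hyperedge group and a 1-hyperedge group.
g1 = np.array([[1, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
g2 = np.array([[1], [0], [1], [1]], dtype=float)
H, w = fuse_hyperedge_groups([g1, g2], group_logits=np.zeros(2))
X = np.random.randn(4, 8)
out = hypergraph_conv(X, H, w, theta=np.random.randn(8, 16))
print(out.shape)  # (4, 16)
```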
| Author | Feng, Yifan; Ji, Rongrong; Gao, Yue; Ji, Shuyi |
| Author_xml | – sequence: 1 givenname: Yue orcidid: 0000-0002-4971-590X surname: Gao fullname: Gao, Yue email: kevin.gaoy@gmail.com organization: BNRist, KLISS, School of Software, BLBCI, THUIBCS, Tsinghua University, Beijing, China – sequence: 2 givenname: Yifan orcidid: 0000-0003-0878-2986 surname: Feng fullname: Feng, Yifan email: evanfeng97@gmail.com organization: BNRist, KLISS, School of Software, BLBCI, THUIBCS, Tsinghua University, Beijing, China – sequence: 3 givenname: Shuyi orcidid: 0000-0003-3795-3545 surname: Ji fullname: Ji, Shuyi email: jisy19@mails.tsinghua.edu.cn organization: BNRist, KLISS, School of Software, BLBCI, THUIBCS, Tsinghua University, Beijing, China – sequence: 4 givenname: Rongrong orcidid: 0000-0001-9163-2932 surname: Ji fullname: Ji, Rongrong email: rrji@xmu.edu.cn organization: Media Analytics and Computing Laboratory, Department of Artificial Intelligence, School of Informatics, Institute of Artificial Intelligence, Fujian Engineering Research Center of Trusted Artificial Intelligence Analysis and Application, Xiamen University, Xiamen, Fujian, China |
| BackLink | https://www.ncbi.nlm.nih.gov/pubmed/35696461 (View this record in MEDLINE/PubMed) |
| CODEN | ITPIDJ |
| ContentType | Journal Article |
| Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2023 |
| DBID | 97E RIA RIE AAYXX CITATION NPM 7SC 7SP 8FD JQ2 L7M L~C L~D 7X8 |
| DOI | 10.1109/TPAMI.2022.3182052 |
| DatabaseName | IEEE Xplore (IEEE); IEEE All-Society Periodicals Package (ASPP) 1998–Present; IEEE Electronic Library (IEL); CrossRef; PubMed; Computer and Information Systems Abstracts; Electronics & Communications Abstracts; Technology Research Database; ProQuest Computer Science Collection; Advanced Technologies Database with Aerospace; Computer and Information Systems Abstracts Academic; Computer and Information Systems Abstracts Professional; MEDLINE - Academic |
| DatabaseTitle | CrossRef; PubMed; Technology Research Database; Computer and Information Systems Abstracts – Academic; Electronics & Communications Abstracts; ProQuest Computer Science Collection; Computer and Information Systems Abstracts; Advanced Technologies Database with Aerospace; Computer and Information Systems Abstracts Professional; MEDLINE - Academic |
| DatabaseTitleList | PubMed; Technology Research Database; MEDLINE - Academic |
| Database_xml | – sequence: 1 dbid: NPM name: PubMed url: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed sourceTypes: Index Database – sequence: 2 dbid: RIE name: IEEE Electronic Library (IEL) url: https://ieeexplore.ieee.org/ sourceTypes: Publisher – sequence: 3 dbid: 7X8 name: MEDLINE - Academic url: https://search.proquest.com/medline sourceTypes: Aggregation Database |
| DeliveryMethod | fulltext_linktorsrc |
| Discipline | Engineering; Computer Science |
| EISSN | 2160-9292, 1939-3539 |
| EndPage | 3199 |
| ExternalDocumentID | 35696461 10_1109_TPAMI_2022_3182052 9795251 |
| Genre | orig-research; Journal Article |
| GrantInformation_xml | – fundername: Open Research Projects of Zhejiang Lab grantid: 2021KG0AB05 – fundername: National Science Fund for Distinguished Young Scholars grantid: 62025603 funderid: 10.13039/501100014219 – fundername: National Natural Science Foundation of China; National Natural Science Funds of China grantid: 62088102; 62021002 funderid: 10.13039/501100001809 |
| GroupedDBID | --- -DZ -~X .DC 0R~ 29I 4.4 53G 5GY 6IK 97E AAJGR AARMG AASAJ AAWTH ABAZT ABQJQ ABVLG ACGFO ACGFS ACIWK ACNCT AENEX AGQYO AHBIQ AKJIK AKQYR ALMA_UNASSIGNED_HOLDINGS ASUFR ATWAV BEFXN BFFAM BGNUA BKEBE BPEOZ CS3 DU5 E.L EBS EJD F5P HZ~ IEDLZ IFIPE IPLJI JAVBF LAI M43 MS~ O9- OCL P2P PQQKQ RIA RIE RNS RXW TAE TN5 UHB ~02 AAYXX CITATION 5VS 9M8 AAYOK ABFSI ADRHT AETIX AGSQL AI. AIBXA ALLEH FA8 H~9 IBMZZ ICLAB IFJZH NPM PKN RIC RIG RNI RZB VH1 XJT Z5M 7SC 7SP 8FD JQ2 L7M L~C L~D 7X8 |
| ID | FETCH-LOGICAL-c351t-34c5619e760d8941c95b47be5844d6a63f9be28b80c7f8b2f0c3ab4378d297a43 |
| IEDL.DBID | RIE |
| ISICitedReferencesCount | 257 |
| ISICitedReferencesURI | http://www.webofscience.com/api/gateway?GWVersion=2&SrcApp=Summon&SrcAuth=ProQuest&DestLinkType=CitingArticles&DestApp=WOS_CPL&KeyUT=000966221400001&url=https%3A%2F%2Fcvtisr.summon.serialssolutions.com%2F%23%21%2Fsearch%3Fho%3Df%26include.ft.matches%3Dt%26l%3Dnull%26q%3D |
| ISSN | 0162-8828, 1939-3539 |
| IngestDate | Wed Oct 01 12:34:41 EDT 2025; Mon Jun 30 07:09:06 EDT 2025; Wed Feb 19 02:24:39 EST 2025; Tue Nov 18 21:01:02 EST 2025; Sat Nov 29 02:58:20 EST 2025; Wed Aug 27 02:06:04 EDT 2025 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 3 |
| Language | English |
| License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
| LinkModel | DirectLink |
| MergedId | FETCHMERGED-LOGICAL-c351t-34c5619e760d8941c95b47be5844d6a63f9be28b80c7f8b2f0c3ab4378d297a43 |
| Notes | ObjectType-Article-1; SourceType-Scholarly Journals-1; ObjectType-Feature-2; content type line 14; content type line 23 |
| ORCID | 0000-0002-4971-590X 0000-0003-0878-2986 0000-0003-3795-3545 0000-0001-9163-2932 |
| PMID | 35696461 |
| PQID | 2773455333 |
| PQPubID | 85458 |
| PageCount | 19 |
| ParticipantIDs | pubmed_primary_35696461 proquest_miscellaneous_2676551962 crossref_primary_10_1109_TPAMI_2022_3182052 crossref_citationtrail_10_1109_TPAMI_2022_3182052 ieee_primary_9795251 proquest_journals_2773455333 |
| PublicationCentury | 2000 |
| PublicationDate | 2023-03-01 |
| PublicationDecade | 2020 |
| PublicationPlace | United States |
| PublicationPlace_xml | – name: United States – name: New York |
| PublicationTitle | IEEE Transactions on Pattern Analysis and Machine Intelligence |
| PublicationTitleAbbrev | TPAMI |
| PublicationTitleAlternate | IEEE Trans Pattern Anal Mach Intell |
| PublicationYear | 2023 |
| Publisher | IEEE; The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| SSID | ssj0014503 |
| Score | 2.741128 |
| SourceID | proquest pubmed crossref ieee |
| SourceType | Aggregation Database; Index Database; Enrichment Source; Publisher |
| StartPage | 3181 |
| SubjectTerms | classification; Convolution; Correlation; Data correlation; Data models; Evaluation; Graph neural networks; Graph theory; Hypergraph; hypergraph convolution; Mathematical models; Modelling; Neural networks; Representation learning; Representations; Smart structures; Social networking (online); Task analysis |
| Title | HGNN+: General Hypergraph Neural Networks |
| URI | https://ieeexplore.ieee.org/document/9795251 https://www.ncbi.nlm.nih.gov/pubmed/35696461 https://www.proquest.com/docview/2773455333 https://www.proquest.com/docview/2676551962 |
| Volume | 45 |
| WOSCitedRecordID | wos000966221400001&url=https%3A%2F%2Fcvtisr.summon.serialssolutions.com%2F%23%21%2Fsearch%3Fho%3Df%26include.ft.matches%3Dt%26l%3Dnull%26q%3D |
| hasFullText | 1 |
| inHoldings | 1 |
| isFullTextHit | |
| isPrint | |
| journalDatabaseRights | – providerCode: PRVIEE databaseName: IEEE Electronic Library (IEL) customDbUrl: eissn: 2160-9292 dateEnd: 99991231 omitProxy: false ssIdentifier: ssj0014503 issn: 0162-8828 databaseCode: RIE dateStart: 19790101 isFulltext: true titleUrlDefault: https://ieeexplore.ieee.org/ providerName: IEEE |
| link | http://cvtisr.summon.serialssolutions.com/2.0.0/link/0/eLvHCXMwlV1Lb9QwEB6VikM5tNDSslCqIPVSlVDH7-FWIcoiQdRDQXuLYseRKlW7aB_9_Yydh4oElbhFieNEnpn4-2LPNwCnngvlNXM5MzXm0jYhd0XNciODRIHSySRg-vObKUs7m-H1Frwfc2FCCGnzWfgQD9NafrPwm_ir7AINKh7zpZ8YY7pcrXHFQKpUBZkQDEU40YghQYbhxc315fevRAU5J4ZKM56KJWyE0qilLv6Yj1KBlX9jzTTnXO3939s-h90eW2aXnTO8gK0w34e9oW5D1ofxPjx7IEJ4AGfTL2V5_jHrBaizKTHTZdKxzqJyB50pu63iq5fw4-rzzadp3hdQyL1QxToX0hM8wmA0ayzKwqNy0rhAoEM2utaiRRe4dZZ501rHW-ZF7aQwtuFoaikOYXu-mIdXkFnliVgwx03rJRbKGdYi946-AZYYlZpAMQxj5Xt18Vjk4q5KLINhlaxQRStUvRUmcD7e86vT1ni09UEc47FlP7wTOB6sVfXht6q4MUIqQrJiAu_GyxQ4cTWknofFhtpoowkuoqaejzorj30PzvH67898Azux6ny3Fe0YttfLTXgLT_39-na1PCHvnNmT5J2_Aasx2Vk |
| linkProvider | IEEE |
| linkToHtml | http://cvtisr.summon.serialssolutions.com/2.0.0/link/0/eLvHCXMwlV1Lb9QwEB5VLRJwaKHlsaVAkLigEurY4xe3ClG2Yhv1sKDeothxJCS0i_bB72fsPFQkQOIWJRMn8nji74s98wG89lxIr5jLma5tjqYJuStqlmsMaIVFh6mA6deZLktzc2Ovd-DtmAsTQkibz8K7eJjW8pul38ZfZWdWW8ljvvSeRORFl601rhmgTDrIhGEoxolIDCkyzJ7Nr8-vLokMck4cleY8GUVshFRWoSp-m5GSxMrf0WaadS4O_u99H8B-jy6z8244PISdsDiEg0G5IesD-RDu3ypDeARvpp_K8vR91pegzqbETVepknUWa3fQmbLbLL5-BF8uPs4_TPNeQiH3QhabXKAngGSDVqwxFgtvpUPtAsEObFStRGtd4MYZ5nVrHG-ZF7VDoU3Dra5RPIbdxXIRnkJmpCdqwRzXrUdbSKdZa7l39BUwxKnkBIqhGyvf1xePMhffq8QzmK2SF6rohar3wgROx3t-dNU1_ml9FPt4tOy7dwIng7eqPgDXFddaoCQsKybwarxMoRPXQ-pFWG7JRmlFgNEqavlJ5-Wx7WFwHP_5mS_h7nR-Natml-XnZ3AvatB3G9NOYHez2obncMf_3Hxbr16kMfoLB13buA |
| openUrl | ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=HGNN%2B%3A+General+Hypergraph+Neural+Networks&rft.jtitle=IEEE+transactions+on+pattern+analysis+and+machine+intelligence&rft.au=Gao%2C+Yue&rft.au=Feng%2C+Yifan&rft.au=Ji%2C+Shuyi&rft.au=Ji%2C+Rongrong&rft.date=2023-03-01&rft.pub=The+Institute+of+Electrical+and+Electronics+Engineers%2C+Inc.+%28IEEE%29&rft.issn=0162-8828&rft.eissn=1939-3539&rft.volume=45&rft.issue=3&rft.spage=3181&rft_id=info:doi/10.1109%2FTPAMI.2022.3182052&rft.externalDBID=NO_FULL_TEXT |
| thumbnail_l | http://covers-cdn.summon.serialssolutions.com/index.aspx?isbn=/lc.gif&issn=0162-8828&client=summon |
| thumbnail_m | http://covers-cdn.summon.serialssolutions.com/index.aspx?isbn=/mc.gif&issn=0162-8828&client=summon |
| thumbnail_s | http://covers-cdn.summon.serialssolutions.com/index.aspx?isbn=/sc.gif&issn=0162-8828&client=summon |