Model Stealing Attacks Against Inductive Graph Neural Networks
Saved in:

| Published in: | Proceedings - IEEE Symposium on Security and Privacy, pp. 1175 - 1192 |
|---|---|
| Main authors: | Shen, Yun; He, Xinlei; Han, Yufei; Zhang, Yang |
| Format: | Conference paper |
| Language: | English |
| Published: | IEEE, 01.05.2022 |
| Subjects: | Benchmark testing; Computational modeling; Data models; Graph neural networks; Machine learning; Privacy; Reconstruction algorithms |
| ISSN: | 2375-1207 |
| Online access: | Get full text |
| Abstract | Many real-world data come in the form of graphs. Graph neural networks (GNNs), a new family of machine learning (ML) models, have been proposed to fully leverage graph data to build powerful applications. In particular, the inductive GNNs, which can generalize to unseen data, become mainstream in this direction. Machine learning models have shown great potential in various tasks and have been deployed in many real-world scenarios. To train a good model, a large amount of data as well as computational resources are needed, leading to valuable intellectual property. Previous research has shown that ML models are prone to model stealing attacks, which aim to steal the functionality of the target models. However, most of them focus on the models trained with images and texts. On the other hand, little attention has been paid to models trained with graph data, i.e., GNNs. In this paper, we fill the gap by proposing the first model stealing attacks against inductive GNNs. We systematically define the threat model and propose six attacks based on the adversary's background knowledge and the responses of the target models. Our evaluation on six benchmark datasets shows that the proposed model stealing attacks against GNNs achieve promising performance. |
|---|---|
| Author | Shen, Yun (Norton Research Group); He, Xinlei (CISPA Helmholtz Center for Information Security); Han, Yufei (INRIA); Zhang, Yang (CISPA Helmholtz Center for Information Security) |
| CODEN | IEEPAD |
| ContentType | Conference Proceeding |
| DOI | 10.1109/SP46214.2022.9833607 |
| DatabaseName | IEEE Electronic Library (IEL) Conference Proceedings; IEEE Proceedings Order Plan (POP) 1998-present by volume; IEEE Xplore All Conference Proceedings; IEEE Electronic Library (IEL); IEEE Proceedings Order Plans (POP) 1998-present |
| Discipline | Computer Science |
| EISBN | 9781665413169; 1665413166 |
| EISSN | 2375-1207 |
| EndPage | 1192 |
| ExternalDocumentID | 9833607 |
| Genre | orig-research |
| GrantInformation | Helmholtz Association (funder ID: 10.13039/501100009318) |
| IsPeerReviewed | false |
| IsScholarly | true |
| Language | English |
| PageCount | 18 |
| PublicationDate | 2022-May |
| PublicationDateYYYYMMDD | 2022-05-01 |
| PublicationTitle | Proceedings - IEEE Symposium on Security and Privacy |
| PublicationTitleAbbrev | SP |
| PublicationYear | 2022 |
| Publisher | IEEE |
| SourceID | ieee |
| SourceType | Publisher |
| StartPage | 1175 |
| SubjectTerms | Benchmark testing; Computational modeling; Data models; Graph neural networks; Machine learning; Privacy; Reconstruction algorithms |
| Title | Model Stealing Attacks Against Inductive Graph Neural Networks |
| URI | https://ieeexplore.ieee.org/document/9833607 |
| linkProvider | IEEE |
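
The abstract above describes query-based model stealing against inductive GNNs only at a high level. As a rough illustration of the general idea (an adversary queries a black-box GNN for node posteriors and distills them into a surrogate), here is a minimal, self-contained PyTorch sketch. It is not one of the paper's six attacks; the GraphSAGE-style model, the random toy graph, and all names (`SimpleSAGE`, `query_target`, the hyperparameters) are hypothetical stand-ins.

```python
# Hedged sketch of a generic query-and-distill model stealing attack on an
# inductive GNN. Plain PyTorch, no PyTorch Geometric dependency; all data and
# names below are hypothetical stand-ins, not the paper's attacks.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleSAGE(nn.Module):
    """Minimal 2-layer GraphSAGE-style GNN with mean-neighbour aggregation."""

    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.lin1 = nn.Linear(2 * in_dim, hid_dim)
        self.lin2 = nn.Linear(2 * hid_dim, n_classes)

    @staticmethod
    def aggregate(x, adj):
        # adj is a row-normalised dense adjacency matrix: mean over neighbours.
        return adj @ x

    def forward(self, x, adj):
        h = F.relu(self.lin1(torch.cat([x, self.aggregate(x, adj)], dim=1)))
        return self.lin2(torch.cat([h, self.aggregate(h, adj)], dim=1))  # node logits


def row_normalise(a):
    deg = a.sum(dim=1, keepdim=True).clamp(min=1.0)
    return a / deg


# Toy stand-ins for the victim's private graph / the attacker's query graph.
torch.manual_seed(0)
n_nodes, in_dim, n_classes = 200, 16, 4
feats = torch.randn(n_nodes, in_dim)
adj = row_normalise((torch.rand(n_nodes, n_nodes) < 0.05).float())

target = SimpleSAGE(in_dim, 32, n_classes)     # pretend this is the trained victim model
surrogate = SimpleSAGE(in_dim, 32, n_classes)  # the attacker's copy, trained from queries


def query_target(x, a):
    """Black-box access: the attacker only observes posterior probabilities."""
    with torch.no_grad():
        return F.softmax(target(x, a), dim=1)


# Distillation: fit the surrogate to the target's responses on queried nodes.
posteriors = query_target(feats, adj)
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-2)
for epoch in range(100):
    opt.zero_grad()
    log_probs = F.log_softmax(surrogate(feats, adj), dim=1)
    loss = F.kl_div(log_probs, posteriors, reduction="batchmean")
    loss.backward()
    opt.step()

with torch.no_grad():
    agreement = (surrogate(feats, adj).argmax(1) == posteriors.argmax(1)).float().mean().item()
print(f"surrogate/target agreement: {agreement:.2%}")
```

In the paper's setting the target would be a trained production model and the adversary's query graph need not overlap with the victim's training graph; the final line approximates the fidelity metric (agreement between surrogate and target predictions) on the toy data only.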