A Survey on Neural Network Interpretability
| Published in: | IEEE Transactions on Emerging Topics in Computational Intelligence, Volume 5, Issue 5, pp. 726-742 |
|---|---|
| Main authors: | Zhang, Yu; Tino, Peter; Leonardis, Ales; Tang, Ke |
| Format: | Journal Article |
| Language: | English |
| Publication details: | Piscataway: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.10.2021 |
| Subjects: | Artificial neural networks; Decision trees; Deep learning; Interpretability; Machine learning; Neural networks; Reliability; Survey; Task analysis; Taxonomy; Training |
| ISSN: | 2471-285X |
| Online access: | Get full text |
| Abstract | Along with the great success of deep neural networks, there is also growing concern about their black-box nature. The interpretability issue affects people's trust in deep learning systems. It is also related to many ethical problems, e.g., algorithmic discrimination. Moreover, interpretability is a desired property for deep networks to become powerful tools in other research fields, e.g., drug discovery and genomics. In this survey, we conduct a comprehensive review of neural network interpretability research. We first clarify the definition of interpretability, as it has been used in many different contexts. Then we elaborate on the importance of interpretability and propose a novel taxonomy organized along three dimensions: the type of engagement (passive vs. active interpretation approaches), the type of explanation, and the focus (from local to global interpretability). This taxonomy provides a meaningful 3D view of the distribution of papers in the relevant literature, as two of the dimensions are not simply categorical but allow ordinal subcategories. Finally, we summarize the existing interpretability evaluation methods and suggest possible research directions inspired by our new taxonomy. |
|---|---|
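The taxonomy described in the abstract can be pictured as a small data structure: each surveyed method occupies a point along three axes. Below is a minimal sketch, assuming illustrative value names for the "type of explanation" axis (the abstract names only the dimension, not its values) and a simple [0, 1] encoding of the local-to-global focus; these names are not taken from the paper.

```python
# Illustrative sketch of the survey's 3D taxonomy (not from the paper itself).
from dataclasses import dataclass
from enum import Enum


class Engagement(Enum):
    # The abstract names this axis: passive vs. active interpretation approaches.
    PASSIVE = "post-hoc interpretation of a trained network"
    ACTIVE = "interpretability encouraged during training"


class ExplanationType(Enum):
    # Hypothetical labels for the "type of explanation" axis, added for illustration only.
    EXAMPLES = "explanation by examples"
    ATTRIBUTION = "feature attribution / saliency"
    HIDDEN_SEMANTICS = "semantics of hidden units"
    RULES = "extracted logic rules"


@dataclass(frozen=True)
class TaxonomyPosition:
    """Position of an interpretability method in the three-dimensional taxonomy."""
    engagement: Engagement
    explanation: ExplanationType
    # The focus axis is ordinal, from local (single input) to global (whole model);
    # encoded here as a float in [0, 1] purely for illustration.
    locality: float

    def __post_init__(self) -> None:
        if not 0.0 <= self.locality <= 1.0:
            raise ValueError("locality must lie in [0, 1] (0 = local, 1 = global)")


# Example: a passive saliency-map method that explains individual predictions (local).
saliency_paper = TaxonomyPosition(Engagement.PASSIVE, ExplanationType.ATTRIBUTION, 0.0)
print(saliency_paper)
```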
| Author | Tang, Ke; Tino, Peter; Leonardis, Ales; Zhang, Yu |
| Author_xml | – Yu Zhang (ORCID 0000-0001-7442-375X; zhangy3@mail.sustech.edu.cn), Guangdong Key Laboratory of Brain-Inspired Intelligent Computation, Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China – Peter Tino (ORCID 0000-0003-2330-128X; P.Tino@cs.bham.ac.uk), School of Computer Science, University of Birmingham, Edgbaston, Birmingham, U.K. – Ales Leonardis (ORCID 0000-0003-0773-3277; a.leonardis@cs.bham.ac.uk), School of Computer Science, University of Birmingham, Edgbaston, Birmingham, U.K. – Ke Tang (ORCID 0000-0002-6236-2002; tangk3@sustech.edu.cn), Guangdong Key Laboratory of Brain-Inspired Intelligent Computation, Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, P.R. China |
| CODEN | ITETCU |
| ContentType | Journal Article |
| Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2021 |
| DOI | 10.1109/TETCI.2021.3100641 |
| EISSN | 2471-285X |
| EndPage | 742 |
| ExternalDocumentID | 9521221 |
| Genre | orig-research |
| GrantInformation_xml | – Support Plan Program of Shenzhen Natural Science Fund (grant 20200925154942002) – European Commission Horizon 2020 Innovative Training Network SUNDIAL – Alan Turing Institute, ATI (grant 1056900) – National Leading Youth Talent Support Program of China – Machine Learning in the Space of Inferential Models – Survey Network for Deep Imaging Analysis and Learning (grant 721463) – Program for Guangdong Introducing Innovative and Entrepreneurial Teams (grant 2017ZT07X386) – Science and Technology Commission of Shanghai Municipality (grant 19511120602; funder ID 10.13039/501100003399) – MOE University Scientific-Technological Innovation Plan Program – Guangdong Provincial Key Laboratory (grant 2020B121201001) |
| ISICitedReferencesCount | 579 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 5 |
| Language | English |
| ORCID | 0000-0002-6236-2002 0000-0003-2330-128X 0000-0001-7442-375X 0000-0003-0773-3277 |
| PageCount | 17 |
| PublicationDate | 2021-10-01 |
| PublicationPlace | Piscataway |
| PublicationTitle | IEEE transactions on emerging topics in computational intelligence |
| PublicationTitleAbbrev | TETCI |
| PublicationYear | 2021 |
| Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| SecondaryResourceType | review_article |
| StartPage | 726 |
| SubjectTerms | Artificial neural networks; Decision trees; Deep learning; Interpretability; Machine learning; Neural networks; Reliability; Survey; Task analysis; Taxonomy; Training |
| Title | A Survey on Neural Network Interpretability |
| URI | https://ieeexplore.ieee.org/document/9521221 https://www.proquest.com/docview/2575128545 |
| Volume | 5 |