Trusted Multi-View Classification With Dynamic Evidential Fusion

Bibliographic Details
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 45, No. 2, pp. 2551-2566
Main Authors: Han, Zongbo; Zhang, Changqing; Fu, Huazhu; Zhou, Joey Tianyi
Format: Journal Article
Language: English
Published: United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.02.2023
Subjects: Algorithms; Bayes methods; Classification; Computational modeling; Dirichlet problem; Estimation; Evidential deep learning; Heuristic algorithms; Learning; Model accuracy; multi-view learning; Reliability; Robustness; Trustworthiness; Uncertainty; variational Dirichlet
ISSN: 0162-8828, 1939-3539, 2160-9292
Abstract Existing multi-view classification algorithms focus on promoting accuracy by exploiting different views, typically integrating them into common representations for follow-up tasks. Although effective, it is also crucial to ensure the reliability of both the multi-view integration and the final decision, especially for noisy, corrupted and out-of-distribution data. Dynamically assessing the trustworthiness of each view for different samples could provide reliable integration. This can be achieved through uncertainty estimation. With this in mind, we propose a novel multi-view classification algorithm, termed trusted multi-view classification (TMC), providing a new paradigm for multi-view learning by dynamically integrating different views at an evidence level. The proposed TMC can promote classification reliability by considering evidence from each view. Specifically, we introduce the variational Dirichlet to characterize the distribution of the class probabilities, parameterized with evidence from different views and integrated with the Dempster-Shafer theory. The unified learning framework induces accurate uncertainty and accordingly endows the model with both reliability and robustness against possible noise or corruption. Both theoretical and experimental results validate the effectiveness of the proposed model in accuracy, robustness and trustworthiness.
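To make the evidence-level fusion concrete, the following is a minimal illustrative Python sketch of the subjective-logic machinery the abstract describes: each view's non-negative evidence vector parameterizes a Dirichlet distribution (alpha = e + 1), which induces per-class belief masses plus an overall uncertainty mass, and per-view opinions are merged with a reduced Dempster-Shafer combination rule. The function names and example numbers are illustrative assumptions, not taken from the authors' released code.

import numpy as np

def opinion_from_evidence(e):
    # Map one view's non-negative evidence vector over K classes to a
    # subjective-logic opinion. Dirichlet parameters are alpha = e + 1,
    # Dirichlet strength S = sum(alpha); the belief masses b and the
    # uncertainty mass u satisfy b.sum() + u == 1.
    e = np.asarray(e, dtype=float)
    K = e.size
    S = e.sum() + K
    return e / S, K / S          # (belief masses, uncertainty mass)

def combine_two(b1, u1, b2, u2):
    # Reduced Dempster's rule: measure the conflict C (belief the two
    # views assign to *different* classes) and renormalize it away.
    C = b1.sum() * b2.sum() - (b1 * b2).sum()   # sum_{i != j} b1_i * b2_j
    b = (b1 * b2 + b1 * u2 + b2 * u1) / (1.0 - C)
    u = (u1 * u2) / (1.0 - C)
    return b, u

# Two views of the same sample: one confident, one nearly evidence-free.
views = [np.array([9.0, 1.0, 0.5]), np.array([0.2, 0.3, 0.1])]
b, u = opinion_from_evidence(views[0])
for e in views[1:]:
    b, u = combine_two(b, u, *opinion_from_evidence(e))

S = b.size / u            # fused Dirichlet strength (from u = K / S)
alpha = b * S + 1.0       # fused Dirichlet parameters (evidence = b * S)
print("fused beliefs:", b.round(3), "uncertainty:", round(float(u), 3))

In this toy run the weak second view contributes mostly uncertainty mass, so the fused beliefs stay close to those of the confident view; this is the dynamic, per-sample weighting of views the abstract refers to.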
Author Han, Zongbo
Zhang, Changqing
Fu, Huazhu
Zhou, Joey Tianyi
Author_xml – sequence: 1
  givenname: Zongbo
  surname: Han
  fullname: Han, Zongbo
  email: zongbo@tju.edu.cn
  organization: College of Intelligence and Computing, Tianjin University, Tianjin, China
– sequence: 2
  givenname: Changqing
  orcidid: 0000-0003-1410-6650
  surname: Zhang
  fullname: Zhang, Changqing
  email: zhangchangqing@tju.edu.cn
  organization: College of Intelligence and Computing, Tianjin University, Tianjin, China
– sequence: 3
  givenname: Huazhu
  orcidid: 0000-0002-9702-5524
  surname: Fu
  fullname: Fu, Huazhu
  email: hzfu@ieee.org
  organization: Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), Singapore
– sequence: 4
  givenname: Joey Tianyi
  orcidid: 0000-0002-4675-7055
  surname: Zhou
  fullname: Zhou, Joey Tianyi
  email: zhouty@ihpc.a-star.edu.sg
  organization: A*STAR Centre for Frontier AI Research (CFAR), Agency for Science, Technology and Research (A*STAR), Singapore
BackLink https://www.ncbi.nlm.nih.gov/pubmed/35503823 (View this record in MEDLINE/PubMed)
CODEN ITPIDJ
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2023
DOI 10.1109/TPAMI.2022.3171983
DatabaseName IEEE Xplore (IEEE)
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Electronic Library (IEL)
CrossRef
PubMed
Computer and Information Systems Abstracts
Electronics & Communications Abstracts
Technology Research Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
MEDLINE - Academic
Discipline Engineering
Computer Science
EISSN 2160-9292
1939-3539
EndPage 2566
ExternalDocumentID 35503823
10_1109_TPAMI_2022_3171983
9767662
Genre orig-research
Journal Article
GrantInformation_xml – fundername: AME Programmatic Funding Scheme
  grantid: A18A1b0045
– fundername: National Key Research and Development Program of China
  grantid: 2019YFB2101900
– fundername: A*STAR AI3 HTPO Seed Fund
  grantid: C211118012
– fundername: Joey Tianyi Zhou's A*STAR SERC Central Research Fund
– fundername: National Natural Science Foundation of China
  grantid: 61976151; 61925602; 61732011
  funderid: 10.13039/501100001809
ISICitedReferencesCount 273
ISSN 0162-8828
1939-3539
IsPeerReviewed true
IsScholarly true
Issue 2
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
ORCID 0000-0002-9702-5524
0000-0003-1410-6650
0000-0002-4675-7055
PMID 35503823
PQID 2761374260
PQPubID 85458
PageCount 16
PublicationDate 2023-02-01
PublicationPlace United States
PublicationTitle IEEE transactions on pattern analysis and machine intelligence
PublicationTitleAbbrev TPAMI
PublicationTitleAlternate IEEE Trans Pattern Anal Mach Intell
PublicationYear 2023
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
StartPage 2551
SubjectTerms Algorithms
Bayes methods
Classification
Computational modeling
Dirichlet problem
Estimation
Evidential deep learning
Heuristic algorithms
Learning
Model accuracy
multi-view learning
Reliability
Robustness
Trustworthiness
Uncertainty
variational Dirichlet
Title Trusted Multi-View Classification With Dynamic Evidential Fusion
URI https://ieeexplore.ieee.org/document/9767662
https://www.ncbi.nlm.nih.gov/pubmed/35503823
https://www.proquest.com/docview/2761374260
https://www.proquest.com/docview/2659230878
Volume 45