Towards Visual Explainable Active Learning for Zero-Shot Classification

Published in: IEEE Transactions on Visualization and Computer Graphics, vol. 28, no. 1, pp. 791-801
Main authors: Jia, Shichao; Li, Zeyu; Chen, Nuo; Zhang, Jiawan
Medium: Journal Article
Language: English
Published: United States, IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.01.2022
Subjects: Active learning; Classification; Explainable artificial intelligence; Human-AI teaming; Labeling; Labels; Mixed-initiative visual analytics; Navigation; Semantics; Task analysis; Testing; Training; Visual analytics; Zero-shot learning
ISSN: 1077-2626; EISSN: 1941-0506
Online access: https://ieeexplore.ieee.org/document/9552842
Abstract: Zero-shot classification is a promising paradigm for the practical setting in which the training classes and the test classes are disjoint. Achieving it usually requires experts to externalize their domain knowledge by manually specifying a class-attribute matrix that defines which classes have which attributes. Designing a suitable class-attribute matrix is the key to the subsequent procedure, but the design process is tedious and trial-and-error, with no guidance. This paper proposes a visual explainable active learning approach, with a design and implementation called the semantic navigator, to solve these problems. The approach promotes human-AI teaming with four actions (ask, explain, recommend, respond) in each interaction loop. The machine asks contrastive questions to guide humans through the process of thinking up attributes. A novel visualization called the semantic map explains the current status of the machine, so analysts can better understand why the machine misclassifies objects. Moreover, the machine recommends labels of classes for each attribute to ease the labeling burden. Finally, humans steer the model by modifying the labels interactively, and the machine adjusts its recommendations. Compared with a method without guidance, the visual explainable active learning approach improves humans' efficiency in building zero-shot classification models interactively. We justify our results with user studies on standard zero-shot classification benchmarks.
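
The abstract's central data structure is the class-attribute matrix. Below is a minimal sketch of how such a matrix drives zero-shot inference, in the direct-attribute-prediction style; this is an assumption for illustration, not the paper's exact model, and all class names, attribute names, and scores are hypothetical.

```python
import numpy as np

# Toy class-attribute matrix M (classes x attributes), of the kind the
# abstract says experts specify by hand. "tiger" stands in for an unseen
# class: no tiger images are needed at training time, only its row in M.
CLASSES = ["zebra", "whale", "tiger"]
ATTRS = ["striped", "aquatic", "orange"]
M = np.array([[1, 0, 0],   # zebra: striped
              [0, 1, 0],   # whale: aquatic
              [1, 0, 1]])  # tiger: striped and orange

def predict_class(attr_scores: np.ndarray) -> str:
    """Pick the class whose attribute signature best explains the
    per-attribute scores (in [0, 1]) emitted by attribute classifiers
    trained on seen classes only."""
    eps = 1e-6
    p = np.clip(attr_scores, eps, 1 - eps)
    # Log-likelihood of each row of M, treating attributes as independent.
    loglik = (M * np.log(p) + (1 - M) * np.log(1 - p)).sum(axis=1)
    return CLASSES[int(np.argmax(loglik))]

# An image whose detectors fire for "striped" and "orange" maps to the
# unseen class even though no such image appeared in training:
print(predict_class(np.array([0.9, 0.1, 0.8])))  # -> tiger
```

A single wrong cell in M can flip such predictions, which is why the paper treats designing the matrix as the step worth interactive support.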
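The four-action loop (ask, explain, recommend, respond) can be read as a plain interaction protocol. The following is a hypothetical skeleton with names and data structures assumed for illustration, not taken from the authors' system.

```python
from dataclasses import dataclass, field

@dataclass
class SemanticNavigatorLoop:
    """Hypothetical skeleton of one interaction loop; only `respond`
    has a concrete body, the other actions are placeholders."""
    # attribute name -> {class name: 0 or 1}
    matrix: dict = field(default_factory=dict)

    def ask(self) -> tuple[str, str]:
        """Machine action 1: pose a contrastive question, e.g. a pair of
        classes the current matrix cannot distinguish, nudging the human
        to think of an attribute that separates them."""
        ...

    def explain(self) -> None:
        """Machine action 2: render the semantic map so the analyst can
        see which objects the current model confuses and why."""
        ...

    def recommend(self, attribute: str) -> dict[str, int]:
        """Machine action 3: propose has/lacks labels of every class for
        the new attribute, easing the human's labeling burden."""
        ...

    def respond(self, attribute: str, corrections: dict[str, int]) -> None:
        """Human action 4: corrected labels are folded back into the
        matrix, and the next loop's recommendations adjust to them."""
        self.matrix.setdefault(attribute, {}).update(corrections)
```

Each pass through ask, explain, recommend, respond grows the class-attribute matrix by one vetted attribute, which is the efficiency gain the user studies measure against unguided design.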
Authors:
– Jia, Shichao (jsc_se@tju.edu.cn), College of Intelligence and Computing, Tianjin University, China
– Li, Zeyu (lzytianda@tju.edu.cn), College of Intelligence and Computing, Tianjin University, China
– Chen, Nuo (nicole_0420@tju.edu.cn), College of Intelligence and Computing, Tianjin University, China
– Zhang, Jiawan (jwzhang@tju.edu.cn), College of Intelligence and Computing, Tianjin University, China
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34587036
CODEN: ITVGEA
Copyright: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2022
DOI: 10.1109/TVCG.2021.3114793
Genre: Original research; Research Support, Non-U.S. Gov't; Journal Article
Funding: National Key Research and Development Program of China, grant 2019YFC1521200
Peer reviewed: Yes
License: https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html; https://doi.org/10.15223/policy-029; https://doi.org/10.15223/policy-037