Visual Analytics for Machine Learning: A Data Perspective Survey
| Published in: | IEEE Transactions on Visualization and Computer Graphics, vol. 30, no. 12, pp. 7637–7656 |
|---|---|
| Main Authors: | Wang, Junpeng; Liu, Shixia; Zhang, Wei |
| Format: | Journal Article |
| Language: | English |
| Published: | United States: IEEE, 2024-12-01 |
| Subjects: | Analytical models; Data models; Explainable AI; Machine learning; Surveys; Task analysis; Taxonomy; VIS4ML; Visual analytics; Visualization |
| ISSN: | 1077-2626 (print), 1941-0506 (online) |
| Abstract | The past decade has witnessed a plethora of works that leverage the power of visualization (VIS) to interpret machine learning (ML) models. The corresponding research topic, VIS4ML, keeps growing at a fast pace. To better organize the enormous works and shed light on the developing trend of VIS4ML, we provide a systematic review of these works through this survey. Since data quality greatly impacts the performance of ML models, our survey focuses specifically on summarizing VIS4ML works from the data perspective. First, we categorize the common data handled by ML models into five types, explain the unique features of each type, and highlight the corresponding ML models that are good at learning from them. Second, from the large number of VIS4ML works, we tease out six tasks that operate on these types of data (i.e., data-centric tasks) at different stages of the ML pipeline to understand, diagnose, and refine ML models. Lastly, by studying the distribution of 143 surveyed papers across the five data types, six data-centric tasks, and their intersections, we analyze the prospective research directions and envision future research trends. |
|---|---|
| Author | Wang, Junpeng; Liu, Shixia; Zhang, Wei |
| Author_xml | 1. Junpeng Wang (ORCID 0000-0002-1130-9914, junpeng.wang.nk@gmail.com), Visa Research, Foster City, CA, USA; 2. Shixia Liu (ORCID 0000-0003-4499-6420, shixia@tsinghua.edu.cn), Tsinghua University, Beijing, China; 3. Wei Zhang (ORCID 0009-0001-7984-7241, wzhan@visa.com), Visa Research, Foster City, CA, USA |
| CODEN | ITVGEA |
| ContentType | Journal Article |
| DOI | 10.1109/TVCG.2024.3357065 |
| Discipline | Engineering |
| EISSN | 1941-0506 |
| EndPage | 7656 |
| ExternalDocumentID | 38261496 10_1109_TVCG_2024_3357065 10412199 |
| Genre | orig-research Journal Article |
| ISICitedReferencesCount | 14 |
| ISSN | 1077-2626 1941-0506 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 12 |
| Language | English |
| License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
| ORCID | 0009-0001-7984-7241 0000-0002-1130-9914 0000-0003-4499-6420 |
| PMID | 38261496 |
| PageCount | 20 |
| PublicationDate | 2024-12-01 |
| PublicationPlace | United States |
| PublicationTitle | IEEE transactions on visualization and computer graphics |
| PublicationTitleAbbrev | TVCG |
| PublicationTitleAlternate | IEEE Trans Vis Comput Graph |
| PublicationYear | 2024 |
| Publisher | IEEE |
| StartPage | 7637 |
| SubjectTerms | Analytical models Data models Explainable AI machine learning Surveys Task analysis Taxonomy VIS4ML Visual analytics visualization |
| Title | Visual Analytics for Machine Learning: A Data Perspective Survey |
| URI | https://ieeexplore.ieee.org/document/10412199 https://www.ncbi.nlm.nih.gov/pubmed/38261496 https://www.proquest.com/docview/2918198876 |
| Volume | 30 |