Explaining anomalies through semi-supervised Autoencoders
| Published in: | Array (New York) Vol. 28; p. 100537 |
|---|---|
| Main Authors: | Angiulli, Fabrizio; Fassetti, Fabio; Ferragina, Luca; Nisticò, Simona |
| Format: | Journal Article |
| Language: | English |
| Published: | Elsevier Inc, 01.12.2025 |
| Subjects: | Anomaly detection; Explainability by design; Explainable Artificial Intelligence; Green-aware AI |
| ISSN: | 2590-0056 |
| Online Access: | https://doi.org/10.1016/j.array.2025.100537 |
| Abstract | This work tackles the problem of designing explainable-by-design anomaly detectors, which provide intelligible explanations of abnormal behaviors in input data observations. In particular, we adopt heatmaps as explanations, where a heatmap can be regarded as a collection of per-feature scores. To explain anomalies, our approach, called AE–XAD (AutoEncoder-based eXplainable Anomaly Detection; the code is available at https://github.com/AIDALab-DIMES/AE-XAD), extends a recently introduced semi-supervised variant of the Autoencoder architecture. The main idea of our proposal is to exploit a reconstruction-error strategy for detecting deviating features. Unlike standard Autoencoders, it leverages a semi-supervised loss designed to maximize the distance between the reconstruction and the original value assumed by anomalous features. By means of this strategy, our approach learns to isolate anomalous portions of the input observations using only a few anomalous examples during training. Experimental results highlight that AE–XAD delivers high-level performance in explaining anomalies in different scenarios while maintaining a minimal CO2 footprint, showcasing a design that is not only highly effective but also environmentally conscious. |
|---|---|
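The abstract above describes the core mechanism of AE–XAD: a semi-supervised reconstruction loss that keeps reconstructions of normal data close to the input while pushing the reconstruction of anomalous features away from their original values, so that the per-feature reconstruction error can be read as a heatmap. The following is a minimal PyTorch sketch of that general idea; the toy architecture, the margin-based anomaly term, and all names (SimpleAutoencoder, semi_supervised_reconstruction_loss) are illustrative assumptions, not the authors' actual AE–XAD implementation, which is available in the linked repository.

```python
import torch
import torch.nn as nn


class SimpleAutoencoder(nn.Module):
    """Small fully-connected autoencoder; the real AE-XAD architecture may differ."""

    def __init__(self, n_features: int, latent_dim: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))


def semi_supervised_reconstruction_loss(x, x_hat, y, margin=1.0):
    """Minimize reconstruction error for normal samples (y=0) and push it above a
    margin for the few labelled anomalies (y=1). The per-feature squared error
    doubles as the explanation heatmap."""
    per_feature_err = (x - x_hat) ** 2
    per_sample_err = per_feature_err.mean(dim=1)
    normal_term = (1.0 - y) * per_sample_err                          # pull normal reconstructions close
    anomaly_term = y * torch.clamp(margin - per_sample_err, min=0.0)  # push anomalies past the margin
    return (normal_term + anomaly_term).mean(), per_feature_err


# Toy usage: 90 normal rows and 10 labelled anomalies over 16 features.
model = SimpleAutoencoder(n_features=16)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(100, 16)
y = torch.zeros(100)
y[-10:] = 1.0
for _ in range(50):
    optimizer.zero_grad()
    loss, heatmap = semi_supervised_reconstruction_loss(x, model(x), y)
    loss.backward()
    optimizer.step()
```

At explanation time, the per-feature error map of a test observation would be visualized directly as the heatmap, in line with the per-feature scores the abstract describes.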
| ArticleNumber | 100537 |
| Author | Angiulli, Fabrizio (ORCID 0000-0002-9860-7569, fabrizio.angiulli@unical.it); Fassetti, Fabio (ORCID 0000-0002-8416-906X, fabio.fassetti@unical.it); Ferragina, Luca (ORCID 0000-0003-3184-4639, luca.ferragina@unical.it); Nisticò, Simona (ORCID 0000-0002-7386-2512, simona.nistico@unical.it) |
| Copyright | 2025 The Authors |
| DOI | 10.1016/j.array.2025.100537 |
| Discipline | Computer Science |
| EISSN | 2590-0056 |
| ISSN | 2590-0056 |
| Keywords | Explainable Artificial Intelligence; Green-aware AI; Explainability by design; Anomaly detection |
| Language | English |
| License | This is an open access article under the CC BY license. |
| OpenAccessLink | http://dx.doi.org/10.1016/j.array.2025.100537 |
| StartPage | 100537 |
| SubjectTerms | Anomaly detection; Explainability by design; Explainable Artificial Intelligence; Green-aware AI |
| Title | Explaining anomalies through semi-supervised Autoencoders |
| URI | https://dx.doi.org/10.1016/j.array.2025.100537 |
| Volume | 28 |