Advancing Handwritten Digit Recognition in Defense Systems: Comparative Analysis of Autoencoder-Based Transfer Learning

Bibliographic Details
Published in: 2025 IEEE Space, Aerospace and Defence Conference (SPACE), pp. 1-6
Main Authors: Jain, Shruti; Kapur, Shivani; Vandana
Format: Conference Paper
Language: English
Published: IEEE, 21 July 2025
Online Access: Full Text
Description
Abstract: Labeled data is scarce, while unlabeled data is abundant. Building well-annotated datasets is challenging and requires substantial effort and computation. These practical constraints motivate models that can acquire knowledge in one domain and apply it in another similar (but not identical) domain, which forms the core of the transfer learning paradigm. This paper's work is based on self-taught learning, which obtains knowledge from a source domain and applies it to a target domain. An optimal representation of the source data is learned first; the labeled data in the target domain is then transformed into the learned representation, and these transformed representations are used for downstream supervised tasks. A vast amount of military data exists in intelligence gathering, logs, navigation maps, and handwritten reports, and digitizing this data is crucial for operational efficiency and decision-making. Autoencoders are used to learn the optimal representation in the source domain. Experiments are conducted on the MNIST dataset, and two separate MNIST-like datasets are created for testing. The results show that the self-taught learning approach outperforms a baseline model that does not use transfer learning.
DOI:10.1109/SPACE65882.2025.11170843
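The abstract's pipeline (learn a representation on unlabeled source data with an autoencoder, encode the labeled target data, then run a supervised task on the encoded features) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the data is synthetic random stand-ins for MNIST-like digits, the autoencoder is linear, trained with plain gradient descent, and the downstream classifier is a nearest-centroid rule; all dimensions and hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins (hypothetical; the paper uses MNIST and MNIST-like sets):
# abundant unlabeled source data, a small labeled target set.
X_source = rng.random((500, 64))      # unlabeled source samples
X_target = rng.random((40, 64))       # labeled target samples
y_target = rng.integers(0, 2, 40)     # binary labels, for illustration only

# --- 1. Self-taught step: learn a representation on the source domain ---
# Linear autoencoder: encode to h dims, decode back, minimize squared error.
d, h, lr = 64, 16, 0.01
W_enc = rng.normal(0.0, 0.1, (d, h))
W_dec = rng.normal(0.0, 0.1, (h, d))
for _ in range(200):
    Z = X_source @ W_enc              # encode
    err = Z @ W_dec - X_source        # reconstruction error
    # Gradients of 0.5 * ||X W_enc W_dec - X||^2, averaged over samples
    W_dec -= lr * Z.T @ err / len(X_source)
    W_enc -= lr * X_source.T @ (err @ W_dec.T) / len(X_source)

# --- 2. Transfer: map labeled target data into the learned representation ---
Z_target = X_target @ W_enc

# --- 3. Supervised task on the transferred features (nearest centroid) ---
centroids = np.stack([Z_target[y_target == c].mean(axis=0) for c in (0, 1)])
dists = ((Z_target[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
preds = dists.argmin(axis=1)
acc = (preds == y_target).mean()
```

The key design point mirrored from the abstract is the separation of stages: the encoder never sees target labels, and the supervised model only ever sees the encoded (transformed) target features, never the raw pixels.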