Advancing Handwritten Digit Recognition in Defense Systems: Comparative Analysis of Autoencoder-Based Transfer Learning


Bibliographic details
Published in: 2025 IEEE Space, Aerospace and Defence Conference (SPACE), pp. 1 - 6
Main authors: Jain, Shruti; Kapur, Shivani; Vandana
Format: Conference paper
Language: English
Published: IEEE, 21 July 2025
Online access: Get full text
Description
Summary: Labelled data is scarce, while unlabelled data is available in abundance. Developing well-annotated datasets is a challenging task that requires considerable effort and computation. These practical challenges motivate models that can acquire knowledge in one domain and apply it in another, similar (but not identical) domain, which forms the core of the transfer learning paradigm. This paper's work is based on self-taught learning, which acquires knowledge from a source domain and applies it to a target domain. An optimal representation of the source data is learned, and the labelled data in the target domain is then transformed into this learned representation. The transformed representations are used for subsequent supervised tasks. A vast amount of military data exists in intelligence gathering, logs, navigation maps, and handwritten reports; digitizing this data is crucial for operational efficiency and decision-making. Autoencoders are used to learn the optimal representation in the source domain. Experiments are conducted on the MNIST dataset, and two separate MNIST-like datasets are created for testing. The results show that the self-taught learning approach outperforms a baseline model in which transfer learning is not used.
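
The pipeline the abstract describes (unsupervised representation learning on the source domain, followed by a supervised classifier trained on encoded target data) can be sketched roughly as below. This is a minimal illustration assuming PyTorch; the architecture, latent size, and hyperparameters are illustrative guesses rather than the paper's, and MNIST's test split merely stands in for the authors' MNIST-like target datasets.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

class Autoencoder(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        # Encoder compresses a 28x28 image into a compact latent code.
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(784, 256), nn.ReLU(),
            nn.Linear(256, latent_dim), nn.ReLU(),
        )
        # Decoder reconstructs the image from the latent code.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 784), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x)).view(-1, 1, 28, 28)

# Stage 1: learn representations on the source domain; labels are ignored,
# which is what makes this the self-taught (unsupervised) setting.
source = datasets.MNIST("data", train=True, download=True,
                        transform=transforms.ToTensor())
ae = Autoencoder()
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
for epoch in range(5):
    for x, _ in DataLoader(source, batch_size=128, shuffle=True):
        loss = nn.functional.mse_loss(ae(x), x)   # reconstruction loss
        opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: freeze the encoder, transform the labelled target data into the
# learned representation, and train an ordinary classifier on top of it.
# (MNIST's test split stands in here for the paper's MNIST-like targets.)
target = datasets.MNIST("data", train=False, download=True,
                        transform=transforms.ToTensor())
clf = nn.Linear(64, 10)
clf_opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
for x, y in DataLoader(target, batch_size=128, shuffle=True):
    with torch.no_grad():
        z = ae.encoder(x)            # fixed features from the source domain
    loss = nn.functional.cross_entropy(clf(z), y)
    clf_opt.zero_grad(); loss.backward(); clf_opt.step()

The baseline the abstract compares against would train the same classifier directly on raw pixels, skipping Stage 1 entirely.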
DOI: 10.1109/SPACE65882.2025.11170843