Automatic generation of in-vehicle images: StyleGAN-ADA vs. MSG-GAN

Saved in:
Bibliographic Details
Title: Automatic generation of in-vehicle images: StyleGAN-ADA vs. MSG-GAN
Authors: Sahar Azadi, Sandra Dixe, João Leite, João Borges, Sandro Queirós, Jaime Fonseca
Source: Computers and Informatics, Volume: 5, Issue: 123-31
Publisher Information: Computers and Informatics, 2025.
Publication Year: 2025
Subject Terms: Image Processing, Distributed Systems and Algorithms, Autonomous Agents and Multiagent Systems, Deep learning, Evaluation metrics, Generative Adversarial Networks, Generative models
Description: Deep learning-based methodologies are a key component of autonomous driving. For successful application, these models require large amounts of training data, which are difficult, time-consuming, and expensive to collect. This study assesses the effectiveness of Generative Adversarial Networks (GANs) in generating high-quality training images for in-vehicle applications from a limited dataset. Two advanced GAN architectures were compared on their ability to produce realistic in-vehicle RGB images. The results showed that StyleGAN-ADA outperformed MSG-GAN, generating images with better fidelity and accuracy, making it more suitable for scenarios with limited data. However, challenges such as mode collapse and long training times, particularly for high-resolution images, were identified. The models' reliance on the quality and diversity of the training dataset also limits their effectiveness in real-world applications. This research highlights the potential of GANs to mitigate data scarcity in autonomous driving and points to future approaches for optimizing these models.
Document Type: Article
File Description: application/pdf
ISSN: 2757-8259
DOI: 10.62189/ci.1261718
Access URL: https://dergipark.org.tr/tr/pub/ci/issue/91787/1261718
Accession Number: edsair.doi.dedup.....53d4e714b320e9b5696eb8da2386374a
Database: OpenAIRE
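The abstract reports that StyleGAN-ADA produced images with better fidelity than MSG-GAN under an evaluation metric. The record does not name the metric used; a common choice for comparing GAN output fidelity is the Fréchet Inception Distance (FID), which measures the distance between Gaussian fits of real and generated image features. A minimal sketch, assuming feature vectors have already been extracted by some embedding network (in practice, a pretrained Inception model):

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real: np.ndarray, feats_fake: np.ndarray) -> float:
    """Fréchet distance between Gaussian fits of two feature sets.

    Each input is an (n_samples, n_features) array; lower is better.
    """
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_fake, rowvar=False)
    # Matrix square root of the covariance product; tiny imaginary
    # parts from numerical error are discarded.
    covmean = sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(s1 + s2 - 2.0 * covmean))

# Synthetic check: matching distributions score lower than shifted ones.
rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(500, 8))
fake_good = rng.normal(0.0, 1.0, size=(500, 8))  # same distribution
fake_bad = rng.normal(3.0, 1.0, size=(500, 8))   # mean-shifted distribution
print(fid(real, fake_good) < fid(real, fake_bad))
```

This is illustrative only; the paper's actual evaluation protocol is not specified in this record.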