Multi-Head Graph Attention Adversarial Autoencoder Network for Unsupervised Change Detection Using Heterogeneous Remote Sensing Images


Bibliographic Details
Published in:Remote sensing (Basel, Switzerland) Vol. 17; no. 15; p. 2581
Main Authors: Jia, Meng, Lou, Xiangyu, Zhao, Zhiqiang, Lu, Xiaofeng, Shi, Zhenghao
Format: Journal Article
Language:English
Published:Basel: MDPI AG, 24.07.2025
Subjects:
ISSN:2072-4292
Online Access:Get full text
Summary:Heterogeneous remote sensing images, acquired from different sensors, exhibit significant variations in data structure, resolution, and radiometric characteristics. These inherent heterogeneities present substantial challenges for change detection, a task that involves identifying changes in a target area by analyzing multi-temporal images. To address this issue, we propose the multi-head graph attention adversarial autoencoder network (MHGAN), designed to achieve accurate detection of surface changes in heterogeneous remote sensing images. The MHGAN employs a bidirectional adversarial convolutional autoencoder network to reconstruct and perform style transformation of heterogeneous images. Unlike existing unidirectional translation frameworks (e.g., CycleGAN), our approach simultaneously aligns features in both domains through multi-head graph attention and dynamic kernel width estimation, effectively reducing false changes caused by sensor heterogeneity. The network training is constrained by four loss functions: reconstruction loss, code correlation loss, graph attention loss, and adversarial loss, which together guide the alignment of heterogeneous images into a unified data domain. The code correlation loss enforces consistency in feature representations at the encoding layer, while a density-based kernel width estimation method enhances the capture of both local and global changes. The graph attention loss models the relationships between features and images, improving the representation of consistent regions across bitemporal images. Additionally, the adversarial loss promotes style consistency within the shared domain. This bilateral structure mitigates the information loss associated with one-way mappings, enabling more accurate style transformation and reducing false change detections caused by sensor heterogeneity, which represents a key advantage over existing unidirectional methods. Compared with state-of-the-art methods for heterogeneous change detection, the MHGAN demonstrates superior performance in both qualitative and quantitative evaluations across four benchmark heterogeneous remote sensing datasets.
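The multi-head graph attention described in the abstract can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the paper's implementation: it follows the standard GAT-style formulation (per-head linear projection, LeakyReLU-scored neighbor attention, softmax over edges, concatenation of heads); the function and parameter names are hypothetical, and the paper's density-based kernel width estimation and loss terms are not reproduced here.

```python
import numpy as np

def multi_head_graph_attention(X, A, W_list, a_list):
    """GAT-style multi-head attention layer (illustrative sketch).

    X       : (N, d)  node features (e.g., per-pixel or per-patch embeddings)
    A       : (N, N)  adjacency matrix; nonzero entries mark graph edges
    W_list  : one (d, d') projection matrix per attention head
    a_list  : one (2*d',) attention vector per head, scoring [h_i || h_j]
    Returns : (N, num_heads * d') concatenated per-head outputs.
    """
    heads = []
    for W, a in zip(W_list, a_list):
        H = X @ W                                   # project features per head
        n = H.shape[0]
        e = np.empty((n, n))
        for i in range(n):                          # pairwise attention logits
            for j in range(n):
                e[i, j] = a @ np.concatenate([H[i], H[j]])
        e = np.where(e > 0, e, 0.2 * e)             # LeakyReLU (slope 0.2)
        e = np.where(A > 0, e, -1e9)                # mask non-edges before softmax
        alpha = np.exp(e - e.max(axis=1, keepdims=True))
        alpha /= alpha.sum(axis=1, keepdims=True)   # rows sum to 1 over neighbors
        heads.append(alpha @ H)                     # attention-weighted aggregation
    return np.concatenate(heads, axis=1)
```

In a change-detection setting along these lines, each node would represent an image region and attention weights would emphasize structurally consistent neighbors across the bitemporal pair; here the loop-based logit computation is kept deliberately simple for readability rather than speed.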
DOI:10.3390/rs17152581