EDADet: Encoder-Decoder Domain Augmented Alignment Detector for Tiny Objects in Remote Sensing Images
| Published in: | IEEE Transactions on Geoscience and Remote Sensing, Vol. 63, pp. 1-15 |
|---|---|
| Main Authors: | , , , , |
| Format: | Journal Article |
| Language: | English |
| Published: | New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2025 |
| Subjects: | |
| ISSN: | 0196-2892, 1558-0644 |
| Online Access: | Full text |
| Abstract: | In recent years, deep learning has shown great potential in object detection applications, but it is still difficult to accurately detect tiny objects with an area proportion of less than 1% in remote sensing images. Most existing studies focus on designing complex networks to learn discriminative features of tiny objects, usually resulting in a heavy computational burden. In contrast, this article proposes an accurate and efficient single-stage detector called EDADet for tiny objects. First, domain conversion technology is used to realize cross-domain multimodal data fusion based on single-modal data input. Then, a tiny object-aware backbone is designed to extract features at different scales. Next, an encoder-decoder feature fusion (EDFF) structure is devised to achieve efficient cross-scale propagation of semantic information. Finally, a center-assist loss and an alignment self-supervised loss are adopted to alleviate the position sensitivity issue and drift of tiny objects. A series of experiments on the AI-TODv2 dataset demonstrate the effectiveness and practicality of our EDADet. It achieves state-of-the-art (SOTA) performance and surpasses the second-best method by 9.65% in AP50 and 4.86% in mAP. |
|---|---|
| DOI: | 10.1109/TGRS.2024.3510948 |
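
To make the encoder-decoder feature fusion (EDFF) idea mentioned in the abstract more concrete, the following is a minimal sketch of a generic encoder-decoder style cross-scale fusion module. It is not the authors' implementation: the class name, channel counts, and layer choices are assumptions for illustration only, based on the abstract's description of propagating semantic information across backbone scales.

```python
# Hypothetical sketch of encoder-decoder cross-scale feature fusion,
# loosely following the EDFF description in the abstract. All names,
# channel widths, and layers are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EncoderDecoderFusion(nn.Module):
    """Fuses multi-scale backbone features: encode all scales into a shared
    coarse context, then decode that context back to every scale."""

    def __init__(self, in_channels=(256, 512, 1024), out_channels=256):
        super().__init__()
        # 1x1 convs project every scale to a common channel width.
        self.lateral = nn.ModuleList(
            [nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels]
        )
        # Encoder: mix all scales at the coarsest resolution.
        self.encoder = nn.Conv2d(
            out_channels * len(in_channels), out_channels, kernel_size=3, padding=1
        )
        # Decoder: refine each scale after it receives the global context.
        self.decoder = nn.ModuleList(
            [nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
             for _ in in_channels]
        )

    def forward(self, feats):
        # feats: list of backbone maps from fine to coarse, e.g. strides 8/16/32.
        laterals = [lat(f) for lat, f in zip(self.lateral, feats)]
        coarse_size = laterals[-1].shape[-2:]
        # Encode: pool every scale to the coarsest map and fuse them.
        pooled = [F.adaptive_avg_pool2d(x, coarse_size) for x in laterals]
        context = self.encoder(torch.cat(pooled, dim=1))
        # Decode: broadcast the fused context back to each scale and refine.
        outs = []
        for lat, dec in zip(laterals, self.decoder):
            ctx = F.interpolate(context, size=lat.shape[-2:], mode="nearest")
            outs.append(dec(lat + ctx))
        return outs


if __name__ == "__main__":
    # Quick shape check with dummy multi-scale features.
    p3 = torch.randn(1, 256, 64, 64)
    p4 = torch.randn(1, 512, 32, 32)
    p5 = torch.randn(1, 1024, 16, 16)
    fused = EncoderDecoderFusion()([p3, p4, p5])
    print([f.shape for f in fused])
```

In this sketch every output scale keeps its own resolution while receiving a shared semantic context from all levels, which is one plausible reading of "efficient cross-scale propagation of semantic information"; the paper itself (DOI above) should be consulted for the actual EDFF design and the center-assist and alignment self-supervised losses.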