Remote sensing image captioning via Variational Autoencoder and Reinforcement Learning

Bibliographic Details
Published in: Knowledge-Based Systems, Vol. 203, Article 105920
Main Authors: Shen, Xiangqing; Liu, Bing; Zhou, Yong; Zhao, Jiaqi; Liu, Mingming
Format: Journal Article
Language: English
Published: Amsterdam: Elsevier B.V., 05.09.2020
ISSN: 0950-7051, 1872-7409
Description
Summary: Image captioning, i.e., generating a natural-language description of a given image, is an essential task for machines to understand image content, and remote sensing image captioning is a subfield of it. Most current remote sensing image captioning models suffer from overfitting and fail to exploit the semantic information in images. To this end, we propose a Variational Autoencoder and Reinforcement Learning based Two-stage Multi-task Learning Model (VRTMM) for the remote sensing image captioning task. In the first stage, the CNN is finetuned jointly with a Variational Autoencoder. In the second stage, a Transformer generates the text description from both spatial and semantic features, and Reinforcement Learning is then applied to enhance the quality of the generated sentences. Our model surpasses the previous state-of-the-art results by a large margin on all seven scores on the Remote Sensing Image Caption Dataset (RSICD). The experimental results indicate that our model is effective for remote sensing image captioning and sets a new state of the art.
•Introducing a VAE to regularize the shared encoder and extract image features more effectively by reconstructing the input images.
•Improving image captioning performance significantly by exploiting low-level and high-level image features simultaneously.
•Enhancing the quality of the final text description by adding self-attention over spatial features.
•Our proposed model outperforms state-of-the-art models in remote sensing image captioning.
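The abstract describes VRTMM's two-stage design but this record carries no code, so here is a minimal PyTorch sketch of the idea: a shared CNN encoder regularized by a VAE reconstruction objective (stage 1), and a Transformer caption decoder trained with cross-entropy plus a REINFORCE-style reward step standing in for the paper's Reinforcement Learning phase (stage 2). Every module name (SharedEncoder, VAEHead, CaptionDecoder), all layer sizes, the toy data, and the loss weights are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of the two-stage VRTMM idea; all sizes, names, and losses
# are illustrative assumptions, not the authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedEncoder(nn.Module):
    """Stand-in for the finetuned CNN backbone; emits spatial feature tokens."""

    def __init__(self, feat_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, img):                  # img: (B, 3, H, W)
        f = self.conv(img)                   # (B, C, H/4, W/4)
        return f.flatten(2).transpose(1, 2)  # (B, N, C) spatial tokens


class VAEHead(nn.Module):
    """Stage 1: regularize the shared encoder by reconstructing the input."""

    def __init__(self, feat_dim=256, latent=128, out_pixels=3 * 64 * 64):
        super().__init__()
        self.mu = nn.Linear(feat_dim, latent)
        self.logvar = nn.Linear(feat_dim, latent)
        self.dec = nn.Linear(latent, out_pixels)

    def forward(self, tokens):
        pooled = tokens.mean(dim=1)          # global (semantic) code
        mu, logvar = self.mu(pooled), self.logvar(pooled)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        recon = torch.sigmoid(self.dec(z))
        kl = -0.5 * torch.mean(1 + logvar - mu ** 2 - logvar.exp())
        return recon, kl


class CaptionDecoder(nn.Module):
    """Stage 2: Transformer decoder attending to the spatial tokens."""

    def __init__(self, vocab=1000, d_model=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.dec = nn.TransformerDecoder(layer, num_layers=2)
        self.out = nn.Linear(d_model, vocab)

    def forward(self, tokens, caption_ids):
        tgt = self.embed(caption_ids)
        L = tgt.size(1)                       # causal mask for autoregressive decoding
        mask = torch.triu(torch.full((L, L), float("-inf")), diagonal=1)
        return self.out(self.dec(tgt, tokens, tgt_mask=mask))


enc, vae, cap = SharedEncoder(), VAEHead(), CaptionDecoder()
img = torch.rand(2, 3, 64, 64)               # toy batch standing in for remote sensing images
ids = torch.randint(0, 1000, (2, 12))        # toy caption token ids

tokens = enc(img)
recon, kl = vae(tokens)
stage1 = F.mse_loss(recon, img.flatten(1)) + 0.1 * kl   # VAE reconstruction + KL

logits = cap(tokens, ids[:, :-1])            # teacher-forced caption prediction
stage2 = F.cross_entropy(logits.reshape(-1, 1000), ids[:, 1:].reshape(-1))

# RL step (sketch): REINFORCE with a sentence-level reward such as CIDEr;
# the random reward below is a placeholder for reward(sampled) - reward(greedy).
logp = F.log_softmax(logits, dim=-1).gather(2, ids[:, 1:].unsqueeze(2)).squeeze(2)
rl = -(torch.rand(2) * logp.sum(dim=1)).mean()

(stage1 + stage2 + rl).backward()            # combined only to show the pieces connect
```

In the paper the two stages are trained separately and the reward is a caption metric rather than the random placeholder used here; the single combined backward pass is just to demonstrate that the sketch runs end to end.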
DOI: 10.1016/j.knosys.2020.105920