Deep Hierarchical Encoder-Decoder Network for Image Captioning

Bibliographic Details
Published in: IEEE Transactions on Multimedia, Vol. 21, No. 11, pp. 2942-2956
Main Authors: Xiao, Xinyu, Wang, Lingfeng, Ding, Kun, Xiang, Shiming, Pan, Chunhong
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.11.2019
ISSN: 1520-9210, 1941-0077
Description
Summary: Encoder-decoder models have been widely used in image captioning, and most of them are built around a single long short-term memory (LSTM) network. The capacity of such a single-layer network, in which the encoder and decoder are integrated together, is limited for a task as complex as image captioning. Moreover, how to effectively increase the "vertical depth" of the encoder-decoder remains an open problem. To address these problems, a novel deep hierarchical encoder-decoder network is proposed for image captioning, in which a deep hierarchical structure separates the functions of the encoder and the decoder. This model can efficiently exploit the representational capacity of deep networks to fuse high-level visual and linguistic semantics when generating captions. Specifically, visual representations at the top levels of abstraction are considered simultaneously, and each of these levels is associated with one LSTM. The bottom-most LSTM serves as the encoder of the textual inputs, while the middle layer enhances the decoding ability of the top-most LSTM. Furthermore, by introducing a semantic enhancement module for the image feature and a distribution combination module for the text feature, architectural variants of our model are constructed to explore the impacts of, and mutual interactions among, the visual representation, the textual representations, and the output of the middle LSTM layer. In particular, the framework is trained with a reinforcement learning method that uses policy-gradient optimization to address the exposure-bias problem between training and testing. Qualitative analyses illustrate the process by which our model "translates" an image into a sentence, and further visualizations show how the hidden states of the different hierarchical LSTMs evolve over time. Extensive experiments demonstrate that our model outperforms current state-of-the-art models on three benchmark datasets: Flickr8K, Flickr30K, and MSCOCO. On both image captioning and retrieval tasks, our method achieves the best results. On the MSCOCO captioning leaderboard, our method also achieves superior performance.
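
To make the described hierarchy concrete, below is a minimal PyTorch-style sketch of the decoding loop suggested by the abstract: a bottom LSTM encodes the word sequence, a middle LSTM fuses that encoding with a projected global image feature, and a top LSTM produces the word distribution. All class and parameter names, the dimensions, and the exact wiring are illustrative assumptions drawn only from the abstract, not the paper's actual implementation (which additionally includes the semantic enhancement module, the distribution combination module, and the policy-gradient training stage).

import torch
import torch.nn as nn

class HierarchicalCaptionDecoder(nn.Module):
    """Three stacked LSTMCells: the bottom cell encodes the word sequence,
    the middle cell fuses that encoding with a global visual feature, and
    the top cell decodes the next word (a rough reading of the abstract)."""

    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512, vis_dim=2048):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.vis_proj = nn.Linear(vis_dim, hidden_dim)         # project CNN image feature
        self.bottom = nn.LSTMCell(embed_dim, hidden_dim)        # textual encoder
        self.middle = nn.LSTMCell(hidden_dim * 2, hidden_dim)   # vision-language fusion
        self.top = nn.LSTMCell(hidden_dim, hidden_dim)          # caption decoder
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, vis_feat, captions):
        # vis_feat: (B, vis_dim) global image feature; captions: (B, T) word ids
        B, T = captions.shape
        v = self.vis_proj(vis_feat)
        new_state = lambda: (v.new_zeros(B, v.size(1)), v.new_zeros(B, v.size(1)))
        hb, cb = new_state()
        hm, cm = new_state()
        ht, ct = new_state()
        emb = self.embed(captions)                              # (B, T, embed_dim)
        logits = []
        for t in range(T):
            hb, cb = self.bottom(emb[:, t], (hb, cb))                   # encode word t
            hm, cm = self.middle(torch.cat([hb, v], dim=1), (hm, cm))   # fuse with vision
            ht, ct = self.top(hm, (ht, ct))                             # decode
            logits.append(self.out(ht))
        return torch.stack(logits, dim=1)                       # (B, T, vocab_size)

# Toy usage: 4 images with 2048-d features, captions of length 12.
model = HierarchicalCaptionDecoder(vocab_size=10000)
scores = model(torch.randn(4, 2048), torch.randint(0, 10000, (4, 12)))
print(scores.shape)  # torch.Size([4, 12, 10000])

This sketch covers only a supervised forward pass; in a full setup along the lines the abstract describes, cross-entropy pre-training would typically be followed by policy-gradient fine-tuning on a sentence-level reward to reduce exposure bias.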
DOI: 10.1109/TMM.2019.2915033