Abstract
Recent research on dense captioning based on recurrent neural networks and convolutional neural networks has made great progress. However, mapping from an image feature space to a description space is a nonlinear, multimodal task, which makes it difficult for current methods to produce accurate results. In this paper, we put forward a novel approach to dense captioning based on hourglass-structured residual learning. Discriminative feature maps are obtained by incorporating densely connected networks and residual learning into our model. Finally, the performance of the approach is demonstrated on the Visual Genome V1.0 dataset and the region-labelled MS-COCO (Microsoft Common Objects in Context) dataset. The experimental results show that our approach outperforms most current methods.
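To make the architectural idea concrete, below is a minimal, illustrative PyTorch sketch of an hourglass-structured residual block with dense-style connections. It is not the authors' released code; the module names (`DenseResidualUnit`, `HourglassResidual`), channel counts, and fusion choices are assumptions made for illustration only.

```python
# Illustrative sketch only: an hourglass-shaped residual block whose inner
# units use dense (concatenative) connectivity. Not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseResidualUnit(nn.Module):
    """Residual unit whose inner path concatenates intermediate feature maps
    (dense-style connectivity) before projecting back to `channels`."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        # Dense connection: the 1x1 projection sees both intermediate outputs.
        self.project = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x):
        h1 = F.relu(self.conv1(x))
        h2 = F.relu(self.conv2(h1))
        dense = torch.cat([h1, h2], dim=1)  # dense concatenation
        return x + self.project(dense)      # residual shortcut


class HourglassResidual(nn.Module):
    """One hourglass stage: downsample, process at low resolution, upsample,
    then fuse with a full-resolution skip branch via a residual sum."""

    def __init__(self, channels: int):
        super().__init__()
        self.skip = DenseResidualUnit(channels)  # full-resolution branch
        self.down = DenseResidualUnit(channels)  # low-resolution branch
        self.up = DenseResidualUnit(channels)

    def forward(self, x):
        skip = self.skip(x)
        low = F.max_pool2d(x, kernel_size=2)     # contract (hourglass "neck")
        low = self.up(self.down(low))
        up = F.interpolate(low, size=x.shape[-2:], mode="nearest")  # expand
        return skip + up                         # hourglass residual fusion


if __name__ == "__main__":
    feats = torch.randn(1, 64, 32, 32)           # e.g. a backbone feature map
    out = HourglassResidual(64)(feats)
    print(out.shape)                             # torch.Size([1, 64, 32, 32])
```

In this reading, the residual shortcuts ease gradient flow through the contracting-expanding path, while the dense concatenations let each unit reuse earlier features, which is one plausible way to obtain the more discriminative feature maps the abstract describes.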
| Original language | English |
| --- | --- |
| Pages (from-to) | 181-196 |
| Number of pages | 16 |
| Journal | Journal of Artificial Intelligence Research |
| Volume | 64 |
| DOIs | |
| Publication status | Published - 1 Jan 2019 |
| Externally published | Yes |
ASJC Scopus subject areas
- Artificial Intelligence