High-Performance Light Field Reconstruction with Channel-wise and SAI-wise Attention

Zexi Hu, Yuk Ying Chung, Seid Miad Zandavi, Wanli Ouyang, Xiangjian He, Yuefang Gao

Research output: Chapter in Book/Conference proceeding › Conference contribution › peer-review


Abstract

Light field (LF) images provide rich information and are well suited to high-level computer vision applications. To model the correlated information in an LF, most previous methods stack many convolutional layers to improve the feature representation, which results in heavy computation and large model sizes. In this paper, we propose channel-wise and sub-aperture-image-wise (SAI-wise) attention modules that enhance the feature representation at low cost. The channel-wise attention module helps the network focus on important channels, while the SAI-wise attention module guides it to pay more attention to informative SAIs. The experimental results demonstrate that, with the aid of the attention modules, the baseline network achieves better performance.
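The abstract does not give implementation details, so the following is only a minimal PyTorch sketch of what channel-wise and SAI-wise attention gates could look like, assuming squeeze-and-excitation-style gating. The class names, reduction ratios, and the (B, S, C, H, W) tensor layout are illustrative assumptions, not the authors' design.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style gate over feature channels (assumed design)."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                        # squeeze: global spatial average -> (B, C, 1, 1)
            nn.Conv2d(channels, channels // reduction, 1),  # bottleneck
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),  # expand back to C channels
            nn.Sigmoid(),                                   # per-channel weight in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:    # x: (B, C, H, W)
        return x * self.gate(x)                             # reweight channels


class SAIAttention(nn.Module):
    """Gate over the sub-aperture images (angular views) of a light field (assumed design)."""

    def __init__(self, num_sais: int, reduction: int = 5):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(num_sais, num_sais // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(num_sais // reduction, num_sais),
            nn.Sigmoid(),                                   # per-SAI weight in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:    # x: (B, S, C, H, W), S = number of SAIs
        desc = x.mean(dim=(2, 3, 4))                        # squeeze each SAI to a scalar: (B, S)
        w = self.gate(desc)                                 # (B, S)
        return x * w.view(x.size(0), x.size(1), 1, 1, 1)    # reweight SAIs


if __name__ == "__main__":
    lf = torch.randn(2, 25, 32, 16, 16)                     # e.g. 5x5 SAIs, 32 feature channels each
    y = ChannelAttention(32)(lf.flatten(0, 1)).view_as(lf)  # channel gate applied per SAI
    out = SAIAttention(num_sais=25)(y)                      # then gate across SAIs
    print(out.shape)                                        # torch.Size([2, 25, 32, 16, 16])
```

Both gates are cheap multiplicative reweightings, which is consistent with the abstract's claim of improving feature representation at low cost; how the two modules are actually composed in the paper's network is not specified here.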

Original language: English
Title of host publication: Neural Information Processing - 26th International Conference, ICONIP 2019, Proceedings
Editors: Tom Gedeon, Kok Wai Wong, Minho Lee
Publisher: Springer
Pages: 118-126
Number of pages: 9
ISBN (Print): 9783030368012
DOIs
Publication status: Published - 2019
Externally published: Yes
Event: 26th International Conference on Neural Information Processing, ICONIP 2019 - Sydney, Australia
Duration: 12 Dec 2019 - 15 Dec 2019

Publication series

Name: Communications in Computer and Information Science
Volume: 1143 CCIS
ISSN (Print): 1865-0929
ISSN (Electronic): 1865-0937

Conference

Conference: 26th International Conference on Neural Information Processing, ICONIP 2019
Country/Territory: Australia
City: Sydney
Period: 12/12/19 - 15/12/19

Keywords

  • Deep learning
  • Image processing
  • Light field

ASJC Scopus subject areas

  • General Computer Science
  • General Mathematics

