Triple attention network for video segmentation

Yan Tian, Yujie Zhang, Di Zhou, Guohua Cheng, Wei Gang Chen, Ruili Wang

Research output: Journal Publication › Article › peer-review

33 Citations (Scopus)

Abstract

Video segmentation automatically segments a target object throughout a video and has recently made good progress due to the development of deep convolutional neural networks (DCNNs). However, how to simultaneously capture long-range dependencies in multiple spaces remains an important issue in video segmentation. In this paper, we propose a novel triple attention network (TriANet) that simultaneously exploits temporal, spatial, and channel context by using the self-attention mechanism to enhance the discriminative ability of feature representations. We verify our method on the Shining3D dental, DAVIS16, and DAVIS17 datasets, and the results show that our method is competitive with other state-of-the-art video segmentation methods.
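The abstract describes self-attention applied along three axes of a video feature tensor: temporal, spatial, and channel. The sketch below illustrates this idea in PyTorch for features of shape (B, T, C, H, W); the module name `TripleAttention`, the shared dot-product attention helper, and the residual summation fusion are illustrative assumptions, not the paper's exact TriANet design.

```python
# A minimal sketch of temporal, spatial, and channel self-attention over
# video features (B, T, C, H, W). Fusion by residual summation is an
# assumption; the paper's actual architecture may differ.
import torch
import torch.nn as nn


def self_attention(x):
    # x: (B, N, C) -- scaled dot-product self-attention over the N axis.
    attn = torch.softmax(x @ x.transpose(1, 2) / x.shape[-1] ** 0.5, dim=-1)
    return attn @ x  # (B, N, C)


class TripleAttention(nn.Module):
    """Applies self-attention along the spatial, temporal, and channel
    axes and fuses the three enhanced feature maps by summation."""

    def forward(self, x):
        b, t, c, h, w = x.shape
        # Spatial: attend over the H*W positions within each frame.
        xs = x.reshape(b * t, c, h * w).transpose(1, 2)          # (B*T, HW, C)
        spatial = self_attention(xs).transpose(1, 2).reshape(b, t, c, h, w)
        # Temporal: attend over the T frames at each spatial position.
        xt = x.permute(0, 3, 4, 1, 2).reshape(b * h * w, t, c)   # (B*H*W, T, C)
        temporal = (self_attention(xt)
                    .reshape(b, h, w, t, c).permute(0, 3, 4, 1, 2))
        # Channel: attend over the C channels of each frame.
        xc = x.reshape(b * t, c, h * w)                          # (B*T, C, HW)
        channel = self_attention(xc).reshape(b, t, c, h, w)
        return x + spatial + temporal + channel                  # residual fusion


if __name__ == "__main__":
    feats = torch.randn(2, 4, 16, 8, 8)   # (B, T, C, H, W)
    out = TripleAttention()(feats)
    print(out.shape)                      # torch.Size([2, 4, 16, 8, 8])
```

Each branch reuses the same attention primitive but reshapes the tensor so that a different axis (positions, frames, or channels) plays the role of the sequence dimension, which is the core idea behind capturing long-range dependencies in multiple spaces.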

Original language: English
Pages (from-to): 202-211
Number of pages: 10
Journal: Neurocomputing
Volume: 417
DOIs
Publication status: Published - 5 Dec 2020
Externally published: Yes

Keywords

  • Computer vision
  • Convolutional neural network
  • Deep learning
  • Video segmentation

ASJC Scopus subject areas

  • Computer Science Applications
  • Cognitive Neuroscience
  • Artificial Intelligence
