Tensor error correction for corrupted values in visual data

Yin Li, Yue Zhou, Junchi Yan, Jie Yang, Xiangjian He

Research output: Chapter in Book/Conference proceeding › Conference contribution › peer-review

12 Citations (Scopus)

Abstract

Multi-channel images and video clips naturally take the form of tensors, whose values can be corrupted by noise during acquisition. We consider the problem of recovering a tensor L of visual data from its corrupted observation X = L + S, where the corruption S is unknown and unbounded in magnitude but assumed to be sparse. Our work builds on recent studies on recovering a corrupted low-rank matrix via trace norm minimization. We extend the matrix case to the tensor case using the definition of the tensor trace norm in [6]. The resulting tensor problem is formulated as a convex optimization, which is considerably harder than its matrix counterpart, so we develop a high-quality algorithm to solve it efficiently. Our experiments demonstrate potential applications of the method and indicate a robust and reliable solution.
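To make the setting concrete, a plausible reading of the convex program is sketched below in LaTeX. The weights \(\alpha_i\), the parameter \(\lambda\), and the mode-\(i\) unfolding notation are assumptions based on the tensor trace norm of [6] (a weighted sum of the trace norms of the unfoldings), not details stated in this record.

$$
\min_{\mathcal{L},\,\mathcal{S}} \;\; \sum_{i=1}^{N} \alpha_i \,\bigl\| \mathbf{L}_{(i)} \bigr\|_{*} \;+\; \lambda \,\bigl\| \mathcal{S} \bigr\|_{1}
\quad \text{subject to} \quad \mathcal{X} = \mathcal{L} + \mathcal{S},
$$

where \(\|\cdot\|_{*}\) is the matrix trace (nuclear) norm, \(\mathbf{L}_{(i)}\) denotes the mode-\(i\) unfolding of the \(N\)-way tensor \(\mathcal{L}\), and \(\|\mathcal{S}\|_{1}\) is the entrywise \(\ell_1\) norm that promotes sparsity of the corruption.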

Original language: English
Title of host publication: 2010 IEEE International Conference on Image Processing, ICIP 2010 - Proceedings
Pages: 2321-2324
Number of pages: 4
DOIs
Publication status: Published - 2010
Externally published: Yes
Event: 2010 17th IEEE International Conference on Image Processing, ICIP 2010 - Hong Kong, Hong Kong
Duration: 26 Sept 2010 – 29 Sept 2010

Publication series

Name: Proceedings - International Conference on Image Processing, ICIP
ISSN (Print): 1522-4880

Conference

Conference: 2010 17th IEEE International Conference on Image Processing, ICIP 2010
Country/Territory: Hong Kong
City: Hong Kong
Period: 26/09/10 – 29/09/10

Keywords

  • Convex optimization
  • Sparse coding
  • Tensor decomposition
  • Trace norm minimization

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
  • Signal Processing
