A novel fast full frame video stabilization via three-layer model

Wei Long, Jie Yang, Dacheng Song, Xiaogang Chen, Xiangjian He

Research output: Chapter in Book/Conference proceeding › Conference contribution › peer-review


Video stabilization is an important video enhancement technology that aims to remove undesired shaking from input videos. A challenging task in stabilization is inpainting the missing pixels of undefined areas in the motion-compensated frames. This paper describes a new video stabilization method that adopts a multi-layer model to improve the efficiency of video stabilization, allowing undefined areas to be inpainted in real time. Compared with traditional methods, the proposed algorithm only needs to maintain a single, continually updated mosaic image for video completion, whereas previous methods must store all neighboring frames and register them with the current frame. Experimental results demonstrate the effectiveness of the proposed approach.
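The mosaic-based completion idea from the abstract can be illustrated with a minimal sketch. This is not the authors' actual algorithm; the function name, the binary validity mask, and the blending weight `alpha` are all assumptions. The sketch shows the core bookkeeping: undefined pixels of a motion-compensated frame are filled from a single running mosaic, and the mosaic is then refreshed with the frame's valid pixels, so no neighboring frames need to be stored.

```python
import numpy as np

def complete_frame(frame, valid, mosaic, alpha=0.5):
    """Fill undefined pixels of a motion-compensated frame from a mosaic.

    frame  : HxW float array, the compensated frame (undefined pixels arbitrary)
    valid  : HxW bool array, True where the frame has defined pixels
    mosaic : HxW float array, running mosaic aligned to the same coordinates
    alpha  : hypothetical blending weight for refreshing the mosaic
    Returns the completed frame and the updated mosaic.
    """
    # Inpaint holes (valid == False) directly from the mosaic.
    completed = np.where(valid, frame, mosaic)
    # Refresh the mosaic where the frame is defined, via a running blend;
    # undefined regions keep their old mosaic content.
    mosaic = np.where(valid, alpha * frame + (1 - alpha) * mosaic, mosaic)
    return completed, mosaic
```

In a full pipeline the frame would first be warped by the estimated global motion so that it aligns with the mosaic's coordinate system; only then does the per-pixel fill above make sense.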

Original language: English
Title of host publication: MultiMedia Modeling - 21st International Conference, MMM 2015, Proceedings
Editors: Xiangjian He, Dacheng Tao, Muhammad Abul Hasan, Suhuai Luo, Changsheng Xu, Jie Yang
Publisher: Springer Verlag
Number of pages: 11
ISBN (Electronic): 9783319144443
Publication status: Published - 2015
Externally published: Yes
Event: 21st International Conference on MultiMedia Modeling, MMM 2015 - Sydney, Australia
Duration: 5 Jan 2015 - 7 Jan 2015

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Conference: 21st International Conference on MultiMedia Modeling, MMM 2015


Keywords

  • Global motion estimation
  • Motion compensation
  • Video completion
  • Video stabilization

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)


