Story-driven summarization for egocentric video

Zheng Lu, Kristen Grauman

Research output: Journal Publication › Conference article › peer-review

413 Citations (Scopus)


We present a video summarization approach that discovers the story of an egocentric video. Given a long input video, our method selects a short chain of video subshots depicting the essential events. Inspired by work in text analysis that links news articles over time, we define a random-walk based metric of influence between subshots that reflects how visual objects contribute to the progression of events. Using this influence metric, we define an objective for the optimal k-subshot summary. Whereas traditional methods optimize a summary's diversity or representativeness, ours explicitly accounts for how one sub-event 'leads to' another, which, critically, captures event connectivity beyond simple object co-occurrence. As a result, our summaries provide a better sense of story. We apply our approach to over 12 hours of daily activity video taken from 23 unique camera wearers, and systematically evaluate its quality compared to multiple baselines with 34 human subjects.
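The abstract's pipeline (object-based influence between subshots, then a k-subshot chain objective) can be sketched in code. This is a minimal illustration, not the paper's formulation: it assumes each subshot is summarized as a set of detected object labels, uses a one-step random-walk transition probability as a stand-in for the paper's influence metric, and picks the chain greedily rather than optimizing the full objective. All function names and data shapes here are hypothetical.

```python
import numpy as np

def subshot_influence(subshots):
    """Toy random-walk influence between subshots.

    `subshots` is a list of sets of object labels (hypothetical detector
    output). Edge weight = number of shared visual objects; the returned
    entry P[i, j] is the one-step transition probability of a random walk
    on that graph, a crude proxy for the paper's influence metric.
    """
    n = len(subshots)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                W[i, j] = len(subshots[i] & subshots[j])
    row_sums = W.sum(axis=1, keepdims=True)
    # Row-normalize; rows with no shared objects stay all-zero.
    return np.divide(W, row_sums, out=np.zeros_like(W), where=row_sums > 0)

def greedy_story_chain(subshots, k):
    """Greedily build a k-subshot chain, always extending with the
    temporally later subshot most influenced by the last pick
    (a simplification of the paper's chain objective)."""
    P = subshot_influence(subshots)
    chain = [0]  # start the story at the first subshot
    while len(chain) < k:
        last = chain[-1]
        # Only later subshots are eligible, preserving temporal order.
        candidates = [j for j in range(last + 1, len(subshots))]
        if not candidates:
            break
        chain.append(max(candidates, key=lambda j: P[last, j]))
    return chain
```

For example, subshots `[{"cup", "laptop"}, {"laptop", "phone"}, {"phone", "car"}, {"car", "store"}]` chain naturally through shared objects, so `greedy_story_chain(..., 3)` follows that progression even though the first and third subshots share nothing directly.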

Original language: English
Article number: 6619194
Pages (from-to): 2714-2721
Number of pages: 8
Journal: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Publication status: Published - 2013
Externally published: Yes
Event: 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2013 - Portland, OR, United States
Duration: 23 Jun 2013 - 28 Jun 2013


Keywords

  • egocentric
  • story
  • video summarization

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition


