Human action recognition employing negative space features

Shah Atiqur Rahman, M. K.H. Leung, Siu Yeung Cho

Research output: Journal publication › Article › peer-review

18 Citations (Scopus)


We propose a region-based method to recognize human actions from video sequences. Unlike other region-based methods, it works with the regions surrounding the human silhouette, termed negative space. This paper further extends the idea of negative space to cope with changes in viewpoint. It also addresses the problem of long shadows, one of the major challenges in human action recognition. Some systems attempt to suppress shadows during the segmentation process, but our system takes as input segmented binary images in which the shadow is not suppressed. This makes our system less dependent on the segmentation process. Furthermore, this approach can complement positive space (silhouette) based methods to boost recognition. The system consists of hierarchical processing: histogram analysis of the segmented input image, followed by motion and shape feature extraction, pose sequence analysis employing Dynamic Time Warping, and finally classification by a Nearest Neighbor classifier. We evaluated our system on the most commonly used datasets and achieved higher accuracy than state-of-the-art methods. Our system can also retrieve video sequences from queries of human action sequences.
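The matching stage described above (pose sequences compared by Dynamic Time Warping, then labeled by a Nearest Neighbor classifier) can be sketched as follows. This is a minimal illustration using scalar per-frame features; the paper's actual negative-space features are richer (partitioned surrounding regions with shape and motion cues), and the toy gallery below is invented for demonstration.

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two per-frame feature sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = best alignment cost of a[:i] against b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])          # local frame-to-frame cost
            cost[i][j] = d + min(cost[i - 1][j],      # stretch query
                                 cost[i][j - 1],      # stretch reference
                                 cost[i - 1][j - 1])  # advance both
    return cost[n][m]

def nearest_neighbor(query, gallery):
    """Return the label of the gallery sequence with the smallest DTW distance."""
    return min(gallery, key=lambda item: dtw_distance(query, item[1]))[0]

# Hypothetical gallery of (action label, per-frame feature sequence) pairs
gallery = [
    ("wave", [0.1, 0.9, 0.1, 0.9, 0.1]),  # oscillating feature
    ("walk", [0.2, 0.3, 0.4, 0.5, 0.6]),  # monotone feature
]
print(nearest_neighbor([0.15, 0.85, 0.2, 0.8, 0.1], gallery))  # prints "wave"
```

DTW tolerates differences in action speed by allowing frames to be stretched or compressed during alignment, which is why it suits pose-sequence comparison better than a rigid frame-by-frame distance.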

Original language: English
Pages (from-to): 217-231
Number of pages: 15
Journal: Journal of Visual Communication and Image Representation
Issue number: 3
Publication status: Published - 2013


Keywords

  • Complex activity
  • Computer vision
  • Dynamic Time Warping
  • Fuzzy function
  • Human action recognition
  • Negative space
  • Region partitioning
  • Silhouette

ASJC Scopus subject areas

  • Signal Processing
  • Media Technology
  • Computer Vision and Pattern Recognition
  • Electrical and Electronic Engineering
