Fusing deep learned and hand-crafted features of appearance, shape, and dynamics for automatic pain estimation

Joy Onyekachukwu Egede, Michel F. Valstar, Brais Martinez

Research output: Contribution to conference › Paper

47 Citations (Scopus)
84 Downloads (Pure)

Abstract

Automatic continuous-time, continuous-value assessment of a patient's pain from face video is highly sought after by the medical profession. Despite recent advances in deep learning that attain impressive results in many domains, pain estimation risks not being able to benefit from them due to the difficulty of obtaining data sets of considerable size. In this work we propose a combination of hand-crafted and deep-learned features that makes the most of deep learning techniques in small-sample settings. Encoding shape, appearance, and dynamics, our method significantly outperforms the current state of the art, attaining an RMSE of less than 1 point on a 16-level pain scale while scoring a 67.3% Pearson correlation coefficient between our predicted pain level time series and the ground truth.
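The abstract reports performance as an RMSE on a 16-level pain scale and a Pearson correlation coefficient between the predicted and ground-truth pain time series. The sketch below is a minimal illustration of how these two metrics are commonly computed over per-frame predictions; the toy arrays `y_true` and `y_pred` are hypothetical and are not drawn from the paper's data.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error between ground-truth and predicted pain levels."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def pearson_corr(y_true, y_pred):
    """Pearson correlation between the predicted and ground-truth time series."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.corrcoef(y_true, y_pred)[0, 1]

# Hypothetical per-frame pain annotations on a 0-15 (16-level) scale.
y_true = np.array([0, 0, 1, 3, 5, 8, 6, 4, 2, 1, 0, 0], dtype=float)
y_pred = np.array([0, 1, 1, 2, 5, 7, 6, 5, 2, 1, 1, 0], dtype=float)

print(f"RMSE:      {rmse(y_true, y_pred):.3f}")
print(f"Pearson r: {pearson_corr(y_true, y_pred):.3f}")
```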
Original language: English
Pages: 689-696
Publication status: Published - 30 May 2017
Event: 12th IEEE Conference on Face and Gesture Recognition (FG 2017) - Washington, D.C., U.S.A.
Duration: 30 May 2017 - 3 Jun 2017

Conference

Conference: 12th IEEE Conference on Face and Gesture Recognition (FG 2017)
Period: 30/05/17 - 3/06/17

Keywords

  • Pain, Estimation, Feature extraction, Face, Shape, Physiology, Machine learning

