Face2multi-modal: In-vehicle multi-modal predictors via facial expressions

Zhentao Huang, Rongze Li, Wangkai Jin, Zilin Song, Yu Zhang, Xiangjun Peng, Xu Sun

Research output: Chapter in Book/Conference proceeding › Conference contribution › peer-review

17 Citations (Scopus)
44 Downloads (Pure)

Abstract

Towards intelligent Human-Vehicle Interaction systems and innovative Human-Vehicle Interaction designs, in-vehicle drivers' physiological data has been explored as an essential data source. However, equipping drivers with multiple biosensors is widely considered user-unfriendly and impractical during driving. The lack of a practical approach to accessing physiological data has hindered wider adoption of advanced biosignal-driven designs (e.g., monitoring systems). Hence, the demand for a user-friendly approach to measuring drivers' body statuses has become more intense. In this Work-In-Progress, we present Face2Multi-modal, an in-vehicle predictor of multi-modal data streams that relies on facial expressions only. More specifically, we explore the estimation of drivers' Heart Rate, Skin Conductance, and Vehicle Speed. We believe Face2Multi-modal provides a user-friendly alternative for acquiring drivers' physiological status and vehicle status, which could serve as the building block for many current or future personalized Human-Vehicle Interaction designs. More details and updates about Face2Multi-modal are available online at https://github.com/unnc-ucc/Face2Multimodal/.
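The abstract does not specify the model used, but the core idea — mapping facial-expression features to several target signals (heart rate, skin conductance, vehicle speed) at once — can be sketched as a multi-output regression. The following is a minimal illustrative stand-in, not the authors' actual method: it fits a single least-squares model from synthetic, flattened face-crop features to three synthetic target streams. All array shapes and the linear model are assumptions for illustration only.

```python
import numpy as np

# Hypothetical sketch: predict three data streams (heart rate, skin
# conductance, vehicle speed) from flattened facial-expression features
# using one multi-output least-squares fit. Data here is synthetic.
rng = np.random.default_rng(0)

n_frames, n_features = 200, 64          # e.g. 8x8 grayscale face crops, flattened
X = rng.normal(size=(n_frames, n_features))
true_W = rng.normal(size=(n_features, 3))            # 3 targets: HR, SC, speed
Y = X @ true_W + 0.01 * rng.normal(size=(n_frames, 3))

# Closed-form least-squares solution for all three targets simultaneously
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

preds = X @ W
mse = np.mean((preds - Y) ** 2)
print(f"mean squared error across the three signals: {mse:.4f}")
```

In practice a facial-expression-to-biosignal predictor would use a learned image model (e.g. a convolutional network) rather than a linear map; the sketch only conveys the shared-input, multi-target structure the abstract describes.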

Original language: English
Title of host publication: Adjunct Proceedings - 12th International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2020
Publisher: Association for Computing Machinery, Inc
Pages: 30-33
Number of pages: 4
ISBN (Electronic): 9781450380669
Publication status: Published - 21 Sept 2020
Event: 12th ACM International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2020 - Washington, Virtual, United States
Duration: 21 Sept 2020 - 22 Sept 2020

Publication series

Name: Adjunct Proceedings - 12th International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2020

Conference

Conference: 12th ACM International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2020
Country/Territory: United States
City: Washington, Virtual
Period: 21/09/20 - 22/09/20

Keywords

  • Computer Vision
  • Ergonomics
  • Human-Vehicle Interactions

ASJC Scopus subject areas

  • Human-Computer Interaction
  • Software
  • Automotive Engineering
