TY - GEN
T1 - Face2Multi-modal
T2 - 12th ACM International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2020
AU - Huang, Zhentao
AU - Li, Rongze
AU - Jin, Wangkai
AU - Song, Zilin
AU - Zhang, Yu
AU - Peng, Xiangjun
AU - Sun, Xu
N1 - Publisher Copyright:
© 2020 Owner/Author.
PY - 2020/9/21
Y1 - 2020/9/21
N2 - Towards intelligent Human-Vehicle Interaction systems and innovative Human-Vehicle Interaction designs, drivers' in-vehicle physiological data has been explored as an essential data source. However, equipping drivers with multiple biosensors limits user-friendliness and is impractical while driving. The lack of a practical approach to accessing physiological data has hindered wider adoption of advanced biosignal-driven designs in practice (e.g., monitoring systems). Hence, the demand for a user-friendly approach to measuring drivers' body statuses has become more intense. In this Work-In-Progress, we present Face2Multi-modal, an in-vehicle multi-modal data stream predictor driven by facial expressions only. More specifically, we have explored estimating drivers' Heart Rate and Skin Conductance, as well as Vehicle Speed. We believe Face2Multi-modal provides a user-friendly alternative for acquiring drivers' physiological status and vehicle status, which could serve as a building block for many current and future personalized Human-Vehicle Interaction designs. More details and updates about Face2Multi-modal are online at https://github.com/unnc-ucc/Face2Multimodal/.
AB - Towards intelligent Human-Vehicle Interaction systems and innovative Human-Vehicle Interaction designs, drivers' in-vehicle physiological data has been explored as an essential data source. However, equipping drivers with multiple biosensors limits user-friendliness and is impractical while driving. The lack of a practical approach to accessing physiological data has hindered wider adoption of advanced biosignal-driven designs in practice (e.g., monitoring systems). Hence, the demand for a user-friendly approach to measuring drivers' body statuses has become more intense. In this Work-In-Progress, we present Face2Multi-modal, an in-vehicle multi-modal data stream predictor driven by facial expressions only. More specifically, we have explored estimating drivers' Heart Rate and Skin Conductance, as well as Vehicle Speed. We believe Face2Multi-modal provides a user-friendly alternative for acquiring drivers' physiological status and vehicle status, which could serve as a building block for many current and future personalized Human-Vehicle Interaction designs. More details and updates about Face2Multi-modal are online at https://github.com/unnc-ucc/Face2Multimodal/.
KW - Computer Vision
KW - Ergonomics
KW - Human-Vehicle Interactions
UR - http://www.scopus.com/inward/record.url?scp=85092232325&partnerID=8YFLogxK
U2 - 10.1145/3409251.3411716
DO - 10.1145/3409251.3411716
M3 - Conference contribution
AN - SCOPUS:85092232325
T3 - Adjunct Proceedings - 12th International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2020
SP - 30
EP - 33
BT - Adjunct Proceedings - 12th International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2020
PB - Association for Computing Machinery, Inc
Y2 - 21 September 2020 through 22 September 2020
ER -