Building BROOK: A multi-modal and facial video database for Human-Vehicle Interaction research

Xiangjun Peng, Zhentao Huang, Xu Sun

Research output: Working paper



With the growing popularity of Autonomous Vehicles, more opportunities have emerged in the context of Human-Vehicle Interactions. However, the lack of comprehensive and concrete database support for this use case limits relevant studies across the whole design space. In this paper, we present our work-in-progress BROOK, a public multi-modal database with facial video records, which can be used to characterise drivers' affective states and driving styles. We first explain in detail how we engineered the database, and what we have learned through a ten-month study. We then showcase a Neural Network-based predictor built on BROOK, which supports multi-modal prediction (including physiological data of heart rate and skin conductance, and driving-status data of speed) from facial videos. Finally, we discuss issues encountered when building such a database and our future directions for BROOK. We believe BROOK is an essential building block for future Human-Vehicle Interaction research. More details and updates about the BROOK project are online at https: //
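The abstract describes a predictor that maps facial-video input to several target signals (heart rate, skin conductance, speed) at once. As a rough illustration only — the paper's actual network architecture, feature extraction, and training data are not given here — a multi-target regressor over hypothetical per-clip features might be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each facial-video clip has been reduced to a 128-d
# feature vector (e.g. by a CNN encoder, not shown). Three regression
# targets mirror the signals named in the BROOK abstract.
N_CLIPS, FEAT_DIM = 200, 128
TARGETS = ["heart_rate", "skin_conductance", "speed"]

X = rng.normal(size=(N_CLIPS, FEAT_DIM))            # stand-in clip features
true_W = rng.normal(size=(FEAT_DIM, len(TARGETS)))  # unknown ground-truth mapping
Y = X @ true_W + 0.1 * rng.normal(size=(N_CLIPS, len(TARGETS)))

# A single shared least-squares fit yields all three prediction heads at once;
# the paper's neural model would replace this linear map.
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
preds = X @ W_hat

for i, name in enumerate(TARGETS):
    mse = np.mean((preds[:, i] - Y[:, i]) ** 2)
    print(f"{name}: train MSE = {mse:.4f}")
```

The point of the sketch is the shared-input, multi-head shape of the problem, not the model class: the same clip features drive all three outputs, which is what "multi-modal prediction through facial videos" amounts to structurally.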
Original language: English
Publication status: Published - 1 Jan 2020




  • Computer Vision
  • Ergonomics
  • Human-Vehicle Interactions


