In autonomous driving, motion sickness (MS) arises from physical stimuli, visual stimuli, or a combination of both. However, objective quantification of the MS level (MSL) beyond questionnaire-based assessments remains limited. Using multimodal human signals (physiological and behavioral) collected in an autonomous driving simulator, this study examines the association between these signals and MSL across these MS types by (i) screening and curating a decade of human-signal MS studies (HS-Set) to establish a data-driven foundation for selecting target sensor domains and features, (ii) constructing a dataset that pairs subjective MSL measures (the fast motion sickness scale and the simulator sickness questionnaire (SSQ)) with human signals (electroencephalogram (EEG), photoplethysmogram (PPG), electrodermal activity (EDA), skin temperature, and head/eye movement), (iii) conducting a correlation analysis between MSL and the features identified from HS-Set, and (iv) quantifying multivariable contributions at the feature and sensor-domain levels through an explainable boosting machine (EBM). Key correlations include head amplitude/energy (pitch/surge) with SSQ total/oculomotor scores, eye entropy with nausea/oculomotor scores (positive), and EDA with nausea (negative). The EBM-based contribution analysis identifies EEG connectivity and head kinematics as the dominant contributors; apart from EEG, single-domain models offer only limited interpretability. Additionally, combining the Head, PPG, and EDA domains retains over 80% of the full model's interpretability.
Keywords: autonomous driving; motion sickness; multimodal sensors; physiological and behavioral signals; signal processing; systematic feature extraction; unified sickness.
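To make steps (iii) and (iv) concrete, the sketch below illustrates the general shape of such an analysis: rank-based correlation of candidate features with MSL, followed by an EBM fit whose per-term importances can be aggregated by sensor domain. It is a minimal illustration, not the authors' pipeline: the feature names, the synthetic data, the choice of Spearman correlation (the abstract does not name an estimator), and the use of the interpret package's ExplainableBoostingRegressor (with its term_importances() method, available in recent releases) are all assumptions.

```python
# Illustrative sketch only -- not the study's published code.
import numpy as np
import pandas as pd
from scipy.stats import spearmanr
from interpret.glassbox import ExplainableBoostingRegressor

rng = np.random.default_rng(0)
n = 200  # hypothetical number of labeled simulator windows

# Hypothetical feature table: one column per sensor-domain feature.
df = pd.DataFrame({
    "head_pitch_energy": rng.normal(size=n),  # head kinematics (assumed name)
    "eye_entropy":       rng.normal(size=n),  # eye movement (assumed name)
    "eda_tonic_mean":    rng.normal(size=n),  # electrodermal activity
    "ppg_hr_mean":       rng.normal(size=n),  # photoplethysmogram
})
df["msl"] = rng.normal(size=n)  # placeholder subjective MSL score

# Step (iii): univariate screening via rank correlation with MSL.
for col in df.columns.drop("msl"):
    rho, p = spearmanr(df[col], df["msl"])
    print(f"{col:20s} rho={rho:+.3f} p={p:.3f}")

# Step (iv): multivariable contributions via an EBM (a glass-box
# generalized additive model trained with boosting).
ebm = ExplainableBoostingRegressor(random_state=0)
ebm.fit(df.drop(columns="msl"), df["msl"])

# Per-term importances (mean absolute contribution); summing the
# importances of all terms within one sensor domain yields a
# domain-level contribution analogous to the study's comparison.
for name, imp in zip(ebm.term_names_, ebm.term_importances()):
    print(f"{name:25s} importance={imp:.4f}")
```

With real labeled windows in place of the synthetic data, the same domain-level aggregation would support comparisons such as the abstract's Head + PPG + EDA subset versus the full multi-domain model.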