Improving Intention Detection in Single-Trial Classification Through Fusion of EEG and Eye-Tracker Data

Xianliang GE*, Yunxian PAN, Sujie WANG, Linze QIAN, Jingjia YUAN, Jie XU, Nitish THAKOR, Yu SUN*

*Corresponding author for this work

Research output: Journal article (refereed, peer-reviewed)

8 Citations (Scopus)

Abstract

Intention decoding is an indispensable procedure in hands-free human-computer interaction (HCI). A conventional eye-tracker system using a single-modal fixation-duration criterion may issue commands that ignore users' real expectations. Here, an eye-brain hybrid brain-computer interface (BCI) interaction system was introduced for intention detection through the fusion of multimodal eye-tracker and event-related potential (ERP) [a measurement derived from electroencephalography (EEG)] features. Eye-tracking and EEG data were recorded from 64 healthy participants as they performed a 40-min customized free-search task for a fixed target icon among 25 icons. The corresponding fixation duration of eye tracking and ERP were extracted. Five previously validated linear discriminant analysis (LDA)-based classifiers [including regularized LDA, stepwise LDA, Bayesian LDA, shrinkage linear discriminant analysis (SKLDA), and spatial-temporal discriminant analysis] and the widely used convolutional neural network (CNN) method were adopted to verify the efficacy of feature fusion in both offline and pseudo-online analysis, and the optimal approach was evaluated by modulating the training-set size and system response duration. Our study demonstrated that the input of multimodal eye-tracking and ERP features achieved superior performance in the single-trial classification of active search tasks. Compared with the single-modal ERP feature, this new strategy also induced congruent accuracy across classifiers. Moreover, in comparison with other classification methods, SKLDA exhibited superior performance when fusing features in offline tests (ACC = 0.8783, AUC = 0.9004) and in online simulations with various sample amounts and duration lengths.
In summary, this study revealed a novel and effective approach for intention classification using an eye-brain hybrid BCI and further supported the real-life application of hands-free HCI in a more precise and stable manner.
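As an illustration of the fusion strategy the abstract describes, the sketch below concatenates per-trial ERP features with a fixation-duration feature and classifies single trials with shrinkage LDA (SKLDA). This is a minimal reconstruction, not the authors' code: the synthetic data, feature dimensions, and effect sizes are assumptions, and scikit-learn's `lsqr` solver with Ledoit-Wolf shrinkage stands in for the paper's SKLDA implementation.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Synthetic single-trial data: 200 trials, 32 ERP features (e.g. downsampled
# post-fixation EEG amplitudes) plus 1 eye-tracking feature (fixation duration, ms).
n_trials = 200
y = rng.integers(0, 2, n_trials)                 # 1 = target fixation, 0 = non-target
erp = rng.normal(0.0, 1.0, (n_trials, 32))
erp[y == 1] += 0.5                               # target trials carry a small ERP shift
fix_dur = rng.normal(300.0, 50.0, (n_trials, 1))
fix_dur[y == 1] += 40.0                          # targets are fixated slightly longer

# Multimodal fusion: concatenate eye-tracking and ERP features per trial.
X = np.hstack([erp, fix_dur])

# Shrinkage LDA: the 'lsqr' solver with automatic (Ledoit-Wolf) shrinkage
# regularizes the covariance estimate, which helps in small-sample ERP settings.
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
clf.fit(X[:150], y[:150])
acc = clf.score(X[150:], y[150:])
print(f"held-out accuracy: {acc:.3f}")
```

Concatenation is the simplest fusion scheme; because the fixation-duration feature is on a much larger scale than the ERP amplitudes, a practical pipeline would typically standardize each modality before fusing.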
Original language: English
Pages (from-to): 132-141
Number of pages: 10
Journal: IEEE Transactions on Human-Machine Systems
Volume: 53
Issue number: 1
Early online date: 12 Dec 2022
DOIs
Publication status: Published - Feb 2023
Externally published: Yes

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant 82172056, Grant 81801785, and Grant T2192931, in part by the Natural Science Foundation for Distinguished Young Scholars of Zhejiang Province, in part by the National Key Research and Development Program of China under Grant 2021ZD0200408, in part by the Key Research and Development Program of Zhejiang Province under Grant 2022C03064, in part by the Hundred Talents Program of Zhejiang University, in part by the Zhejiang University Global Partnership Fund under Grant 100000-11320, in part by the ZhejiangLab under Grant 2019KE0AD01, in part by the Space Medical Experiment Project of China Manned Space Program under Grant HYZHXM03001, and in part by the Science and Technology Special Project of the Institute of Wenzhou, Zhejiang University under Grant XMGLKJZX-202203.

Keywords

  • Electroencephalography (EEG)
  • event-related potential (ERP)
  • eye-brain-computer interface
  • eye-tracker
  • single-trial classification
