Neurally and ocularly informed graph-based models for searching 3D environments

Bibliographic Details
Published in: Journal of Neural Engineering, Vol. 11, No. 4, p. 046003
Main Authors: Jangraw, David C.; Wang, Jun; Lance, Brent J.; Chang, Shih-Fu; Sajda, Paul
Format: Journal Article
Language: English
Published: England, 01.08.2014
ISSN: 1741-2552
Description
Summary: As we move through an environment, we are constantly making assessments, judgments and decisions about the things we encounter. Some are acted upon immediately, but many more become mental notes or fleeting impressions: our implicit 'labeling' of the world. In this paper, we use physiological correlates of this labeling to construct a hybrid brain-computer interface (hBCI) system for efficient navigation of a 3D environment. First, we record electroencephalographic (EEG), saccadic and pupillary data from subjects as they move through a small part of a 3D virtual city under free-viewing conditions. Using machine learning, we integrate the neural and ocular signals evoked by the objects they encounter to infer which ones are of subjective interest to them. These inferred labels are propagated through a large computer vision graph of objects in the city, using semi-supervised learning to identify other, unseen objects that are visually similar to the labeled ones. Finally, the system plots an efficient route to help the subjects visit the 'similar' objects it identifies. We show that by exploiting the subjects' implicit labeling to find objects of interest instead of exploring naively, the median search precision is increased from 25% to 97%, and the median subject need only travel 40% of the distance to see 84% of the objects of interest. We also find that the neural and ocular signals contribute in a complementary fashion to the classifiers' inference of subjects' implicit labeling. In summary, we show that neural and ocular signals reflecting subjective assessment of objects in a 3D environment can be used to inform a graph-based learning model of that environment, resulting in an hBCI system that improves navigation and information delivery specific to the user's interests.
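The label propagation step described in the summary can be illustrated with a generic graph-based semi-supervised update (Zhou-style label spreading). The sketch below is an illustrative assumption, not the authors' implementation: the affinity matrix W, the alpha parameter, and the function propagate_labels are hypothetical names and choices used only for exposition.

```python
import numpy as np

def propagate_labels(W, seed_scores, alpha=0.85, n_iter=50):
    """Spread interest scores from a few implicitly labeled objects to the
    rest of a visual-similarity graph (generic label-spreading sketch)."""
    # Symmetrically normalize the affinity matrix: S = D^{-1/2} W D^{-1/2}
    d = W.sum(axis=1)
    d[d == 0] = 1.0                      # guard against isolated nodes
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ W @ D_inv_sqrt

    # Iterate F <- alpha * S F + (1 - alpha) * seeds until (approximate) convergence
    F = seed_scores.copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1 - alpha) * seed_scores
    return F

# Toy usage: 5 objects, object 0 implicitly labeled as interesting
W = np.array([[0.0, 0.9, 0.1, 0.0, 0.0],
              [0.9, 0.0, 0.2, 0.0, 0.0],
              [0.1, 0.2, 0.0, 0.8, 0.1],
              [0.0, 0.0, 0.8, 0.0, 0.7],
              [0.0, 0.0, 0.1, 0.7, 0.0]])
seeds = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
scores = propagate_labels(W, seeds)
print(np.argsort(scores)[::-1])          # objects ranked by inferred interest
```

In the pipeline the abstract describes, the nonzero seed scores would come from the classifier fusing EEG, pupillary and saccadic evidence for the objects a subject actually viewed, and the highest-scoring unseen objects would then be passed to the route planner.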
DOI: 10.1088/1741-2560/11/4/046003