David Jangraw

Mobile BCI

Brain-Cam: Fusing EEG, Eye Tracking, Pupillometry, and Environmental Context in a Mobile BCI

As you walk around during your day, you’re not just viewing things passively: you tend to mentally label the things you see as interesting or uninteresting. Sometimes you act on these judgments right away, but more often they are fleeting impressions or mental notes filed away for later – an implicit labeling of the world. But what if we could build a device that detected when you were interested in something and automatically captured the moment on film?

When you find something you’re looking for, your brain reacts with a signal known as the “target response.” This signal has been well studied in the lab, but how it presents itself in the real world is unknown. Will your brain get excited in the same way? Will we still be able to tell when you’re excited about what you see and when you’re not? In this project, we look for the EEG signals that occur just after rapid eye movements to interesting objects in a realistic virtual environment (using our NEDE system). We use a multimodal classifier to fuse a subject’s pupil dilation, eye position, and EEG signals and classify each viewed object as a target or non-target. We then combine the resulting predictions with a computer vision system to discard outliers and classify new objects quickly.
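To make the fusion idea concrete, here is a minimal, illustrative Python sketch of a late-fusion target/non-target classifier. It is not the actual pipeline from the papers listed below (which use graph-based models rather than a single flat classifier); the feature names, array shapes, and the random placeholder data are all assumptions made purely for illustration.

```python
# Hypothetical late-fusion sketch: combine EEG, pupil, and dwell-time
# features per viewed object and classify target vs. non-target.
# All data here is random placeholder data, so the AUC will be near chance.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Illustrative data: 200 viewed objects, 64 EEG channels x 10 time bins,
# a 50-sample pupil dilation trace, and a scalar gaze dwell time per object.
n_objects = 200
eeg = rng.standard_normal((n_objects, 64, 10))   # fixation-locked EEG epochs
pupil = rng.standard_normal((n_objects, 50))     # pupil dilation traces
dwell = rng.standard_normal((n_objects, 1))      # total dwell time per object
labels = rng.integers(0, 2, n_objects)           # 1 = target, 0 = non-target

def eeg_features(epochs):
    """Flatten each epoch into per-channel, per-time-bin amplitude features."""
    return epochs.reshape(len(epochs), -1)

def pupil_features(traces):
    """Summarize each pupil trace by its peak and mean dilation."""
    return np.column_stack([traces.max(axis=1), traces.mean(axis=1)])

# Late fusion: concatenate the modality features, then classify.
X = np.hstack([eeg_features(eeg), pupil_features(pupil), dwell])
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, labels, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC: {scores.mean():.2f}")
```

In a real system, the per-object predictions from a classifier like this would then be propagated to unlabeled objects (for example, via the graph-based inference described in the publications below), rather than classified independently.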

If we can do this successfully, it opens up fascinating possibilities for mobile brain-computer interfaces (BCIs) that recognize interest in everyday objects. We could have the BCI trigger a video recording to create a “life log” of the most interesting parts of your day. We could recognize rapid shifts in attention and use them to help diagnose ADD. For dementia and Alzheimer’s patients, we could snap a picture of an interesting face, search a database of friends and family, and help the user identify the person, easing the pain of social interaction.

The hardware needed for such a device is developing rapidly: mobile EEG caps, mobile eye tracking devices, and head-mounted displays are already on the market. If we can learn to identify the signal once we measure it, we’ll be a lot closer to making our device a reality.

[Figures: hBCI_demo, hBCI_classifier, hBCI_featureselection, hBCI_TspRoute]

For More Information:

  • Jangraw and Sajda (2014). “Neurally and Ocularly Informed Graph-Based Models for Searching 3D Environments.”
    Journal of Neural Engineering 11: 046003. pdf
  • Jangraw and Sajda (2013). “Feature Selection for Gaze, Pupillary, and EEG Signals Evoked in a 3D Environment.”
    Podium presentation, 6th Workshop on Eye Gaze in Intelligent Human Machine Interaction: Gaze in Multimodal Interaction at ACM ICMI 2013, Sydney, Australia. pdf
  • Jangraw and Sajda (2012). “Constructing Mutually-Derived Situational Awareness via EEG-Informed Graph-Based Transductive Inference.”
    Accepted abstract, IEEE Workshop on Brain-Machine-Body Interfaces (EMBC ‘12), San Diego, California. poster (pdf)
  • Army Research Laboratory, Human Research & Engineering Directorate: our collaborator Brent Lance has helped create a database of 3D objects for the realistic virtual environment used to test this system.
  • LIINC website