Mind-Reading: Using EEG and Computer Vision in a Closed Loop to Identify Images of Interest
Neuroengineering is all about taking something we know about the brain and putting it to practical use. Researchers are starting to use measured brain activity to let amputees move artificial limbs, and to let paralyzed people communicate by typing with their minds, using devices called brain-computer interfaces (BCIs). It’s an exciting and expanding field.
We have developed a BCI system that uses human and computer vision together in a closed loop to speed up image search. Computer vision is very good at searching large databases of images quickly, but is not very good at identifying what is in a picture. People, on the other hand, can identify an object at a glance, but need much more time to sort through a large number of images. There are billions of unlabeled images on the internet – people don’t have time to label them, and computers just aren’t that good at it. How can we use what we know about the brain to better connect a person with the images they’re looking for?
When you see something you’ve been looking for, your brain produces a characteristic signal we call the “target response”. Similarly, when we show you images very quickly (5 to 10 per second, a technique known as rapid serial visual presentation, or RSVP) and tell you to look for a certain category of thing, your brain produces the same response whenever a target appears. We can use EEG (the electrode caps that measure “brain waves”) together with machine learning to bookmark the images that are most likely to be targets, based on your brain activity alone. This lets you sift through hundreds of images very quickly.
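To make the bookmarking idea concrete, here is a toy sketch (not our actual classifier, and the function names are our own for illustration): a linear classifier, trained offline on labeled EEG, assigns each image’s EEG epoch an “interest” score, and the highest-scoring images are bookmarked.

```python
import numpy as np

def bookmark_targets(eeg_epochs, weights, top_k=5):
    """Score each image's EEG epoch with a linear classifier and
    return the indices of the top_k most target-like images.

    eeg_epochs: (n_images, n_features) array of per-image EEG features.
    weights:    (n_features,) classifier weights learned offline.
    """
    scores = eeg_epochs @ weights        # one "interest" score per image
    order = np.argsort(scores)[::-1]     # highest score first
    return order[:top_k].tolist(), scores

# Toy demo: 8 "images", 4 EEG features; images 2 and 6 carry a
# strong target-like response in the first feature.
rng = np.random.default_rng(0)
epochs = rng.normal(0, 0.1, size=(8, 4))
epochs[2, 0] += 2.0
epochs[6, 0] += 2.0
weights = np.array([1.0, 0.0, 0.0, 0.0])

bookmarks, _ = bookmark_targets(epochs, weights, top_k=2)
print(sorted(bookmarks))  # prints [2, 6] -- the two planted targets
```

The real system works on multichannel EEG time series rather than a handful of features, but the principle is the same: rank images by how target-like the evoked brain response looks.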
But what about the thousands or millions of images you didn’t see? For that, we turn to computer vision. Our computer vision system, called Transductive Annotation by Graph (TAG), tells us which images in a large database are visually similar to one another (based on low-level features like color, shape, and texture). Once we use EEG to find the images you were looking for, we can use TAG to find more images like them. Then we use EEG again on those images to find the targets, and feed your favorites back to TAG, which refines its idea of what kinds of images you’re looking for. Repeat this loop a few times and, we hope, the computer vision gets quite good at returning target images to you.
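The loop above can be sketched in a few lines. This is a deliberately simplified stand-in for TAG (which propagates labels over a graph): here, EEG-style scores on a small shown batch are spread to unseen images through visual similarity, and the next batch is chosen from what the system now ranks highest. All names and the scoring oracle are hypothetical.

```python
import numpy as np

def closed_loop_search(features, eeg_score, batch=3, rounds=3):
    """Alternate EEG-style scoring of a small shown batch with
    similarity-based score propagation over the whole database
    (a simplified stand-in for the TAG step).

    features:  (n_images, d) visual feature vectors.
    eeg_score: callable giving a target-likeness score per shown image.
    Returns a relevance score for every image in the database.
    """
    n = len(features)
    # Cosine similarity between all pairs of images.
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = normed @ normed.T
    relevance = np.zeros(n)
    seen = set()
    for _ in range(rounds):
        # Show the unseen images the system currently ranks highest.
        unseen = [i for i in range(n) if i not in seen]
        shown = sorted(unseen, key=lambda i: -relevance[i])[:batch]
        seen.update(shown)
        # "EEG" scores the shown images...
        scores = np.array([eeg_score(i) for i in shown])
        # ...and the scores propagate through visual similarity.
        relevance += sim[:, shown] @ scores
    return relevance

# Toy demo: two visual clusters; the targets all live in cluster A.
rng = np.random.default_rng(1)
cluster_a = np.array([1.0, 0.0]) + rng.normal(0, 0.05, (4, 2))
cluster_b = np.array([0.0, 1.0]) + rng.normal(0, 0.05, (4, 2))
features = np.vstack([cluster_a, cluster_b])

def eeg_oracle(i):
    # Stand-in for the human viewer: images 0-3 are the targets.
    return 1.0 if i < 4 else 0.0

rel = closed_loop_search(features, eeg_oracle)
# After a few rounds, every target outranks every non-target.
```

Even this crude version shows the closed-loop effect: images never scored by the “viewer” inherit relevance from visually similar images that were.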
For More Information:
- Paper (Journal of Neural Engineering 2011): “Closing the loop in cortically-coupled computer vision: a brain–computer interface for searching image databases”
- EMBC 2010 conference abstract: “Combining computer and human vision into a BCI: Can the whole be greater than the sum of its parts?”
- Our collaborator Prof. Shih-Fu Chang’s Digital Video and Multimedia Lab (in Columbia’s EE department)
- LIINC website