Our brains can control machines: not just motion, but abstract thought

Image: brain lobes and hippocampus (via Wikipedia)

On 6 August, we discussed work that Larry Farwell, Jack Gallant, and Yukiyasu Kamitani were doing to decipher how our brain works.  Now, Drs. Moran Cerf (a computational researcher at Caltech and UCLA) and Itzhak Fried (a UCLA neurosurgeon) report in Nature that subjects changed the image on a video screen simply by thinking.  The study demonstrates that our brains can sift through competing stimuli (images, sounds, odors) and settle on the single object we wish to focus on; it may help explain how the brain lets us recognize an individual’s face in a crowd.

Twelve epileptic patients were involved in this research study.  Their epilepsy was treatment-resistant, so they were about to undergo surgery to remove the part of the brain where their seizures originated.  To identify the precise region for removal, Dr. Itzhak Fried (UCLA) implanted 64 microelectrodes in the medial temporal lobe (MTL) to collect data until a seizure occurred.  The MTL, which includes the hippocampus, is associated with memory and is the source of many epileptic seizures.

This electrode array can generate hundreds of gigabytes of data in a single day, which made analysis tedious.  Dr. Fried has collaborated with various neuroscientists over the years; one such collaboration, with Dr. Quian Quiroga in 2003, led to an algorithm that sifts the general noise from the electrodes to isolate the firing of a single neuron.  That research demonstrated that the neurons fired only if the subject recognized the pictures or was told the name of the person or object observed (as reported in the Proceedings of the National Academy of Sciences and Current Biology).
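To give a feel for what "isolating the firing of a single neuron" involves, here is a deliberately simplified sketch of the first step, spike detection. The real Quian Quiroga pipeline goes much further (wavelet features and clustering to separate different neurons); everything below, including the function name and the threshold/refractory parameters, is a hypothetical illustration, not the published algorithm:

```python
import numpy as np

def detect_spikes(trace, fs=30000, thresh_sd=4.0, refractory_ms=1.0):
    """Flag candidate spike times in a raw voltage trace (simplified sketch).

    Uses a robust noise estimate and a negative-going amplitude threshold;
    a refractory period prevents one spike from being counted twice.
    """
    # Median absolute deviation gives a noise estimate that ignores spikes
    noise_sd = np.median(np.abs(trace)) / 0.6745
    threshold = thresh_sd * noise_sd
    # Indices where the trace first dips below -threshold
    crossings = np.flatnonzero((trace[1:] < -threshold) & (trace[:-1] >= -threshold))
    min_gap = int(fs * refractory_ms / 1000)
    spikes, last = [], -min_gap
    for idx in crossings:
        if idx - last >= min_gap:
            spikes.append(idx)
            last = idx
    return np.array(spikes)
```

On a synthetic trace of Gaussian noise with a few large negative deflections inserted, this returns just those deflection times, which is the kind of event stream the sorting and recognition analyses would then consume.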

This study occurred before the patients had their brain surgery.  Some 110 images of famous people and objects were flashed on a screen, and the neurons that responded to each image were identified, yielding a database of neural firing for each patient.  The scientists then selected four images that elicited responses in four different MTL regions and combined them into two hybrid pairs: in each pair, the two images were faded to 50% opacity and superimposed on one another.  (Unfortunately, MSNBC has removed the video to which this hyperlink connected.)
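The screening step above amounts to finding images that each have their "own" responsive neuron. A rough sketch of that selection logic follows; the data layout, function name, and scoring rule are all assumptions for illustration (the real selection also used anatomical location, which is omitted here):

```python
def select_selective_images(responses, n=4):
    """Pick n images whose strongest-responding unit is distinct.

    responses: dict mapping image name -> dict of unit name -> mean
    firing rate (Hz). Simplified illustration of the screening step.
    """
    def selectivity(img):
        # How much the best unit's response stands out from the rest
        rates = sorted(responses[img].values())
        best, rest = rates[-1], rates[:-1]
        return best - (sum(rest) / len(rest) if rest else 0.0)

    chosen, used_units = [], set()
    for img in sorted(responses, key=selectivity, reverse=True):
        best_unit = max(responses[img], key=responses[img].get)
        if best_unit not in used_units:   # each image gets its own neuron
            chosen.append(img)
            used_units.add(best_unit)
        if len(chosen) == n:
            break
    return chosen
```

Given firing-rate tables for the 110 screening images, this would return the handful of images suitable for the superimposed-pair experiment.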

The subjects were then instructed, over ten seconds, to enhance one of the two superimposed images.  The researchers amplified the choice in real time while the subject watched: the image associated with more neural firing was made more visible, while the image associated with less firing faded further into the background.  (As the subject’s brain focused on one image, the corresponding neuron fired and enhanced that image, while the other receded.)  The 12 subjects could drive one image to full visibility in more than two-thirds of the trials.  It did not seem to matter whether a subject tried to enhance the target image or to fade the other one; different cognitive strategies led to the same result.  However, when the subjects could not see this feedback, the success rate dropped below one-third, demonstrating that the closed brain-machine loop was required for the effect.
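The feedback loop described above can be sketched very roughly as follows. Everything here, from the function name to the fixed 0.1 opacity step, is a hypothetical simplification for illustration, not the decoder the authors actually used:

```python
def update_opacities(rate_a, rate_b, opacity_a, step=0.1):
    """One feedback step of a toy decoder for two superimposed images.

    The image whose target neuron fires faster becomes more visible;
    the other fades. The two opacities always sum to 1, so enhancing
    one image and fading the other are the same operation.
    """
    if rate_a > rate_b:
        opacity_a = min(1.0, opacity_a + step)
    elif rate_b > rate_a:
        opacity_a = max(0.0, opacity_a - step)
    return opacity_a, 1.0 - opacity_a
```

Starting from the 50/50 blend, repeatedly applying this update while one target neuron out-fires the other drives its image to full visibility, which mirrors the trial-success criterion described above; without the subject seeing the updated blend, the loop is open and the effect disappears.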
