Sunday, December 25, 2011

Advances in 'Brain Reading'

Researchers at UCLA's Laboratory of Integrative Neuroimaging Technology are using functional MRI brain scans to observe the signal changes that take place in the brain during mental activity.

They then employ computerized machine learning (ML) methods to study these patterns and identify the cognitive state — or sometimes the thought process — of human subjects. The technique is called "brain reading" or "brain decoding."

In a new study, the UCLA research team describes several crucial advances in this field, using fMRI and machine learning methods to perform "brain reading" on smokers experiencing nicotine cravings.

The research, presented last week at the Neural Information Processing Systems' Machine Learning and Interpretation in Neuroimaging workshop in Spain, was funded by the National Institute on Drug Abuse, which is interested in using these methods to help people control drug cravings.

In this study on addiction and cravings, the team classified data taken from cigarette smokers who were scanned while watching videos meant to induce nicotine cravings. The aim was to understand in detail which regions of the brain and which neural networks are responsible for resisting nicotine addiction specifically, and cravings in general, said Dr. Ariana Anderson, a postdoctoral fellow in the Integrative Neuroimaging Technology lab and the study's lead author.

"We are interested in exploring the relationships between structure and function in the human brain, particularly as related to higher-level cognition, such as mental imagery," Anderson said. "The lab is engaged in the active exploration of modern data-analysis approaches, such as machine learning, with special attention to methods that reveal systems-level neural organization."

For the study, smokers sometimes watched videos meant to induce cravings, sometimes watched "neutral" videos and sometimes watched no video at all. They were instructed to attempt to fight nicotine cravings when they arose.

The data from fMRI scans taken of the study participants was then analyzed. Traditional machine learning methods were augmented by Markov processes, which use past history to predict future states. By measuring the brain networks active over time during the scans, the resulting machine learning algorithms were able to anticipate changes in subjects' underlying neurocognitive structure, predicting with a high degree of accuracy (90 percent for some of the models tested) what they were watching and, as far as cravings were concerned, how they were reacting to what they viewed.
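To make the idea concrete, the sketch below shows one simple way a Markov model can use past states to smooth moment-to-moment classifier output into a coherent sequence of mental states. This is not the study's code: the three states, the "sticky" transition matrix and the per-timepoint probabilities are all made-up stand-ins, and the decoding step is ordinary Viterbi decoding.

```python
# Minimal sketch (not the authors' method): smoothing per-timepoint classifier
# probabilities with a first-order Markov model over hypothetical mental states.
import numpy as np

def viterbi_decode(emission_probs, transition, prior):
    """Most likely state sequence given per-timepoint class probabilities
    (emission_probs: T x K), a K x K transition matrix, and a length-K prior."""
    T, K = emission_probs.shape
    log_delta = np.log(prior) + np.log(emission_probs[0])
    backptr = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = log_delta[:, None] + np.log(transition)   # K x K candidate paths
        backptr[t] = scores.argmax(axis=0)                  # best previous state
        log_delta = scores.max(axis=0) + np.log(emission_probs[t])
    states = np.zeros(T, dtype=int)
    states[-1] = log_delta.argmax()
    for t in range(T - 2, -1, -1):                          # trace the path back
        states[t] = backptr[t + 1, states[t + 1]]
    return states

# Toy example: three hypothetical states (crave, resist, neutral) that tend to
# persist over time, and noisy stand-in probabilities for each fMRI time point.
rng = np.random.default_rng(0)
transition = np.array([[0.90, 0.05, 0.05],
                       [0.05, 0.90, 0.05],
                       [0.05, 0.05, 0.90]])
emissions = rng.dirichlet(alpha=[1, 1, 1], size=50)
path = viterbi_decode(emissions, transition, prior=np.ones(3) / 3)
print(path)
```

The only point of the toy is that the transition matrix lets earlier states inform the current prediction, which is the role the researchers describe Markov processes playing here.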

"We detected whether people were watching and resisting cravings, indulging in them, or watching videos that were unrelated to smoking or cravings," said Anderson, who completed her Ph.D. in statistics at UCLA. "Essentially, we were predicting and detecting what kind of videos people were watching and whether they were resisting their cravings."

In essence, the algorithm was able to complete or "predict" the subjects' mental states and thought processes in much the same way that Internet search engines or texting programs on cell phones anticipate and complete a sentence or request before the user is finished typing. And this machine learning method based on Markov processes demonstrated a large improvement in accuracy over traditional approaches, the researchers said.

Machine learning methods, in general, create a "decision layer" — essentially a boundary separating the different classes one needs to distinguish. For example, values on one side of the boundary might indicate that a subject believes various test statements and, on the other, that a subject disbelieves these statements. Researchers have found they can detect these believe–disbelieve differences with high accuracy, in effect creating a lie detector. An innovation described in the new study is a way of making these boundaries interpretable to neuroscientists, rather than the often opaque boundaries produced by more traditional methods, like support vector machine learning.
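For readers unfamiliar with the idea, the snippet below shows the simplest version of such a boundary: a linear classifier fit to synthetic two-class data standing in for "believe" and "disbelieve" trials. It illustrates the general concept only; the feature values are invented, and the study's interpretable, network-based boundaries are more elaborate than this.

```python
# Minimal sketch of a decision boundary on synthetic data, assuming two
# hypothetical classes ("believe" vs. "disbelieve") in a toy 2-D feature space.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
believe = rng.normal(loc=+1.0, scale=1.0, size=(40, 2))
disbelieve = rng.normal(loc=-1.0, scale=1.0, size=(40, 2))
X = np.vstack([believe, disbelieve])
y = np.array([1] * 40 + [0] * 40)

clf = LinearSVC().fit(X, y)
# The boundary is the set of points where w . x + b = 0; which side a new
# point falls on determines the predicted class.
print("weights:", clf.coef_, "intercept:", clf.intercept_)
print("predicted class for a new point:", clf.predict([[0.5, 0.3]]))
```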

"In our study, these boundaries are designed to reflect the contributed activity of a variety of brain sub-systems or networks whose functions are identifiable — for example, a visual network, an emotional-regulation network or a conflict-monitoring network," said study co-author Mark S. Cohen, a professor of neurology, psychiatry and biobehavioral sciences at UCLA's Staglin Center for Cognitive Neuroscience and a researcher at the California NanoSystems Institute at UCLA.

"By projecting our problem of isolating specific networks associated with cravings into the domain of neurology, the technique does more than classify brain states — it actually helps us to better understand the way the brain resists cravings," added Cohen, who also directs UCLA's Neuroengineering Training Program.

Remarkably, by placing this problem into neurological terms, the decoding process becomes significantly more reliable and accurate, the researchers said. This is especially significant, they said, because it is unusual to use prior outcomes and states in order to inform the machine learning algorithms, and it is particularly challenging in the brain because so much is unknown about how the brain works.

Machine learning typically involves two steps: a "training phase" in which the computer learns a boundary from a set of known outcomes — say, a bunch of trials in which a subject indicated belief or disbelief — and a second, "prediction" phase in which the computer applies that boundary to classify new, unlabeled data.
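A minimal sketch of those two phases on synthetic data might look like the following, with the boundary learned from labeled trials and then tested on held-out ones; the features, labels and model choice here are all invented for illustration.

```python
# Minimal sketch of the training and prediction phases on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 10))                     # toy features per trial
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)      # toy "belief" labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)
model = LogisticRegression().fit(X_train, y_train)   # training phase
accuracy = model.score(X_test, y_test)               # prediction phase
print(f"held-out accuracy: {accuracy:.2f}")
```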

In future research, the neuroscientists said, they will be using these machine learning methods in a biofeedback context, showing subjects real-time brain readouts to let them know when they are experiencing cravings and how intense those cravings are, in the hopes of training them to control and suppress those cravings.

But since this clearly changes the process and cognitive state for the subject, the researchers said, they may face special challenges in trying to decode a "moving target" and in separating the "training" phase from the "prediction" phase.
