Cirrhosis can develop if chronic hepatitis advances toward end-stage liver disease. Experts estimate that at least half of those with cirrhosis have some degree of hepatic encephalopathy, and research demonstrates that this combination can increase the risk of driving mishaps.
by Nicole Cutler, L.Ac.
When chronic hepatitis progresses to cirrhosis, an array of additional health concerns can surface. One of the most feared complications of cirrhosis, hepatic encephalopathy, appears to be hard to detect in its early stages. Because it interferes with a person’s concentration and reaction time, even a mild case of this condition poses a major risk for anyone driving a motor vehicle.
Hepatic Encephalopathy
Hepatic encephalopathy (HE) is brain and nervous system damage that can occur as a complication of liver disorders. Ranging from mild to severe, HE causes different nervous system symptoms including changes in:
· reflexes
· consciousness
· behavior
In cirrhosis, toxins build up because blood circulation through the liver is reduced. Although the exact cause of HE is unknown, experts concur that it stems from an excess of these toxins in the bloodstream damaging the central nervous system.
In people with otherwise stable liver disorders, HE may be triggered by:
· gastrointestinal bleeding
· eating too much protein
· infections
· renal disease
· medical procedures that shunt blood around the liver
· electrolyte abnormalities (especially a decrease in potassium) – potentially resulting from vomiting, paracentesis or taking diuretics
Hepatic encephalopathy may also be triggered by any condition that results in:
· alkalosis – when there is excess base (alkali) in body fluids
· low oxygen levels in the body
· use of medications that suppress the central nervous system (such as barbiturates or benzodiazepine tranquilizers)
· surgery
· co-occurring illness
Stages of HE
As liver impairment progresses, an increasing amount of circulating toxins becomes available to damage the nervous system. The degree of neurological damage is therefore characterized in incremental stages. One of the hallmark tests used to distinguish hepatic encephalopathy stages is asterixis, otherwise known as flapping tremor. Asterixis is defined by the University of Washington Division of Gastroenterology as the intermittent lapse of posture of the outstretched hands. Using this and other markers, the stages of HE are as follows (see the brief summary sketch after this list):
· Stage 0 = Considered sub-clinical; there are no grossly evident changes in personality or behavior, and abnormalities are detectable only by special tests of central nervous system function. In addition, there is no asterixis.
· Stage 1 = In this stage, asterixis may be present. Characteristic symptoms are a shortened attention span, impaired handwriting and ability to perform simple arithmetic, impaired sleep and memory and altered mood.
· Stage 2 = Stage 2 is characterized by lethargy and/or apathy, disorientation, inappropriate behavior, slurred speech and obvious asterixis.
· Stage 3 = Stage 3 is evidenced by gross disorientation and semi-stuporous or stuporous behavior; asterixis may be difficult to elicit.
· Stage 4a = At this late stage of HE, the patient is in a frank coma. Seizures may occur with fulminant liver failure.
· Stage 4b = When HE is fatal, the swollen cerebrum herniates through the opening at the base of the skull (the foramen magnum).
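For readers who want the staging at a glance, here is a minimal sketch (in Python) that collapses the descriptions above into a simple lookup table; the wording paraphrases this article rather than any formal grading instrument:

# Compact lookup table summarizing the HE stages described above (paraphrased).
HE_STAGES = {
    "0":  "Sub-clinical: no grossly evident personality/behavior changes; abnormal only on special CNS tests; no asterixis",
    "1":  "Short attention span; impaired handwriting, arithmetic, sleep and memory; altered mood; asterixis may be present",
    "2":  "Lethargy/apathy, disorientation, inappropriate behavior, slurred speech; obvious asterixis",
    "3":  "Gross disorientation, semi-stupor or stupor; asterixis may be difficult to elicit",
    "4a": "Frank coma; seizures may occur with fulminant liver failure",
    "4b": "Fatal HE: herniation of the swollen cerebrum through the foramen magnum",
}

print(HE_STAGES["1"])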
Evidence Linking Driving Mishaps With HE
In a study published in the American Journal of Gastroenterology in 2007, researchers evaluated over 100 participants with cirrhosis and compared their motor vehicle accident and traffic violation history to healthy controls. After excluding participants taking medication that could potentially interfere with driving ability, those with cirrhosis, especially those in the early stages of HE, had significantly more motor vehicle accidents and traffic violations.
Although an estimated 20 to 85 percent of people with cirrhosis develop HE, those in the beginning stages are not always aware of their own central nervous system damage. While not everyone with cirrhosis has HE, the majority will likely show some evidence of the beginning stages on specialized testing.
According to the study’s authors, “It is well known that hepatic encephalopathy is a complication of cirrhosis. However, the spectrum of impact associated with this complication is not as well recognized by clinicians who care for these patients. Although patients with this complication may have very recognizable and overt evidence of impairment, others may exhibit changes in cognitive function that are too subtle to be detected by the standard neurologic status assessments. Such abnormalities are apparent only with neurophysiologic and neuropsychological testing.” These comments suggest that specific tests are the only way to realize that central nervous system injury is occurring, and that there is an increased risk associated with driving.
What to Look For
If you have cirrhosis but haven’t been evaluated by a neurologist for HE, it is helpful to be familiar with some of this ailment’s earliest signs. Delays in reaction time and abnormal response inhibition are characteristic of stage 1 HE, and can easily impair a person’s driving ability. The generally recognizable symptoms of this stage of brain dysfunction include:
· shortened attention span
· impaired handwriting
· impaired ability to perform simple arithmetic
· impaired sleep and memory
· altered mood
Based on this study, repeated motor vehicle accidents or traffic violations may eventually be added to this list.
If liver disease has progressed to cirrhosis, determining whether there is any effect on the brain can help a person make choices to enhance their own wellness and safety. Whether or not your changes in mood and behavior are indicative of HE, the statistics underscore the need for those with cirrhosis to be extra cautious whenever driving a car.
References:
Bajaj JS, et al, Minimal Hepatic Encephalopathy in Cirrhotic Patients and Its Association with Traffic Violations, American Journal of Gastroenterology, September 2007.
Worobetz, LJ, First Principles of Gastroenterology, (p. 537), AstraZeneca Canada Inc., 2000.
www.medscape.com, Is Minimal Hepatic Encephalopathy Associated With Increased Risk for Motor Vehicle Accidents and Traffic Violations?, David A. Johnson, MD, FACG, FACP, Medscape, 2007.
www.nlm.nih.gov, Hepatic Encephalopathy, National Institutes of Health, 2007.
www.uwgi.org, The Liver and Biliary System: Hepatic Encephalopathy, University of Washington Division of Gastroenterology, 2007.
Friday, March 5, 2010
Repeated anaesthesia can affect children's ability to learn
There is a link between repeated anaesthesia in children and memory impairment, though physical activity can help to form new cells that improve memory, reveals new research from the Sahlgrenska Academy.
The study has been published in the Journal of Cerebral Blood Flow & Metabolism.
”Paediatric anaesthetists have long suspected that children who are anaesthetised repeatedly over the course of just a few years may suffer from impaired memory and learning,” says Klas Blomgren, professor at the Queen Silvia Children’s Hospital and researcher at the Sahlgrenska Academy. ”This is a theory that is also supported by foreign research.”
His research team discovered, by chance, a link between stem cell loss and repeated anaesthesia when working on another study. They wanted to find out what happens to the brain’s stem cells when exposed to strong magnetic fields, for example during an MRI scan. The study was carried out using rats and mice, and showed that while the magnetic fields did not have any tangible effects on the animals, the repeated anaesthesia did.
”We found that repeated anaesthesia wiped out a large portion of the stem cells in the hippocampus, an area of the brain that is important for memory,” says Blomgren. ”The stem cells in the hippocampus can form new nerve and glial cells, and the formation of nerve cells is considered important for our memory function.”
Their results could also be linked to impaired memory in animals as they got older. The effect was evident only in young rats or mice that had been anaesthetised, not when adult animals were anaesthetised. This may be because stem cells are more sensitive in an immature brain, even though there are fewer of them as we get older.
”Despite extensive attempts, we have not been able to understand exactly what happens when the stem cells are wiped out,” says Blomgren. ”We couldn’t see any signs of increased cell death, but are speculating that the stem cells lose their ability to divide.”
Another treatment that wipes out the brain’s stem cells is radiotherapy, which is used with cancer patients. Blomgren and his research team have previously used animal studies to show that physical activity after radiotherapy can result in a greater number of new stem cells and partly replace those that have been lost.
”What’s more, the new nerve cells seem to work better in animals that exercise. Now that we know this, we can come up with treatments that prevent or reverse the loss of stem cells after repeated anaesthesia,” says Blomgren, who believes that the findings will lead to greater awareness of the problems and inspire further research into the reasons for the loss of stem cells.
ANAESTHESIA
Anaesthesia is the use of anaesthetics, which are administered to patients by inhalation and/or injection before a surgical procedure. Patients then fall asleep, relax their muscles and feel no pain whatsoever. Often a combination of several different drugs is given via a cannula. These take around 15-20 seconds to work, roughly the time it takes the anaesthetic to reach the brain.
Alzheimer's Drug: Dimebon Is Down The Drain
The pharmaceutical company Pfizer announced today that its investigational Alzheimer's medication, Dimebon, has failed an important test of its effectiveness in the treatment of Alzheimer's Dementia. A great deal of hope and expectation had been placed in this medication. Its apparent failure has been a disappointment not only for Pfizer, but for many physicians and sufferers of Alzheimer's Dementia that were expecting Dimebon to be a breakthrough in the treatment of the illness.
Results came from an FDA approved Phase III clinical trial named CONNECTION that was being run by Pfizer and its subsidiary company, Medivation. Phase III refers to a specific stage in the clinical investigation of new medications for human use. After animal studies have shown a medication to have promise, Phase I trials establish doses and safety of the medication in human beings. Phase II trials then try the medication on small groups of patients to see if the drug shows effectiveness and safety in its use in humans. Phase III trials start after all indications are that the medication is safe and likely to have benefit for patients. Phase III trials involve large numbers of patients in several different clinics or medical centers, and their results are safeguarded by the use of placebos and so-called double blind designs that prevent both the patient and the doctor from knowing who is getting what until the results are determined.
The CONNECTION trial was a Phase III study looking at the effects of Dimebon in about 600 patients with mild-to-moderate AD in North America, Europe, and South America. The patients in the study had an average age of 74.4 years and met criteria for the diagnosis of Alzheimer's Dementia of mild to moderate severity. The patients were randomly assigned to receive either Dimebon or a placebo, i.e., a "sugar pill," for six months. During that time their cognitive function was regularly assessed to determine what changes, if any, were occurring, and whether the two treatment groups differed from one another. The finding was that after six months, patients receiving Dimebon were no different from those who merely received the placebo. In other words, the Phase III trial showed that Dimebon was no more effective than a sugar pill. It offered no benefit for patients suffering a mild to moderate degree of Alzheimer's Dementia.
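To make the double-blind, placebo-controlled comparison concrete, here is a minimal sketch (in Python) of how a two-arm analysis of change in a cognitive score might look. All numbers are simulated under the assumption of no true drug effect; they are not data from the CONNECTION trial, and the endpoint and group sizes are illustrative stand-ins.

# Illustrative two-arm comparison: simulate change-in-cognitive-score data for a
# drug arm and a placebo arm, then test whether the groups differ.
# Invented numbers only -- NOT actual CONNECTION trial data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_arm = 300                                             # ~600 patients split evenly
drug = rng.normal(loc=-1.0, scale=6.0, size=n_per_arm)      # change scores on drug
placebo = rng.normal(loc=-1.0, scale=6.0, size=n_per_arm)   # same mean: no true effect

t_stat, p_value = stats.ttest_ind(drug, placebo)
print(f"mean change: drug {drug.mean():.2f}, placebo {placebo.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f} (a large p is consistent with 'no more effective than placebo')")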
The finding of the apparent ineffectiveness of Dimebon was surprising and disappointing largely because a 2008 clinical trial published in the prestigious journal The Lancet had obtained such positive results with the medication. In that study, 183 patients with mild to moderate Alzheimer's Dementia appeared to greatly benefit from treatment with Dimebon. I have noted comments in the press that this earlier study was suspect because it was performed in Russian clinics. However, the study was performed as a collaboration of the Russian Academy of Sciences with groups from Baylor, Mount Sinai, UC San Diego, and Georgetown University Colleges of Medicine. Moreover, as I have suggested, the study met the impeccable publication standards of The Lancet. Those remarkable first results with Dimebon were also consistent with what had been learned in animal studies about the effects of this drug on the brain. Dimebon has been found to mimic effects of both of the classes of drugs currently FDA approved to treat Alzheimer's Disease. That is, it blocks abnormal activity at NMDA receptors in the brain, as does the FDA approved memantine, and it blocks the enzymatic breakdown of the chemical messenger acetylcholine in the same fashion as drugs such as Aricept. In addition, Dimebon may help prevent build up of abnormal tau protein, which causes the neurofibrillary tangles of Alzheimer's. There are also reports that it can block some of the neurotoxic effects of amyloid, the abnormal protein that accumulates in the brains of sufferers of Alzheimer's.
Even before any trials in human beings, Dimebon had been found to improve the cognitive function of rats genetically engineered to exhibit changes in the brain and behavior similar to those seen in humans with Alzheimer's Dementia. Thus, there were compelling reasons to predict that Dimebon would again be shown to be helpful in improving the cognitive function of patients with Alzheimer's. Sadly, this was not the case.
An explanation may yet be found as to why Dimebon appeared to work in some but not other patients with Alzheimer's Dementia. Moreover, studies are ongoing to see if Dimebon may yet be helpful as an add-on to currently approved medications for Alzheimer's Dementia. Nonetheless, it is now clear that Dimebon is not a miracle cure-all. To quote the great biologist Thomas Huxley, "A beautiful theory has been destroyed by an ugly fact."
The quest for medications that can improve the cognitive function of sufferers of Alzheimer's dementia continues. It is possible that something will be found to stop or even reverse the degenerative processes of the illness. However, at present the most effective medications only slow the progression of the illness. Some people inherit genes that make it likely they will develop Alzheimer's Dementia no matter what they do. Thankfully, this is a small minority of people. For most of us, the best approach to Alzheimer's continues to be to avoid the illness by proper diet, stress reduction, sleeping well, staying active mentally, physically and socially, and by using vitamins, herbs and nutraceuticals that can slow down the neurodegenerative processes that cause damage to the brain. It is important that these steps be initiated in your 40's and 50's, when this damage to the brain tends to begin.
Food for the brain
Do you know that there is an oriental herb that has a stimulating effect on the brain? I am referring to gotu kola, a creeping plant that grows in our country as well as in other countries like India, Sri Lanka, Madagascar, South Africa, Japan, China, Indonesia and the South Pacific.
The botanical name of gotu kola is Centella asiatica. Other common names are marsh pennywort, Indian pennywort, hydrocotyle, brahmi, and lei gong gen. The last is the Chinese name of gotu kola.
Gotu kola has been used for centuries as an alternative medicine for the treatment of a variety of health problems. In the United States, for example, gotu kola is already popular for treating some health conditions, despite the fact that its health properties have not yet been evaluated by the US Food and Drug Administration (FDA).
Gotu kola should not be confused with kola nut (Cola nitida), an active ingredient in Coca-Cola. Gotu kola – considered by many to be one of the best herbal tonics for putting the body into a healthy state – is a tasteless, odorless plant that grows best in and around water. The herb has small fan-shaped green leaves and white or light purple-to-pink flowers. It also bears small oval fruits. It is the leaves and the stems of the gotu kola plant that have medicinal properties.
Gotu kola can be taken internally as a tablet or capsule, or prepared as a tea or tincture and applied externally to treat a number of conditions that I will mention later. There are, however, some side effects that those who would like to take gotu kola should be familiar with.
While gotu kola has medicinal properties that address a number of conditions, it is best known as a “food for the brain.” Unknown to many, gotu kola has been used for thousands of years to improve mental functions such as concentration and memory. It also improves learning capability. Cases have also been documented showing that gotu kola can reverse some of the memory loss associated with Alzheimer’s disease. Even developmentally disabled children have been found to improve their concentration and attention levels after taking gotu kola for at least 12 weeks.
Gotu kola also promotes healthy skin; heals ulcerations of the bladder; treats cellulite and varicose veins; speeds up wound healing; fortifies the immune system; revitalizes nerves; promotes long life; treats syphilis, hepatitis, stomach ulcers, epilepsy, diarrhea, fever and asthma; lowers high blood pressure; combats stress and anxiety; and minimizes the swelling of psoriatic arthritis, arthritis of the spine and rheumatoid arthritis. It is also a mild diuretic.
Because of the remarkable health benefits of gotu kola, the Department of Science and Technology (DoST) and the Bureau of Food and Drug Administration (BFAD) of the Department of Health should conduct more research on gotu kola. Both our domestic and export businesses will certainly benefit from greater government interest in, and support for, entities that are exploring the vast health and medicinal potential of gotu kola.
Brain scans - the marketing tool of the future?
London, March 5 (ANI): New research suggests that brain scans could be used as marketing tools in the future.
According to the analysis, done by Dan Ariely, the James B. Duke professor of psychology and behavioural economics at Duke and Gregory S. Berns of Emory's departments of psychiatry, economics and neuropolicy, scans of the human brain may help marketing experts to test a product's appeal while it is still being designed.
"Neuromarketing" takes the tools of modern brain science, like the functional MRI, and applies them to the somewhat abstract likes and dislikes of customer decision-making.
Ariely says, even though this raises the specter of marketers being able to read people's minds (more than they already do), neuromarketing may prove to be an affordable way for marketers to gather information that was previously unobtainable, or that consumers themselves may not even be fully aware of.
Ariely and Berns have offered tips on what to look for when hiring a neuromarketing firm, and what ethical considerations there might be for the new field. They also point to some words of caution in interpreting such data to inform marketing decisions.
Neuromarketing may never be cheap enough to replace focus groups and other methods used to assess existing products and advertising, but it could have real promise in gauging the conscious and unconscious reactions of consumers in the design phase of such varied products as "food, entertainment, buildings and political candidates," Ariely points out.
The report has been published online in the journal Nature Reviews Neuroscience. (ANI)
Mitochondria (the Powerhouses of our Cells) and Brain Disease
Introduction to Mitochondria and Disease
Mitochondria are membrane-enclosed organelles, 1-10 µm in size, found in eukaryotic cells and described as “cellular power plants” because they are responsible for producing adenosine triphosphate (ATP) by oxidative phosphorylation. Signal transduction (buffering and storage of intracellular calcium), control of the cell cycle and cell growth, and programmed cell death (apoptosis) are other important homeostatic processes governed by mitochondria. It is therefore not surprising that, although extensive research efforts have yet to establish the pathophysiology of many neurological diseases, mitochondrial dysfunction is hypothesised to play a substantial role, and mitochondria have consequently become prominent in neuroscience research today.
Dysfunction of mitochondrial energy metabolism leads to reduced ATP production, impaired calcium buffering, and generation of reactive oxygen species (ROS) such as superoxide anions, hydroxyl radicals and hydrogen peroxide (Cassarino & Bennett, 1999). ROS are increasingly recognized as playing an important role in neurodegenerative diseases because of their ability to cause oxidative stress and consequently damage cellular contents. Acute exposure to relatively high levels of oxidants, especially in the presence of calcium, can also induce opening of the mitochondrial permeability transition (MPT) pore, a voltage-sensitive, non-selective ion channel residing in the inner mitochondrial membrane. When it opens, it passes large-molecular-weight solutes between the mitochondrial matrix and the cytoplasm, rendering the normally impermeable inner membrane permeable and leading to a “large amplitude swelling” (the permeability transition) (Kroemer, Galluzzi, & Brenner, 2007).
The opening of the MPT pore is implicated as a mediator of cell death, owing to the resulting inhibition of the electron transport chain, programmed cell death (apoptosis), oxidative stress, increased leakage of Ca2+ currents (which activates apoptosis-inducing factors), and cytochrome c release (which activates apoptosis-inducing enzymes and may be a direct consequence of MPT pore opening) (Norenberg & Rao, 2007). The mitochondrial membrane potential (Ψm), roughly -150 mV, is the component of the proton electrochemical potential that determines Ca2+ sequestration and ROS generation. Damage to mitochondrial proteins and mitochondrial DNA (mtDNA) would be expected to decrease mitochondrial bioenergetics and efficiency, and the lack of histones in mtDNA and its diminished capacity for DNA repair render it particularly susceptible to oxidative stress (Lin & Beal, 2006). Mitochondrial ROS production is intimately linked to the membrane potential, such that hyperpolarization promotes ROS production (Valko et al., 2007).
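For context, the proton electrochemical potential mentioned above is often summarized as the proton-motive force, which combines Ψm with the pH gradient across the inner membrane. The following is a textbook-style approximation rather than a value taken from the cited studies, and sign conventions differ between sources:

\[ \Delta p \;=\; \Delta\Psi_m \;-\; \frac{2.303\,RT}{F}\,\Delta\mathrm{pH} \;\approx\; -150\ \text{mV} \;-\; (61.5\ \text{mV})(0.5) \;\approx\; -180\ \text{mV} \]

where 2.303 RT/F ≈ 61.5 mV at 37 °C and ΔpH ≈ 0.5 reflects the matrix being slightly alkaline relative to the intermembrane space.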
It is not surprising, then, that in adult neurons, which depend primarily on mitochondrial ATP production to meet their bioenergetic demands, any compromise in mitochondrial function places them at high risk of dysfunction or death. The association between mitochondrial abnormalities and disease has been known for approximately four decades, since the description of a patient with hypermetabolism whose skeletal muscle biopsy demonstrated large numbers of abnormal mitochondria, a disorder now termed “mitochondrial myopathy” (Cassarino & Bennett, 1999). There exists substantial evidence that mitochondrial dysfunction and oxidative damage may play a key role in the pathogenesis of neurodegenerative disease. Evidence implicating both mitochondrial dysfunction and oxidative damage in the pathogenesis of Alzheimer’s disease (AD) and Huntington’s disease (HD), as well as ischemia and other neurological disorders, continues to accumulate. This review aims to outline the role mitochondria may play in AD, a debilitating CNS disorder for which there is currently no cure, but for which substantial evidence suggests a mitochondrial interplay with disease pathogenesis.
Mitochondria in Alzheimer’s Disease (AD)
AD, the most common form of dementia, is a terminal neurodegenerative disease, neuropathologically characterised by amyloid “senile” plaques (composed of beta-amyloid, or Aβ) and tau-containing neurofibrillary tangles (NFTs), and marked clinically by short-term memory loss, confusion, anger and mood swings. Dysfunction of mitochondrial electron transport proteins has been associated with the pathophysiology of AD, and Blass and Gibson were among the first to prompt the notion that defective energy metabolism is a fundamental component of the disease (Sullivan & Brown, 2005). The most consistent defect in mitochondrial enzyme activity reported in AD has been in the electron transport system (ETS) carrier cytochrome c oxidase (COX) (Hirai et al., 2001). It has been hypothesized by research groups worldwide that defective mitochondrial metabolism sets up a cascade of pathological events that initiates AD.
Disruptions in energy metabolism have been suggested to be a prominent feature, perhaps even a fundamental component, of AD, as shown by abnormalities in cerebral metabolism that precede the onset of neurological dysfunction as well as gross neuropathology in AD (Sullivan & Brown, 2005). There are substantial data from positron emission tomography (PET), for example, that consistently show reduced cerebral metabolism in temporoparietal cortices in AD (Sullivan & Brown, 2005). These changes may stem from inhibition of mitochondrial enzymes including pyruvate dehydrogenase, cytochrome c oxidase, and α-ketoglutarate dehydrogenase. In particular, amyloid-binding alcohol dehydrogenase (ABAD), a mitochondrial matrix-localised enzyme, may be a direct molecular link between amyloid and mitochondrial toxicity. Evidence comes from the finding that amyloid bound to ABAD was found in AD brain mitochondria, and blocking this interaction in vitro suppressed β-amyloid–induced apoptosis and free radical generation in neurons. Furthermore, transgenic mice over-expressing ABAD, when crossed with mice over-expressing β-amyloid, showed exaggerated oxidative stress and impaired memory. Similarly, α-ketoglutarate dehydrogenase complex activity is severely decreased in post-mortem AD brain (Hirai et al., 2001).
There are strong links between the mitochondrial and amyloid hypotheses. On one hand, mitochondrial dysfunction and oxidative stress may alter APP processing, leading to increased intracellular Aβ accumulation. On the other hand, β-amyloid may cause mitochondrial dysfunction and oxidative stress. A recent paper has shown that intracellular accumulated β-amyloid precedes both neurofibrillary tangles and synaptic dysfunction in a transgenic mouse expressing β-amyloid, presenilin, and tau mutations. The effects of crossing mice with a partial deficiency of manganese superoxide dismutase with Tg1995 mice were examined (William et al., 1998). This markedly exacerbated β-amyloid deposition, providing direct evidence of a link between β-amyloid deposition and oxidative damage (Castellani et al., 2002).
Studies from Blass et al. have further elucidated the role of mitochondria in AD by showing that neurons in AD brains accumulate mitochondrial debris in their perikarya, which results from oxidative damage to mtDNA and mitochondrial proteins and may be related to deficient or defective microtubule metabolism (figure 1). Currently, it is not clear whether oxidative damage to mitochondria leads to decreased function, or whether decreased efficiency of the ETS results in excessive electron release and ROS formation. Regardless, the result would be a vicious feed-forward cycle in which increased oxidative stress continually reduces mitochondrial bioenergetics, and this loss of bioenergetics is coupled with further oxidative damage and ROS production. A recent study has depicted the relationship between mitochondrial abnormalities in AD and oxidative stress, with results showing that the same neurons that showed increased oxidative damage in AD had a striking and significant increase in mtDNA and cytochrome oxidase. In addition, morphometric analysis showed that mitochondria were significantly reduced in AD, which together suggests an early association between hallmark neuropathological features of AD and oxidative damage (Hirai et al., 2001).
Conclusion and Implications for the Future
Mitochondrial defects are now described in a wide spectrum of human conditions, including neurodegenerative and metabolic diseases, aging, and cancer. Further studies examining the importance of mitochondrial pathophysiology in neurodegenerative diseases such as AD and HD may provide important insight into neurodegenerative disease pathogenesis and may indeed provide a target for specific therapies.
There is increasing interest in the potential usefulness of coenzyme Q10 (CoQ10) for treating neurodegenerative diseases, because CoQ10 administration can increase CoQ10 concentrations in the brain and in brain mitochondria of mature and older animals. CoQ10 (also known as ubiquinone) serves as an important cofactor of the electron transport chain, where it accepts electrons from complexes I and II, and it also serves as an important antioxidant in both mitochondria and lipid membranes. A prior study showed that vitamin E has efficacy in slowing the progression of AD. The antioxidants curcumin and melatonin exert beneficial effects on amyloid deposition in transgenic mouse models of AD. It is, therefore, possible that CoQ10 might similarly be beneficial in AD.
References
Cassarino, D. S., & Bennett, J. P. (1999). An evaluation of the role of mitochondria in neurodegenerative diseases: mitochondrial mutations and oxidative pathology, protective nuclear responses, and cell death in neurodegeneration. Brain Research Reviews, 29(1), 1-25.
Castellani, R., Hirai, K., Aliev, G., Drew, K. L., Nunomura, A., Takeda, A., et al. (2002). Role of mitochondrial dysfunction in Alzheimer's disease. Journal of Neuroscience Research, 70(3), 357-360.
Hirai, K., Aliev, G., Nunomura, A., Fujioka, H., Russell, R. L., Atwood, C. S., et al. (2001). Mitochondrial Abnormalities in Alzheimer's Disease. J. Neurosci., 21(9), 3017-3023.
Kroemer, G., Galluzzi, L., & Brenner, C. (2007). Mitochondrial membrane permeabilization in cell death. Physiological Reviews, 87(1), 99-163.
Lin, M. T., & Beal, M. F. (2006). Mitochondrial dysfunction and oxidative stress in neurodegenerative diseases. Nature, 443(7113), 787-795.
Norenberg, M. D., & Rao, K. V. R. (2007). The mitochondrial permeability transition in neurologic disease. Neurochemistry International, 50(7-8), 983-997.
Sullivan, P. G., & Brown, M. R. (2005). Mitochondrial aging and dysfunction in Alzheimer's disease. Progress in Neuro-Psychopharmacology and Biological Psychiatry, 29(3), 407-410.
Valko, M., Leibfritz, D., Moncol, J., Cronin, M. T. D., Mazur, M., & Telser, J. (2007). Free radicals and antioxidants in normal physiological functions and human disease. The International Journal of Biochemistry & Cell Biology, 39(1), 44-84.
Superconductors to simulate the brain
Electronic components that exploit the phenomenon of superconductivity could allow us to study the collective behaviour of large numbers of neurons operating over long timescales. That is the finding of scientists in the US, who have shown how networks of artificial neurons containing two Josephson junctions would outpace more traditional computer-simulated brains by many orders of magnitude. Studying such junction-based systems could improve our understanding of long-term learning and memory along with factors that may contribute to disorders like epilepsy.
The human brain consists of some 100 billion nerve cells known as neurons, each of which receives electrical inputs from a number of its neighbours and then sends an electrical output to others – a process known as "firing" – when the sum of its inputs exceeds a certain level. The connections between neurons are known as synapses and it is the relative weighting of these that determines how the brain processes information.
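As a toy illustration of that sum-and-threshold behaviour (a sketch in Python, not a model used by either the Colgate or Blue Brain groups), a single artificial neuron can be written as a weighted sum compared against a firing threshold:

# Toy artificial neuron: fires (returns 1) when the weighted sum of its inputs
# exceeds a threshold, mirroring the description above. Weights and threshold
# are arbitrary illustrative values.
def neuron_fires(inputs, weights, threshold):
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum > threshold else 0

synaptic_weights = [0.9, -0.4, 0.6]   # relative weighting of three synapses
print(neuron_fires([1, 1, 0], synaptic_weights, threshold=0.8))  # 0: sum 0.5, stays quiet
print(neuron_fires([1, 0, 1], synaptic_weights, threshold=0.8))  # 1: sum 1.5, fires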
One way to simulate the workings of the brain is using software. For example, the Blue Brain project at the Ecole Polytechnique Fédérale de Lausanne in Switzerland involves simulating in precise biological detail the 10,000 neurons that make up the neocortical column – the building block of the cerebral cortex, or grey matter.
Lack of speed a problem
One fundamental drawback with such an approach is speed. The neurons and their connections exist in computer code, which means that they must be simulated sequentially. This requires significant computing power and means that simulations take far longer to run than actual brain processes. The alternative is to create a physical analogue of the brain, making artificial neurons and connecting them up in parallel. One way to do this is to build up neurons using transistors and then exploit existing microchip fabrication techniques to create large neural networks. Unfortunately, transistors lack the nonlinearity between current and voltage that characterizes neurons, and reproducing this behaviour means connecting up at least 20 transistors for each neuron.

Josephson junctions, on the other hand, are inherently nonlinear and much quicker than transistors – responding to a changing input on a timescale of around 10⁻¹¹ s rather than the 10⁻⁹ s typical of transistors. The junctions consist of two superconducting layers separated by an insulating gap, which is thin enough to allow charge-carrying Cooper pairs to tunnel across and couple the wavefunctions of the two superconductors. Small currents lead to no voltage across the gap (this is the "supercurrent" that encounters no resistance), whereas higher currents result in progressively greater voltages. Crucially, intermediate currents cause a short-duration voltage pulse, which is the equivalent of a neuron firing.
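As a rough illustration of that pulse-like behaviour, the sketch below (in Python) numerically integrates the standard overdamped resistively-shunted-junction model of a single junction in dimensionless units. It is not the two-junction neuron circuit analysed by the Colgate group, and the parameter values are arbitrary choices for illustration.

# Minimal sketch of Josephson-junction "firing" using the overdamped,
# dimensionless RSJ model:  dphi/dt = i(t) - sin(phi),  voltage ~ dphi/dt.
# A current below the critical value (1 in these units) gives essentially no
# voltage; a brief pulse above it makes the phase slip by multiples of 2*pi,
# producing a short voltage pulse -- loosely analogous to a neuron firing.
# Illustrative parameters only; NOT the two-junction neuron circuit itself.
import numpy as np

def simulate(pulse_amplitude, bias=0.2, t_on=20.0, t_off=30.0, dt=0.01, t_max=60.0):
    steps = int(t_max / dt)
    phi = np.arcsin(bias)              # start at the zero-voltage rest point
    phase = np.zeros(steps)
    voltage = np.zeros(steps)
    for n in range(steps):
        t = n * dt
        i = bias + (pulse_amplitude if t_on <= t < t_off else 0.0)
        dphi = i - np.sin(phi)         # dimensionless junction dynamics
        phi += dphi * dt
        phase[n] = phi
        voltage[n] = dphi              # instantaneous "voltage"
    return phase, voltage

for amp in (0.3, 1.2):                 # sub- and supra-critical input pulses
    phase, voltage = simulate(amp)
    print(f"pulse {amp:+.1f}: total phase change = {phase[-1] - phase[0]:6.2f} rad, "
          f"peak voltage = {voltage.max():.2f}")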
Now Patrick Crotty, Dan Schult and Ken Segall of Colgate University in the US have worked out the mathematics of an artificial neuron consisting of just two Josephson junctions and three inductors, joined to an artificial synapse consisting of an inductor, a capacitor and a pair of resistors.
Three vital characteristics
The two junctions correspond to two different ion channels in a neuron, with one responsible for initiating the voltage pulse while the other restores the neuron to its resting potential. Crotty and colleagues have shown that this system shares three vital characteristics with an actual neuron. In addition to firing, it has a threshold: firing occurs only when the input current exceeds some minimum value. Also, the artificial neuron, like a real neuron, must rest for a certain length of time after firing before it can fire again.
The team worked out how much more quickly such a Josephson-junction-based neuron could fire than the neurons reproduced in a number of different software models, assuming that these models are run on a computer that can carry out a billion floating point operations per second. It found that for individual neurons the device should fire some 100 times more rapidly than the simplest kind of digital neuron. But this advantage, the researchers say, would become much more pronounced when large numbers of neurons are hooked up to one another in a network. They calculate that for 1000 interconnected neurons their approach would be at least 10 million times quicker.

Planning experiments
The current work is purely theoretical but the group is starting to design networks of Josephson-junction neurons for some initial prototyping experiments. Segall says that it should eventually be straightforward to fabricate chips with some 10,000 Josephson-junction neurons (enough for a neocortical column), given that similar circuits with twice as many junctions have already been produced. Putting a number of such chips together should then allow researchers to study certain collective neural phenomena, such as how large groups of neurons fire in step, or synchronize, which might prove useful in combating epilepsy given that this condition is caused by unwanted synchronization.

The existing design does not permit learning since the weighting of connections between synapses cannot be changed over time, but Segall believes that if this feature can be added then their neurons might allow a lifetime's worth of learning to be simulated in five or ten minutes. This, he adds, should help us to understand how learning changes with age and might give us clues as to how long-term disorders like Parkinson's disease develop.
Henry Markram, the biologist who heads Blue Brain, says that the American group's work "may have interesting applications for artificial neural networks" but believes that it is less well suited for reproducing real brain circuitry. This, he says, is partly because the Josephson-junction neurons lack the dendrites and axons that connect real neurons together. He also points out that it would be far harder to monitor individual neurons than it is in computer simulations, limiting this approach to those phenomena that can be characterized by the values of the system as a whole, such as data from electroencephalogram measurements.
Using Nanoscale Technologies to Understand and Replicate the Human Brain
Scanning electron microscope image (false color) of a rat hippocampal neuron on a bed of vertical silicon nanowires. Nanowires penetrate the cell membrane without affecting cell viability, and can be used to efficiently deliver a wide variety of molecules into the cell's cytoplasm. Courtesy: H. Park
Abstract:
How does the brain compute? Can we emulate the brain to create supercomputers far beyond what currently exists? And will we one day have tools small enough to manipulate individual neurons -- and if so, what might be the impact of this new technology on neuroscience?
Oxnard, CA | Posted on March 5th, 2010
Recently, three renowned researchers - one neuroscientist and two nanoscientists - discussed how their once diverging disciplines are now joining to understand how the brain works at its most basic cellular level, and the extraordinary advances this merger seems to promise for fields ranging from computer technology to health.
Neuroscientist and KIBM Co-Director Nicholas Spitzer leads a discussion on the intersection of neuroscience and nanoscience with two nanoscience pioneers - Stanford University's Kwabena Boahen and Harvard University's Hongkun Park.
* Nicholas Spitzer (Host), Professor of Neurobiology and Co-Director of the Kavli Institute for Brain and Mind at the University of California, San Diego, has pursued groundbreaking studies into the activity and development of neurons and neuronal networks for more than four decades.
* Kwabena Boahen, Associate Professor of Bioengineering at Stanford University, is using silicon integrated circuits to emulate the way neurons compute, bridging electronics and computer science with neurobiology and medicine. At Stanford, his research group is developing "Neurogrid," a hardware platform that will emulate the cortex's inner workings.
* Hongkun Park, Professor of Chemistry and of Physics at Harvard University, is known for his work in developing computing technology modeled after the human brain and nervous system. Park is pushing the frontiers of nanotechnology by developing devices capable of probing and manipulating individual neurons.
NICHOLAS SPITZER: Hongkun and Kwabena, thanks for joining me for this conversation about the intersection of neuroscience and nanoscience. Here in La Jolla [California] we have a joke - that the most rapidly growing area of physics is neuroscience. But it actually makes sense, because a lot of exciting discoveries have been made at the interface of fields that haven't been in really close contact before. Certainly from the neuroscience side this interface between nanoscience and neuroscience is very attractive, because neuroscience wants to apply tools from nanoscience. I hope that this is a two-way street and that the nanoscientists are glad of the applications of their work to neuroscience.
Hongkun, I'll start with you. How did you become interested in neuroscience? This is different from your background in chemistry and physics. What drew you toward this field?
HONGKUN PARK: It started about five years ago, as a part of what you might call a "mid-career crisis." At the time I had just been promoted, and I was looking to do something quite different from what I was doing. I still have a program in physics and nanoscience - about half of my group is working on that - but I missed being on a steep learning curve, and in order to revive that feeling, I wanted to learn something new. It seemed like neuroscience was the perfect thing. A lot of interesting and exciting developments were happening, but it also seemed that we could contribute by providing new tools that could perturb and probe complex neuronal networks. So that's when I started getting into this particular field.
NS: Kwabena, how about you? What got you excited about neuroscience?
KWABENA BOAHEN: I was one of those people who started out with an interest in engineering from an early age. When I was young I liked to take apart stuff and build things. I also hated biology - I couldn't memorize to save my life. I got my first computer as a teenager, and I went to a library and figured out how it worked and I was kind of really turned off. I thought computers were very "brute force" and I thought there must be a more elegant way to compute. I didn't really know anything about the brain, because of my lack of biology. I actually discovered more about how brains work when I was an undergrad. This was in the late ‘80s when there was a lot of hoopla about neural networks, which were mathematical abstractions about how the brain works. And so that's how I got into it. As I learned more about biology I discovered that it had very elegant ways of computing, and I got deeper and deeper into neuroscience.
NS: As I understand it, Kwabena, you want to bridge experiment, theory and computation by building what you call "an affordable supercomputer" that not only works like a brain but also helps us to understand how the brain works. How are those goals going to complement one another?
KB: As I mentioned, when I first learned about computers I thought they were very "brute force." When I was an undergrad I learned about how much energy the brain used and how much computation it did with that energy, and it was orders and orders of magnitude -- six orders of magnitude or more -- more efficient than a computer. So this is what got me interested. I said -- hey, why don't we just design chips that are based on these neural circuits and neural systems. That has culminated in the "Neurogrid" project that we're doing right now, where we build models of the components of neurons and synapses, and various parts of the brain, directly with transistors, and do it very efficiently. This allows us to make the kind of calculation that it would normally take a supercomputer to do, but with only a few chips. Instead of using megawatts of power we use just one watt. We're trying to make it easier and more affordable to do large-scale simulations, on the order of a million neurons.
Kwabena Boahen, Associate Professor of Bioengineering at Stanford University (Credit: Michael Halaas)
NS: Kwabena, you're doing this by working with what are called "neuromorphic" chips that were pioneered by Carver Mead. But when Mead developed them, he assembled silicon neurons in a hard-wired manner, and you have really broken that open with soft-wiring of the neurons on a chip. Can you talk a little bit about that?
KB: I did my Ph.D. with Carver Mead at Caltech, and at the time he was working on the silicon retina. The way that was built on a chip was to hard-wire these transistors together to match the connectivity between the neurons and the retina, and also essentially to pre-design how the individual neurons behave. In order to use these neuromophic chips as a programmable platform for doing simulations, we wanted to make it possible to reconfigure the connections as well as to simulate different neurons with the same circuit. And so we've been able to come up with a technique called soft-wiring, which works similarly to the Internet. By giving each neuron an address, we can send a spike from any neuron to any other neuron, just like you can send email from one computer to any other computer based just on its IP address, with no direct physical connection between your computer and that computer. By using this same approach we can actually make these connections configurable, and we call them "soft-wired." For the neuronal properties themselves, we built a circuit that solves the Hodgkin-Huxley equations that are used to describe how any type of ion channel behaves. Once I have those circuits I can model those equations and therefore I can model any type of ion channel.
NS: And my understanding is that with your Neurogrid supercomputer you are modeling a million neurons at this point. Am I correct?
KB: Yes. We are modeling a million neurons connected by something like 6 billion connections.
NS: That's really quite amazing. I'm going to come back to you in a moment with further questions about what we're learning from the Neurogrid project. But let me turn to Hongkun. You developed a remarkable vertical nanowire arrays that have contributed to our understanding of the design principles of cellular networks. How were you inspired to invent this technology? What was the background that led you to develop this?
HP: A lot of people are working on nanostructure-cell interactions, and there have been some studies showing that vertical nanostructures can support cell growth. When I first saw that, I thought, "That cannot be true… How can a cell be happy on top of those vertical needles?" So we tried to see whether neurons and other cells could be supported on these vertical silicon nanowires that we grow with various techniques, and started think about what we could do with those needles. We soon found, amazingly enough, when these cells -- whether they were neurons, stem cells or what have you -- are put on top of these needles, they actually like to be poked by these needles. Apparently they are not bothered by them - they function normally, and continue to divide and differentiate. So what we are trying to do now is to use that unique interface between vertical nanowires and living cells, cellular networks or even tissues to poke, perturb, and probe them in a cell-specific fashion.
NS: This is really quite remarkable. What are the dimensions and densities of the nanowires?
HP: Typically, we use nanowires with ~100 nm in diameter. Their dimensions can vary quite a bit. For, say, perturbation experiments, we use 1.5-micron or 2-micron length nanowires, but these can be longer. One of the reasons we have been using 2- micron long nanowires is that longer nanowires pierce cultured cells. Since these nanowires are prepared using standard semiconductor processing techniques, we can prepare, within, say, an hour or so, 6-inch wafers full of nanowires with varying densities, varying lengths and varying diameters.
NS: It's really impressive to see the way you not only record electrical and biochemical activity, but also introduce molecular probes using coated nanowires. Our audience will remember that the development of a technique for recording the electrical activity of spinal cord cells and neurons - the patch clamp technique -- was such a valuable tool that it led to the award of a Nobel Prize a number of years ago. So one is always very interested in new tools for this kind of study.
Kwabena, let me come back to you. One recognizes that the ultimate test of understanding of a process is the ability to reconstruct it. This is of course precisely what you're doing -- to reconstruct the behavior of the nervous system. I want to ask what you're learning from these remarkable emulations of the nervous system.
KB: One of the main things we want to use this system to model is the feedback between cortical areas. In the visual system alone, there are about three dozen representations of the visual world. These are called cortical areas. And there's a massive amount of feedback. Pretty much every area that talks to another area gets feedback from it. About half the connections are feedback connections. Feedback is a real problem to deal with because you have to sort of break the loop to control the input that's going to a particular area so that you can do a virtual experiment. And then you somehow have to put that loop back together to try to get the system to operate in the way that it's supposed to. The solution goes back to Alan Hodgkin and Andrew Huxley, who figured out how action potentials are generated. They used the voltage clamp technique to fix the voltage of the neuron and measure the current carried by all the various ion channels. So they broke the loop; they didn't allow the neuron to spike. But then they demonstrated in a model - that was one of the first computational neuroscience models --that when they put these currents together they could generate a spike mathematically by simulating the equations that they had derived from their experiments.
Basically we wanted to do the same thing at the system level, by characterizing what each cortical area does, and then hooking them together with the feedback in the model to try to understand top-down effects like attention. We are basically at the stage where we are dealing with single layers of neurons modeled on a single chip. We haven't yet hooked multiple chips together in various layers of the different cortical areas. At the single-layer level, we've been able to model brain rhythms. One of these that is important for attention is called the gamma rhythm. This is in the 40 hertz range. It's nested with a slower rhythm called a theta rhythm. We've been able to reproduce these nested rhythms in this model that we've programmed this chip to do.
NS: Hongkun, let me turn to you and ask what are some of the things that you and your colleagues are learning from these remarkable vertical nanowires, either in recordings or in the perturbation experiments that you're doing
Neuroscientist and KIBM Co-Director Nicholas Spitzer leads a discussion on the intersection of neuroscience and nanoscience with two nanoscience pioneers - Stanford University's Kwabena Boahen and Harvard University's Hongkun Park.
* Nicholas Spitzer (Host), Professor of Neurobiology and Co-Director of the Kavli Institute for Brain and Mind at the University of California, San Diego, has pursued groundbreaking studies into the activity and development of neurons and neuronal networks for more than four decades.
* Kwabena Boahen, Associate Professor of Bioengineering at Stanford University, is using silicon integrated circuits to emulate the way neurons compute, bridging electronics and computer science with neurobiology and medicine. At Stanford, his research group is developing "Neurogrid," a hardware platform that will emulate the cortex's inner workings.
* Hongkun Park, Professor of Chemistry and of Physics at Harvard University, is known for his work at the interface of nanoscience and biology. Park is pushing the frontiers of nanotechnology by developing devices capable of probing and manipulating individual neurons.
NICHOLAS SPITZER: Hongkun and Kwabena, thanks for joining me for this conversation about the intersection of neuroscience and nanoscience. Here in La Jolla [California] we have a joke - that the most rapidly growing area of physics is neuroscience. But it actually makes sense, because a lot of exciting discoveries have been made at the interface of fields that haven't been in really close contact before. Certainly from the neuroscience side this interface between nanoscience and neuroscience is very attractive, because neuroscience wants to apply tools from nanoscience. I hope that this is a two-way street and that the nanoscientists are glad of the applications of their work to neuroscience.
Hongkun, I'll start with you. How did you become interested in neuroscience? This is different from your background in chemistry and physics. What drew you toward this field?
HONGKUN PARK: It started about five years ago, as a part of what you might call a "mid-career crisis." At the time I had just been promoted, and I was looking to do something quite different from what I was doing. I still have a program in physics and nanoscience - about half of my group is working on that - but I missed being on a steep learning curve, and in order to revive that feeling, I wanted to learn something new. It seemed like neuroscience was the perfect thing. A lot of interesting and exciting developments were happening, but it also seemed that we could contribute by providing new tools that could perturb and probe complex neuronal networks. So that's when I started getting into this particular field.
NS: Kwabena, how about you? What got you excited about neuroscience?
KWABENA BOAHEN: I was one of those people who started out with an interest in engineering from an early age. When I was young I liked to take apart stuff and build things. I also hated biology - I couldn't memorize to save my life. I got my first computer as a teenager, and I went to a library and figured out how it worked and I was kind of really turned off. I thought computers were very "brute force" and I thought there must be a more elegant way to compute. I didn't really know anything about the brain, because of my lack of biology. I actually discovered more about how brains work when I was an undergrad. This was in the late ‘80s when there was a lot of hoopla about neural networks, which were mathematical abstractions about how the brain works. And so that's how I got into it. As I learned more about biology I discovered that it had very elegant ways of computing, and I got deeper and deeper into neuroscience.
NS: As I understand it, Kwabena, you want to bridge experiment, theory and computation by building what you call "an affordable supercomputer" that not only works like a brain but also helps us to understand how the brain works. How are those goals going to complement one another?
KB: As I mentioned, when I first learned about computers I thought they were very "brute force." When I was an undergrad I learned about how much energy the brain used and how much computation it did with that energy, and it was orders and orders of magnitude -- six orders of magnitude or more -- more efficient than a computer. So this is what got me interested. I said -- hey, why don't we just design chips that are based on these neural circuits and neural systems? That has culminated in the "Neurogrid" project that we're doing right now, where we build models of the components of neurons and synapses, and various parts of the brain, directly with transistors, and do it very efficiently. This allows us to make the kind of calculation that it would normally take a supercomputer to do, but with only a few chips. Instead of using megawatts of power we use just one watt. We're trying to make it easier and more affordable to do large-scale simulations, on the order of a million neurons.
Kwabena Boahen, Associate Professor of Bioengineering at Stanford University (Credit: Michael Halaas)
NS: Kwabena, you're doing this by working with what are called "neuromorphic" chips that were pioneered by Carver Mead. But when Mead developed them, he assembled silicon neurons in a hard-wired manner, and you have really broken that open with soft-wiring of the neurons on a chip. Can you talk a little bit about that?
KB: I did my Ph.D. with Carver Mead at Caltech, and at the time he was working on the silicon retina. The way that was built on a chip was to hard-wire these transistors together to match the connectivity between the neurons in the retina, and also essentially to pre-design how the individual neurons behave. In order to use these neuromorphic chips as a programmable platform for doing simulations, we wanted to make it possible to reconfigure the connections as well as to simulate different neurons with the same circuit. And so we've been able to come up with a technique called soft-wiring, which works similarly to the Internet. By giving each neuron an address, we can send a spike from any neuron to any other neuron, just like you can send email from one computer to any other computer based just on its IP address, with no direct physical connection between your computer and that computer. By using this same approach we can actually make these connections configurable, and we call them "soft-wired." For the neuronal properties themselves, we built a circuit that solves the Hodgkin-Huxley equations that are used to describe how any type of ion channel behaves. Once I have those circuits I can model those equations and therefore I can model any type of ion channel.
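To make the soft-wiring idea concrete, the following is a minimal Python sketch of address-event routing: a spike travels as the address of the neuron that fired, and a programmable table decides which targets receive it and with what weight. The class, addresses and weights are illustrative assumptions, not Neurogrid's actual implementation.

```python
# Minimal sketch of address-event routing ("soft-wiring"): spikes are
# broadcast as source addresses, and a reconfigurable table maps each
# source to its targets -- no physical rewiring needed.
from collections import defaultdict

class AddressEventRouter:
    def __init__(self):
        # routing table: source neuron address -> list of (target, weight)
        self.table = defaultdict(list)

    def connect(self, src, dst, weight=1.0):
        """Reconfigure connectivity in software."""
        self.table[src].append((dst, weight))

    def route(self, spikes):
        """Deliver a list of source addresses; return summed input per target."""
        inputs = defaultdict(float)
        for src in spikes:
            for dst, w in self.table[src]:
                inputs[dst] += w
        return dict(inputs)

router = AddressEventRouter()
router.connect(src=0, dst=7, weight=0.5)    # neuron 0 drives neuron 7
router.connect(src=0, dst=3, weight=1.0)
router.connect(src=1, dst=7, weight=-0.25)  # an inhibitory connection
print(router.route([0, 1]))                 # {7: 0.25, 3: 1.0}
```

Changing a connection is just a table update, which is the sense in which the connectivity is "soft" rather than hard-wired.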
NS: And my understanding is that with your Neurogrid supercomputer you are modeling a million neurons at this point. Am I correct?
KB: Yes. We are modeling a million neurons connected by something like 6 billion connections.
NS: That's really quite amazing. I'm going to come back to you in a moment with further questions about what we're learning from the Neurogrid project. But let me turn to Hongkun. You have developed remarkable vertical nanowire arrays that have contributed to our understanding of the design principles of cellular networks. How were you inspired to invent this technology? What was the background that led you to develop this?
HP: A lot of people are working on nanostructure-cell interactions, and there have been some studies showing that vertical nanostructures can support cell growth. When I first saw that, I thought, "That cannot be true… How can a cell be happy on top of those vertical needles?" So we tried to see whether neurons and other cells could be supported on these vertical silicon nanowires that we grow with various techniques, and started thinking about what we could do with those needles. We soon found, amazingly enough, that when these cells -- whether they were neurons, stem cells or what have you -- are put on top of these needles, they actually like to be poked by these needles. Apparently they are not bothered by them - they function normally, and continue to divide and differentiate. So what we are trying to do now is to use that unique interface between vertical nanowires and living cells, cellular networks or even tissues to poke, perturb, and probe them in a cell-specific fashion.
NS: This is really quite remarkable. What are the dimensions and densities of the nanowires?
HP: Typically, we use nanowires that are ~100 nm in diameter. Their dimensions can vary quite a bit. For, say, perturbation experiments, we use 1.5-micron or 2-micron-long nanowires, but these can be longer. One of the reasons we have been using 2-micron-long nanowires is that longer nanowires pierce cultured cells. Since these nanowires are prepared using standard semiconductor processing techniques, we can prepare, within, say, an hour or so, 6-inch wafers full of nanowires with varying densities, varying lengths and varying diameters.
NS: It's really impressive to see the way you not only record electrical and biochemical activity, but also introduce molecular probes using coated nanowires. Our audience will remember that the development of a technique for recording the electrical activity of spinal cord cells and neurons - the patch clamp technique -- was such a valuable tool that it led to the award of a Nobel Prize a number of years ago. So one is always very interested in new tools for this kind of study.
Kwabena, let me come back to you. One recognizes that the ultimate test of understanding of a process is the ability to reconstruct it. This is of course precisely what you're doing -- to reconstruct the behavior of the nervous system. I want to ask what you're learning from these remarkable emulations of the nervous system.
KB: One of the main things we want to use this system to model is the feedback between cortical areas. In the visual system alone, there are about three dozen representations of the visual world. These are called cortical areas. And there's a massive amount of feedback. Pretty much every area that talks to another area gets feedback from it. About half the connections are feedback connections. Feedback is a real problem to deal with because you have to sort of break the loop to control the input that's going to a particular area so that you can do a virtual experiment. And then you somehow have to put that loop back together to try to get the system to operate in the way that it's supposed to. The solution goes back to Alan Hodgkin and Andrew Huxley, who figured out how action potentials are generated. They used the voltage clamp technique to fix the voltage of the neuron and measure the current carried by all the various ion channels. So they broke the loop; they didn't allow the neuron to spike. But then they demonstrated in a model - that was one of the first computational neuroscience models --that when they put these currents together they could generate a spike mathematically by simulating the equations that they had derived from their experiments.
Basically we wanted to do the same thing at the system level, by characterizing what each cortical area does, and then hooking them together with the feedback in the model to try to understand top-down effects like attention. We are basically at the stage where we are dealing with single layers of neurons modeled on a single chip. We haven't yet hooked multiple chips together in various layers of the different cortical areas. At the single-layer level, we've been able to model brain rhythms. One of these that is important for attention is called the gamma rhythm. This is in the 40 hertz range. It's nested with a slower rhythm called a theta rhythm. We've been able to reproduce these nested rhythms in this model that we've programmed this chip to do.
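For readers who want to see the loop "put back together" numerically, here is a compact Python sketch that integrates the standard Hodgkin-Huxley equations with forward Euler and counts the resulting spikes. The parameters are textbook squid-axon values and the code is only a sketch of the mathematics Boahen refers to, not the circuit equations implemented on the Neurogrid chips.

```python
# Sketch of the Hodgkin-Huxley equations: the separately measured membrane
# currents are summed and integrated, and together they generate spikes.
import math

# Classic squid-axon parameters (mV, mS/cm^2, uF/cm^2)
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.4

def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * math.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * math.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * math.exp(-(V + 65.0) / 80.0)

def simulate(I_ext=10.0, dt=0.01, t_end=50.0):
    """Forward-Euler integration; returns the number of spikes in t_end ms."""
    V, m, h, n = -65.0, 0.05, 0.6, 0.32   # resting state
    spikes, above, t = 0, False, 0.0
    while t < t_end:
        I_Na = g_Na * m**3 * h * (V - E_Na)
        I_K  = g_K  * n**4     * (V - E_K)
        I_L  = g_L             * (V - E_L)
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        if V > 0 and not above:   # count upward threshold crossings
            spikes += 1
        above = V > 0
        t += dt
    return spikes

print(simulate())   # a handful of spikes for a 10 uA/cm^2 current step
```

With a sustained 10 µA/cm² input the model fires repetitively, which is the sense in which currents characterized with the loop broken can "generate a spike mathematically" once they are reassembled.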
NS: Hongkun, let me turn to you and ask what are some of the things that you and your colleagues are learning from these remarkable vertical nanowires, either in recordings or in the perturbation experiments that you're doing?
HP: What we have shown so far is that we can indeed introduce a variety of different biochemicals into neurons, neuronal networks and now even tissues using these nanowires in a spatially selected fashion. With our recording tools we take multiple neurons -- our initial goal is 16 neurons -- record their activities, and correlate them with the connectivity of these neurons that we map optically. We are also collaborating with Sebastian Seung and Russ Tedrake at MIT, who will model the activities of the neurons that we record and come up with a model that can explain the behavior of these neurons and can be used to predict a control scheme.
NS: Let me ask about the targeting of specific cells in the network, Hongkun. In principle, one might be able to apply the concept of system identification that is used in other fields, such as engineering, to identify different layers in the circuit. I wonder if you can do that with the vertical nanowires. Are they addressable? Can one address individual vertical nanowires to provide identification, for example, of the neurons in your cultures?
HP: Yes, we are certainly aiming to do that. With our chip, which can electrically as well as chemically interface with neurons, we can individually address individual nanowires or nanowire bundles that are penetrating the cells so that we can record from individual cells. We can also couple that with microfluidics to administer neuromodulators, hormones and other molecules and see how site-selective administration of the chemicals modifies neural activity. One of the things we want to test is the model that Sebastian Seung came up with, the hedonistic neuron model, where in order to generate, for example, memory, you require not only persistent electrical activity but also chemical rewards such as dopamine. I think that our experimental platform is well suited to test these models.
NS: Kwabena, let me come back to you. Earlier in our conversation you pointed out the dramatic difference in the power requirements for the brain on the one hand, and something like a large computer, a classical computer, such as the IBM Blue Gene computer on the other hand. It is striking that the human brain requires only 10 watts whereas the Blue Gene requires a megawatt. What is the way in which the brain is achieving this wonderful energetic economy?
KB: Well, we don't know, do we? This is really a very interdisciplinary question, because neuroscientists who study the brain usually don't measure or try to understand what makes the brain efficient. At the same time, engineers and physicists who know about energy and things like that don't study actual neurons. If you come and you tell me that you've figured out how the brain works, the first thing I'm going to ask you is, "How does it do it with 10 watts?" But if you do some calculations based on 10 watts, you can get an idea of its style of computation. Knowing how much current a single ion channel passes when it's open, you can calculate how many ion channels can be open at the same time. When you divide that by the number of neurons in the brain, it turns out that only about a hundred to a thousand ion channels per neuron can be open at one time. And that's an amazingly small number.
Given that a neuron has 10,000 synapses, and something on the order of 10,000 ion channels are required to generate an action potential, a neuron has to be operating most of the time with most of its synapses inactive and most of its ion channels closed. This style of computation, where you've got a hundred stochastic elements opening and closing randomly, is going to be very noisy and probabilistic. So we know we have to find a style of computation that works in that regime.
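Boahen's back-of-envelope estimate can be reproduced in a few lines. The numbers below -- the single-channel current, the driving force, and the neuron count -- are rough assumptions chosen only to illustrate the style of the calculation, not figures quoted in the interview.

```python
# Rough estimate: how many ion channels can a 10-watt brain afford to
# hold open at once, per neuron? All constants are order-of-magnitude
# assumptions for illustration.
brain_power_w      = 10.0    # total power budget (W)
single_channel_amp = 5e-12   # current through one open channel (A), assumed
driving_force_v    = 0.1     # ~100 mV electrochemical driving force, assumed
neurons_in_brain   = 1e11    # order-of-magnitude neuron count, assumed

power_per_open_channel = single_channel_amp * driving_force_v    # ~0.5 pW
open_channels_total    = brain_power_w / power_per_open_channel  # ~2e13
open_channels_per_cell = open_channels_total / neurons_in_brain  # ~200

print(f"open channels supported per neuron: {open_channels_per_cell:.0f}")
# ~200, i.e. in the "hundred to a thousand" range described above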
NS: I remember reading about the model retina that you developed. Were there additional things that came out of the modeling - insights, perhaps, into how that part of the nervous system works so effectively?
KB: Two things. One has to do with how you deal with this kind of heterogeneity or variability between neurons. Normally people describe the retina as a kind of input-output device. And they say there's a difference of Gaussians operation that's performed on the input to generate the output. So it's sort of a black box model. If you take that model and implement it on a chip, you end up with a very crummy device. But if you go and look at the circuitry, what the individual cells are doing, and how they are connected, and if you base your chip on that design, you get something that's very robust to the heterogeneity of the devices on the chip. So this is saying something about how you translate that function into a circuit, and the kinds of constraints the circuit is solving in addition to performing that function.
So that's more on the engineering side of things. On the neuroscience side of things, going through that process of translating this function into a circuit, we come up with specific predictions of what identified cell types in the retina are doing. Those predictions can be tested to show we can assign specific functions to a specific cell type. This is another one of the things that came out of that modeling work.
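For comparison, the "black box" description of the retina that Boahen contrasts with a circuit-level model can be written down directly: a difference-of-Gaussians receptive field applied to an image patch. The kernel size and Gaussian widths below are illustrative assumptions, not parameters from his silicon retina.

```python
# Black-box retina model: a center-surround (difference-of-Gaussians)
# receptive field, applied to the center of an input image.
import numpy as np

def dog_kernel(size=21, sigma_center=1.0, sigma_surround=3.0):
    """Narrow excitatory Gaussian minus a broader inhibitory one."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    center   = np.exp(-r2 / (2 * sigma_center**2))   / (2 * np.pi * sigma_center**2)
    surround = np.exp(-r2 / (2 * sigma_surround**2)) / (2 * np.pi * sigma_surround**2)
    return center - surround

def ganglion_response(image, kernel):
    """Response at the image center: correlate the kernel with that patch."""
    h, w = kernel.shape
    r0, c0 = (image.shape[0] - h) // 2, (image.shape[1] - w) // 2
    patch = image[r0:r0 + h, c0:c0 + w]
    return float(np.sum(patch * kernel))

kernel = dog_kernel()
uniform = np.ones((64, 64))                           # flat illumination
spot = np.zeros((64, 64)); spot[30:34, 30:34] = 1.0   # small bright spot
print(ganglion_response(uniform, kernel), ganglion_response(spot, kernel))
```

A flat image drives center and surround almost equally, so the response is near zero, while a small bright spot over the center produces a strong response; the point of the circuit-level approach is that implementing this same function cell by cell, rather than as one formula, is what makes it robust on a heterogeneous chip.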
NS: One thing that I wanted to ask both of you about - and I will start with you, Kwabena - is the various commercial applications of the work that you are doing. For example, I remember reading with fascination a few years ago a book by Jeff Hawkins, the inventor of the Palm Pilot, called "On Intelligence." He described an effort that he is making to use neuronal architectures to build computers that he would like to patent and bring to the marketplace. Do you see an opportunity in this domain?
KB: I built my first neuromorphic chip - an associative memory - when I was an undergrad. I published a paper on it in 1989. So I've been in this business for 20 years. The picture has gotten more complex since then. This is part of the motivation to build Neurogrid. We have to advance our fundamental scientific understanding before we can turn this thing into a technology. To help accelerate that process, we said, well, the kinds of things we learned from the biology, in terms of modeling things like the retina and so on, we can turn into a computer that works like the brain and that will be much more efficient at doing these simulations, which would make them more affordable. And then we could have enough computational power to really do something.
We really have to understand how the computation arises, all the way down to the ion channel level, to understand how it's done efficiently. These multi-scale simulations require enormous amounts of computation.
NS: Hongkun, let me throw this ball to you. Do you see, either in the near future or down the road, applications beyond the scientific understanding of the technology that you've developed and are continuing to develop?
HP: My group is primarily interested in fundamental neuroscience, but some have shown interest in utilizing our technology beyond these studies. The things that we have demonstrated - such as the fact that the vertical nanowires can deliver any biological effector to any cell type in a spatially selected fashion - have drawn interest from many different people, and we have been working with stem cell institutes and others to demonstrate the unique utility of this particular platform. For example, one thing that we are doing is to try to differentiate individual pluripotent stem cells into a particular cell type with a shotgun approach. We can simultaneously introduce many effectors, say micro-RNAs and nuclear factors, into the same cell by simply co-depositing these molecules onto the nanowires, and we can do this in a massively parallel arrayed fashion.
So in terms of parallel bio-assay, I think the technologies we are developing can have an impact, although the actual impact remains to be seen. And it turns out that the chip we are developing can be small enough that it might be able to be implanted into live animals. We are working together with electrical engineers to demonstrate the feasibility of such an approach. As an example, we are exploring, with some electronics companies, how to develop the backside CMOS circuitry so that you can record signals remotely. Once we're able to do that we will have an implantable chip that can interface with neurons and other types of cellular networks.
NS: That's a great vision of the future. Hongkun -- and Kwabena also -- I'm now going to pose some more general questions. Let me ask Hongkun: What are one or two of the big conceptual challenges that face you in developing these tools and perhaps testing them as well? What are the conceptual hurdles that you have to overcome in this work?
HP: I'm not so sure you can call this a conceptual hurdle, but one thing that struck me as a physicist about the biology of neuronal networks is that we seem to lack a framework for how to really think about the problem. In my opinion, that stems, at least partially, from the fact that we lack the appropriate tools. Let's take an example: say I give you a small piece of rat brain and then tell you that it must have been very, very important because the rat died when I took it out. Can we discern the function of that particular neuronal circuitry? Currently we don't know how to answer these types of questions. From my perspective, the reason we cannot do so is that we lack tools that can perturb the system in a specific fashion and then record the signals globally, the type of tools that engineers use for system identification. So the one thing that I am trying to do while I'm learning this new, fascinating field is to try to contribute in that direction, that is, by developing such tools.
NS: Kwabena, I want to ask you the same question. What conceptual challenges are the biggest problems for you and your colleagues?
KB: I think that the biggest one is the way in which engineers are trained to get precision from individual components. If they have a chip and all the different devices on the chip are behaving differently, they're going to try to make every device identical before they move to the next step of doing something with it. I think that that is a big stumbling block. Exposing engineers to a little more biology would help them see how the brain works, that it works despite all this heterogeneity, and that it's able to get precision at the system level from imprecise components at the device level. The way that technology is going right now, as transistors are getting down to the nanoscale, we are getting a lot of variability between the transistors - eventually they will get so small that electrons are going to be flowing down the channel single-file, and when an electron gets trapped, the current is going to turn off and is going to turn on stochastically, just like an ion channel. At that point your digital logic is going to fail and yet you still have to try to compute. Through these neurobiologic approaches, if we can figure out how the brain is doing it, we will be able to come up with a solution for the next generation of technology.
NS: Let me throw another question to the two of you. I'll start with you, Kwabena and then come to you, Hongkun. This is about how you interact with neuroscientists. Your research concerns are very closely related to those of neuroscientists. How do you interact with them?
KB: My lab is in the "Bio-X" center at Stanford, and the whole point of "Bio-X" is that X stands for anything. All these biologists and engineers and physicists and mathematicians and all these guys are cheek by jowl together in the same place. I'm part of the "neuro" cluster, which consists of six different "neuro" people - including myself, Krishna Shenoy, who studies the motor cortex, and Tirin Moore, who studies attention. Tirin and I have a collaboration with a student who is recording from multiple layers of cortex at the same time so that we can constrain our model by looking at activities of different layers of cortex. I'm also collaborating with Eric Knudsen - he's in the building next door. He studies the optic tectum but he's also interested in attention - basically how you direct your gaze to different things that come up or to interesting targets. The tectum is a little brain by itself. It gets sensory input at the superficial layers and has motor output from the deep layers, and it closes the loop. We are able to record from the various layers in brain slices and we can also do behavioral experiments with chicks. And the tectum is more accessible than the cortex because the cells we are particularly interested in are in separate nuclei. That's another area of collaboration - we share a student who's modeling the tectum and doing experiments in vitro and in vivo.
NS: That's great. I've certainly heard a lot about Bio-X. It's a really nice incubator for bringing people together and encouraging collaboration. Hongkun, how is this working for you? How do you keep in close touch with neuroscientists interested in similar problems?
HP: I'm certainly blessed by the wonderful colleagues and the wonderful collaborators that I have. As I said, I really knew nothing about neurobiology, and without them I could not have started this endeavor. When I first started, I learned a lot from Sebastian Seung, Venkatesh Murthy and Markus Meister, who are all close by. Venky Murthy and Markus Meister are the colleagues at Harvard who are affiliated with the Center for Brain Science, which I am part of. Sebastian Seung at MIT taught me how to think about neural networks and what the important questions are. I also learned from Clay Reid at Harvard Medical School about the power of imaging tools. All these interactions helped me greatly in terms of identifying the problems that they care about. I think collaboration is the crucial part of this particular endeavor and I know that, without them, I would not be where I am.
NS: My last question is this: If you could be granted instantly the answer to a question - let's imagine that it was from an authority that had the ability to do this - what would the question be? I'll start with you, Kwabena.
KB: The question would be, "How come when I look out there I see a single world when there are three dozen representations of the world inside my brain?"
NS: That's a very interesting question, because of course the inputs that come in are disparate, and the way in which they are fused to give us a single perspective of reality is a fascinating problem. Hongkun, what is the question you would most like to have answered?
HP: We start from a single cell, and then become a fully functional biological organism with complex organs such as the brain. I'd love to know how this wonderful "self assembly" happens.
NS: That's another wonderful question. Gentlemen, this has been a real treat for me. I have enjoyed it tremendously. I look forward to seeing this interface between neuroscience and nanoscience develop further, and I know the two of you will be right on the cutting edge there, pushing this forward. Thanks very much.
KB & HP: And thank you very much.
(The teleconference was held on January 19, 2010.)
NS: Let me ask about the targeting of specific cells in the network, Hongkun. In principle, one might be able to apply the concept of system identification that is used in other fields, such as engineering, to identify different layers in the circuit. I wonder if you can do that with the vertical nanowires. Are they addressable? Can one address individual vertical nanowires to provide identification, for example, of the neurons in your cultures?
HP: Yes, we are certainly aiming to do that. With our chip, which can electrically as well as chemically interface with neurons, we can individually address individual nanowires or nanowire bundles that are penetrating the cells so that we can record from individual cells. We can also couple that with microfluidics to administer neuromodulators, hormones and other molecules and see how site-selective administration of the chemicals modifies neural activity. One of the things we want to test is the model that Sebastian Seung came up with, the hedonistic neuron model, where in order to generate, for example, memory, you require not only persistent electrical activity but also chemical rewards such as dopamine. I think that our experimental platform is well suited to test these models.
NS: Kwabena, let me come back to you. Earlier in our conversation you pointed out the dramatic difference in the power requirements for the brain on the one hand, and something like a large computer, a classical computer, such as the IBM Blue Gene computer on the other hand. It is striking that the human brain requires only 10 watts whereas the Blue Gene requires a megawatt. What is the way in which the brain is achieving this wonderful energetic economy?
KB: Well, we don't know, do we? This is really a very interdisciplinary question, because neuroscientists who study the brain usually don't measure or try to understand what makes the brain efficient. At the same time, engineers and physicists who know about energy and things like that don't study actual neurons. If you come and you tell me that you've figured out how the brain works, the first thing I'm going to ask you is, "How does it do it with 10 watts?" But if you do some calculations based on 10 watts, you can get an idea of its style of computation. Knowing how much current a single ion channel passes when it's open, you can calculate how many ion channels can be open at the same time. When you divide that by the number of neurons in the brain, it turns out that only about a hundred to a thousand ion channels per neuron can be open at one time. And that's an amazingly small number.
HP: What we have shown so far is that we can indeed introduce a variety of different biochemicals into neurons, neuronal networks and now even tissues using these nanowires in a spatially selected fashion. With our recording tools we take multiples of neurons -- our initial goal is 16 neurons -- record their activities, and correlate them with the connectivity of these neurons that we map optically. We are also collaborating with Sebastian Seung and Russ Tedrake at MIT, who will model the activities of the neurons that we record and come up with a model that can explain the behavior of these neurons and can be used to predict a control scheme.
NS: Let me ask about the targeting of specific cells in the network, Hongkun. In principle, one might be able to apply the concept of system identification that is used in other fields, such as engineering, to identify different layers in the circuit. I wonder if you can do that with the vertical nanowires. Are they addressable? Can one address individual vertical nanowires to provide identification, for example, of the neurons in your cultures?
HP: Yes, we are certainly aiming to do that. With our chip, which can electrically as well as chemically interface with neurons, we can individually address individual nanowires or nanowire bundles that are penetrating the cells so that we can record from individual cells. We can also couple that with microfluidics to administer neuromodulators, hormones and other molecules and see how site-selective administration of the chemicals modifies neural activity. One of the things we want to test is the model that Sebastian Seung came up with, the hedonistic neuron model, where in order to generate, for example, memory, you require not only persistent electrical activity but also chemical rewards such as dopamine. I think that our experimental platform is well suited to test these models.
NS: Kwabena, let me come back to you. Earlier in our conversation you pointed out the dramatic difference in the power requirements for the brain on the one hand, and something like a large computer, a classical computer, such as the IBM Blue Gene computer on the other hand. It is striking that the human brain requires only 10 watts whereas the Blue Gene requires a megawatt. What is the way in which the brain is achieving this wonderful energetic economy?
KB: Well, we don't know, do we? This is really a very interdisciplinary question, because neuroscientists who study the brain usually don't measure or try to understand what makes the brain efficient. At the same time, engineers and physicists who know about energy and things like that don't study actual neurons. If you come and you tell me that you've figured out how the brain works, the first thing I'm going to ask you is, "How does it do it with 10 watts?" But if you do some calculations based on 10 watts, you can get an idea of its style of computation. Knowing how much current a single ion channel passes when it's open, you can calculate how many ion channels can be open at the same time. When you divide that by the number of neurons in the brain, it turns out that only about a hundred to a thousand ion channels per neuron can be open at one time. And that's an amazingly small number.
HP: What we have shown so far is that we can indeed introduce a variety of different biochemicals into neurons, neuronal networks and now even tissues using these nanowires in a spatially selected fashion. With our recording tools we take multiples of neurons -- our initial goal is 16 neurons -- record their activities, and correlate them with the connectivity of these neurons that we map optically. We are also collaborating with Sebastian Seung and Russ Tedrake at MIT, who will model the activities of the neurons that we record and come up with a model that can explain the behavior of these neurons and can be used to predict a control scheme.
NS: Let me ask about the targeting of specific cells in the network, Hongkun. In principle, one might be able to apply the concept of system identification that is used in other fields, such as engineering, to identify different layers in the circuit. I wonder if you can do that with the vertical nanowires. Are they addressable? Can one address individual vertical nanowires to provide identification, for example, of the neurons in your cultures?
HP: Yes, we are certainly aiming to do that. With our chip, which can electrically as well as chemically interface with neurons, we can individually address individual nanowires or nanowire bundles that are penetrating the cells so that we can record from individual cells. We can also couple that with microfluidics to administer neuromodulators, hormones and other molecules and see how site-selective administration of the chemicals modifies neural activity. One of the things we want to test is the model that Sebastian Seung came up with, the hedonistic neuron model, where in order to generate, for example, memory, you require not only persistent electrical activity but also chemical rewards such as dopamine. I think that our experimental platform is well suited to test these models.
NS: Kwabena, let me come back to you. Earlier in our conversation you pointed out the dramatic difference in the power requirements for the brain on the one hand, and something like a large computer, a classical computer, such as the IBM Blue Gene computer on the other hand. It is striking that the human brain requires only 10 watts whereas the Blue Gene requires a megawatt. What is the way in which the brain is achieving this wonderful energetic economy?
KB: Well, we don't know, do we? This is really a very interdisciplinary question, because neuroscientists who study the brain usually don't measure or try to understand what makes the brain efficient. At the same time, engineers and physicists who know about energy and things like that don't study actual neurons. If you come and you tell me that you've figured out how the brain works, the first thing I'm going to ask you is, "How does it do it with 10 watts?" But if you do some calculations based on 10 watts, you can get an idea of its style of computation. Knowing how much current a single ion channel passes when it's open, you can calculate how many ion channels can be open at the same time. When you divide that by the number of neurons in the brain, it turns out that only about a hundred to a thousand ion channels per neuron can be open at one time. And that's an amazingly small number.
Given that a neuron has 10,000 synapses, and something on the order of 10,000 ion channels are required to generate an action potential, a neuron has to be operating most of the time with most of its synapses inactive and most of ion channels closed. This style of computation, where you've got a hundred stochastic elements opening and closing randomly, is going to be very noisy and probabilistic. So we know we have to find a style of computation that works in that regime.
NS: I remember reading about the model retina that you developed. Were there additional things that came out of the modeling - insights, perhaps, into how that part of the nervous system works so effectively?
KB: Two things. One has to do with how you deal with this kind of heterogeneity or variability between neurons. Normally people describe the retina as a kind of input-output device. And they say there's a difference of Gaussians operation that's performed on the input to generate the output. So it's sort of a black box model. If you take that model and implement it on a chip, you end up with a very crummy device. But if you go and look at the circuitry, what the individual cells are doing, and how they are connected, and if you base your chip on that design, you get something that's very robust to the heterogeneity of the devices on the chip. So this is saying something about how you translate that function into a circuit, and the kinds of constraints the circuit is solving in addition to performing that function.
So that's more on the engineering side of things. On the neuroscience side of things, going through that process of translating this function into a circuit, we come up with specific predictions of what identified cell types in the retina are doing. Those predictions can be tested to show we can assign specific functions to a specific cell type. This is another one of the things that came out of that modeling work.
NS: One thing that I wanted to ask both of you about - and I will start with you, Kwabena - is the various commercial applications of the work that you are doing. For example, I remember reading with fascination a few years ago a book by Jeff Hawkins, the inventor of the Palm Pilot, called "On Intelligence." He described an effort that he is making trying to use neuronal architectures to build computers that he would like to patent and bring to the marketplace. Do you see an opportunity in this domain?
KB: I built my first neuromorphic chip - an associative memory - when I was an undergrad. I published a paper on it in 1989. So I've been in this business for 20 years. The picture has gotten more complex since then. This is part of the motivation to build Neurogrid. We have to advance our fundamental scientific understanding before we can turn this thing into a technology. To help accelerate that process, we said, well, the kinds of things we learned from the biology, in terms of modeling things like the retina and so on, we can turn into a computer that works like the brain and that will be much more efficient at doing these simulations, which would make them more affordable. And then we could have enough computational power to really do something.
We really have to understand how the computation arises, all the way down to the ion channel level, to understand how it's done efficiently. These multi-scale simulations require enormous amounts of computation.
NS: Hongkun, let me throw this ball to you. Do you see, either in the near future or down the road, applications beyond the scientific understanding of the technology that you've developed and are continuing to develop?
HP: My group is primarily interested in fundamental neuroscience, but some have shown interest in utilizing our technology beyond these studies. The things that we have demonstrated - such as the fact that the vertical nanowires can deliver any biological effector to any cell type in a spatially selected fashion - have drawn interest from many different people, and we have been working with stem cell institutes and others to demonstrate the unique utility of this particular platform. For example, one thing that we are doing is to try to differentiate individual pluripotent stem cells into a particular cell type with a shotgun approach. We can simultaneously introduce many effectors, say micro-RNAs and nuclear factors, into the same cell by simply co-depositing these molecules onto the nanowires, and we can do this in a massively parallel arrayed fashion.
So in terms of parallel bio-assay, I think the technologies we are developing can have an impact, although the actual impact remains to be seen. And it turns out that the chip we are developing can be small enough that it might be able to be implanted into live animals. We are working together with electrical engineers to demonstrate the feasibility of such an approach. As an example, we are exploring, with some electronics companies, how to develop the backside CMOS circuitry so that you can record signals remotely. Once we're able to do that we will have an implantable chip that can interface with neurons and other types of cellular networks.
NS:That's a great vision of the future. Hongkun -- and Kwabena also -- I'm now going to pose some more general questions. Let me ask Hongkun: What are one or two of the big conceptual challenges that face you in developing these tools and perhaps testing them as well? What are the conceptual hurdles that you have to overcome in this work?
HP: I'm not so sure you can call this a conceptual hurdle, but one thing that struck me as a physicist about the biology of neuronal networks is that we seem to lack a framework for how to really think about the problem. And, in my opinion, it, at least partially, stems from the fact that we lack the appropriate tools. Let's take an example: say I give you a small piece of rat brain and then tell you that it must have been very, very important because I took it out and the rat died when I took it out. Can we discern the function of that particular neuronal circuitry? Currently we don't know how to answer these types of questions. From my perspective, the reason why we cannot do so is because we lack the tools that can perturb the system in a specific fashion and then record the signals globally, the type of tools that engineers use for system identification. So the one thing that I am trying to do while I'm learning this new, fascinating field is to try to contribute in that direction, that is, by developing such tools.
NS: Kwabena, I want to ask you the same question. What conceptual challenges are the biggest problems for you and your colleagues?
KB: I think that the biggest one is the way in which engineers are trained to get precision from individual components. If they have a chip and all the different devices on the chip are behaving differently, they're going to try to make every device identical before they move to the next step of doing something with it. I think that that is a big stumbling block. Exposing engineers to a little more biology would help them see how the brain works, that it works despite all this heterogeneity, and that it's able to get precision at the system level from imprecise components at the device level. The way that technology is going right now, as transistors are getting down to the nanoscale, we are getting a lot of variability between the transistors - eventually they will get so small that electrons are going to be flowing down the channel single-file, and when an electron gets trapped, the current is going to turn off and is going to turn on stochastically, just like an ion channel. At that point your digital logic is going to fail and yet you still have to try to compute. Through these neurobiologic approaches, if we can figure out how the brain is doing it, we will be able to come up with a solution for the next generation of technology.
NS: Let me throw another question to the two of you. I'll start with you, Kwabena and then come to you, Hongkun. This is about how you interact with neuroscientists. Your research concerns are very closely related to those of neuroscientists. How do you interact with them?
KB: My lab is in the "Bio-X" center at Stanford, and the whole point of "Bio-X" is that X stands for anything. All these biologists and engineers and physicists and mathematicians and all these guys are cheek by jowl together in the same place. I'm part of the "neuro" cluster, which consists of six different "neuro" people - including myself, Krishna Shenoy, who studies the motor cortex, and Tirin Moore, who studies attention. Tirin and I have a collaboration with a student who is recording from multiple layers of cortex at the same time so that we can constrain our model by looking at activities of different layers of cortex. I'm also collaborating with Eric Knudsen - he's in the building next door. He studies the optic tectum but he's also interested in attention - basically how you direct your gaze to different things that come up or to interesting targets. The tectum is a little brain by itself. It gets sensory input at the superficial layers and has motor output from the deep layers, and it closes the loop. We are able to record from the various layers in brain slices and we can also do behavioral experiments with chicks. And the tectum is more accessible than the cortex because the cells we are particularly interested in are in separate nuclei. That's another area of collaboration - we share a student who's modeling the tectum and doing experiments in vitro and in vivo.
NS: That's great. I've certainly heard a lot about Bio-X. It's a really nice incubator for bringing people together and encouraging collaboration. Hongkun, how is this working for you? How do you keep in close touch with neuroscientists interested in similar problems?
HP: I'm certainly blessed by the wonderful colleagues and the wonderful collaborators that I have. As I said, I really knew nothing about neurobiology, and without them I could not have started this endeavor. When I first started, I learned a lot from Sebastian Seung, Venkatesh Murthy and Markus Meister, who are all close by. Venky Murthy and Markus Meister are the colleagues at Harvard who are affiliated with the Center for Brain Science, which I am part of. Sebastian Seung at MIT taught me how to think about neural networks and what the important questions are. I also learned from Clay Reid at Harvard Medical School about the power of imaging tools. All these interactions helped me greatly in terms of identifying the problems that they care about. I think collaboration is the crucial part of this particular endeavor and I know that, without them, I would not be where I am.
NS: My last question is this: If you could be granted instantly the answer to a question - let's imagine that it was from an authority that had the ability to do this - what would the question be? I'll start with you, Kwabena.
KB: The question would be, "How come when I look out there I see a single world when there are three dozen representations of the world inside my brain?"
NS: That's a very interesting question, because of course the inputs that come in are disparate, and the way in which they are fused to give us a single perspective of reality is a fascinating problem. Hongkun, what is the question you would most like to have answered?
HP: We start from a single cell, and then become a fully functional biological organism with complex organs such as the brain. I'd love to know how this wonderful "self assembly" happens.
NS: That's another wonderful question. Gentlemen, this has been a real treat for me. I have enjoyed it tremendously. I look forward to seeing this interface between neuroscience and nanoscience develop further, and I know the two of you will be right on the cutting edge there, pushing this forward. Thanks very much.
KB & HP: And thank you very much.
(The teleconference was held on January 19, 2010.)
About the Kavli Foundation
The Kavli Foundation, based in Oxnard, California, is dedicated to the goals of advancing science for the benefit of humanity and promoting increased public understanding and support for scientists and their work.
The Foundation's mission is implemented through an international program of research institutes, professorships, and symposia in the fields of astrophysics, nanoscience, neuroscience, and theoretical physics as well as prizes in the fields of astrophysics, nanoscience, and neuroscience.
Contacts:
The Kavli Foundation
1801 Solar Drive, Suite 250
Oxnard, CA 93030
Phone 805.983.6000
Fax 805.988.4800
Will your brain like a new commercial?
Two U.S. researchers are examining why the new field of neuromarketing, a high-tech way for marketers to find out what consumers like or dislike, has become so popular. A brain scan may one day show that you have an unconscious attraction to a recently introduced beer or to an up-and-coming presidential candidate.
Neuroimaging methods have been very popular with the medical profession for many decades. However, their application for “neuromarketing” in business is showing a lot of promise, too.
In an article in Nature Reviews: Neuroscience, U.S. researchers Dan Ariely (Duke University, North Carolina) and Gregory S. Berns (Emory University, Georgia) discuss how neuroimaging is being introduced to the world of business, specifically to the advertising and marketing of products and services.
One day in the future, a professional neuromarketer may learn that your brain scan shows a particular liking or disliking for some product or service. That knowledge could then be applied to the company’s commercial advantage.
Today, various imaging techniques are used to directly or indirectly visualize the structure and function of the brain. Such scans are used quite often in medicine and neuroscience. They are now entering business!
For instance, CAT (computerized axial tomography) scans have been used for about forty years in the diagnosis of patients and for many research purposes.
In the 1980s, magnetic resonance imaging (MRI) scans were introduced, primarily for use in radiology. MRI improves on CAT scanning in some areas—such as greater contrast between various tissues of the body—but provides lower-quality imaging of the body’s functioning.
Then, in the 1990s, functional MRI (fMRI) scans were developed. They are better able to visualize function within the body: in the brain, they track blood flow and show how it changes in different regions under various circumstances.
Since the early 2000s, these neuroimaging methods have allowed the field of neuromarketing to become popular.
In this new field, which has a history of less than ten years, studies examine the sensorimotor, cognitive, and affective responses of the brain to marketing stimuli.
In other words, marketing studies are peering into your brain to see how it reacts to the latest commercial on TV, to a product that has yet to be made or advertised on your media device, or even to the next round of political campaigns.
For instance, fMRI scans can be used to measure a pleasurable response when something desirable is placed in front of a person's eyes. Electroencephalography (EEG) sensors record the brain's electrical activity, while other physiological measures, such as respiration rate, heart rate, and skin response, can be tracked as people view various consumer products or services.
In all, these neuromarketing techniques can be used to find out how consumers perceive new products and why they make the purchasing decisions they do. In many cases, these responses have the potential to be much more accurate than a consumer saying, “Yes, I like that color” or “No, I don’t like the taste of that soda pop.”
Consequently, neuromarketing is helping corporations design products and services with the consumer in mind and build advertising and marketing campaigns around the kinds of stimulation the brain actually responds to.
Over the past ten or so years, neuroimaging methods applied to the marketplace have become popular, and two U.S. researchers propose they know why this popularity is gaining ground.
Drs. Dan Ariely and Gregory S. Berns have written a perspective piece in Nature Reviews: Neuroscience, entitled “Neuromarketing: the hope and hype of neuroimaging in business” (3 March 2010; doi:10.1038/nrn2795).
Dr. Ariely is associated with the Fuqua School of Business, Center for Cognitive Neuroscience, Department of Economics, and the Department of Psychiatry and Behavioral Sciences, Duke University, Durham, North Carolina.
Dr. Berns is with the Department of Psychiatry and Behavioral Sciences, the Economics Department, and the Center for Neuropolicy, Emory University, Atlanta, Georgia.
In their perspective piece, the two scientists state, “We propose that there are two main reasons for this trend. First, the possibility that neuroimaging will become cheaper and faster than other marketing methods; and second, the hope that neuroimaging will provide marketers with information that is not obtainable through conventional marketing methods.”
They admit current neuroimaging methods are not expected to become less expensive anytime in the near future. They speculate, however, that there is “…growing evidence that it [neuroimaging] may provide hidden information about the consumer experience,” which is what Drs. Ariely and Berns think could be the largest application of neuroimaging in the future.
Drs. Ariely and Berns suggest in their conclusion to their paper that, "The most promising application of neuroimaging methods to marketing may come before a product is even released — when it is just an idea being developed.”
For additional information on neuromarketing, please read the March 4, 2010 Cellular-News.com article “Brain Scans Could Be Marketing Tool of the Future.”
And, learn more about neuromarketing in the article "Market Researchers make Increasing use of Brain Imaging" by Dr. David Lewis.
You may not hear or see much about neuromarketing yet, but it is out there, and it is being developed to learn more about what is going on in the brains of customers.
Mind-Control Coming To A Computer Near You?
The world’s largest high-tech fair is the place to go to find out how you can play games or do daily chores just by using the power of your mind.
At the annual CeBIT fair, crowds gathered around a man sitting at a pinball table and wearing a cap of electrodes on his head. The man controlled the flippers with ease, without using his hands.
Michael Tangermann, of the Berlin Brain Computer Interface project, told spectators: “He thinks: left-hand or right-hand, and the electrodes monitor the brain waves associated with that thought, send the information to a computer, which then moves the flippers.”
Although the technology can be used as a fun gadget, there is much more it could do. Scientists are researching ways the technology could monitor the brain waves of motorists to help improve reaction times in a crash.
In an emergency where a driver needs to stop quickly, brain activity kicks in around 200 milliseconds before even the most alert driver can hit the brake. Scientists are not suggesting that the car should brake automatically for the driver, Tangermann said. He did say, however, that “there are various things the car can do in that crucial time, tighten the seat belt, for example.”
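As a rough, back-of-envelope illustration (the speed here is an assumption, not a figure from the report), a 200-millisecond head start corresponds to several metres of travel at highway speed:

```python
# Back-of-envelope only: distance covered during a 200 ms head start.
speed_kmh = 120                       # assumed highway speed
speed_ms = speed_kmh * 1000 / 3600    # ~33.3 m/s
head_start_s = 0.200                  # ~200 milliseconds of earlier warning

print(f"At {speed_kmh} km/h, 200 ms of warning covers {speed_ms * head_start_s:.1f} m")
# prints roughly 6.7 m, about a car length and a half of extra margin
```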
Using the monitoring technology, a car could also tell whether the driver is drowsy and alert them to take a break, potentially keeping them safe.
At another booth, spectators watched a man wearing a similar device as he sat in front of a screen showing a large keyboard with letters flashing in an ordered sequence. The user concentrates on a letter as it flashes; the brain waves evoked at that exact moment are registered by the computer, and the letter appears onscreen.
Currently, the technology is slow going -- it took the man about four minutes to write a five-letter word -- but researchers hope to refine it in the near future.
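The flashing-letter approach described above can be sketched with a toy decoder (entirely simulated data, not the CeBIT system's actual algorithm): responses time-locked to each letter's flashes are averaged, and the letter with the strongest average response is selected.

```python
# Toy speller decoder on simulated data: the attended letter evokes a slightly
# larger response on each of its flashes; averaging many flashes makes the
# difference stand out despite heavy trial-to-trial noise.
import random

LETTERS = "ABCDE"
ATTENDED = "C"               # the letter the simulated user concentrates on
FLASHES_PER_LETTER = 30

def flash_response(letter):
    signal = 1.0 if letter == ATTENDED else 0.0
    return signal + random.gauss(0, 0.8)   # single-flash response is very noisy

def decode():
    averages = {
        letter: sum(flash_response(letter) for _ in range(FLASHES_PER_LETTER))
                / FLASHES_PER_LETTER
        for letter in LETTERS
    }
    return max(averages, key=averages.get)

print("Decoded letter:", decode())
```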
Another demonstration showed users controlling robots with their minds. The user concentrates on one of four flashing lights, mounted at the compass points of a small box, and the robot moves in the corresponding direction.
Scientists say the technology is being perfected for use by persons with disabilities, allowing them to communicate and operate devices with their brain. “In future, people will be able to control wheelchairs, open doors and turn on their televisions with their minds,” said Clemens Holzner from g.tec.
Vaccine may shift odds against deadly brain cancer
In March 2008, Karen Vaneman was working out in the gym at UNC-Asheville when her left arm went limp. She felt so faint and dizzy that she had to sit. When the Vanemans first got to the emergency room, the doctors were afraid that Karen was suffering a stroke or heart attack.
In fact, Vaneman, a retired professor of Shakespeare and medieval literature, had brain cancer: a glioblastoma, or GBM, the most common malignant primary brain tumor. Many doctors consider it a death sentence, but not the team at the Preston Robert Tisch Brain Tumor Center at Duke, where Vaneman was directed for surgery and post-operative care.
"Other doctors will tell you you've got six to nine months, maybe a year to live," says Dr. Henry Friedman, the center's director. "'You're incurable; move on with your life the best you can.' But it's a self-fulfilling prophecy."
At Duke, aggressive treatment is the rule. Almost every patient is enrolled in a clinical trial. For Vaneman, that meant a novel vaccine.
The vaccine, based on research done at Duke and Johns Hopkins, and produced by Pfizer, is called CDX-110. It's not a vaccine in the traditional sense: It doesn't prevent disease. But like any vaccine, it triggers the immune system to attack an intruder, in this case, the cancerous cells.
"All the cells in our body have a [genetic] fingerprint," said Dr. John Sampson, a surgeon and researcher who helped develop the vaccine. "The immune system can recognize differences in those fingerprints."
CDX-110 targets a particular protein -- one with the unwieldy name of EGFRvIII, or epidermal growth factor receptor variant III. There is a huge amount of genetic variation within even one patient's tumor, which is one reason the disease is so hard to treat. But according to Sampson, about 40 percent of tumor cells produce EGFRvIII. It acts like a homing beacon for the disease-fighting immune cells that are stirred up by the vaccine.
"We're using white blood cells called T-cells, antibodies, to attack the tumor cells because [the tumor cells] have a different fingerprint from the normal cells in the body," Sampson says.
"Unlike chemotherapy, which really hurts all dividing cells in the body, or radiation... the immune system can be absolutely precise. And so we get a very tumor-specific attack with very low toxicity," he says. That means patients suffer fewer side effects.
A vaccine approach is not unique to brain cancer; several are under investigation. Last summer, researchers presented data showing that another vaccine could extend survival for prostate cancer patients.
CDX-110 is not the only candidate for a brain cancer vaccine, either. A project at the University of California, San Francisco, takes a more radically personalized approach. The vaccine for each GBM patient is custom-made from that patient's own tumor cells. Each drug is one-of-a-kind.
"This is the ultimate personalized medicine," says Dr. Andrew Parsa, the neurosurgeon who is running the trial. "It's like having a lot of little medicines instead of one big blockbuster."
The vaccine works -- he hopes -- by targeting something called a heat-shock protein, which is produced in high quantity by tumor cells.
Another novel aspect of the UCSF research is that it doesn't involve a pharmaceutical company as a backer. Instead, it's funded completely by the National Cancer Institute and a handful of cancer nonprofits.
One of the funders, Deneen Hesser, research director of the American Brain Tumor Association, says the research on vaccines is exciting because it expands the realm of possibility.
"Vaccines represent treatment options," Hesser says. "For patients to be able to have choices, choices in how to approach a treatment plan, is really much different from historical approaches to treating brain tumors."
The UCSF trial is still in its early stages. The current Phase 2 trials -- testing whether the vaccine is effective -- have been enrolling patients only since last summer. The research with CDX-110 is further along, but results from the first multicenter trial will not be made public until later this spring. Forty-four hospitals and medical centers are participating.
At least one of Sampson's patients has made it six years without his tumor coming back. Still unclear is just how many other patients can benefit. Parsa, at UCSF, says his hospital tested 14 patients for the multicenter study. Only one had a tumor that was vulnerable to the vaccine.
In Asheville, Karen Vaneman calls herself lucky.
"In the last year, I've been more mindful of my priorities," Vaneman says. "My family, and the granddaughter, and my husband who has been like a rock through the whole thing. And beyond that, trying to keep a connection with the rest of life, too -- the birds, the bees, the rocks, things like that."
She's hiking again through the hills around Asheville, although she limits the walks to about 2 miles these days. She feels good. And she's happy, more than happy, to put up with the monthly trips to Durham.
"As long as the vaccine works, I'll be getting the monthly shots. And when it doesn't work [any more], then I'm in trouble."
How brain cells are possessed and damaged by demons of dementia
Washington, March 4 (ANI): A new study has shed light on how Amyloid-Beta, found in the cerebral plaques typically present in the brains of Alzheimer's patients, leads to neurodegeneration.
Researchers from EPFL's (Ecole Polytechnique Fédérale de Lausanne) Laboratory of Neuroenergetics and Cellular Dynamics in Lausanne, Switzerland, have studied how the functions of certain cells called astrocytes -- which normally protect, repair, and transfer energy to neurons -- are impaired when "possessed" by aggregated Amyloid-Beta.
While the exact mechanisms by which plaques form and cause neurodegeneration and dementia are still a matter of debate in the scientific world, this study sheds new light on how astrocytes may participate in the development of Alzheimer's disease.
This new understanding of the interaction between Amyloid-Beta and astrocytes could lead to more effective therapies for Alzheimer's disease that rescue astrocytic functions by deactivating the scavenger receptors.
The current study explores the causal relationship between the build-up of the Amyloid-Beta protein, associated with the formation of plaques, and the impairment of astrocytes' functions.
Pierre Magistretti, director of the Brain Mind Institute and the Center for Psychiatric Neurosciences at CHUV/UNIL, and Igor Allaman, a postdoctoral fellow in Magistretti's lab, have succeeded in determining how built-up Amyloid-Beta infiltrates astrocytes and alters their proper functioning, leading to the death of surrounding neurons.
"To penetrate the astrocyte, the pathological protein goes through a 'scavenger' receptor. Our study has shown that if we impair Amyloid-Beta build-up, or activation of this receptor, astrocytes continue to fulfill their normal neuroprotective functions even in the presence of the Amyloid-Beta," Igor Allaman said.