We spoke over four hours in two sessions. A condensed and edited version of the conversations follows.
You’re a philosopher by training. How did philosophy lead to neuroethics?
Mine’s the typical immigrant’s story. My family moved to Cincinnati from Taiwan in the early 1980s. Once here, my siblings gravitated towards the sciences. I was the black sheep. I was in love with the humanities.
So I didn’t go to M.I.T. — I went to Princeton, where I got a degree in philosophy. This, of course, worried my parents. They’d never met a philosopher with a job.
Do you have any insight on why scientific careers are so attractive to new Americans?
You don’t need to speak perfect English to do science. And there are job opportunities.
So what exactly is neuroethics?
It’s a kind of subspecialty of bioethics. Until very recently, the human mind was a black box. But here we are in the 21st century, and now we have all these new technologies with opportunities to look inside that black box — a little.
With functional magnetic resonance imaging, f.M.R.I., you can get pictures of what the brain is doing during cognition. You see which parts light up during activity. Scientists are trying to match those lights with specific behaviors.
At the same time this is moving forward, there are all kinds of drugs being developed and tested to modify behavior and the mind. So the question is: Are these new technologies ethical?
A neuroethicist can look at the downstream implications of these new possibilities. We help map the conflicting arguments, which will, hopefully, lead to more informed decisions. What we want is for citizens and policy makers to be thinking in advance about how new technologies will affect them. As a society, we don’t do enough of that.
Give us an example of a technology that entered our lives without forethought.
The Internet. It has made us more connected to the world’s knowledge. But it’s also reduced our actual human contacts with one another.
So what would be an issue you might look at through a neuroethics lens?
New drugs to alter memory. Right now, the government is quite interested in propranolol. They are testing it on soldiers with post-traumatic stress disorder. The good part is that the drug helps traumatized veterans by removing the bad memories causing them such distress. A neuroethicist must ask, “Is it good for society to have warriors’ memories chemically wiped? Will we start getting conscienceless soldiers?”
What do you think?
It is a serious business removing memories, because memories can affect your personal identity. They can impact who you think you are. I’d differentiate between offering such a drug to every distressed soldier and giving it only to certain individuals with a specific need.
Let’s say you have a situation like that in “Sophie’s Choice,” where the memories are so bad that the person is suicidal. Even if the drug causes them to live in a falsehood, that would be preferable to suicide.
But should we give it to every soldier who goes into battle? No! You need memory for a conscience. Doing this routinely might create super-immoral soldiers. As humans we have natural moral reactions to the beings around us — sympathy for other people and animals. When we start to tinker with those neurosystems, we’re not going to react to our fellow humans in the right way anymore. One also worries about the wrong people routinely giving propranolol to genocidal gangs in places like Rwanda or Syria.
Some researchers claim to be close to using f.M.R.I. to read thoughts. Is this really happening?
The technology, though still crude, appears to be getting closer. For instance, one research group asks subjects to watch movies. By looking at a subject’s visual cortex during the viewing, the researchers can sort of recreate what the subject is seeing — or a semblance of it.
Similarly, there’s another experiment where they can tell in advance whether you’re going to push the right or the left button. On the basis of these experiments some people claim they’ll soon be able to read minds. Before we go further with this, I’d like to think more about what it could mean. The technology has the potential to destroy any concept of inner privacy.
What about using f.M.R.I. to replace lie detectors?
The fact is we don’t really know if f.M.R.I.’s will be any more reliable or predictive than polygraphs. Nonetheless, in India, a woman was convicted of poisoning her boyfriend on the basis of f.M.R.I. evidence. The authorities said that based on the pictures of blood flow in her brain, she was lying to them.
In American courts, there’s another issue, too. Defendants cannot be forced to testify against themselves — the Fifth Amendment. So the legal and ethical question here is: If the police put you into a machine that’s reading your mind, are you being forced to testify against yourself? At present, a person can be forced to surrender DNA. Is an f.M.R.I. scan the same thing?
On the other hand, criminal defendants are beginning to use brain scans to bolster their claims. Recently there was a case in which a man was charged with tossing his wife out of a window. In court, he produced a brain scan showing a frontal-lobe tumor. On the basis of that, his crime was reduced from murder to manslaughter. It was a smart defense move, though the technology’s predictive accuracy remains questionable. I like it better when judges say, “We can’t admit this stuff; we just don’t know what this technology can do yet.”
Lately, you’ve been writing about this question: Do people own their memories? Most of us think, “Of course we do.” Why are you bringing this up?
Because there are new technologies coming with which we may be able to enhance cognition and memory through implanted chips. Right now, if you work for a company and you quit, your boss can take away your computer and your phone, but not your memory. When we reach a point where an employee gets computer-chip enhancements of memory, who will own it? Will the chip manufacturer own it, the way Facebook now owns the data you upload to its products?
Even today, some people claim that our iPhones are really just extensions of our minds. If that’s true, we already lack ownership of that data. Will a corporate employer own the chip and everything on it? Can employers selectively take those memories away? Could they force you to take propranolol as a condition of employment so that you don’t give away what they define as corporate secrets?
Someone needs to ask these questions, don’t you think?
Do you have a favorite movie?
I have several. The one I most often return to is “Eternal Sunshine of the Spotless Mind,” where Kate Winslet and Jim Carrey, at the end of an affair, employ a technology that’s supposed to erase their memories of each other. But it doesn’t quite work out, and therein lies the story.
And this may well be how things will go when we get technology that can do that. In many ways, writers and film directors have been acting as unofficial neuroethicists by anticipating the problems of our new capabilities.