Wouldn't it be amazing if there was a machine that could tell you whether someone was telling the truth? It would, of course, be really useful – but more than that, it would represent the ultimate triumph of technology. The utterly private world of our consciousness would be private, and sacred, no more.
Given how fascinating the idea is, then, it's no surprise that there have been plenty of attempts to design technological lie detectors, and no shortage of people willing to pay for the chance to use them. All of them have worked, in theory. But that doesn't mean they work.
A group of Scottish neuroscientists recently warned against the seductions of the latest approach – the use of functional magnetic resonance imaging (fMRI) to detect deception. A number of commercial enterprises, such as the US-based No Lie MRI, now offer fMRI lie detection, and fMRI evidence has been submitted to courts of law in the US several times, although it has never yet been accepted as admissible evidence.
The judges' conservatism is well placed. To be sure, fMRI is an incredible technology. Scientists use it to probe the workings of the brain, and doctors use it to work out which parts of the brain do what, so they can avoid damaging the important bits during brain surgery.
But it's just not capable of detecting lies with the kind of certainty that could stand up in court. When scientists use fMRI in an experiment to investigate brain function, it's typical to scan 10 to 20 people. Scans are expensive, and we don't do this for fun: we do it because it's very difficult to interpret the results of any individual person's scan. There's just too much variability. Using fMRI you can see which parts of the brain tend to light up in response to, say, listening to music. Or telling lies. But everyone's brain is a bit different and there's a lot of random noise in every scan, so it's only by averaging over many people that you can achieve good results.
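To see why averaging matters, here's a minimal sketch – with purely illustrative numbers, not real fMRI data – of a small true activation buried in much larger per-subject noise. No individual "scan" is interpretable, but the group mean recovers the effect, because noise in the mean shrinks with the square root of the sample size.

```python
# Toy illustration (assumed numbers, not a real fMRI analysis): each
# "subject" has a small true activation swamped by random noise, and
# only averaging across the group recovers it.
import numpy as np

rng = np.random.default_rng(0)

true_effect = 0.5   # hypothetical activation (arbitrary units)
noise_sd = 2.0      # per-subject noise, four times the effect size
n_subjects = 16     # a typical group size for an fMRI study

subjects = true_effect + rng.normal(0, noise_sd, n_subjects)

print("individual scans:", np.round(subjects, 2))  # all over the place
print("group mean: %.2f" % subjects.mean())        # close to 0.5
# Noise in the mean falls as sqrt(n): 2.0 / sqrt(16) = 0.5
print("expected noise in mean: %.2f" % (noise_sd / np.sqrt(n_subjects)))
```

Run it and the individual values range from strongly negative to strongly positive, while the mean sits near the true effect – which is exactly why a single suspect's scan is such shaky ground.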
With every new technological advance, it's never long before someone claims to be able to use it to detect deception – for a price. Last time it was computers. A company called Nemesysco sells software – Layered Voice Analysis – which it says can mathematically process voice recordings and reveal the emotional stress-patterns associated with lying. If that doesn't float your boat, you can buy the same technology to work out whether someone you're chatting to online is attracted to you.
In 2007, two Swedish academics published a paper criticising the science behind Nemesysco's system. The academic journal that printed the article was promptly slapped with a lawsuit, and the article was taken down amid much controversy, but bootleg copies are available online. It's well worth a read, given that in 2007-2008, the UK government ran extensive trials of Nemesysco's unproven technology in an attempt to catch "benefit scroungers".
Going further back, electroencephalography (EEG), the brain-scanning technology that people used before fMRI arrived, is crude but still effective at measuring neural activation. It turns out that there's a particular neural response, the P300, that happens when you see something that you've seen before – a recognition spike. So if you show a murder suspect pictures of the murder scene, say, you could tell if they'd been there. Even better than just lie detection, it's mind reading. In theory.
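To make the recognition-spike logic concrete, here's a toy simulation – my own illustrative numbers, not a real EEG pipeline – of how a P300 test would work: average many trial epochs, then compare the response to "probe" stimuli (crime-scene details only the perpetrator would recognise) against irrelevant ones, in a window around 300 ms after the stimulus.

```python
# Toy P300 simulation (assumed parameters, not real EEG): averaging
# epochs reveals a ~300 ms bump for recognised "probe" stimuli.
import numpy as np

rng = np.random.default_rng(1)
fs = 250                          # sampling rate, Hz
t = np.arange(0, 0.8, 1 / fs)    # one 0-800 ms epoch

def epoch(recognised):
    """One simulated trial: background noise, plus a P300 bump if recognised."""
    signal = rng.normal(0, 5, t.size)  # background EEG noise (microvolts)
    if recognised:
        signal += 8 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
    return signal

# Average 40 trials per condition, as an ERP analysis would.
probe = np.mean([epoch(True) for _ in range(40)], axis=0)
irrelevant = np.mean([epoch(False) for _ in range(40)], axis=0)

window = (t > 0.25) & (t < 0.35)  # window around 300 ms post-stimulus
print("probe mean amplitude:      %.1f uV" % probe[window].mean())
print("irrelevant mean amplitude: %.1f uV" % irrelevant[window].mean())
# A markedly larger probe response is read as a "recognition spike".
```

In this idealised setup the probe response stands out clearly; the open question, as below, is whether it does so reliably outside the lab.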
This "brain fingerprinting" is certainly an interesting technique, but we just don't know whether it's reliable in practice. Studies have shown that it works fairly well in the lab on normal volunteers (such as students) instructed to lie about imaginary crimes, but real-life field tests are lacking. That hasn't stopped it being promoted commercially, and EEG has been admitted as evidence in Indian courts several times, although the Indian supreme court recently banned such tests.
This is a common theme. Most "lie detectors" are based on real evidence, but they require you to disregard all of the caveats, the ifs, ands and buts, that are the stuff of science. It's not hard to see why: lie detectors are a commercial product. Caveats don't sell, but if you can show people even a bit of evidence that something exciting should work in theory, you'll go far.
In theory, you can use EEG or fMRI to see through deception, but only if you assume that the brains of hardened criminals with strong motivations to lie behave the same way as the brains of college students. This is also true of the very oldest lie detector, the polygraph, invented over 100 years ago. It simply records heart rate, blood pressure and other bodily signals, on the theory that when you lie, you get stressed and your body reacts. But does it work on actual criminals? Can it distinguish between stress associated with lying and stress associated with telling painful truths? It's hard to say. Yet if we don't know whether it works in any individual case, it's not much use.
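A back-of-envelope calculation shows why even a detector that "mostly works" is treacherous in any individual case. Assume – and these accuracy figures are purely illustrative, not published results for any real device – that a test catches 90% of liars and clears 90% of truth-tellers, and that one examinee in ten is actually lying:

```python
# Bayes' rule with assumed (illustrative) figures: what does a
# "deceptive" result actually tell you about one individual?
sensitivity = 0.90  # P(flagged | lying)        -- assumed
specificity = 0.90  # P(cleared | truthful)     -- assumed
base_rate = 0.10    # P(lying) among examinees  -- assumed

p_flagged = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
p_lying_given_flagged = sensitivity * base_rate / p_flagged

print("P(actually lying | test says 'lying') = %.2f" % p_lying_given_flagged)
# = 0.50: under these assumptions, half of the "liars" the machine
# flags are in fact telling the truth -- a coin flip, in court.
```

Group-level accuracy, in other words, simply doesn't translate into confidence about any one verdict.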
Neuroscience is advancing rapidly and one day, it surely will be possible to reliably read criminals' minds with brain scans. But not yet. We must resist the temptation to let entrepreneurs blind us with science as they claim to peer into a world which is, for now, private.