The UK has been troubled this past week by revelations that flawed scientific advice given to courts may have led to the wrongful conviction of hundreds of men and women accused of harming their children.
More than 250 infant death convictions, and potentially thousands of child abuse cases, are to be reviewed after judges decided that the cases may have relied too heavily on controversial and conflicting medical theories.
However, a New Scientist investigation has discovered that other, potentially flawed, forensic assumptions are still routinely being accepted by the courts. One such assumption is the supposed infallibility of fingerprint evidence, which has been used to convict countless people over the past century.
Contrary to what is generally thought, there is little scientific basis for assuming that any two supposedly identical fingerprints unequivocally come from the same person. Indeed, according to a report published in December, the only major research explicitly commissioned to validate the technique is based on flawed assumptions and an incorrect use of statistics. The research has never been openly peer reviewed.
This month, the US government also published a set of funding guidelines that rules out further studies to validate both fingerprint evidence and other existing forensic techniques presented as evidence in court. In 2003, a proposal by the US National Academies to validate such techniques collapsed after the Department of Defense and Department of Justice demanded control over who should see the results of any investigation.
Doubts over the reliability of fingerprint evidence were first raised in the US courts in 1999. Lawyers for Byron Mitchell, a defendant named in a robbery case, contested the admissibility of partial fingerprints found on the getaway car, which supposedly matched prints taken from Mitchell. The lawyers asked for a "Daubert hearing" - a special hearing in which judges decide for themselves the scientific validity and reliability of any forensic evidence before it is submitted.
To make a decision, judges apply five Daubert criteria to the evidence, one of which requires the technique to have a defined error rate. No such error rate existed for matching fingerprints. So the justice department commissioned the FBI and Lockheed Martin, which set up the bureau's fingerprint database, to establish one.
Only a summary of the study has ever been made available to the public. It says that there is a 1 in 10^97 chance that one fingerprint image could be erroneously matched to another. Because only around 10^11 human fingerprints have ever existed, this implies the probability of any false match is effectively zero.
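To see why the summary's figures lead to that conclusion, the arithmetic can be sketched roughly as below. This is an illustrative back-of-envelope calculation, not part of the study itself; it simply takes the quoted 10^97 per-comparison figure and the ~10^11 prints at face value.

```python
# Back-of-envelope check of the claim in the study summary,
# taking its quoted figures at face value (illustrative only).

per_pair_error = 10 ** -97   # quoted chance of one image falsely matching another
prints_ever = 10 ** 11       # rough count of all human fingerprints that have ever existed

# Number of distinct pairs of prints that could conceivably be confused with each other
possible_pairs = prints_ever * (prints_ever - 1) // 2   # about 5 x 10^21

# Expected number of false matches across every pair of prints in history
expected_false_matches = possible_pairs * per_pair_error
print(f"{expected_false_matches:.1e}")   # ~5e-76, i.e. effectively zero
```

On those numbers, not a single false match would be expected over the whole of human history, which is exactly why critics focus on whether the 10^97 figure was derived soundly in the first place.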
But a number of academic critiques of this study argue that it contains blatant methodological errors. The investigators took a set of 50,000 pre-existing images of fingerprints and made comparisons of each one against itself and all the others. Although this produced an impressive-sounding 2.5 billion comparisons, critics point out that it is hardly surprising that a specific image should turn out to be more like itself than 49,999 other images.
In real investigations, the comparison being made is quite different: forensic investigators have to match new fingerprints taken from the scene of the crime against stored fingerprints. But the study was not designed to test the match between two or more different prints of the same finger, or the likelihood that they are more similar to each other than to prints from any other person.
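The distinction the critics draw can be made concrete with a toy simulation. Everything in the sketch below is hypothetical (the feature model, the noise level, the similarity score); it does not reproduce the study or any real fingerprint-matching algorithm. It only shows that comparing an image with itself is a guaranteed perfect match, while two different impressions of the same finger are not.

```python
import random

def fake_features(finger_seed, n=64):
    """Hypothetical feature vector describing one *finger* (not one image)."""
    rng = random.Random(finger_seed)
    return [rng.gauss(0, 1) for _ in range(n)]

def capture(finger, noise=0.3, seed=None):
    """Simulate taking one image of a finger: the true features plus capture noise."""
    rng = random.Random(seed)
    return [f + rng.gauss(0, noise) for f in finger]

def similarity(a, b):
    """Crude similarity score in (0, 1]; 1.0 means the feature vectors are identical."""
    msd = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    return 1.0 / (1.0 + msd)

finger = fake_features(finger_seed=42)
scene_print = capture(finger, seed=1)    # print lifted at the scene
filed_print = capture(finger, seed=2)    # stored print of the same finger
other_print = capture(fake_features(7), seed=3)  # print from a different finger

print(similarity(scene_print, scene_print))  # image vs itself: always 1.0
print(similarity(scene_print, filed_print))  # same finger, different images: lower
print(similarity(scene_print, other_print))  # different finger: lower still
```

The point is only that a study built on the first kind of comparison says nothing about how reliably the second and third kinds can be told apart, which is the question a real investigation poses.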
James Wayman, director of the US National Biometric Test Center at San José State University in California, also claims the sample size was too small to justify its conclusions.
"The government is comfortable with predicting the fingerprints of the entire history and future of mankind from a sample of 50,000 images, which could have come from as few as 5000 people," he argues. He has dismissed the 1097 figure cited by the FBI as an "absurd guess".
Neither the FBI, the Department of Justice nor Lockheed Martin was able to comment on the issue.
The study does, however, provide some disturbing hints about the reliability of the more realistic comparison between different prints from the same finger. According to research published in December 2003 by David Kaye, a statistician at Arizona State University in Tempe (International Statistical Review, vol 71, p 521), the Lockheed investigators discovered three instances in which two supposedly different fingerprint images from the 50,000 looked unusually alike. By checking back, they found that each pair of prints was actually different images of the same person's finger.
The investigators excluded these from the analysis as mistakes. But despite each pair being two images of the same finger, one pair was found to be just as dissimilar as prints from different people. The two other pairs were also more dissimilar than they should have been.
"What it revealed was that prints from the same person seemed quite different," says Kaye. "They falsified the premise they were trying to prove," adds Simon Cole of the University of California at Irvine, an outspoken critic of the way fingerprint evidence is used.
No one is arguing that fingerprint evidence has no value. But because it is such a long-established technique, its critics say, it has never been subjected to the rigorous scientific scrutiny necessary to work out how often a bogus match is likely to come up.
What is more, fingerprint examiners occupy a privileged position not enjoyed by most experts. They routinely testify that a print left at the scene of a crime is a definite match to a suspect, with no possibility of error. Indeed the Scientific Working Group on Friction Ridge Analysis, Study and Technology (SWGFAST), which approves standards for fingerprint analysis techniques in North America, has stated: "[Fingerprint] identifications are absolute conclusions. Probable, possible or likely identification are outside the acceptable limits of the science."
But critics such as Cole argue that true science inevitably involves dealing with uncertainty. "The idea that there is something about fingerprints that is fundamentally different from any other area of human knowledge concerns me," says Jim Fraser, president of the UK's Forensic Science Society. "There have to be errors. It is a human process."
Since 1998 there have been legal challenges to at least 40 convictions in the US and UK on the basis of fingerprint evidence, including one last week in the Massachusetts Superior Court. Yet despite this, and the concerns of some experts, the US Department of Justice has so far refused to sanction studies to investigate the reliability of this and other "existing" forensic techniques.
In 1998, the department's research arm, the National Institute of Justice (NIJ), asked for proposals to validate fingerprints. According to Cole, it received four proposals, though none was funded. However, this year's "solicitation" form to attract research proposals, published on 6 January, states that "proposals to evaluate, validate or implement existing forensic technologies ... will not be funded".
In 2003, the US National Academies proposed a research programme to examine the scientific credibility of all existing forensic techniques, from fingerprinting and hair analysis to ballistics and lie detection. But the programme, funded by the DoD and NIJ, fell apart when the sponsors made what were seen by the academics as unreasonable demands to control dissemination and review of the material.
"I think it's censorship," says Paul Giannelli, law professor at Case Western Reserve University in Cleveland, Ohio. He believes US law enforcement authorities should ensure that all forensic techniques are placed on a solid scientific footing, even if that leads to difficulties in the short term.
Anne-Marie Mazza, director of the National Academies' Science, Technology and Law programme, says she is now looking for alternative funding for the project. "I think forensic science should become part of mainstream academic science. It should be peer reviewed, and open science communication is not something to be feared," she says. "Let's be honest, most of these techniques are solid, but there's nothing wrong with trying to find out how solid."
Others put the case for investigation more bluntly. "Various efforts to subject scientific evidence in criminal cases to a rigorous standard of scrutiny have made little progress," says Joe Cecil, a researcher at the Federal Judicial Center in Washington DC.
However, one observer, who asked not to be named, put the government's reticence in a different light. "If, all of a sudden, all forensic science is doubted, what happens to all the people in jail? The whole criminal justice system could fall apart."