February 12, 2009

Junk Science And The Law

Coming across this article from a New York criminal defense attorney who is also a mighty law blogger, seeing a similar issue addressed in one of my new favorite blogs (happily, chronicling a solid ass-drubbing the anti-vaccination crowd got in the Court of Federal Claims), and having an issue of this sort set up in the mock trial competition that I'm coaching a team through right now, I'm reminded that the problem of pseudoscience, junk science, and generally lazy thinking is not confined to matters of medicine and quasi-religion. It infects our courts, too, and in so doing, it degrades the quality of justice served up to us as citizens.

Begin with a proposition that any professional with experience in the courts accepts as a given -- an attorney who looks around for a reasonable amount of time will be able to find some kind of an "expert witness" who will, for a fee, offer testimony in support of pretty much any proposition. Medical doctors are the most direct targets of this cynicism -- it's not for nothing that lawyers sneer at a certain well-populated class of M.D.'s as "whores."* The sorts of medical releases and notes that I've seen in my career are astounding -- shockingly serious-sounding diagnoses, accompanied by work releases and restrictions sufficient to indicate that an able-bodied patient's life is completely ruined forever, all based on subjective complaints of what appears to be rather mild pain.**

Now, understand that this proposition extends well beyond the realm of medicine. Greenfield, the criminal defense blogger mentioned above, describes a sincere and wide-eyed "expert" testifying that it's simply impossible to wipe fingerprints off of a chromed handgun. He's exaggerating for effect, but testimony that obviously wrong could rather easily skew the result of a criminal trial. Certainly there's a lot of questionable forensic science out there, and with half of America addicted to police procedural shows on TV, there is a very distorted popular sense of what forensic science can and cannot prove. Yes, the science can do amazing things, but it is also quite limited in other respects.

But the power, oh the power, of putting someone up on the witness stand, calling her an "expert," and having the court agree with that characterization. It builds up huge credibility in a jury's estimation; it can leave a jury ready to agree with nearly anything the "expert" says. But what is an "expert" witness, anyway?

Federal Rule of Evidence 702 defines an "expert" as someone qualified "by knowledge, skill, experience, training, or education" to testify about "scientific, technical, or other specialized knowledge." The cognate provision of California law, Evidence Code § 720(a), defines an expert as someone who "has special knowledge, skill, experience, training, or education sufficient to qualify him as an expert on the subject to which his testimony relates." That's actually a less helpful definition than the federal rule's, but it gets at the same thing -- anyone who knows more than the average schlub off the street about something. By that definition, I am an expert on the subjects of ancient history and laying hardwood floors -- flooring because I've had the experience of doing it, and history because I've made an armchair hobby of learning about it. I don't need a Ph.D. in history to know more about it than the average bear.

Well, clearly, no one is going to call me as an expert on laying hardwood floors even though I've done it a couple of times as a DIY'er. The experts who make convincing witnesses are the ones with lots of letters after their names, years of experience in the relevant field, publications in relevant journals, and so on. The question then becomes -- how is a court, whether in the form of a judge or a jury, to know whether a so-called "expert" is giving good science in her testimony, bloviating, or advancing a bunch of crap and calling it "science"?

The judge, who is not a scientist, has to apply a legal test to the evidence, because applying legal tests is what judges are equipped to do. So in Daubert v. Merrell Dow Pharmaceuticals (1993) 509 U.S. 579, the Supreme Court gave us the modern federal test: the judge acts as a gatekeeper, looking at testimony from experts in the relevant scientific field and deciding, among other things, whether the science being used by the expert is "generally accepted" by the relevant scientific community. That does not mean counting noses to find a numerical majority of scientists who subscribe to it; the quality of the scientists who say "yea" or "nay" to a particular method is also considered. In theory, if you've got one guy who's the ultimate, be-all, end-all expert saying, "No way, this is junk," and ten thousand lesser lights saying, "Yeah, this stuff is really good science!" then the judge has the discretion to say, "The one guy with the super-awesome credentials is right, and I'm excluding the evidence."

The reason is that general acceptance is only one factor: Daubert also requires the court to independently determine whether the process in question has been subjected to the scientific method. In that vein, a court is guided to determine from expert testimony whether the process has been subjected to both field and laboratory testing; whether the error rate for the technique is known and acceptably low; whether the process has been subjected to peer review (and if so, what the results of that review were); and whether objective standards controlling the technique exist.

These are all good questions to ask when a questionable technique is advanced. The problem is that they are also the sorts of questions legitimate scientists ask themselves when deciding whether to accept or reject a technique as valid -- so the new factors largely collapse back into the old test anyway. Speaking of the "old test": Daubert is a tougher standard than the one it replaced, the rule of Frye v. United States (D.C. Cir. 1923) 293 F. 1013, which relies only on the acceptance of the technique in the relevant scientific community (and which is still the standard in California state courts, see People v. Kelly (1976) 17 Cal.3d 24, reaffirmed post-Daubert in People v. Leahy (1994) 8 Cal.4th 587). Polygraphs, for instance, do not meet this standard.

What really happens, though, is that courts search through other case law to see whether another court has addressed the same kind of scientific technique before, and then rubber-stamp that earlier decision. This stifles innovation and new techniques, because a court that rejects New Method X largely on the basis of its novelty and lack of peer review (a lack which is itself a product of that novelty) will then guide future courts to reject New Method X even if it is subsequently vindicated in field tests and peer review.

Then there is another problem -- what happens when a bunch of "experts" submit affidavits signing off on what is objectively junk science? In the MMR vaccine case Popehat linked to, the court took the time to delve deeply into the issue and found the "experts" (who had impressive-enough-looking credentials) to have poor foundations for their opinions. But it takes an unusually perceptive and knowledgeable court to do that; not all judges have the right kind of background to make those evaluations. I do not think I do. Even in the MMR case, the Court based its decision in part on the following:

"The expert witnesses presented by the respondent were far better qualified, far more experienced, and far more persuasive than the petitioners’ experts, concerning most of the key points."

The issue, though, ought not to be which side of a dispute can accumulate the most affidavits from the most impressive experts. If experts can be found who will sign off on most anything, the contest becomes an arms race to see which side is willing to write checks to the greater number of scientists willing to whore out their credentials.

Here's what I suggest. A court is not equipped, on its own, to evaluate science; it needs the assistance of experts. But asking whether a proposed method enjoys "acceptance in the scientific community" is a fundamentally flawed inquiry, and redundant of the underlying question of scientific validity anyway. I say we dispense with the "general acceptance" part of the test altogether.

Instead, have experts testify directly about the scientific validity of the technique in question. We have guidance from Daubert about what scientific validity means -- has the process been subjected to double-blind, objectively falsifiable, controlled testing? Has it been subjected to peer review and analyzed in the relevant professional literature? Does it produce reliable results, both in the lab and in practical application? If so, legitimate scientists will sign off on the method anyway. So let the experts explain why the technique is or is not valid, and let the court's inquiry be directed not at the opinions of a body of experts but at the substance of what those experts are talking about.


* Obviously, not all doctors deserve this label.
** You don't believe this? I've got one word for you: Fibromyalgia.
