Jonah Lehrer recently wrote an article in
The New Yorker (and another in
Wired) about decreasing effect sizes in the sciences. An excerpt from the former:
But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology. In the field of medicine, the phenomenon seems extremely widespread, affecting not only antipsychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants: Davis has a forthcoming analysis demonstrating that the efficacy of antidepressants has gone down as much as threefold in recent decades.
For many scientists, the effect is especially troubling because of what it exposes about the scientific process. If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe? Francis Bacon, the early-modern philosopher and pioneer of the scientific method, once declared that experiments were essential, because they allowed us to “put nature to the question.” But it appears that nature often gives us different answers.
Possible explanations for the problem include publication bias (journals like to publish interesting results), selective reporting of data by scientists, and data mining (scouring already collected data for interesting results, as opposed to collecting data to test an already determined hypothesis).
One of the scientists interviewed by Lehrer proposes as a solution an open-source database, in which researchers are required to outline their plans for research and to document all of their results.
Lehrer's Wired article is titled "Trials and Errors: Why Science Is Failing Us," but science rather seems to be revealing the problem with some of its own methods; the problem is real but potentially self-correcting. Lehrer also makes a philosophical misstep when he invokes David Hume's skepticism about causation:
Hume realized that, although people talk about causes as if they are real facts—tangible things that can be discovered—they’re actually not at all factual. Instead, Hume said, every cause is just a slippery story, a catchy conjecture, a “lively conception produced by habit.” When an apple falls from a tree, the cause is obvious: gravity. Hume’s skeptical insight was that we don’t see gravity—we see only an object tugged toward the earth. We look at X and then at Y, and invent a story about what happened in between. We can measure facts, but a cause is not a fact—it’s a fiction that helps us make sense of facts.
Hume was skeptical about the view that causation involves a necessary connection between cause and effect, not about whether causes are facts. His point is that causal reasoning can be overturned by subsequent experience, unlike reasoning about relations of ideas (such as logical or mathematical truths), which is immune to empirical rebuff.
Philosophical quibbles aside, Lehrer's articles are a useful layperson's introduction to this important issue.
A good response to this post, on the "decline effect" and the need for replication in studies:
http://www.sciencebasedmedicine.org/index.php/the-value-of-replication/