Friday, September 11, 2015

Copper vs. Orphan Black


I recently began watching the BBC America series Copper, which ran for two seasons from 2012 to 2013. The series is not without its flaws, including a somewhat slow start to its main story lines (as noted by a review in Variety published shortly after the series' initial release), but overall I found it to be an excellent period piece, one that does a better job than Scorsese's Gangs of New York of presenting the actual history of the Five Points neighborhood during the Civil War (which is not to say that Scorsese's film is not excellent in many other respects!).


It is surprising, or at least disappointing, that Copper was cancelled while BBC America's other original series, Orphan Black, is still going strong. While the premise of Orphan Black is intriguing, the lead actress is fantastic, and the showrunners clearly know the tricks of their trade, I had to stop watching the show on account of the increasingly absurd, byzantine layers of conspiracy and melodrama, together with the sense that the main story line becomes less plausible and less intelligible with each startling new revelation. And despite the fact that the showrunners extensively consulted a scientist with a PhD in evo devo--they even based one of the main characters on her--whenever anything science-y shows up on screen, it always comes out either as clearly fallacious or as unintelligible gobbledygook. Sigh.

Five Points, Manhattan (George Catlin, 1827)

Saturday, September 05, 2015

Psychology Replication Study

Many of you may have heard of the recent psychology replication study, published in Science, in which researchers attempted to replicate 100 hand-picked psychology studies and were able to successfully replicate only 39 of them.

I am a huge fan of this study, among other things because it encourages other scientists to attempt replication (which everyone agrees is not done enough in the sciences). The result also opens up a bunch of cool interpretive questions about scientific method and statistical analysis.

One obvious mistake to avoid is concluding from this study that only 39% of the original studies were "correct," in some sense of the word. Some of the initial 100 studies probably really were flawed and gave misleading results (these can, without too much distortion, be thought of as "false positives"), but the same is probably true of some of the failed replications (which can analogously be thought of as "false negatives").
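
To see why the 39% figure cannot be read straight off as the fraction of "correct" studies, here is a back-of-the-envelope calculation in Python. The rates below are illustrative assumptions of my own, not estimates from the study:

```python
# Back-of-the-envelope illustration (all rates are assumed, purely for
# illustration): imperfect replications will "fail" on some genuine
# findings and "confirm" some spurious ones.

n_studies = 100    # studies in the replication project
true_rate = 0.60   # ASSUMED: fraction of original findings that are real
fn_rate = 0.20     # ASSUMED: chance a replication misses a real effect
fp_rate = 0.05     # ASSUMED: chance a replication confirms a spurious effect

n_true = n_studies * true_rate
n_false = n_studies - n_true

# Expected number of successful replications:
expected_successes = n_true * (1 - fn_rate) + n_false * fp_rate
print(f"Expected successful replications: {expected_successes:.0f} of {n_studies}")
# With these numbers: 60 * 0.8 + 40 * 0.05 = 50, even though 60 of the
# 100 original findings are real.
```

So under these made-up numbers, a 50% replication rate would be perfectly consistent with 60% of the original findings being genuine.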

But can we get more precise with the implications, even to a first approximation? I have an amateur interest in philosophy of science, but am wholly ignorant of experimental design and statistical analysis. So I could use a hand (hence this post).

Can we use Bayesian theory to get some clarity? Of course, we are going to have to choose some semi-arbitrary numbers, like the probability that each of the initial findings is a false positive or a false negative, and the probability that each of the attempted replications is a false positive or a false negative.
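
Here is a minimal sketch of that kind of Bayesian calculation, again in Python. The prior and the error rates are semi-arbitrary assumptions, exactly as described above; the point is only to show how the machinery works:

```python
# A minimal Bayesian sketch (all inputs are assumed, semi-arbitrary numbers):
# how likely is a finding to be real, given the outcome of one replication?

prior_real = 0.60   # ASSUMED prior: P(finding is real)
fn_rate = 0.20      # ASSUMED: P(replication fails | finding is real)
fp_rate = 0.05      # ASSUMED: P(replication succeeds | finding is not real)

# Law of total probability: P(replication fails)
p_fail = prior_real * fn_rate + (1 - prior_real) * (1 - fp_rate)

# Bayes' theorem: P(real | failed replication)
posterior_given_fail = prior_real * fn_rate / p_fail
print(f"P(real | failed replication)     = {posterior_given_fail:.2f}")    # 0.24

# Bayes' theorem: P(real | successful replication)
p_success = prior_real * (1 - fn_rate) + (1 - prior_real) * fp_rate
posterior_given_success = prior_real * (1 - fn_rate) / p_success
print(f"P(real | successful replication) = {posterior_given_success:.2f}")  # 0.96
```

On these assumptions, a failed replication drops the probability that the original finding is real from 0.60 to 0.24, while a successful one raises it to 0.96. Different (equally defensible) inputs will give different numbers, which is precisely why the choice of priors and error rates matters.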

Apart from the general probabilities of false positives and false negatives for both the initial findings and the attempted replications, there are more particular factors to consider. One is the expertise of the experimenters: replication may be difficult because specialized skills and practice may be necessary to successfully create the controlled conditions that will show the initial experimental result. There is also the obvious question of confirmation bias among the researchers who authored the initial studies. And so on.