Friday, February 27, 2015

Null Hypothesis Significance Testing Procedure--Banned!

The psychology journal Basic and Applied Social Psychology has banned the use of the null hypothesis significance testing procedure (NHSTP).

I am not proficient in scientific method or statistics. However, I observe that the article at the link seems to criticize the use of NHSTP on the grounds that there is widespread confusion about what NHSTP results actually mean. One would think that the solution to this problem would be to better educate people about the actual meaning of NHSTP, rather than to ban its use in a journal. However, presumably the grounds for banning NHSTP are in part the misplaced degree of importance it is given on the basis of those common misinterpretations. Or something like that.

Perhaps BASP's decision to ban the use of NHSTP is at least partly symbolic--as a way to signal their commitment to more rigorous standards for assessing the importance of research findings than the typically lazy or otherwise problematic uses of the old stand-by, NHSTP. And the ban will get people's attention more than just not requiring the use of NHSTP, as the journal had previously done.

On a related note, this video contains an informative critique of the use of p-values and a defense of the use of confidence intervals as an alternative criterion for assessing the importance of research findings (if that's the right way to put it):
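To illustrate the contrast (this example is mine, not from the video): a bare p-value says nothing about the size of an effect, while a confidence interval reports a plausible range for it. Here is a minimal sketch using a normal approximation and hypothetical data:

```python
import math
import statistics

def mean_ci95(sample):
    """Approximate 95% confidence interval for the mean
    (normal approximation; reasonable for moderately large samples)."""
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(len(sample))
    return m - 1.96 * se, m + 1.96 * se

# Hypothetical measurements of a treatment effect on 40 subjects:
data = [1.2, 0.8, 1.5, 0.3, 1.1] * 8
lo, hi = mean_ci95(data)
print(f"95% CI: ({lo:.2f}, {hi:.2f})")
```

The interval excludes zero, so it conveys both statistical significance and an estimate of how large the effect plausibly is, which a p-value alone does not.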


Tuesday, February 24, 2015

The Witches of Chiloe


This article tells the tale of a secret society of witches and warlocks which apparently operated in the late 19th century on the island of Chiloe, Chile. Most of the evidence discussed by the article is suspect testimony from a trial in which many of the members of the society were charged with a variety of crimes (including murder).

Author Mike Dash argues that the secret society of witches was quite real (despite the superstitious exaggerations surrounding their activities), and that they appear to have combined the religious and magical practices of the island's indigenous shamans (the Mapuche Machis) with covert political activity aimed at resisting the authority of the central government in Santiago and establishing an alternative local shadow-government in Chiloe. According to the testimony of some of the witnesses in the trial, the secret society appears to have taken a turn to the dark side of shamanism (sorcery or witchcraft) as a result of internal political disputes.

Here are some pictures of the beautiful Isla Grande de Chiloe.






Addendum (3/1/15): And here are some pictures of the "calendar" designs used by Machis (shamans) on their drums.



The Lost World of London's Original Coffee House Culture


Historian David Greene has written a delightful survey of London's coffee house culture in the 17th and 18th centuries. (Shorter version here.)



This post could also have been aptly titled "Alternate Careers for PhD's"; Dr. Greene works by day as a leader of historically themed walking tours through the City of London.

Sunday, February 22, 2015

Coffee, Health Tonic


A government nutrition panel recommends drinking 3 or more cups of coffee per day. Established health benefits include a reduced risk of cardiovascular disease and type 2 diabetes. Possible benefits include reduced risk of some cancers.

So get to it!

It's interesting to imagine the alternate world in which coffee is widely marketed as a health tonic. We've just taken one big step towards that world. Enjoy it!


A Tale of Gender and the Monty Hall Problem


This article tells the story of the time Marilyn vos Savant presented the solution to the Monty Hall problem in her column in Parade Magazine and was widely derided by readers, including some who accused her of erring on account of her "female logic" and leveled other gender-based criticisms.

For those not familiar with the Monty Hall problem, this diagram from Wikipedia provides a helpful summary of the problem and its solution:



Imagine you're a contestant on a game show. The host, Monty Hall, presents you with three closed doors and asks you to choose one of them. Behind one of the doors is a new car, but behind the other two doors are goats. You choose door #1. Monty Hall then opens a door which he knows has a goat behind it--door #3. He then gives you a choice: do you want to stay with door #1, or switch to door #2?

Many people think that the chance that the car is behind door #1 is the same as the chance that it is behind door #2--namely, a 1 in 2 chance. But, in fact, there is a 1 in 3 chance that the car is behind door #1 and a 2 in 3 chance that the car is behind door #2, so the solution to the problem is to switch to door #2.

The diagram above shows why. When you make your initial choice, there is a 1 in 3 chance that the car is behind door #1, and a 2 in 3 chance that the car is behind door #2 or door #3. When Monty Hall opens door #3, which he knows has a goat behind it, the chance that the car is behind door #3 drops to 0. This means that there is now a 2 in 3 chance that the car is behind door #2.
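For the skeptical, the 1/3 versus 2/3 split is easy to verify with a quick simulation (a minimal sketch; the function name and trial count are my own choices):

```python
import random

def play(switch, trials=100_000):
    """Simulate the Monty Hall game; return the fraction of wins."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)   # door hiding the car
        pick = random.randrange(3)  # contestant's initial choice
        # Host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(play(switch=False))  # ~0.333
print(play(switch=True))   # ~0.667
```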

It's true that people find the Monty Hall problem counter-intuitive in general, but the article presents evidence that vos Savant was attacked because of her gender in particular. Some of her scathing, verbally abusive critics were academics with PhDs (who later had to eat a nice hefty slice of humble pie).

The comments section of this article is also intriguing; a fellow going by the nom de web of "RaguxCixot" is valiantly attempting to prove that the standard answer to the Monty Hall problem is false, despite all attempts by well-informed fellow commenters to prove otherwise.

As far as I can tell, RaguxCixot is making a very basic error: he is assuming that there is a chance that Monty Hall will open a door which reveals the car, rather than a door which reveals a goat. In other words, Ragux is ignoring the stated assumptions of the thought experiment in order to reach his conclusion. To be fair to Ragux, when the Monty Hall problem is stated, the assumption that Monty Hall will never open a door to reveal the car is not always made clear or explicit. Nevertheless, Ragux is so enthusiastically stubborn that he is either exasperatingly dense (given his evident understanding of probability) or else one of the most clever trolls I have ever seen. (There is at least a 1/3 chance of the latter, of course.)
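Ragux's scenario can also be checked directly. In the sketch below (my own construction, not from the comment thread), the host opens one of the two unpicked doors at random; if we keep only the games in which he happens to reveal a goat, staying and switching each win half the time. That is presumably the intuition Ragux is defending, but it is not the game as stated:

```python
import random

def monty_fall(trials=200_000):
    """Host opens one of the two unpicked doors at random; count only
    the games where he happens to reveal a goat."""
    stay_wins = switch_wins = valid = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        opened = random.choice([d for d in range(3) if d != pick])
        if opened == car:
            continue  # the car was revealed; discard this game
        valid += 1
        other = next(d for d in range(3) if d not in (pick, opened))
        stay_wins += (pick == car)
        switch_wins += (other == car)
    return stay_wins / valid, switch_wins / valid

print(monty_fall())  # both fractions come out near 0.5
```

The host's knowledge is what breaks the symmetry in the real problem: by always revealing a goat, he concentrates the original 2/3 probability on the remaining door.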

Monday, February 16, 2015

From the Department of Better Late than Never (Classical Chinese Philosophy Edition)


In December of last year, the New York Times published a profile on scholar Edward Slingerland's work on the Chinese concept of wu-wei (literally "not doing" or "nonaction"; figuratively, effortless action). I teach courses on Chinese philosophy, and have presented a paper or two which relates to this topic, so naturally the article was of great interest to me.

Slingerland is the most important researcher in this area, but I believe the article over-simplifies its explanation of wu-wei. This seems to be caused both by problems with Slingerland's own account and by problems with the article's popular explanation of that account. Slingerland has always proposed a sharp dichotomy between effortful action and wu-wei that seems inaccurate to me. In this article, and to a lesser extent in his published works, he draws a comparison between wu-wei and the spontaneity championed by the Romantics and the 20th-century counter-culture that also seems incorrect. Both of these points are connected to his characterization of wu-wei as acting on the basis of feelings or instinct, and of effortful action as acting on the basis of rational calculation. This simply won't do, as most forms of action based on passion are not wu-wei. For example, actions motivated by desire, fear, or anger are not in general compatible with wu-wei.

I think Slingerland is only able to generate his "paradox of wu-wei" by over-emphasizing the dichotomy between deliberate action and spontaneous action. The connection between the two is actually a commonsense notion, because that is how all skills are formed: what at first requires deliberate effort because of unfamiliarity becomes, over time, a habit which can be exercised spontaneously. Or something like that. Treating virtue and wisdom as skills whose perfection involves a kind of effortlessness or spontaneity is not only a Chinese idea; you can see it in Aristotle's conception of virtue as well. However, Aristotle does place a greater emphasis on the role of rational calculation in acquiring and manifesting virtue than does the Chinese philosophical tradition.

Sunday, February 15, 2015

Scott Alexander on Political Activism

Five million people participated in the #BlackLivesMatter Twitter campaign. Suppose that solely as a result of this campaign, no currently-serving police officer ever harms an unarmed black person ever again. That’s 100 lives saved per year times let’s say twenty years left in the average officer’s career, for a total of 2000 lives saved, or 1/2500th of a life saved per campaign participant. By coincidence, 1/2500th of a life saved happens to be what you get when you donate $1 to the Against Malaria Foundation. The round-trip bus fare people used to make it to their #BlackLivesMatter protests could have saved ten times as many black lives as the protests themselves, even given completely ridiculous overestimates of the protests’ efficacy. 
The moral of the story is that if you feel an obligation to give back to the world, participating in activist politics is one of the worst possible ways to do it. Giving even a tiny amount of money to charity is hundreds or even thousands of times more effective than almost any political action you can take. Even if you’re absolutely convinced a certain political issue is the most important thing in the world, you’ll effect more change by donating money to nonprofits lobbying about it than you will be [sic] reblogging anything.

Saturday, February 14, 2015

Neoreaction, Redux


After reading some more about Curtis Guy Yarvin's (a.k.a. "Mencius Moldbug") political philosophy, it seems my previous post on this topic may have contained some errors of interpretation. Specifically, I am no longer sure that Yarvin's argument against democratic republicanism is based on the Platonist view that a wise elite should rule over the foolish underclass. In fact, he argues for what he calls formalist neocameralism, which is basically the view that governments should be for-profit corporations which are accountable to their shareholders, but which do not make decisions based on popular elections or votes by representative bodies (such as parliaments or assemblies). This view does not obviously depend on the assumption that the rulers are or must be members of a wise elite. It seems to be a view about the most effective way of designing a political system to meet the goals of maximizing the wealth and happiness of the members of society (or something to that effect).

The issues of interpretation are complicated by the fact that Yarvin's neoreactionary philosophy is really a bundle of views, each of which (or at least several of which) is logically independent of the others. For example, Yarvin does sometimes put forth elitist propositions alongside his propositions in defense of 'formalist neocameralism', even though it is not clear what the logical relationship between these propositions is supposed to be. I have heard tell that Yarvin has also asserted various racist and sexist propositions, although I have not yet found any of these (perhaps for want of searching; now, many of the commentators on his blog do frequently wallow in the racist/sexist/classist sewer, but it would be fallacious to accuse Yarvin of racism, sexism, or classism solely on the basis of his commentators' statements).

In any event, the definitive guide and rebuttal to the neoreactionaries has already been written by blogosphere wunderkind Scott Alexander.

A Case of Conflicting Intuitions . . .

. . . Identified by the consistently brilliant Scott Alexander: If we have a moral obligation to vaccinate our children, because failing to do so risks the health of others, do we also have a moral obligation to technologically enhance our children's cognitive and other capacities, if failing to do so would risk the health of others?
I predict half the people will think I’m arguing against vaccination, half will think I’m arguing for mandatory designer babies, and a rare few wonderful people will understand that raising awareness of cases where our intuitions conflict is its own reward.

The Incredible, (Sometimes) Edible Placebo

Robin Hanson proposes that placebos work because the person who administers the placebo shows care for the patient.

Can placebos work even if a patient knows he is receiving a placebo? So claims a person who left a comment on Hanson's blog post:
Interestingly, you can also invoke the placebo effect out of seemingly nothing, by truly telling the people that they are receiving placebos, *but* telling them at the same time that placebos work even if you know they are placebos. The trick is that this still builds on the core of medicine's reputation, so the patient believes this claim, so he has the expectation and now the placebo effect kicks in. This effect cannot be attributed to the pill's reputation, it's due to the doc saying that science shows that the placebo will work. Which it really will, in a self-fulfilling prophecy way.
There are lots of fascinating scientific and philosophical questions relating to placebos. It seems that research on placebos and interest in their therapeutic applications have been increasing over the last decade or so. Here are some links which summarize the findings thus far:

1. Steven Novella, "The Poor, Misunderstood Placebo," Skeptical Inquirer 34.6 (November/December 2010).

2. "Putting the Placebo Effect to Work," Harvard Health Publications (April 1, 2012).

3. Aaron E. Carroll, "The Placebo Effect Doesn't Just Apply to Pills," The Upshot (October 6, 2014).


And here are a couple of interesting studies on the placebo effect:

1. Mayberg et al., "The Functional Neuroanatomy of the Placebo Effect," The American Journal of Psychiatry 159 (2002): 728-737.

2. Meissner et al., "The Placebo Effect: Advances from Different Methodological Approaches," The Journal of Neuroscience 31 (9 November 2011): 16117-16124.

Saturday, February 07, 2015

Bayes' Theorem and Cancer Screening

If you get a positive result on a mammogram, what is the chance that you actually have cancer?

This video explains how Bayes' theorem can be used to answer this question. (Note: you will still have to do some research to get the exact rate of false negatives, rate of false positives, and rate of people in your age range who have cancer. But the numbers used in the video are approximately correct and the results are very revealing, both about Bayes' theorem and about cancer tests.)
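The calculation itself is straightforward. Here is a minimal sketch (the numbers are illustrative placeholders in the same ballpark as commonly cited mammography figures, not exact clinical values):

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(cancer | positive test) via Bayes' theorem."""
    p_positive = (sensitivity * prior
                  + false_positive_rate * (1 - prior))
    return sensitivity * prior / p_positive

# Illustrative numbers only (look up current figures for your age group):
# 1% prevalence, 80% sensitivity, 9.6% false-positive rate.
print(posterior(0.01, 0.80, 0.096))  # ~0.078, i.e. under an 8% chance
```

The surprise is that even a fairly accurate test yields a low posterior probability when the condition is rare, because the false positives from the large healthy population swamp the true positives.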


And here is Julia Galef explaining how Bayes' theorem has changed the way she thinks in everyday life.


Visions of Futures Past

Matt Novak, writing for Gizmodo, has presented images from what he describes as the "golden age of American futurism", the years 1958 to 1963. The images all come from a Sunday comic strip called "Closer Than We Think." While many of the predictions made by the strip are (perhaps inevitably) way off the mark, the strip also had a surprising number of hits or at least impressively near misses. Even the howlers are noteworthy for their retro-futuristic design. Here are some samples.

1. Video interviews in a "one-world job market":


2. Wrist watch TV (not so far from smart phones):


3. Electric cars:


4. Driverless cars:

5. Push-button education:


6. Robotic warehouses (cf. Amazon's warehouse system):


Solaris (1972)


I finally watched Andrei Tarkovsky's masterpiece "Solaris" in its entirety. When it was initially released, the film was widely compared to Stanley Kubrick's "2001: A Space Odyssey" (1968). Tarkovsky's future echoes some of the modernist design found in "2001," but overall it is less sterile. Many shots linger on the earth's countryside and sundry natural vignettes (as is typical of Tarkovsky), which are later echoed by the more exotic natural scenes on the planet Solaris. Tarkovsky's future also contains more remnants of mankind's past, reminding us that the future is not only the new, but also the accumulation of history and culture.






The shot of the library shown above represents the most deliberately archaic aspect of the film; the space station's library seems to represent the inheritance of the past and human culture which has been brought into space, and which humans use to try to make sense of the new and the unknown. The most audacious example of Tarkovsky's deliberate archaism is the candelabra with lit candles that makes several appearances in this scene:


Because of its links to the past and to nature, Tarkovsky's imagined future feels grounded and real, despite the primitive special effects employed. The effects are primitive even by the standards of the day; the film's production seems to hearken back to the golden age of cinema, which probably reflects the limited budget and technical resources of the Mosfilm studio. While at least partly unintended, this actually contributes to the film's timeless feel.

The entirety of the film is available on the internet for free, but I watched the Criterion Collection Blu-Ray edition. Here is a clip which shows the famous levitation scene in the library:


Monkeys, Hot Hands, and Human Irrationality


Tom Stafford is a psychologist who blogs at Mind Hacks. A recent blog post discussed two common types of human irrationality: the gambler's fallacy and the hot hands fallacy. The gambler's fallacy is believing that a future random result has a greater-than-average chance of being different from a previous random result. An example is a gambler's belief that a streak of "red" results on a roulette wheel makes a "black" result more likely on the next spin. The hot hands fallacy is believing that a future random result has a greater-than-average chance of being the same as a previous random result. An example is a spectator's belief that a basketball player is on a "streak" and so should continue to be given possession of the ball as much as possible by his teammates.
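Both fallacies are easy to test against a genuinely random process. The sketch below (my own example, not from Stafford's post) estimates the probability that a fair coin comes up heads immediately after a streak of three heads; since the flips are independent, the answer stays at 1/2, which is exactly what the gambler's fallacy and the hot hands fallacy both deny:

```python
import random

def prob_after_streak(streak_len=3, flips=500_000):
    """Estimate P(next flip is heads | previous `streak_len` flips
    were heads) for a fair coin. Independence keeps this at 0.5."""
    seq = [random.random() < 0.5 for _ in range(flips)]
    after_streak = [seq[i] for i in range(streak_len, flips)
                    if all(seq[i - k] for k in range(1, streak_len + 1))]
    return sum(after_streak) / len(after_streak)

print(prob_after_streak())  # ~0.5
```

A gambler's-fallacy believer would predict a value below 0.5 here, and a hot-hands believer a value above it; the simulation vindicates neither.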

Stafford claims that recent research on monkeys shows that humans are not being irrational when they commit the hot hands fallacy. His justification for this is that monkeys in the experiment were observed to make the same mistake as humans.
The reason the result is so interesting is that monkeys aren’t taught probability theory as school. They never learn theories of randomness, or pick up complex ideas about chance events. The monkey’s choices must be based on some more primitive instincts about how the world works – they can’t be displaying irrational beliefs about probability, because they cannot have false beliefs, in the way humans can, about how luck works. Yet they show the same bias.
Stafford's argument is flawed. Monkeys are not taught probability theory in school; they don't learn theories of randomness, nor do they pick up complex ideas about chance events. But this seems irrelevant to whether their behavior is rational or not. Being taught probability theory in school can hardly be a necessary condition for displaying irrational beliefs about probability. What makes beliefs irrational is that they are systematically biased in a way that does not reflect the underlying reality. This could be the case regardless of whether a person (or a monkey!) has taken a course in probability theory. If Stafford's argument were sound, then unreflective humans or humans with no training in probability theory would not be capable of irrationality; only reflective humans and those with special training would be. This seems obviously false.

Stafford next argues that there must be a good reason why the monkeys are behaving this way:
What’s going on, the researchers argue, is that it’s usually beneficial to behave in this manner. In most of life, chains of success or failure are linked for good reason – some days you really do have your eye on your tennis serve, or everything goes wrong with your car on the same day because the mechanics of the parts are connected. In these cases, the events reflect an underlying reality, and one you can take advantage of to predict what happens next. An example that works well for the monkeys is food. Finding high-value morsels like ripe food is a chance event, but also one where each instance isn’t independent. If you find one fruit on a tree the chances are that you’ll find more.
I don't doubt that this hypothesis (or something like it) explains why both monkeys and humans behave the way that they do. The problem is that this argument does not prove that monkeys and humans are always rational. Whether behavior is rational is context-dependent. The fact that the hot hands style of reasoning is rational in some contexts does not imply that it is rational in all contexts.

Finally, Stafford seems to conflate two senses of the word 'rational' here. The first sense is the usual one discussed in the biases and heuristics literature--something like "free from systematic bias". The second sense of 'rational' that Stafford introduces is something like "increases biological fitness". The latter comes into play in the previously quoted paragraph, and is made clear when he discusses the possible role played by evolution in shaping human and monkey behavior:
The wider lesson for students of human nature is that we shouldn’t be quick to call behaviours irrational. Sure, belief in the hot hand might make you bet wrong on a series of coin flips, or worse, lose a pot of money. But it may be that across the timespan in evolution, thinking that luck comes in clumps turned out to be useful more often than it was harmful.
The problem is that these two senses of 'rational' are distinct. Even if the hot hands bias is rational in the sense of "adaptive" or "increases biological fitness", this does not prove that it is rational in the sense of "free from systematic bias".

Stafford is correct to focus on the rhesus monkey study, which is genuinely interesting, and his blog is truly excellent. But I think Stafford's blog post shows the benefits of having some training in philosophy when trying to discuss the implications of scientific research. Of course, rare is the bird who has mastered both the science and the philosophy, which explains the difficulty here.