Thursday, March 18, 2010

Still no silver bullet for hypothesis testing

There is an interesting post at dechronization about the applicability of one-tailed hypothesis tests (in a nutshell, the question of when it is appropriate to assume that only very high values are interesting, but not very low ones). Andrew Gelman also commented this week on the temptation to overuse p-values.
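To make the one-tailed vs. two-tailed distinction concrete, here is a minimal sketch (not from the post linked above) of a z-test using only Python's standard library; the sample values are made up for illustration:

```python
import math

def norm_sf(z):
    """Survival function of the standard normal: P(Z > z)."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def z_test(sample_mean, mu0, sigma, n, tail="two"):
    """P-value of a z-test for H0: mean = mu0, known sigma.
    tail: 'upper' (only high values interesting), 'lower', or 'two'."""
    z = (sample_mean - mu0) / (sigma / math.sqrt(n))
    if tail == "upper":
        return norm_sf(z)
    if tail == "lower":
        return norm_sf(-z)
    return 2 * norm_sf(abs(z))

# Same data, different questions: with these (hypothetical) numbers the
# one-tailed p-value is about 0.037 but the two-tailed one is about 0.074,
# so the choice of tail decides "significance" at the usual 0.05 cutoff.
p_upper = z_test(10.4, mu0=10.0, sigma=1.0, n=20, tail="upper")
p_two   = z_test(10.4, mu0=10.0, sigma=1.0, n=20, tail="two")
```

This is exactly why the direction of the test has to be justified before seeing the data, not chosen after the fact to get below the threshold.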

And then there is this article in Science News claiming that statistical illiteracy is pervasive in science. The author claims that many studies have statistical flaws, and for me the message is that not all literature is relevant. We won't find groundbreaking research at every corner, but I would add that this is true for any field... I understand that if this neglect of statistical theory is systematic within some field, it represents a bigger problem. But in that case, their solution of embracing Bayesian methodology will just not work: a Bayesian black box can be more dangerous than a classical one, because the problem lies in the reliance on the black box itself: a piece of software or a protocol that gives you the illusion of answering your question unattended.

Of course I'm focusing on only one aspect of the article, but I wanted to say that there is no shortcut to improving the quality of scientific studies: authors, reviewers, and competitors (when the previous two fail) must get acquainted with the statistical methods (be they classical or Bayesian) behind the tools they would otherwise use blindly.

P.S.: HT to Daniel Ferrante for the link.
