Saturday, April 20, 2013

Do as I say, not as I've been doing

Raising standards: Nature Immunology 14, 415 (2013) doi:10.1038/ni.2603 -- Nature journals' updated editorial policies aim to improve transparency and reproducibility:

For example, authors will need to describe methodological parameters that may introduce bias or influence robustness and to provide precise characterization of key reagents, such as cell lines and antibodies, that may be subject to biological variability.
(...)
To help improve the statistical robustness of papers, the Nature journals will now employ statisticians as consultants on certain papers, at the editor's discretion and on the referees' suggestions. (...) Exploratory investigations often are not amenable to the same degree of statistical rigor as hypothesis-testing studies.
(...)
Those who would put effort into documenting the validity or irreproducibility of a published piece of work have little prospect of seeing their efforts valued by journals and funders; meanwhile, funding and efforts are wasted on false assumptions.

These are certainly much-needed changes in NPG's policy, provided they are not just lip service that in practice neglects valid statistical criticisms of their flagship papers. I remain skeptical. Would they put their money where their mouth is and demand openness for already published data, or allow retroactive post-publication review? (Or would we just get the infamous "your criticism won't add to the discussion" boilerplate reply?) Anyway, a few more links:

What We Have Here is a Failure to Replicate | Evolutionary Psychology:

Discussions of why replications aren’t more common – including Pashler’s remarks – focus extensively (but not exclusively) on incentives. If a researcher attempts to do an exact replication of published work, there are two possible results. If the result replicates successfully, it is likely to be difficult to publish because journals tend not to publish replications, though this is changing. (...) Other journals are proving more receptive to publishing replications – and failures to replicate – which will probably have some beneficial effect. In any case, my guess, though I don’t know, is that replications of results are cited relatively infrequently, especially compared to the original results. Publishing failures to replicate is likely no easier than publishing successes.

Authors, Don’t Call Us, We’ll Call You | Psychology Today:

Privileged access is only one of the many means by which political forces distort debates about evidence and select which conclusions are legitimized and which perspectives are marginalized. Because privileged access articles often escape rigorous peer review, the science is often flabby and grossly simplistic, and claims in privileged access articles can be extravagant.

Strategies for challenging inaccuracies and outright misrepresentations in privileged access articles are limited. Journals that grant privileged access also often restrict publication of letters to the editor to only those to which the authors indicate a willingness to respond. A refusal to respond is effective censorship, causing criticism to be barred from publication. Even when letters are accepted, they often face severe restrictions on length (often 400-600 words), are often published much later than the privileged access articles, fail to be indexed in ISI Web of Science or other electronic bibliographic sources, or are limited to e-letters, not the paper edition of the journal. It is notable that the webpages of the journals making the most use of privileged access articles do not link subsequent critiques to the original article, so that anyone looking for the critiques has to search for them separately.

Non-consensual replication | john hawks weblog:

You are building one assumption upon another. The disturbing part is that the discipline accepts that some researchers just have a "knack" for making a particular experimental design work, and other researchers may have trouble recreating the exact conditions. That very attitude enables fraud, as we have seen repeatedly during the last few years. In science, if no one else can make the experiment work, it didn't happen.

Opinion: Missing Methods | The Scientist Magazine®:

How has the requirement to share every iota of technical detail with the research community given way to “as described elsewhere,” elsewhere being Never Never Land?  First, I blame the journal editorial boards.  The push in recent years to shorten papers and limit the number of figures has never been clearly rationalized to the research public.
(...)
Next, I blame the authors. Failure to transmit clear and detailed technical details is not just a sin against the scientific community, it’s also indicative of poor internal mentoring skills. (...) This brings up the worst consequence to our increasingly lax eye for technical detail: faster publication of findings in higher impact journals will mean squat if the data will not stand the test of time, and in our field, this means experimental reproducibility.
(...)
The recent proliferation of smaller journals devoted solely to publishing novel methods and technologies is a great advance in this regard.

Actually, this last conclusion is wrong -- the existence of journals devoted to these details is an incentive for authors to withhold them from other manuscripts, in the (sometimes unfulfilled) hope that this slice of the research will become a publication of its own. That is, it persuades authors to think of the method, the software, and the data/conclusions as independent entities, and consequently it becomes less, not more, likely that they will provide you with the other elements.

