Scientists said to be tweaking their experiments
Australian researchers have found that some scientists are, often unknowingly, tweaking experiments and analysis methods to increase their chances of obtaining easily publishable results. Their study has been published in the journal PLOS Biology, no doubt making some readers wonder whether it too has been altered for publication!
The study examined a type of bias called p-hacking, which occurs when "researchers try out several statistical analyses and/or data eligibility specifications and then selectively report those that produce significant results", according to the authors. While such actions may be conscious or unconscious on the part of the researcher, the end result is the same: data is analysed multiple times or in multiple ways until a desired result is reached.
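The mechanism the authors describe can be illustrated with a small simulation (not taken from the study itself): when there is no real effect, a p-value is uniformly distributed between 0 and 1, so a researcher who runs several analyses and reports only the best one pushes the false-positive rate well above the nominal 5%. A minimal sketch:

```python
import random

random.seed(42)

def reported_p_value(n_analyses):
    """Under a true null hypothesis each analysis yields a p-value
    uniform on [0, 1); a p-hacking researcher tries several analyses
    and reports only the smallest (best-looking) one."""
    return min(random.random() for _ in range(n_analyses))

def false_positive_rate(n_analyses, trials=100_000, alpha=0.05):
    """Fraction of null studies that end up 'significant' at alpha."""
    hits = sum(reported_p_value(n_analyses) < alpha for _ in range(trials))
    return hits / trials

honest = false_positive_rate(1)   # one pre-specified analysis: ~5%
hacked = false_positive_rate(5)   # pick the best of five analyses: ~23%
```

With five analyses to choose from, the chance that at least one falls below 0.05 by luck alone is 1 - 0.95^5, roughly 23%, more than four times the advertised error rate.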
The study used text mining to extract p-values (a number indicating how likely a result at least as extreme would be if there were no real effect) from more than 100,000 research papers in the PubMed database, spanning many scientific disciplines, including medicine, biology and psychology. According to lead author Dr Megan Head, from the ANU Research School of Biology, the researchers "found evidence that p-hacking is happening throughout the life sciences".
Dr Head suggested that "pressure to publish" may be driving this bias, noting along with her co-authors that "there is good evidence that journals, especially prestigious ones with higher impact factors, disproportionately publish statistically significant results". There is thus an incentive for researchers to selectively pursue and attempt to publish such results, and indeed the study found an unusually high number of p-values sitting only just below the conventional threshold for statistical significance.
“This suggests that some scientists adjust their experimental design, datasets or statistical methods until they get a result that crosses the significance threshold,” Dr Head said.
“They might look at their results before an experiment is finished or explore their data with lots of different statistical methods.
“Many researchers are not aware that certain methods could make some results seem more important than they are. They are just genuinely excited about finding something new and interesting.”
The authors acknowledge that p-hacking is a serious issue, stating that the "publication of false positives hinders scientific progress". Many scientists may be uninterested in replicating previous (supposedly unbiased) studies, while others may pursue fruitless research programs built entirely on their results.
Even when scientists review evidence by combining the results from multiple studies, a method called meta-analysis, this procedure will be compromised if the studies being synthesised "do not reflect the true distribution of effect sizes", according to the authors. They do concede, however, that p-hacking "probably does not drastically alter scientific consensuses drawn from meta-analyses".
The authors have made a series of recommendations to prevent p-hacking from occurring. They suggest researchers adhere to common analysis standards (performed blind wherever possible) and place greater emphasis on the quality of research methods rather than the significance of the findings. Journals, meanwhile, are encouraged to provide clear and detailed guidelines for the full reporting of data analyses and results.