Friday, 14 October 2016

Poorly designed animal experiments in the spotlight


High-status journals and institutions are no guarantee of carefully reported trials.
Preclinical research to test drugs in animals suffers from a "substantial" risk of bias because of poor study design, even when the work is published in the most acclaimed journals or done at top-tier institutions, an analysis of thousands of papers suggests.
Scientists can take basic steps to avoid possible biases in such experiments, says Malcolm Macleod, a stroke researcher and trial-design expert at the University of Edinburgh, UK. These include randomizing the assignment of animals to the trial’s treatment or control arm; calculating how large the sample needs to be to produce a statistically robust result; ‘blinding’ investigators as to which animals were assigned to which treatment until the end of the study; and producing a conflict-of-interest (COI) statement.
“Nobody in science should not be doing this stuff,” Macleod told journalists at a press conference in London.
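To make those steps concrete, here is a minimal Python sketch (not code from the study; the function names, the one-standard-deviation effect size and the coded group labels are all illustrative assumptions). It calculates a per-group sample size from the standard normal approximation, randomizes animals into balanced arms, and blinds the arm labels behind neutral codes:

```python
# A minimal sketch, not code from the study: sample-size calculation,
# randomized allocation and blinding for a two-arm animal experiment.
# All names and numbers here are illustrative assumptions.
import math
import random
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.8):
    """n per group for a two-sample comparison of means, using the
    standard normal approximation (effect_size is Cohen's d)."""
    z = NormalDist().inv_cdf
    return math.ceil(2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2)

def randomize(animal_ids, arms=("treatment", "control"), seed=None):
    """Shuffle the animals, then deal them round-robin into balanced arms."""
    rng = random.Random(seed)
    ids = list(animal_ids)
    rng.shuffle(ids)
    return {arm: ids[i::len(arms)] for i, arm in enumerate(arms)}

def blind(allocation, seed=None):
    """Hide arm labels behind neutral codes; the key stays sealed until analysis."""
    rng = random.Random(seed)
    codes = [f"group-{c}" for c in "ABCDEFGH"[:len(allocation)]]
    rng.shuffle(codes)
    key = dict(zip(codes, allocation))        # the unblinding key, held back
    return {code: allocation[arm] for code, arm in key.items()}, key

n = sample_size_per_group(effect_size=1.0)    # 16 per group for d = 1
animals = [f"rat-{i:02d}" for i in range(2 * n)]
blinded, key = blind(randomize(animals, seed=1), seed=2)
print(n, {code: len(ids) for code, ids in blinded.items()})
```

Experimenters and analysts see only the neutral codes; the key is consulted only after the primary analysis is fixed.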
But many published papers make no mention of these methods, according to an analysis that Macleod conducted with Emily Sena, also at the University of Edinburgh, and other colleagues [1]. Looking at 2,671 papers from 1992 to 2011 that reported trials in animals, the team found randomization reported in 25%, blinding in 30%, sample-size calculations in fewer than 1% and COI statements in 12%. The papers were not selected at random; they had been included in meta-analyses of experimental disease treatments. Later studies reported randomization, blinding and COI statements at higher rates than did earlier ones, but the rates never exceeded 45%. “We could clearly be doing a lot better,” Macleod says.
[1] Macleod, M. R. et al. PLoS Biol. 10.1371/journal.pbio.1002273 (2015).
Hard to predict
The most-cited scientific journals don't necessarily publish papers with more robust methods, Macleod adds. In fact, in 2011, the median impact factor of the publishing journal was lower for studies that reported randomization than for those that did not.
The researchers also looked at papers submitted by leading UK institutions to a national research-quality audit. They found that work done at the University of Oxford, the University of Cambridge, University College London, Imperial College London and the University of Edinburgh reported randomization only 14% of the time, and blinding only 17% of the time where it would have been appropriate. Of more than 1,000 publications, only one reported all four bias-reducing measures.
“You can’t rely on where the work was done or where it was published,” says Macleod.
“Although sobering, the findings of this paper are not a surprise, as they add to the existing body of evidence on the need for more rigorous assessments of the experimental design and methodology used in animal research. This is another wake-up call for the scientific community,” said Vicky Robinson, chief executive of the London-based National Centre for the Replacement, Refinement and Reduction of Animals in Research, in a statement distributed by the UK Science Media Centre.
Skewed reporting
A separate analysis, also published today, shows that animal-based research on the cancer drug sunitinib is plagued by poor study design [2]. A team at McGill University in Montreal, Canada, analysed the design of 158 published preclinical experiments: none reported blinding or sample-size calculations, and only 58 reported randomization. The researchers also found that publication was skewed towards positive results, so much so that the team believes published studies overestimate the effect of sunitinib on cancer by 45%.
Jonathan Kimmelman, the biomedical ethicist who led the work, says that journal editors, referees, institutions and researchers must all take responsibility for the poor quality of reporting and the consequent risk of bias. “There’s plenty of blame to go round,” he says.
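The mechanism behind such an overestimate is easy to reproduce. The toy simulation below is not the McGill team's meta-analytic method, and its numbers (a true effect of 0.5 standard deviations, ten animals per arm) are invented; it simply "publishes" only studies whose observed effect clears a crude significance threshold, and the published average lands well above the truth:

```python
# A toy simulation, not the McGill team's analysis: when only statistically
# significant positive results are published, the published record
# overestimates the true effect. All numbers are illustrative assumptions.
import math
import random
from statistics import mean

def one_study(true_effect, n=10, rng=random):
    """One two-arm study; returns the observed difference in group means."""
    treated = [rng.gauss(true_effect, 1.0) for _ in range(n)]
    control = [rng.gauss(0.0, 1.0) for _ in range(n)]
    return mean(treated) - mean(control)

random.seed(0)
true_effect = 0.5                                  # in standard-deviation units
se = math.sqrt(2 / 10)                             # SE of the difference, sd = 1
studies = [one_study(true_effect) for _ in range(10_000)]
published = [d for d in studies if d > 1.96 * se]  # only 'significant' positives

print(f"true effect:            {true_effect:.2f}")
print(f"mean of all studies:    {mean(studies):.2f}")    # close to the truth
print(f"mean of published ones: {mean(published):.2f}")  # badly inflated
```

In this setup only about a fifth of the simulated studies reach print, and their average effect is more than double the true one; the direction of the distortion, if not its exact size, matches what the sunitinib analysis reports.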
But journals have made efforts in the past few years to address the problem. In 2010, researchers published the ARRIVE guidelines for reporting animal research [3], which many journals, including Cell, Nature and Science Translational Medicine, have since endorsed. And Philip Campbell, editor-in-chief of Nature, notes that since 2013 the journal has asked authors of life-sciences articles to include details of experimental and analytical design in their papers and, during peer review, to complete a checklist covering often poorly reported methods such as sample size, randomization and blinding.
