Data Reproducibility in Preclinical Research and Discovery

By Catherine Tralau-Stewart, PhD, Head of Therapeutics for CTSI’s Catalyst Program (Commentary and excerpts from panel discussion)

Headlines in the scientific and mainstream press are drawing attention to a significant but complex issue in how scientists conduct research and how public funds are used to pay for it.

The problem is irreproducibility: preclinical studies whose results fail to be reproduced by independent teams.

Indeed, Leonard Freedman, PhD, president of the Global Biological Standards Institute, claimed in PLOS that half of the $50 billion spent on preclinical life science research is wasted because the work is not reproducible. In 2011, researchers at Bayer suggested that only a third of preclinical academic studies were reproducible.

As a result, an increasing number of scientists, and others in the research community, have continued to ask, “Is this a real issue, and if so, what can be done to address it?” Today there is growing urgency to this call.

Adding a voice to the discussion, a recent panel, “Data Reproducibility in Preclinical Discovery,” at UCSF tackled the topic at an event organized by Catalyst, a program of the Clinical and Translational Science Institute (CTSI). As head of the Catalyst therapeutics track and a drug discovery pharmacologist, I facilitated the conversation, which included a number of experts in the field. I’m also an advisor to the Reproducibility Initiative, one of several groups focusing on the issue.

The timing was motivated, in part, by a recent comment from microbiologist and panelist Parker Antin, PhD, fresh from budget and policy meetings on Capitol Hill. “This is the year of reproducibility; this is it,” he said. Antin, a professor at the University of Arizona, is the new president of the Federation of American Societies for Experimental Biology (FASEB), a major policy group representing more than 125,000 researchers.

(L to R): Keith Yamamoto; John Ioannidis; Parker Antin; Elizabeth Iorns; Amanda Halford (Lawrence Tabak from the NIH joined the panel via live video)

In addition to Antin, panelists at the Data Reproducibility in Preclinical Discovery symposium included Keith Yamamoto, PhD, UCSF vice chancellor for Research; John Ioannidis, MD, DSc, professor at Stanford University; Lawrence Tabak, DDS, PhD, deputy director of the National Institutes of Health (NIH); Elizabeth Iorns, PhD, co-founder and CEO of Science Exchange and founder of the Reproducibility Initiative; and Amanda Halford, MBA, vice president of Academic Research at Sigma-Aldrich. [See video]

Sensitive issue at the core of the scientific process

The issue is complicated and sensitive, the group agreed. Some researchers see it as an attack on how science has always been performed.

It goes to the core of how we do science and how we see our contribution and life’s work. However, we need to work through these sensitivities. People are going to have to address this in their next publications and funding applications. The point is, ignoring this issue is not an option.

I want to be clear that no one is suggesting that academic fraud is a substantial part of the problem. It’s about how we do science and how we can do it better.

Being able to independently reproduce data is the gold standard for robust and meaningful research results.  However, opinions vary on the precise definition of reproducibility, as well as on the extent and significance of the problem of irreproducibility.

The majority of reproducibility studies do not directly replicate the initial study but conduct experiments in related systems. “Mostly people validate the observations in other systems. This makes the interpretation difficult,” said Iorns of the Reproducibility Initiative, which aims to directly replicate observations.

Many studies are never reproduced, and are instead assumed to provide a solid foundation for further work, including preclinical drug research and development, which has an unsustainably high failure rate once it reaches the clinic. One may question whether poor reproducibility in early research is a key component of these failures.

A costly problem in more ways than financial

All of the panelists agreed irreproducibility is a genuine problem. They also agreed it’s difficult to know whether addressing it will result in greater success in research translation. The group acknowledged that the financial impacts are significant, but that focusing too closely on specific dollar estimates isn’t productive, since the true cost is difficult to measure.

(L to R): Keith Yamamoto, PhD, UCSF vice chancellor for Research; John Ioannidis, MD, DSc, professor at Stanford University; Parker Antin, PhD, president, Federation of American Societies for Experimental Biology (FASEB)

The most pressing concern is “delivering for patients,” said Tabak of the NIH. “Think of the scenario where we launch a first-time-in-human study based upon a flawed animal model” and face “the untenable circumstance of exposing patients to approaches which at best have no effect, but at worst could have untoward effects. This is unacceptable. The point is to get this right.”

“The lack of public trust in science is fundamental to how we fund science and we must respond robustly to this issue,” added Antin.

The NIH, currently developing its new five-year strategic plan, is taking several steps to address the issue, including new grant requirements to provide specific information on a study’s rigor and reproducibility. [See the NIH FAQs.] Slated to take effect in 2016, the requirements include a strong emphasis on training in experimental design, analysis and reproducibility.

NIH is also convening workshops and roundtables to identify solutions. This includes a 2014 gathering of the editors of leading scientific publications, most of whom endorsed a set of principles and guidelines for reporting preclinical research covering such topics as transparency, data and material sharing, and rigorous statistical analysis.

Identifying causes and solutions

At the UCSF discussion, a number of causes were identified, as well as potential solutions.

Potential causes:

  • The intense pressure to publish, which biases investigators toward dramatic, earth-shattering results that are more likely to attract journals.
  • The lack of incentives, rewards or venues to publish negative results.
  • Great progress in science and technology enables researchers to tackle complex problems in complex systems. “Such problems and systems include untold numbers of unidentified, and therefore uncontrolled, variables, which greatly complicates reproducibility,” said Yamamoto.
  • Early training in rigorous scientific methodology, study design, sample size estimations, study blinding, statistical analysis and replication has decreased. “Lack of training is a key issue,” Tabak said.
  • A vast array of available reagents spanning a range of prices and quality, purchased in an environment of budgetary pressure. These include cell lines and molecular tools that researchers have found, after expensive preclinical and even clinical research, not to be what was required.
  • Decades of budget cuts and decreased funding, which translate to fewer staff and reduced research expenses.

Potential solutions:

  • Publishers and funding agencies are key to implementing solutions.
  • “We need more training in statistical methods and design,” said Ioannidis. “Our data suggests that randomization and blinding are only employed in ~20% of studies. This is very low.” (A back-of-the-envelope illustration of such design work follows this list.)
  • Increasing transparency by reporting methods at a level of detail that enables reproducibility.
  • Improving oversight and monitoring of reagent quality. Biological reagents in particular are highly variable. “We need to ensure traceable quality of production, batch-to-batch consistency and quality control. Reagent suppliers should help researchers ensure the right quality reagents are used for different phases of research,” said Halford.
  • Talking about the issue in labs. “Principal investigators are obligated to have a conversation on this issue with their team to set expectations,” Tabak said.
  • Exploring the potential of building reproducibility into study designs and the research process.
  • Sharing best practices, such as using electronic lab notebooks, which are standard in most industry labs. “These structure data, data types and replication details but are rarely used in academia,” said Halford.
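
To make the training and study-design points concrete, here is a minimal sketch of the kind of up-front sample-size estimation the panelists advocate. It uses only the Python standard library and a textbook normal approximation for a two-group comparison; the effect sizes, significance level, and power used here are illustrative assumptions, not figures discussed by the panel.

    # Minimal sketch of up-front sample-size estimation (illustrative values,
    # not figures from the panel). Requires Python 3.8+ for statistics.NormalDist.
    from math import ceil
    from statistics import NormalDist

    def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
        """Approximate per-group n for a two-sided, two-sample comparison.

        Uses the standard normal approximation:
            n = 2 * ((z_{1 - alpha/2} + z_{power}) / d) ** 2
        where d is Cohen's d (difference in means divided by the pooled SD).
        """
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
        z_power = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
        return ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

    # A "medium" effect (d = 0.5) needs ~63 subjects per group under this
    # approximation; even a "large" effect (d = 0.8) still needs ~25 per group.
    print(n_per_group(0.5))  # 63
    print(n_per_group(0.8))  # 25

Doing this arithmetic before an experiment, rather than rationalizing the sample size afterward, is exactly the kind of design rigor the training discussion calls for; exact t-based calculations give slightly larger numbers.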

Creativity, funding

Yamamoto put out a plea to the group to make sure new tools or guidelines don’t squelch the creativity that’s the crux of scientific inquiry, and that they take into account the diversity of fields within science.

“I would hope we don’t install rules and regulations that are so stringent that we discourage the kinds of free thinkers that the field depends upon,” he said.

The panelists all expressed concern about the funding needed to reproduce studies as well as to instigate changes, especially given that academic labs are already operating lean. “There really does need to be some allocation of funding for reproducibility studies. It’s basically impossible to get funding,” said Iorns.

This gives a flavor of the discussion; I invite you to watch the video for more detail. I’m pleased the conversation has launched at UCSF. Exploring the issue of reproducibility, and brainstorming how to prevent irreproducible studies from getting through the gates of publication, must be a priority for the scientific community.

Note: The panel discussion on September 17, 2015, at UCSF was supported by sponsors Sigma-Aldrich and EMD Millipore.