Figure 1. Proper validation of antibodies and detailed record keeping of their use will improve experimental reproducibility.

Academic researchers work on average more than 60 hours per week, according to an Inside Higher Ed survey (Flaherty 2014). Why? Many scientists feel the sacrifice is worth it, as they devote their lives to an overwhelmingly important and meaningful human pursuit.

Why, then, would researchers jeopardize the legitimacy of their work with potentially irreproducible results?

In the early 2000s, a team of scientists at the biotech giant Amgen set out to replicate the findings of 53 “landmark” cancer studies. They arrived at a troubling conclusion: the key findings of only six of the 53 papers (11 percent) could be replicated (Begley & Ellis 2012). One of the authors of that study, C. Glenn Begley, the chief scientific officer at TetraLogic, a clinical-stage biopharmaceutical company developing treatments for cancer and infectious diseases, notes that a primary reason for the irreproducible results was the unreliability of the antibodies used in the studies.

Antibodies are a central reagent in many biomedical techniques, including western blotting, flow cytometry, immunoprecipitation, immunohistochemistry and more. However, despite their ubiquity, studies continue to show that commercial antibodies routinely bind to the wrong targets or multiple targets, delivering unpredictable results (Michel, et al. 2009; Egelhofer, et al. 2011; Berglund, et al. 2008).

These kinds of inconsistencies do not affect FDA-approved drugs, but there is still a significant price to be paid for the current research community standards. “What I can’t estimate is the opportunity cost,” Begley says. “It has caused industry to go down many dead ends and invest huge resources that could have been better placed elsewhere. We might have discovered something that could have helped patients.” Scientists using poorly performing antibodies without validation stand to lose substantially as well. Retractions of published work as a result of irreproducible data can cast doubt on a researcher’s entire career.

Figure 2. A set of guidelines for appropriate antibody use and reporting.

The status quo in academia reveals a difficult trade-off all researchers face. Productivity is most commonly measured by the number of original research papers and the impact factor of the journals in which scientists publish. Although this paradigm includes no concrete metric for quality, it is typically the sole measure used to demonstrate success, an inappropriate situation that lies beyond the responsibility of the individual researcher (Begley, et al. 2015). This culture unintentionally discourages researchers from allocating resources to systematically validate antibodies before drawing significant biological conclusions from data generated with these reagents.

Begley says journal specifications often contribute to this ‘publish or perish’ culture, even if unwittingly so. For example, many journals allow researchers to submit cropped western blot images, highlighting the region of interest but obscuring the complete findings. Often positive and negative controls are lacking (Begley 2013), and triplicate western blots are seldom shown. Thus hidden from scrutiny, inconsistent western blot data can more easily pass peer review. This modus operandi can have a domino effect: the next crop of researchers sees those reagents as ‘legitimate’ and ‘proven’ and uses them in their own studies, thus propagating the error.

One particularly egregious example was the case of the erythropoietin receptor (Epo-R). The receptor was initially shown to be expressed primarily in erythroid progenitor cells, but researchers later claimed to have found it on other cell types as well. It wasn’t until much later that the findings from other cell types were shown to be nothing more than artifacts of non-specific antibodies (Elliott, et al. 2006). Inclusion of proper controls could have flagged the results as erroneous, but these were rarely utilized. Ultimately this had significant impact, leading to failed preclinical and, more importantly, even failed clinical studies (Endre, et al. 2010).

To improve reproducibility and ensure reliability of their antibodies, academic labs need look no further than small biotech companies. Like academic labs, small biotechs are often strapped for resources. Still, they place great emphasis on accuracy and reproducibility because no biotech wants to take a drug to clinical trial based on irreproducible results. Begley says that despite the time and cost of antibody validation, that price is “less than the cost of going forward with faulty results.” For the greatest chance at success, they need to ‘fail fast, and fail cheaply.’

What exactly do these companies do that academic labs can adopt to ensure that they are spending valuable time at the bench generating data which can be confidently used to understand critical biological processes?

Validate antibodies economically

Many antibody manufacturers now sell sample sizes that allow researchers to run tests before they commit to an entire vial, eliminating the need to buy multiple products. Some manufacturers even offer antibodies that are extensively pre-validated, like the PrecisionAb Antibody product line from Bio-Rad Laboratories, which are tested on up to 12 biologically relevant cell lines expressing the target proteins at endogenous levels. However, even with this new industry standard, researchers must always validate every new batch they receive. Another simple procedure is to cross-check key results with an antibody from a different manufacturer.

Reduce researcher bias

Human error and bias influence data interpretation. Even the most objective researchers are capable of unintentionally overlooking evidence that is contrary to their hypotheses. Two approaches can mitigate this bias. First, blinded studies can be conducted such that the sample identities are hidden from the researcher until after the experiment is completed and the data are analyzed. These are very manageable: the only additional time needed is for a second researcher to label the samples with coded names. Second, multiple researchers can repeat the same experiment, showing the results are robust and reproducible in-house. Establishing this solid platform goes a long way toward helping the scientist retain confidence later if conflicting data are presented by other groups who have not followed this path.
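For labs that track samples electronically, the coding step can be scripted. The sketch below (Python) is one minimal way to do it; the function name, code format, and key file are illustrative assumptions, not a protocol from the article. The second researcher runs it, keeps the key file, and shares only the coded names with the analyst.

```python
import csv
import random

def blind_samples(sample_ids, key_path="blinding_key.csv", seed=None):
    """Assign shuffled codes to samples and save the key for later unblinding.

    The analyst sees only the returned codes; the key file stays with the
    researcher who did the labeling until analysis is complete.
    """
    rng = random.Random(seed)
    codes = [f"S{i:03d}" for i in range(1, len(sample_ids) + 1)]
    rng.shuffle(codes)
    key = dict(zip(codes, sample_ids))
    with open(key_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["code", "sample_id"])
        for code, sample in key.items():
            writer.writerow([code, sample])
    return list(key)  # coded names only, in shuffled order

# Hypothetical usage by the labeling researcher:
coded = blind_samples(["WT_untreated", "WT_drug", "KO_untreated", "KO_drug"])
```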

Report complete experimental details

Diligently keeping a detailed record of the antibodies (catalog number, lot number and date opened) used for experiments is vital to guaranteeing accuracy and reproducibility. Antibodies are subject to lot-to-lot variability, and each new batch of antibodies must be validated to ensure efficacy.
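For labs that keep their logs electronically, the record described above maps naturally onto a small structured entry. The Python sketch below is one possible shape for such a log entry; the class, field names, and example values are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AntibodyRecord:
    """One log entry per vial; field names are illustrative, not a standard."""
    target: str
    vendor: str
    catalog_number: str
    lot_number: str
    date_opened: date
    validated: bool = False   # flip to True once this lot passes in-house checks
    notes: str = ""

    def needs_validation(self, validated_lots):
        # A lot number not seen before may behave differently
        # (lot-to-lot variability), so re-validate before trusting its data.
        return self.lot_number not in validated_lots

# A hypothetical entry for the lab's log (all values are placeholders)
ab = AntibodyRecord(
    target="Epo-R",
    vendor="ExampleVendor",
    catalog_number="AB-1234",
    lot_number="LOT-042",
    date_opened=date(2016, 3, 1),
)
```

Keeping the log in a structured form like this makes it trivial to check whether a newly opened vial carries a lot number the lab has already validated.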

In addition to internal record keeping, academic researchers should report the details regarding their methods and maintain a philosophy of complete transparency about their data while publishing. In an age of electronic publications and supplementary information, researchers can submit huge data sets, communicating that their research is thorough and allowing other scientists to evaluate their work should they need to. Begley encourages researchers to include everything: the entire image of the western blot, blinded controls, and raw data where applicable. In his words, “If the gel is clean why not show it all?” In fact, new discoveries can come from knowing whether there are higher molecular weight forms of the protein during certain treatments, reflecting changes in regulatory mechanisms.

At a time when the quantity of published papers is used as a benchmark for scientific accomplishment, making time to validate reagents, conduct blinded studies, and report complete materials and methods may seem revolutionary and impractical. But as Begley points out, it is a noble cause: “The [researchers] that start doing this first will be trendsetters for the rest.”

Academic researchers make enormous sacrifices to create new knowledge. To undermine their efforts with unreliable antibodies and low quality data is a colossal waste. Fortunately, it is avoidable if we all pledge to do the right thing. The rewards may not appear to be immediate, but the extra time and money spent selecting and validating top-performing antibodies and repeating experiments is a long-term investment for a fruitful career and the greater good of science.