For much of the last decade, psychologists have been debating whether the field faces a replication crisis. In 2015, a widely publicized attempt to replicate 100 studies from three top social and cognitive psychology journals was able to reproduce the results of less than 40 percent of them (Science, Vol. 349, No. 6251). Other replication studies have cast doubt on once-established ideas, including ego depletion and behavioral priming.

These papers have sparked controversy over their methods and over whether large-scale replication studies are a worthwhile use of time and resources. But while the debate continues over how much of the research canon will hold up to scrutiny, the psychologists spearheading the reproducibility movement want to focus on a forward-looking task as well—changing the way psychological research is done to encourage more researchers to share data and research methods, and to replicate research as a matter of course, rather than as a one-off project.

"This year, we've been focused on developing technologies to support more open and reproducible research and on changing the culture of incentives so that it's in researchers' best interest to be open and reproducible," says Brian Nosek, PhD, a psychology professor at the University of Virginia and executive director of the Center for Open Science in Charlottesville, Virginia.

The center is not concerned with psychology alone—it aims to encourage open science in all fields, many of which face similar reproducibility questions. In January, for instance, the center released the first results from a cancer biology replication project, finding that researchers were able to "substantially replicate" the results of only two of five pre-clinical cancer biology studies they sought to reproduce.

Indeed, across scientific fields, researchers are being urged to share their data, methods, computer code and other research products. The aim is not only to address replication questions, but also to help other scientists gain a fuller understanding of published studies and encourage research progress more broadly.

A 2013 memo from the White House Office of Science and Technology Policy directed federal research agencies to develop plans to make the published results of the research they fund freely available to the public within 12 months of publication, and to better account for and manage research data. NIH, for example, requires its grantees to deposit all published research papers in its PubMed Central database.

Now, the agency is also encouraging its grantees to make results publicly available more quickly through preprints—research papers that have not yet undergone peer review—and has announced that applicants can cite preprints in new grant proposals.

Still, the data underlying most published research—in psychology and other fields—are not shared with the public or other researchers, and many structural and cultural barriers remain. In July, the National Academy of Sciences convened a "Toward an Open Science Enterprise" task force to examine those barriers and make policy recommendations for federal science funding agencies.

The Center for Open Science's growth reflects this mounting interest in its work. Over the past four years, it has expanded from a two-person operation to a staff of 60 with an $8 million operating budget. Its signature product is the Open Science Framework (OSF), software used by 50,000 scientists to upload—and share as they choose—their research methods, data and analyses. APA is now creating an OSF-hosted repository to store and archive data, protocols and materials from studies published in APA journals.

The center is also championing several initiatives to reward scientists who use open and reproducible research practices. APA's Office of Publications and Databases, for example, recently signed on to the center's "badge" system, which journals can use to identify studies that include open data or materials. Studies can also earn a badge for having a preregistered design, in which researchers specify their plan for analyzing the data before collecting them (reducing the possibility that they will make inappropriate analytic choices to achieve statistically significant results).

While the badge system might seem like a small incentive, a 2016 study found it effective: After Psychological Science introduced badges, the share of published studies with open data rose from 3 percent to 39 percent in one year (PLOS Biology, Vol. 14, No. 5).

The center is also running a "preregistration challenge" that awards researchers $1,000 for publishing their first preregistered study, and it has developed guidelines for journals, publishers and researchers interested in adopting these practices.

"The biggest challenge is that people want to change their research practices, but open science practices are harder and slower, and researchers worry about whether systems will reward these harder and slower practices," says Simine Vazire, PhD, an associate professor at the University of California, Davis. For example, preregistering a study means that researchers must decide on all research and analysis methods, and write them up in detail, before beginning the study. Sharing data might involve learning to use new software that allows it. "We want to make sure that the incentives line up," Vazire says.

Of course, this work is not without its challenges and critics. Some psychologists believe, for example, that preregistration can impede the creativity and freedom researchers need to make scientific progress, because it removes the flexibility to adjust analysis methods or pursue new research questions over the course of a study. But advocates counter that researchers can still conduct exploratory analyses, as long as they make clear which analyses are exploratory and which test previously formulated hypotheses.

And, critics say, journals that encourage data sharing must recognize that, for privacy and other reasons, not all data can be open—and must not penalize researchers who work with such data.

"My goal is to have a vigorous open discussion of these pros and cons, and find some middle ground," Vazire says. "Our critics can help us find where our proposed changes have unintended consequences and improve them."