Nature publishes global study on trust in science involving Heriot-Watt academic

Research examining how reliable social science research findings really are, led by an international team that includes a Heriot-Watt University academic, has been published in a leading global research journal.
The study is one of the largest investigations ever carried out into research credibility across the social and behavioural sciences and includes contributions from Dr Kelly Wolfe, Assistant Professor in Psychology.
The findings come from the Systematizing Confidence in Open Research and Evidence (SCORE) programme, a large international collaboration involving 865 researchers.
The research has now been published in Nature, the world’s leading multidisciplinary science journal, as a collection of three linked papers alongside the release of five additional preprints.
Dr Kelly Wolfe, Assistant Professor in the Department of Psychology in the School of Social Sciences at Heriot-Watt University, said: “The SCORE programme was about asking a simple but important question: how confident can we really be in published research findings?
“We looked at what happens when studies are checked again, tested in different ways, or repeated using new data, and whether people or computer models can predict which findings will hold up.
“What we found shows that research evidence can look very different depending on how it is examined, which is why openness, transparency and careful interpretation really matter.”
The team examined claims from 3,900 academic papers published between 2009 and 2018 in 62 journals.
Across the studies examined, around half of published findings were successfully replicated when the same research questions were tested using new data.
Where studies did replicate, the effects observed were often much smaller than those originally reported.
The research showed that credibility depends on how findings are tested: reproducibility, robustness and replicability each capture a different aspect of reliability, so results can vary with the approach used.
The papers covered a wide range of subjects, including criminology, economics, education, finance, health, management, marketing, psychology, political science, public administration and sociology.
Dr Wolfe added: “Our findings show that research credibility is complex and cannot be captured by a single measure.
“In this project, claims were tested in several different ways, including reanalysing original data, applying alternative analytical approaches and attempting to repeat studies using new datasets.
“What we saw is that reproducibility, robustness and replicability each tell us something different about how reliable a finding is, and results can change depending on how they are assessed.
“We hope that this kind of large-scale evidence helps researchers, funders and policymakers make better-informed decisions about which findings to trust, build on and invest in, while also improving understanding of how research evidence is tested and interpreted, supporting stronger and more trustworthy research across the social sciences.”
Funded by the US Defense Advanced Research Projects Agency, the programme was coordinated by the Center for Open Science.
Human expert assessments were carried out by the repliCATS project and Replication Markets, while teams at Pennsylvania State University, TwoSix Technologies and the University of Southern California developed computer-based approaches to predicting whether research findings would replicate.
Robustness testing was led by the Metascience Lab at Eötvös Loránd University in Hungary.
To read the research in full, visit Nature: https://www.nature.com/articles/s41586-025-10078-y
To learn more about SCORE, visit: https://cos.io/score/