Usefulness for internal research?
Want research integrity? Stop the blame game
Helping every scientist to improve is more effective than ferreting out a few frauds.
Malcolm Macleod
Most scientists reading this probably assume that their research-integrity office has nothing to do with them. It deals with people who cheat, right? Well, it’s not that simple: cheaters are relatively rare, but plenty of people produce imperfect, imprecise or uninterpretable results. If the quality of every scientist’s work could be made just a little better, then the aggregate impact on research integrity would be enormous.
How institutions can encourage broad, incremental improvements is what I have been working to figure out. Two things are needed: a collective shift in mindset, and a move towards appropriate measurement.
Over the past 2 years, some 20 institutions in the United Kingdom have joined the UK Reproducibility Network (UKRN), a consortium that promotes best practice in research. They have created senior administrative roles to improve research and research integrity. I have taken on this job (on top of my research on evaluating stroke treatments) at the University of Edinburgh. Since then, I’ve focused on research improvement rather than researcher accountability. Of course, deliberate fraud should be punished, but a focus on investigating individuals will discourage people from acknowledging mistakes, and mean that opportunities for systems to improve are neglected.
At the University of Edinburgh, audits are part of projects to shrink bias in animal research, speed up publication and improve clinical-trial reporting. These are not the metrics that most researchers are used to. Many people are initially wary of yet another ‘external imposition’, but when they see that it promotes our own community’s standards, with no extra forms to fill in, they usually welcome this shift in institutional focus.
Here’s what we are learning to look for at my university.
Integrity indicators. Counting papers published in Science or Nature or prizes received is a poor reflection of performance. Measures should reflect the integrity of research claims: for instance, the proportion of quantitative studies that also publish data and code, and that pre-register their hypothesis, study design and analysis plan. At the University of Edinburgh, we are focusing on the reporting of randomization and blinding in published animal studies that test biomedical hypotheses. Existing tools can be applied to such tasks. The DOIs of publications that match a series of ORCIDs (author IDs) can be identified, the open-access status ascertained through the Unpaywall database, and these details can be linked back to institutions, departments or even individual research groups.
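The DOI-to-open-access linkage described above can be sketched in a few lines. This is a minimal illustration, not the university's actual pipeline: it assumes the public Unpaywall v2 REST API (which takes a DOI plus a contact e-mail and returns JSON including an `is_oa` field), and the helper names, the `department` tag and the sample records are hypothetical placeholders for whatever an institution would attach from its ORCID-matched publication list.

```python
# Illustrative sketch: build Unpaywall queries for ORCID-matched DOIs and
# tally open-access status per department. The sample data are invented;
# a real pipeline would fetch live responses for each DOI.

def unpaywall_url(doi: str, email: str) -> str:
    """Build the Unpaywall v2 query URL for one DOI.
    Unpaywall requires a contact e-mail as a query parameter."""
    return f"https://api.unpaywall.org/v2/{doi}?email={email}"

def summarise_oa(records: list[dict]) -> dict:
    """Count open-access papers per department, given parsed
    Unpaywall-style responses tagged with a department name."""
    counts: dict[str, dict[str, int]] = {}
    for rec in records:
        dept = rec.get("department", "unknown")
        bucket = counts.setdefault(dept, {"oa": 0, "total": 0})
        bucket["total"] += 1
        if rec.get("is_oa"):
            bucket["oa"] += 1
    return counts

# Mocked responses, so the sketch runs without a network call:
sample = [
    {"doi": "10.1000/example1", "is_oa": True, "department": "neurology"},
    {"doi": "10.1000/example2", "is_oa": False, "department": "neurology"},
]
print(unpaywall_url("10.1000/example1", "integrity-office@example.ac.uk"))
print(summarise_oa(sample))
```

The same aggregation step works for any per-paper indicator (pre-registration, data and code availability), which is what makes a single reporting tool reusable across institutions.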
I care more about how my institution is doing compared with last year than about how it performs relative to other organizations. That said, benchmarking can be useful — and working with other organizations can help to develop standard reporting tools without reinventing the wheel.
Evidence of impact. Having data in hand allows an institution to focus on what can be improved, and how. In 2019, only 55% of Edinburgh clinical trials were fully reported on the European Union Clinical Trials Register. Programmes to reach trial organizers (by e-mailing reminders and mentoring them through the process) increased this to 95% in 2021. To build on that, I am working with members of UKRN and others to develop institutional dashboards that will provide real-time data across a range of measures, such as clinical-trial reporting and the quality and timeliness of animal-research reporting.