WORLD VIEW
Is my study useless? Why researchers need methodological review boards

Making researchers account for their methods before data collection is a long-overdue step.
Should researchers have the freedom to perform research that is a waste of time? Currently, the answer is a resounding ‘yes’. Or at least, no one stops to ask whether there are obvious methodological and statistical flaws in a proposed study that will make it useless from the get-go: a sample size that’s simply too small to test a hypothesis, for example.
In my role as chair of the central ethical review board at Eindhoven University of Technology in the Netherlands, I’ve lost count of the number of times that a board member has remarked that, although we’re not supposed to comment on non-ethical issues, the way a study has been designed means it won’t yield any informative data. And yet we routinely wait until peer review — after the study has been done — to identify flaws that can’t then be corrected.
In my own department at Eindhoven, we’ve been trialling a different approach. Five years ago, we instituted a local review board that also evaluates proposed methods. Although some colleagues found this extra hurdle frustrating at first, the improvements in study quality have led them to accept it. It’s time to make dedicated methodological review boards a standard feature at universities and other research institutions, as institutional review boards are.

Methods are already reviewed for some medical trials, animal studies and grant applications, and at some institutes around the world. In stage-one peer review of a registered report, for example, or in peer review of a clinical-trial protocol, reviewers comment on the study design before data collection. If a study has already passed such hurdles, a methodological review board need not assess it again. That said, there are signs that the existing system needs tightening up. The journal Trials has used clinical-trial protocol reviewers since September 2019, for example: they have found items of methodological information missing from up to 56% of protocols (R. Qureshi et al. Trials 23, 359; 2022), omissions that normal peer review had not flagged.
To be clear, I do not propose that reviewers debate matters such as frequentist versus Bayesian philosophies of statistics. Instead, the focus should be on basic design flaws that cannot be corrected after data collection, with the goal of ensuring that the data can inform the statistical hypothesis being tested. For one thing, reviewers could check that researchers will collect sufficient data and be able to make the causal inferences they desire (by ensuring that the sample is representative of the target population, for example). Reviewers could deter researchers from performing too many exploratory analyses while reporting only some of them, and help to plan studies that will yield informative results even if the hypothesized effect is absent. They should also check that researchers follow disciplinary reporting guidelines where available.
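
To make the sufficient-data check concrete, here is a minimal sketch of the kind of sample-size justification a reviewer might ask to see, assuming Python with the statsmodels library. The smallest effect size of interest, the alpha level and the power target are illustrative assumptions, not values from any particular study.

```python
# A minimal sketch of an explicit sample-size justification of the kind
# a methodological reviewer could check before data collection. The
# numbers below are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

smallest_effect = 0.4   # smallest effect size of interest (Cohen's d)
alpha = 0.05            # long-run false-positive rate
power = 0.90            # desired probability of detecting the effect

n_per_group = TTestIndPower().solve_power(
    effect_size=smallest_effect, alpha=alpha, power=power,
    alternative='two-sided',
)
print(f"Required participants per group: {n_per_group:.0f}")
# With these assumptions the requirement is roughly 132 per group, so a
# proposal to test the same hypothesis with, say, 20 per group could be
# flagged as uninformative before any data are collected.
```

The point is not these particular numbers, but that the calculation is written down and checkable before the study runs, rather than after it is too late to fix.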
Critics might worry that methodological reviewers will abuse their power and prohibit certain contentious methods or manipulations — for example, the implicit association test. (Many critics say the test does not measure implicit associations at all.) But the methodological review I propose is not about whether measures and manipulations are valid. Discussions about possible confounding variables and bad measures are too complex and, in my view, are best resolved in the literature.

The most contentious issue in methodological review is that boards have the power, in principle, to bar a proposed study from proceeding. Since we introduced methodological reviews as part of the ethics review process in our department, this has never happened. The methods can usually be adjusted to fit the question, or the question rephrased to fit the methods. Over time, we have asked colleagues to describe their study designs and analysis methods in greater detail, which has led to more clearly specified statistical hypotheses. People have also increasingly used sequential analyses, an efficient way to collect data in which interim analyses at predetermined sample sizes determine whether the data are now sufficient, because writing a sample-size justification makes them realize they are uncertain about the sample size they need.
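
To show how such a sequential design behaves, here is a minimal simulation sketch, assuming Python with NumPy and SciPy. The three equally spaced looks, the true effect size and the Pocock-corrected per-look threshold (roughly 0.0221 for three looks at an overall alpha of 0.05) are illustrative assumptions; other corrections for repeated looks are equally valid.

```python
# A minimal sketch of a sequential design with three equally spaced
# interim looks. The per-look threshold of 0.0221 is the Pocock-corrected
# level that keeps the overall false-positive rate near 0.05 for three
# looks; sample sizes and effect size are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2023)
looks = [30, 60, 90]        # predetermined sample sizes per group
alpha_per_look = 0.0221     # Pocock boundary, 3 looks, overall alpha 0.05

def run_sequential_study(effect=0.0):
    """Collect data in stages; stop as soon as an interim test is significant."""
    a = rng.normal(0.0, 1.0, looks[-1])
    b = rng.normal(effect, 1.0, looks[-1])
    for n in looks:
        if stats.ttest_ind(a[:n], b[:n]).pvalue < alpha_per_look:
            return True, n  # stop early, report the sample size actually used
    return False, looks[-1]

# With no true effect, the overall false-positive rate stays near 0.05.
hits = sum(run_sequential_study(effect=0.0)[0] for _ in range(20_000))
print(f"False-positive rate under the null: {hits / 20_000:.3f}")

# With a true effect, studies often stop before the maximum sample size,
# which is what makes sequential designs efficient.
sizes = [run_sequential_study(effect=0.5)[1] for _ in range(5_000)]
print(f"Average sample size per group given a real effect: {np.mean(sizes):.0f}")
```

The appeal for researchers shows up in the second number: on average they collect fewer observations than the fixed-sample maximum would require, without inflating the false-positive rate.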
https://www.nature.com/articles/d41586-022-04504-8