Author affiliations
joanne.mckenzie@monash.edu (or @jomckenzie.bsky.social on Bluesky; ORCID 0000-0003-3534-1641)
Extending the CONSORT 2010 statement for reporting cluster randomised crossover trials lays the foundation for improving the completeness and accuracy of reporting
Over the past two decades my research has centred on systematic reviews. Developing and evaluating statistical and research methodology for reviews has been a focus, but I have also collaborated on many reviews. These collaborations inevitably reveal the importance of complete, clear, and accurate reporting of primary studies. At best, incomplete and inaccurate reporting wastes research investment—often substantial in the case of large randomised trials—and at worst, it may lead to incorrect conclusions with far-reaching consequences.
Primary studies that are reported well contribute maximally to the evidence base and can be fairly assessed. The opposite is true when reporting is unclear or incomplete. I have spent countless hours scouring trial reports for information that should be present but is not; trying to decipher unclear text; or trying to decide what to do when the information reported is inconsistent within or across reports of the same study. This time adds up and takes away opportunities for other research. When multiplied across the thousands of systematic reviews conducted each year, this is an unconscionable waste of research effort.
More importantly, incomplete and inaccurate reporting inevitably leads to studies not contributing to an evidence base in the way they should. These studies might be excluded from a systematic review because researchers cannot determine whether the study meets the eligibility criteria, excluded from meta-analyses because the required statistics are not reported, or inaccurately judged to be at a particular risk of bias, and so on.
Our research investigating the reporting quality of a particular design—the cluster randomised crossover trial—found that incomplete reporting was common,1 2 motivating the development of an extension to the CONSORT (Consolidated Standards of Reporting Trials) 2010 statement.3 4 Reporting guidelines such as CONSORT counter incomplete reporting by recommending what should be reported so that users can fully interpret the findings and reuse them in derivative products such as systematic reviews.
Cluster randomised crossover (CRXO) trials, as the name suggests, share features of the parallel group cluster and individual crossover designs. In a CRXO trial, groups of individuals (known as “clusters”) are randomised to sequences of treatments. For example, in a trial that evaluated whether head positioning of patients after acute stroke affected disability at 90 days, hospitals were randomised to implement either the lying-flat position in the first period and the sitting-up position in the second period (sequence 1), or the sitting-up position in the first period and the lying-flat position in the second period (sequence 2).5
The CRXO design can largely overcome the loss in power due to clustering, which means it can be used to investigate interventions whose effects are expected to be small but important.6 7 8 Relatedly, the CRXO design may make a randomised trial feasible where a parallel group design would not be, because too few clusters are available to detect the target difference, such as when a country has only a limited number of intensive care units.8 9 Despite its advantages, the CRXO design has methodological complexities that can introduce bias. Careful reporting of the design, informed by tailored reporting guidance, is therefore imperative to ensure that the risk of bias can be accurately assessed.
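The power argument can be made concrete with a minimal variance sketch under common modelling assumptions (two periods, m individuals per cluster-period, within-cluster within-period correlation ρ, within-cluster between-period correlation η, outcome variance σ²; this notation is illustrative and not drawn from the article or its references):

```latex
% Variance of one cluster-period mean, and covariance between the two
% period means of the same cluster (assuming exchangeable correlations):
\[
\operatorname{Var}(\bar{Y}_{k1}) = \frac{\sigma^{2}}{m}\,\bigl[1 + (m-1)\rho\bigr],
\qquad
\operatorname{Cov}(\bar{Y}_{k1}, \bar{Y}_{k2}) = \sigma^{2}\eta .
\]
% The CRXO treatment effect is estimated from within-cluster period
% contrasts, so the positive between-period covariance is subtracted:
\[
\operatorname{Var}(\bar{Y}_{k1} - \bar{Y}_{k2})
  = \frac{2\sigma^{2}}{m}\,\bigl[1 + (m-1)\rho - m\eta\bigr].
\]
```

The bracketed term acts as a design effect: relative to the parallel cluster design effect 1 + (m−1)ρ, the subtracted mη is the precision recovered by crossing over, and it is largest when outcomes within a cluster are strongly correlated across periods.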
In our BMJ article,4 we provide reporting recommendations for CRXO trials developed through a consensus process, including a survey and consensus meeting with 55 people. The consensus process was strengthened by the involvement of statisticians with extensive knowledge of the CRXO design. The process resulted in consolidated reporting recommendations across the CONSORT 2010 statement and relevant extensions, modified items, and new items. New items on data sharing and patient and public involvement reflect the evolving expectations and requirements of the scientific and wider community.10 11 12 In response to growing awareness of the need for greater clarity about the intervention effects being estimated,13 14 we have also incorporated estimands into the statistical methods item.
While setting standards for reporting this design is an important step forward, there is still much to be done. Imminent publication of the CONSORT 2025 statement15 means we will need to align this extension with the updates in the main statement. Such alignment might ultimately be streamlined by adopting collaborative approaches to the coordination and harmonisation of closely related reporting guidelines, such as the approach being implemented for the PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) statement and its extensions.16
Importantly, we also need to consider how we can support researchers in implementing the guidelines. In the short term this includes creating an online bank of examples of complete reporting to accompany the items and guidance. We encourage researchers to contribute to this bank by submitting examples to us. In the longer term, artificial intelligence could play an important role in helping researchers to assess the completeness of their reporting before submission, or in enabling journal editors and peer reviewers to undertake such checks before publication.17
Footnotes
Competing interests: JEM is co-chair of the PRISMA Executive.
Provenance and peer review: Commissioned, not externally peer reviewed.
References
1. Arnup SJ, Forbes AB, Kahan BC, Morgan KE, McKenzie JE. Appropriate statistical methods were infrequently used in cluster-randomized crossover trials. J Clin Epidemiol 2016;74:40-50. doi:10.1016/j.jclinepi.2015.11.013
2. Arnup SJ, Forbes AB, Kahan BC, Morgan KE, McKenzie JE. The quality of reporting in cluster randomised crossover trials: proposal for reporting items and an assessment of reporting quality. Trials 2016;17:575. doi:10.1186/s13063-016-1685-6
3. Moher D, Hopewell S, Schulz KF, et al. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. BMJ 2010;340:c869. doi:10.1136/bmj.c869
4. McKenzie JE, Taljaard M, Hemming K, et al. Reporting of cluster randomised crossover trials: extension of the CONSORT 2010 statement with explanation and elaboration. BMJ 2025;388:e080472. doi:10.1136/bmj-2024-080472
5. Anderson CS, Arima H, Lavados P, et al; HeadPoST Investigators and Coordinators. Cluster-randomized, crossover trial of head positioning in acute stroke. N Engl J Med 2017;376:2437-47. doi:10.1056/NEJMoa1615715
6. Bellomo R, Forbes A, Akram M, Bailey M, Pilcher DV, Cooper DJ. Why we must cluster and cross over. Crit Care Resusc 2013;15:155-7.
7. Arnup SJ, McKenzie JE, Pilcher D, Bellomo R, Forbes AB. Sample size calculations for cluster randomised crossover trials in Australian and New Zealand intensive care research. Crit Care Resusc 2018;20:117-23. doi:10.1016/S1441-2772(23)00754-8
8. Hemming K, Kasza J, Hooper R, Forbes A, Taljaard M. A tutorial on sample size calculation for multiple-period cluster randomized parallel, cross-over and stepped-wedge trials using the Shiny CRT Calculator. Int J Epidemiol 2020;49:979-95.
9. Arnup SJ, McKenzie JE, Hemming K, Pilcher D, Forbes AB. Understanding the cluster randomised crossover design: a graphical illustration of the components of variation and a sample size tutorial. Trials 2017;18:381.
10. Moher D, Collins G, Hoffmann T, Glasziou P, Ravaud P, Bian ZX. Reporting on data sharing: executive position of the EQUATOR Network. BMJ 2024;386:e079694.
11. Aldcroft A. New requirements for patient and public involvement statements in BMJ Open. BMJ Open 2018.
12. Staniszewska S, Brett J, Simera I, et al. GRIPP2 reporting checklists: tools to improve reporting of patient and public involvement in research. BMJ 2017;358:j3453. doi:10.1136/bmj.j3453
13. Cro S, Kahan BC, Rehal S, et al. Evaluating how clear the questions being investigated in randomised trials are: systematic review of estimands. BMJ 2022;378:e070146.
14. Kahan BC, Hindley J, Edwards M, Cro S, Morris TP. The estimands framework: a primer on the ICH E9(R1) addendum. BMJ 2024;384:e076316.
15. Hopewell S, Boutron I, Chan AW, et al. An update to SPIRIT and CONSORT reporting guidelines to enhance transparency in randomized trials. Nat Med 2022;28:1740-3.
16. Page MJ, Moher D, Brennan S, McKenzie JE. The PRISMATIC project: protocol for a research programme on novel methods to improve reporting and peer review of systematic reviews of health evidence. Syst Rev 2023;12:196.
17. Kilicoglu H, Jiang L, Hoang L, Mayo-Wilson E, Vinkers CH, Otte WM. Methodology reporting improved over time in 176,469 randomized controlled trials. J Clin Epidemiol 2023;162:19-28.