Implementation trials often use experimental (i.e., randomized controlled trials; RCTs) study designs to test the impact of implementation strategies on implementation outcomes, service outcomes, and/or patient-level outcomes. This is also commonplace in healthcare delivery research, where experimental trials test the impact of healthcare delivery interventions on processes of care, care quality, and/or patient-level outcomes. RCTs remain the strongest (albeit not the only) study design for achieving high internal validity, allowing causal inferences to be made about the effects of implementation strategies or healthcare delivery interventions on observed outcomes.
Studies testing implementation strategies, as well as interventions to improve healthcare delivery, typically deliver those strategies or interventions to providers, healthcare teams, clinics, public health organizations, healthcare delivery systems, and communities. In turn, these providers, teams, and organizations deliver health-related practices to individuals (e.g., patients, community members). This results in a hierarchical or nested data structure (i.e., patients nested within providers, providers nested within clinics). While this data structure can be accommodated using mixed effects multilevel models, randomization at the level of the individual clinician or patient is sometimes problematic or infeasible. Study designs that use individual-level randomization can raise ethical issues, cost and logistical burdens, and concerns about contamination between those randomized to receive the implementation strategy or healthcare delivery intervention and those randomized to the control or comparison arm of a study. To minimize these potential threats to internal validity, and to address some of the ethical, logistical, and cost considerations, a specific type of RCT, the cluster randomized trial (CRT), is often needed.
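To make the nested data structure concrete, here is a minimal sketch of a two-level mixed effects (random intercept) model for patients nested within clinics, written in Python with simulated data. The variable names (outcome, arm, clinic) and all numeric values are illustrative assumptions, not drawn from any study described above.

```python
# Illustrative sketch: patients nested within clinics, with a cluster-level
# treatment assignment. All variable names and values are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Simulate a small nested data set: 20 clinics, 30 patients per clinic.
n_clinics, n_per_clinic = 20, 30
clinic = np.repeat(np.arange(n_clinics), n_per_clinic)
arm = np.repeat(rng.integers(0, 2, size=n_clinics), n_per_clinic)  # assigned at the clinic level
clinic_effect = np.repeat(rng.normal(0, 0.5, size=n_clinics), n_per_clinic)
outcome = 1.0 + 0.3 * arm + clinic_effect + rng.normal(0, 1.0, size=n_clinics * n_per_clinic)

df = pd.DataFrame({"outcome": outcome, "arm": arm, "clinic": clinic})

# A random intercept for clinic accounts for the within-cluster correlation
# induced by the nested design.
model = smf.mixedlm("outcome ~ arm", data=df, groups=df["clinic"])
result = model.fit()
print(result.summary())
```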
In CRTs, the unit of randomization may be clinics, hospitals, community-based organizations, schools, worksites, or whole communities. These trial designs that randomize clusters are increasingly used to study strategies for implementing evidence-based interventions into health-related settings and to evaluate interventions to improve healthcare delivery. Variations of CRT designs include parallel CRTs, stepped-wedge CRTs, and cluster randomized factorial and cross-over trials.
CRT designs are increasingly used in implementation science and healthcare delivery research. For example, studies within several consortia funded by the Cancer Moonshot℠ leverage CRTs to test implementation strategies and healthcare delivery interventions. These consortia include the Implementation Science Centers in Cancer Control (ISC3) Program, Accelerating Colorectal Cancer Screening and follow-up through Implementation Science (ACCSIS), and Improving the Management of symPtoms during And following Cancer Treatment (IMPACT). In addition, the Implementation Science Study Designs Action Group in the Consortium for Cancer Implementation Science (CCIS) has discussed CRT designs in the context of implementation trials, including measurement and analytic approaches needed to accommodate rapid and unplanned contextual changes that may occur as a CRT progresses.
As interest in this type of study design has expanded, so too have the resources to support investigators in conducting CRTs. For example, the NIH Office of Disease Prevention offers educational materials and a sample size calculator (more information available here); the NIH Research Methods Resources website also includes information about CRTs (available here). The NIH Pragmatic Trials Collaboratory likewise hosts resources on conducting CRTs, with examples and studies from a range of healthcare delivery settings (available here).
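As a brief illustration of why such sample size tools matter, the sketch below applies the standard design effect for a parallel CRT with equal cluster sizes, DEFF = 1 + (m − 1) × ICC, where m is the average cluster size and ICC is the intraclass correlation coefficient. The numeric inputs are illustrative assumptions only; investigators should use the calculators referenced above for actual planning.

```python
# Minimal sketch of the design effect for a parallel CRT with equal cluster
# sizes. All input values are illustrative assumptions, not recommendations.
import math

n_individual = 400   # sample size required under individual randomization (assumed)
m = 25               # average number of patients per cluster (assumed)
icc = 0.05           # intraclass correlation coefficient (assumed)

deff = 1 + (m - 1) * icc          # design effect: 1 + (25 - 1) * 0.05 = 2.20
n_crt = n_individual * deff       # inflated total sample size for the CRT
clusters_per_arm = math.ceil(n_crt / (2 * m))

print(f"Design effect: {deff:.2f}")
print(f"Total sample size for the CRT: {math.ceil(n_crt)}")
print(f"Clusters per arm (two-arm trial): {clusters_per_arm}")
```

With these assumed values, clustering more than doubles the required sample size, which is why accounting for the ICC at the design stage is essential.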
To complement existing resources, and to continue building methodologic expertise in the design and interpretation of CRTs, the NCI sponsored a virtual short course, Cluster Randomized Trial Designs in Cancer Care Delivery, held Tuesday, May 3 through Thursday, May 5, 2022. The course provided training in the design, conduct, and analysis of CRTs, including parallel CRTs, stepped-wedge CRTs, and cluster randomized cross-over trials. Topics included the rationale for the use of these designs, sample size calculations, analytic methods, ethical considerations, and trial reporting and interpretation, among others. Principles were illustrated using case studies reflecting the different variations of CRTs, with examples drawn from across the cancer control continuum (prevention, diagnosis, treatment, survivorship, and end-of-life care) in both implementation science and healthcare delivery research.
We were very fortunate to have two international experts—Dr. Karla Hemming and Dr. Monica Taljaard—who served as course co-instructors.
Thanks to those who joined!
Wynne E. Norton, Ph.D., is a Program Director in Implementation Science in the Division of Cancer Control and Population Sciences (DCCPS) at the National Cancer Institute (NCI). Dr. Norton holds a secondary appointment in the Health Systems and Interventions Research Branch in the Healthcare Delivery Research Program in DCCPS and serves as co-chair of the DCCPS Clinical Trials Coordination Group.
Dr. Sandra Mitchell is Senior Scientist and a Program Director in the Outcomes Research Branch in the Healthcare Delivery Research Program. She leads efforts to address symptom burden and functional impairment during and following cancer treatment.
Dispatches from the Implementation Science Team is an episodic collection of short-form updates authored by members and friends of the IS team, representing a sample of the work being done and topics that our staff are considering for future projects. Topics address advances in implementation science, ongoing issues that affect the conduct of research studies, reflections on fellowships and meetings, and new directions for activity from our research and practice communities.