Rapid Cycle Research
By Wynne E. Norton, PhD
One of several breakout groups from the inaugural Implementation Science Consortium in Cancer (ISCC) conference last year focused on the intersection of implementation science and rapid cycle research (RCR). Led by Drs. Donna Shelley (NYU) and Brian Mittman (Kaiser Permanente) and me, approximately 30 attendees participated in several rotating small group discussions about how RCR could be leveraged to advance implementation science and the next steps needed to do so. Many promising ideas were generated and captured in the group’s report-out presentation and summarized in a Meeting Summary Report available for download.
In the interim, several group members have continued to meet on an ad hoc basis to explore next steps toward understanding and leveraging the use of RCR in implementation science. We have interacted with several external colleagues, including some working on reviews of RCR in healthcare delivery settings and others supporting RCR through targeted funding announcements and publications. As a spinoff of the working group, the Healthcare Delivery Research Program (HDRP) in DCCPS and the Implementation Science team are planning a new workshop on RCR in cancer care delivery settings. We anticipate that the workshop will follow a format similar to the Organizational Research in Healthcare workshop from October 2019, hosted by HDRP, which included presentations and breakout sessions. Although initially planned for this fall, we have tentatively rescheduled the in-person workshop for summer 2021. Stay tuned for details as plans evolve.
So, with ongoing and forthcoming activities in this area, what exactly is RCR?
As with most areas of scientific inquiry, there are various definitions and conceptualizations of RCR. Some conceptualizations include specific time frames (e.g., one month) within which studies can or should be conducted and results subsequently shared with (or used by) stakeholders and end-users. Other conceptualizations are agnostic with respect to timeframe but emphasize certain features of research that include rapidity, such as the “5 R’s” standard for making research more relevant and actionable, as published by Peek and colleagues (2014).
One of the more comprehensive overviews of RCR in healthcare is a monograph published by the Agency for Healthcare Research and Quality (AHRQ), Using Rapid-Cycle Research to Reach Goals: Awareness, Assessment, Adaptation, Acceleration (Johnson et al., 2015). Importantly, the authors provide a working definition of RCR: “A process by which practical problems are identified and addressed using analysis methods that are incremental and contextually informed” (p. 5). RCR study designs to assess the impact of interventions or implementation strategies on health or implementation outcomes include experimental designs (i.e., randomization plus manipulation; e.g., randomized controlled trials [RCTs] and variations thereof, such as cluster RCTs or pragmatic RCTs) and quasi-experimental designs (i.e., manipulation without randomization; e.g., interrupted time series, regression discontinuity). Of course, causal inference is strongest in RCTs, and there are examples of using RCTs in healthcare delivery settings, such as testing messages and alerts for increasing smoking cessation, increasing annual wellness visits, and decreasing readmissions. Often, however, RCTs are impractical or infeasible for this type of study. Rapid cycle designs also have a strong foundation in quality improvement activities, are often linked to learning healthcare systems, and traditionally leverage Plan-Do-Study-Act (PDSA) cycles, statistical process control, and/or interrupted time series designs. Work has also focused on the importance of rapid turnaround qualitative research and mixed-methods evaluations to better understand context and inform changes to clinical care.
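To make one of the quasi-experimental designs mentioned above concrete, the sketch below shows a minimal interrupted time series (segmented regression) analysis on simulated data. This is an illustration only, not from the AHRQ monograph: the outcome, time points, and effect sizes are invented, and a real analysis would also need to address autocorrelation and model diagnostics.

```python
import numpy as np

# Simulated monthly outcome (e.g., a screening rate) over 24 months,
# with an intervention introduced at month 12.
rng = np.random.default_rng(0)
n, change_point = 24, 12
t = np.arange(n)
post = (t >= change_point).astype(float)    # 1 after the intervention starts
time_since = post * (t - change_point)      # months elapsed post-intervention

# True data-generating process: baseline trend of +0.2/month,
# plus an immediate level jump of 5 points at the intervention.
y = 50 + 0.2 * t + 5 * post + rng.normal(0, 0.5, n)

# Segmented regression:
#   y ~ intercept + baseline trend + level change + slope change
X = np.column_stack([np.ones(n), t, post, time_since])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"baseline slope: {coef[1]:.2f}")
print(f"level change at intervention: {coef[2]:.2f}")
print(f"slope change after intervention: {coef[3]:.2f}")
```

The level-change coefficient estimates the immediate effect of the intervention, and the slope-change coefficient estimates whether the trend shifted afterward, which is the core logic a PDSA team would use to judge whether a rapid-cycle change moved the outcome.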
RCR has important implications for designing, conducting, and analyzing implementation studies, whether they aim to test strategies or describe and understand implementation processes. At the heart of RCR—and arguably in implementation science as well—is the recognition and importance of developing solid evidence to inform practice change. It is inherently a purposeful approach to research that blends rigor with application; in doing so, it ideally brings together a team of researchers, practitioners, and system leaders to identify problems, test solutions, and adjust or sustain accordingly.
RCR is a relatively new term, although largely grounded in quality improvement approaches that have been in use for decades. Implementation science is poised to increasingly embrace RCR; in the process, the field may benefit from considering key issues and questions as it builds capacity and support for RCR. Some examples are provided below.
- What is the relationship between embedded research, RCR, and action-oriented research? Where do these approaches overlap and where do they diverge? How can we better operationalize these types of research and clarify their similarities and differences, so that we understand when and where each might be most applicable?
- What training might be needed to support a cadre of implementation scientists with expertise in RCR? What are key elements of RCR that would be essential for researchers (ideally teams of researchers, as the skill set likely necessitates multidisciplinary backgrounds) to know? What other forms of support (e.g., from funding agencies, journals, academic programs) are needed to encourage and support expanded RCR activity?
- When might RCR be most applicable, and when might alternative approaches be better suited for implementation studies?
We look forward to your contributions to advancing implementation science by leveraging the principles and approaches of RCR.
Dr. Wynne Norton is a Program Officer for the Implementation Science Team in the Office of the Director in the Division of Cancer Control and Population Sciences at the National Cancer Institute.