Reflections on D&I Measurement Challenges, Progress, and Next Steps
Gila Neta, PhD, MPP, Program Director, IS Team
On the heels of the 12th Annual Conference on the Science of Dissemination and Implementation, I find myself reflecting on the essential topic of measurement. (It must be the epidemiologist in me.) Given the theme of the conference, “Raising the bar on the rigor, relevance, and rapidity of dissemination and implementation science,” the program was fittingly bookended with talks highlighting the importance of identifying and appropriately measuring key constructs in implementation.
Opening with Amy Edmondson’s poignant talk on psychological safety, we learned how she discovered this construct in organizational behavior and why it matters for effective learning and implementation within healthcare organizations. Her discovery was accidental: in her dissertation research, she examined teamwork as a critical determinant of lower medical error rates and found, to her surprise, that better teams had higher medical error rates. What she ultimately realized was that better teams had higher psychological safety, and so did not in fact commit more errors but simply reported them more often. Her work highlighted both the pitfalls of relying on objective performance measures and the challenge of defining relevant constructs.
On the tail end of the conference, the closing plenary focused on measurement issues in implementation science. Here, we were reminded of the ever-present challenges: many of our measures are insufficiently rigorous (limited validity and reliability), insufficiently relevant (not practical or pragmatic), or simply slow to develop. Bryan Weiner’s review, which kicked off the panel, also documented a complete lack of measures for certain critical implementation outcomes. But the panel gave us hope for the future! Bryan’s talk described an NIH-funded study on rapid-cycle development of pragmatic and rigorous measures for three key implementation outcomes: acceptability, appropriateness, and feasibility. Maria Fernandez highlighted her recently NCI-funded work on developing and improving a measure of organizational readiness. And Lisa Saldana described her NIH-funded work on measuring implementation processes, including implementation costs.
Given the need to improve available IS measures, a wave of productivity in measure development appears to be fast approaching. NCI recently funded six implementation science centers, each with a measurement and methods core, and all six are coordinating efforts to develop rigorous and relevant implementation measures that can be standardized across studies. Further, NCI launched an Implementation Science Consortium focused on developing public goods for the field. Among these is a focus on developing standardized measures of, and guidance on, implementation costs, one of the outcomes Bryan highlighted as underrepresented in our field.
But there is still much work to be done! While we have long included measure development as an area of interest in our Trans-NIH PARs, few applications are submitted in response to this priority. After reviewing all NCI-funded DIRH grants, I was a bit surprised to learn that only six of 72 described any sort of measure development work in their applications, and only one of those six was explicitly focused on developing a measure (Maria’s study referenced above). As we are interested in growing this area of the IS portfolio, we would love to hear from you all about your specific challenges with implementation measures and your efforts to improve them.
Dr. Gila Neta is a Program Officer for the Implementation Science Team in the Office of the Director in the Division of Cancer Control and Population Sciences at the National Cancer Institute.