Actionable Data
I participated in a meeting where Press Ganey survey data were presented. Normally the presentation is met with silence from the clinicians in the room, but this time people started making comments. One noted that the difference between “success” and “failure” as defined in the presentation was quite narrow, suggesting the normative data have a very small spread. Another noted there seemed to be quarter-to-quarter changes, but no obvious explanation for them.

From my perspective there were a couple of fundamental errors. First, the data are subject to wide fluctuation, so quarter-to-quarter comparison does not tell us whether we are seeing common cause or special cause variation.[1] Although we have data going back several years, no one has taken the time to generate a process control chart that would tell us whether the process is stable, getting better, or getting worse. Second, even if we do have the data, we lack insight into what might change the process for the better, particularly given the narrow spread of the data.

As the group discussed this problem, several ideas emerged. One doctor suggested others were gaming the system, or at least gaming it in ways we were not. Another suggested the data just didn’t make sense. A third suggested we were looking at the wrong thing—trying to fix the data rather than trying to fix the process. Although we did not necessarily fix the way data are presented in the meeting, we did cover most of the viable options for assessing what the data meant. Further discussion led the moderator to observe that in most cases the problem involved communication between members of the care team and the patient. This seems likely to be the case, but it leads to the question—what can be done about it? Are the data and observations actionable? If so, what kinds of actions might help? Or is this really a Gordian knot, best cut or ignored? My bias is that there are two challenges.
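The control-chart idea above can be sketched in a few lines. This is a minimal illustration, not a statistical package: the quarterly scores below are hypothetical (not real Press Ganey data), the function names are my own, and the 3-sigma limits are estimated with the standard XmR (individuals and moving range) method, where sigma is approximated by the average moving range divided by the constant d2 = 1.128.

```python
# Sketch of an individuals (XmR) control chart for quarterly survey scores.
# The scores below are hypothetical illustrations, not real Press Ganey data.

def control_limits(scores):
    """Center line and 3-sigma control limits, with short-term variation
    estimated from the average moving range (standard XmR method)."""
    n = len(scores)
    mean = sum(scores) / n
    # Average absolute change between consecutive quarters
    mr_bar = sum(abs(scores[i] - scores[i - 1]) for i in range(1, n)) / (n - 1)
    sigma_est = mr_bar / 1.128  # d2 bias constant for subgroups of size 2
    return mean, mean - 3 * sigma_est, mean + 3 * sigma_est

def special_causes(scores):
    """Indices of quarters outside the control limits (special cause)."""
    _, lcl, ucl = control_limits(scores)
    return [i for i, s in enumerate(scores) if s < lcl or s > ucl]

quarterly = [78.2, 79.1, 77.5, 78.8, 78.0, 79.4, 77.9, 71.0]
mean, lcl, ucl = control_limits(quarterly)
print(f"center={mean:.1f}, limits=({lcl:.1f}, {ucl:.1f})")
print("special-cause quarters:", special_causes(quarterly))
```

A point outside the limits (like the last hypothetical quarter here) signals special cause variation worth investigating; points inside the limits are common cause noise, and reacting to their quarter-to-quarter movement is reacting to randomness.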
First, we need to understand that our goal is to be average, so we don’t incur financial penalties we can ill afford. This is counterintuitive to the high-achievers who become doctors and administrators, and it is hard to wrap one’s arms around at first blush. The problem is that there is a finite amount of time and energy available to work on the various issues that demand attention. If we are average, we are probably balancing the effort-reward equation about as well as it can be done. Moreover, with wide variation in the data and a narrow distribution, where one ends up in any given period is mostly random.

Second, we need to grapple with the fact that communication, like culture, is amorphous but terribly important to organizational and clinical success. If the culture is good, the communications within the culture are likely to be good, and scores are likely to improve without specific process improvement.

So, what can be done? “High-performing health care organizations know that world-class patient care, innovation, and general success rely on a foundation of highly engaged staff. When Moffitt Cancer Center realized that workforce engagement had slipped below the national average, it deployed a multifaceted strategy that dramatically boosted engagement and leadership accountability, and, in the process, transformed workplace culture.”[2] The authors note the slippage in engagement occurred in conjunction with two other events. First, the organization was damaged by the 2008 financial crisis and was worried about its financial health. Second, in 2012 a new CEO was hired who “viewed workforce engagement as a priority and as an important indicator of senior leadership performance and success.” They ended up with a customized process consisting of some best practices and some trial-and-error efforts. What did they do?
First, they put engagement on their scorecard with weight equal to turnover, financial performance, quality, and patient satisfaction, and held senior leadership accountable for it. Second, they increased the frequency of the Press Ganey engagement survey to annual. Additionally, they stopped treating the survey as “an ‘event’” and instead viewed it as “a way of keeping a finger on the pulse of how team members were feeling, identifying trends, and staying in touch with the organization’s most important asset: its workforce.” They thought they would need to customize the approach used in different areas of the organization but found, instead, that an organization-wide approach focused on a single theme for each year was more effective. Examples of the themes used were “Stress and Work-Life Balance,” “Communication,” “Collaboration,” and “Resilience.” They had an open-ended process that got input from people at all levels of the organization, but they also developed a series of communication strategies to inform people at all levels of the organization as to what was happening, using real-time data.

Did it work? Engagement as measured by the Press Ganey survey increased from the 19th to the 87th percentile. But more importantly: “We have also observed consistently and steadily increasing engagement in the trend lines for two key employee groups: nurses and physicians. In addition to increased engagement, we have experienced improved trust between leadership and team members and have observed a greater sense of communication and collaboration in the organization…” As is often the case, dealing with the “numbers” did not mean trying things to fix the process. Fixing the numbers meant taking a deep dive into the organization’s culture and “the way we do things around here.” In other words, the data were actionable, but not in a linear, direct fashion.
But I suggest Moffitt’s success really hinged on recognizing the need to stay “in touch with the organization’s most important asset: its workforce.” Does your organization see the workforce as its greatest asset or its greatest expense?

25 July 2018

[1] Common cause variation is “noise in the signal,” representing all the possible factors that might influence results. Special cause variation, as the name implies, means something is outside the usual range, either for good or ill, and should be identified, if possible. Conventionally this means a result that is more than 3σ (sigma being the standard deviation of the sample) away from the mean result.

[2] Heimbeck D, Riley L, List AF. A Multi-strategy Approach to Rebuilding Workforce Engagement. NEJM Catalyst. Accessed 25 July 2018 at https://catalyst.nejm.org/multi-strategy-apporach-rebuilding-workforce-engagement.html.
Further Reading
Communications: Messaging is replacing dialogue in clinical practice to the detriment of all.
Confronting The Quality Paradox - Part 1
Engaging Burned Out Physicians
Paradoxes for Physician Leadership
Quality Metrics