More on the Quality Paradox
When the British National Health Service (NHS) began using quantitative analysis to measure the quality of health care delivered, observers noted a curious paradox—managers began focusing on the metrics themselves, creating a disconnect from the realities of the patient care underlying the data. This “accounting culture” led to the paradox that improved “numbers” were often associated with a perceived worsening of patient care by front-line clinicians. The problem has proven universal, so it was heartening to see two articles this summer address it, albeit in different ways.
The first, by Bilimoria and associates, reported an evaluation of publicly available hospital rating systems. Four systems were analyzed: US News & World Report got a B, Medicare’s Star Rating a C, Leapfrog a C-, and Healthgrades a D+. While admitting there is no “gold standard,” the authors noted that all the systems relied on limited data and measures, lacked audits, and grouped dissimilar hospital types together. All of them ended up lumping data into a composite score.
“The methods rating systems use for compiling measures into composites to create an overall hospital score or grade vary tremendously, and there is often limited rationale for the selection and weighting of different elements in the composite. Moreover, there are frequently measures or domains that are weighted equally, though they are definitely not equal in the eyes of any stakeholder.”
Their Table 1 presents a critique of the issues identified in their analysis. Since CMS has the most power to penalize “poor performance,” I wish to focus on their analysis of the “Star System.” Most critically, they graded its potential for misclassification a D, meaning their assessment was that the system had a significant likelihood of misclassifying a hospital’s performance, despite earning a B on transparency and usability. CMS’ scale was also poor at detecting iterative improvements, so it offered little help in seeing whether changes were moving scores in the desired direction.
Given the problems, the authors recommend better data, better measures, meaningful audit, and external peer review. None of these, however, necessarily addresses the quality paradox; the recommendations rest on the hope that doing a better job of measurement will somehow make things better. But have we already reached the limit of external “benchmarking”?
Geisinger Health System has taken a different approach, developing an internal tool based on the Institute of Medicine’s (IOM) six dimensions of excellent health care. The authors drew four major conclusions from their experience, which I quote.
1. Improving the value of health care requires identifying and measuring outcomes that matter to patients. Ideally, such information should be acquired efficiently and in real time in order to enable immediate improvement within clinical microsystems.
2. We developed an Assessment of Care tool that is simple and easy to use at the point of care in order to facilitate value-creation in real time. The tool’s design was inspired by the IOM’s six dimensions of perfect care: safety, timeliness, effectiveness, efficiency, equity, and patient-centeredness.
3. The power of the Assessment of Care tool originates in trust. Care teams must commit to responding swiftly and in good faith to the patient feedback or risk the enterprise falling apart.
4. Adoption of the Assessment of Care tool by patients may come naturally. For provider adoption to be successful, the tool must be incorporated into daily work flows.
The authors provide telling examples of specific situations where questions raised by patients’ negative reports allowed the care team to make rapid, usually inexpensive, changes in the way they provided care—giving the patient more, or less, feeling of control as desired. Since the scale is visual, it is quick and easy to complete, and less susceptible to gaming (such as “strive for five” signs everywhere) than some of the longer, more involved scales. Although they did not come right out and say it, the authors imply that adoption requires a willingness to listen more than a desire to get an “A.” Perhaps the fact that the authors are psychiatrists has something to do with their focus, but I think they are on to something important.
If we want to improve performance, we need to find ways to let our patients tell us what they want. Does your institution see the IOM goals as something “we” should work on and measure, or as something we should use to help patients tell us what they want? We say paternalism is dead in medical practice, but is it dead in hospital care? Do we run hospitals for ourselves—our own efficiency—or for patients and the efficiency of their care? These are tough questions, and quick and easy answers are unlikely. But organizations that grapple with them in realistic terms, like the folks at Geisinger, are likely to be the organizations that survive.
20 October 2019
Bilimoria KY, Birkmeyer JD, Burstin H, Dimick JB, Joynt Maddox KE, Dahlke AR, DeLancey JO, Pronovost PJ. Rating the Raters: An Evaluation of Publicly Reported Hospital Quality Rating Systems. NEJM Catalyst, 14 August 2019. Accessed 21 August 2019 at https://catalyst.nejm.org/evaluation-hospital-quality-rating-systems/.
Coffey MJ, Coffey CE. Real-time Pursuit of Outcomes That Matter to Patients. NEJM Catalyst, 12 June 2019. Accessed 19 June 2019 at https://catalyst.nejm.org/assessment-of-care-tool-real-time-pursuit/.