Confronting the Quality Paradox - Part 2
BioMed Central published a collection entitled The Many Meanings of “Quality” in Healthcare on 19 June 2015.[1] This cross-disciplinary collection addressed three broad themes: the practices of quality assurance, giving space to “the story,” and addressing moral complexity in the clinic. This is the second in a series of articles dealing with individual papers that resonate with the practice of medicine today. In his paper Accounting for Quality,[2] Pflueger lays out the argument that the accounting notions underlying current quality improvement efforts are flawed in important ways. He begins by noting:

Accounting—that is standardized measurement, public reporting, performance evaluation and managerial control—has become increasingly central to efforts to improve the quality of healthcare…Accounting successfully for quality, however, is increasingly shown to be more complex and problematic than typically assumed…The challenges include difficulty specifying a workable definition of quality, challenges determining appropriate levels and locations of measurement, and problems interpreting the meaning and significance of change alongside the effects of case-mix and other factors. They also include resistance to measurement from those being measured, the development in some jurisdictions of “target fatigue,” unwillingness of the public to use and trust in public measures, and gaming activities resulting in “target myopia,” “hitting the target but missing the point,” or other even more insidious behaviors.

Let me make this concrete with a specific example. The medical literature suggests that glycated hemoglobin measurements (HgbA1c) are an accurate reflection (with some exceptions) of the average level of glucose in a given patient, and that lower levels are associated with a lower risk of target-organ damage. As such, targeting HgbA1c levels is an appropriate goal in individual patients. There is some controversy, but these statements are pretty mainstream.
Now let us translate these ideas into the quality improvement realm. The achieved and reported HgbA1c level now represents a “pass-fail” threshold. But is the cutoff the same in all patients? It is also evident that a patient’s HgbA1c level reflects a host of behaviors that are not subject to the practitioner’s control. Using good quality improvement methodology, the practitioner may know what percentage of patients are “at goal.” But what if that number is below some pre-specified target? This is where gaming begins: if you discharge your “non-compliant” patients, your percentage will increase. If you are the agency setting the targets, you have a related problem. Do you want to reward attainment of the target, or do you want to reward improvement? At first, it may seem these are the same thing. Knowing the HgbA1c was 11 as opposed to 10 did not really provide clinically relevant information. But for those patients whose random glucoses were generally in range, an HgbA1c level of 8.0 reclassified them from “success” to “failure.” Changes in therapy are often sufficient to reduce HgbA1c by a modest amount, so process improvement efforts result in increased percentages of patients who count as successful. But if the goal is to prevent or delay end-organ damage, the incremental value of improving the “non-compliant” patient from an HgbA1c of 11.0 to 8.0 is greater than the value of improving a compliant patient from 8.0 to 7.0. So which effort should you reward? Since attainment requires only a single measurement while improvement requires two, almost all clinical performance measures look only at pass-fail rates. Note, too, that a patient-level goal—improving control of blood glucose—suddenly becomes something different when the same test is used as a “process indicator” for quality improvement/assurance efforts. (See Part One.) So what is going on? Do we just need a better indicator, or a more precise accounting methodology?
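The arithmetic of this gaming is easy to sketch. In the toy calculation below, the cohort values, the pass threshold of 8.0, and the discharge cutoff are all hypothetical, chosen only to illustrate how a pass-fail rate can be inflated by discharging poorly controlled patients, while an improvement-based measure would credit exactly those patients:

```python
# Illustrative sketch: pass-fail attainment versus improvement for HgbA1c.
# All numbers are hypothetical, not drawn from any clinical data set.

PASS_THRESHOLD = 8.0  # follow-up A1c at or below this counts as "success"

# (baseline A1c, follow-up A1c) for each patient in a small cohort
cohort = [(7.5, 7.2), (8.2, 7.9), (9.0, 8.5), (11.0, 8.0), (10.5, 10.2)]

def pass_rate(patients):
    """Fraction of patients whose follow-up A1c meets the target (one measurement)."""
    passed = sum(1 for _, after in patients if after <= PASS_THRESHOLD)
    return passed / len(patients)

def mean_improvement(patients):
    """Average drop in A1c (two measurements) -- rewards change, not attainment."""
    return sum(before - after for before, after in patients) / len(patients)

print(f"Pass rate, full cohort: {pass_rate(cohort):.0%}")

# "Gaming": discharge the worst-controlled patient and re-report the rate.
kept = [p for p in cohort if p[1] <= 8.5]
print(f"Pass rate after discharging 'non-compliant' patients: {pass_rate(kept):.0%}")

# An improvement measure sees things differently: the 11.0 -> 8.0 patient,
# a "failure" by pass-fail, contributes the largest single gain.
print(f"Mean A1c improvement, full cohort: {mean_improvement(cohort):.2f}")
```

Running the sketch, the pass rate rises from 60% to 75% without any patient getting better, while the improvement measure is dominated by a patient the pass-fail measure labels a failure.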
First, this literature [on continuous quality improvement] conceptualizes the problem of accounting as a matter primarily of uncovering or capturing information as a camera might, through ever more precise and accurate measurement. It suggests that measurement is a matter of simply better understanding the pre-existing and unvarnished reality of quality—of dividing it up into more precise domains (such as patient safety, patient experiences, and clinical effectiveness) or characteristics (such as structures, processes and outcomes) and then applying a variety of technical tools to isolate these things…It projects accounting as a matter of developing sharper and better lenses: of adjusting for case-mix, establishing data quality, removing potential noise, establishing attribution, undertaking sophisticated factor analysis, and developing more refined and sophisticated data management systems.

Second, and relatedly, the quality improvement literature conceptualizes the process of capturing information through refinement of measurement systems as a matter of applying a set of timeless scientific principles. These principles, borrowing from the natural sciences, equate accuracy with representational faithfulness…

Third…the quality improvement literature suggests that one can only manage what one can measure. It implies that quality improvement can only be achieved through finding and fixing—that is, by accounting for quality and then making changes on the basis of these accounts alone…With quality specified accurately, management was seen to be a matter of rewarding those measures, freeing professionals to interact with them, carefully monitoring core standards, and furthering education in quality improvement.

The author contrasts this set of assumptions with assumptions that are common in the accounting literature.
First, “accounting does not just find things out, but makes them up.” The example he uses is cost accounting, which I have considered elsewhere under the title How Did You Know? He also considers the current enthusiasm for “patient experience,” and comments tellingly on initial efforts:

At the time, the measurement of “patient satisfaction” was advanced as a primary route through which to give expression to the patient’s view. It was a measure that was precisely defined and measurable with a fair degree of accuracy. Yet, it quickly became clear that this measure would not do…It was argued that this view had to be one that could differentiate between providers, specify what they did or did not do, and could provide actionable opportunities for improvement. The satisfaction survey was inadequate for all these things because, among other things, it revealed too much about the patient, contaminating her view of what the providers did or did not do with her idiosyncrasies, moods, and whims.

The second contrast is that “accounting is not simply a matter of substance, but of style.” Here again the author returns to the patient satisfaction survey. He notes that the change from “patient satisfaction” to “patient experience” reflects a set of concerns and assumptions, including a desire to know what providers did (or did not) do, a model of quality improvement that needs “a number,” the re-definition of the patient as the “consumer,” and much else.
This style is concerned with regulatory ambitions, political discourse, and ideals of control, often at the expense of front-line and situated knowledge and practice…The demand for measures of quality for public reporting purposes almost always goes hand in hand with the development and extension of managerial ambitions to make these measures central to internal control systems…Front-line staff who are subject to these managerial pressures, even if they maintain that quality is far more multidimensional than the accounts suggest, are constituted within a management accounting system that enforces one style of knowing about quality. They are made to understand their own performance through the measures and to demand of their colleagues that they do the same…The directors with quality responsibility (at one site) explained much of their activities to account for quality as a matter of “feeding the beast, while still trying to do the right thing,” and many nurses were quick to highlight that quality was something they understood by, for example, “getting a feel for the room” or “putting themselves in their patient’s shoes,” rather than through the quality reports.

The third area of divergence cited by the author is that “accounting does not just facilitate, but displaces, control.”

By assuming that quality can be adequately and fully captured by numbers, and then managed through mechanisms of rationalized control, quality improvement efforts have the potential to displace quality. They, in other words, might control what is measured while encouraging the accumulation of poor quality in areas that the measures themselves hide…Evidence of this displacement of control through management by numbers is accumulating through the emergence of almost continual quality failures and healthcare scandals alongside the extension of managerial control infrastructure and ambitions.

I can certainly relate to these ideas.
Dialysis units, for example, are subject to regular surveys by State and Federal officials who have the power to sanction or even close a unit. The surveys are conducted using “interpretive guidelines” developed at the Federal level, but each survey inevitably focuses on documentation. Some surveyors look for the “big picture,” but others nit-pick. One of my now-retired nurse managers used to characterize a lot of what we did to prove quality improvement as “documenting for the checkers.” In her mind, it was a separate activity from assuring that patients were getting the care they needed. Certainly, there is no evidence that the push for mandated quality measures has had a measurable impact on the lives of dialysis patients. So what is to be done? The author argues that we need to change the way we think about “the numbers.” We need systems that create messy, overlapping measures of reality, and we need to see opportunities for improvement in the discordances.

This would arguably make the function of quality management a more difficult, uncertain, and complex task. Indeed, quality management would be a matter not of producing certainty, but uncertainty, ambiguity, and even organizational friction. It would also produce the risk that organized uncertainty, ambiguity and friction might degenerate into mismanagement, irresponsibility, or even negligence.

Perhaps what we really need is recognition that tacit knowledge is real, and that building teams capable of developing and deploying tacit knowledge in conscious and deliberate ways offers the best hope of actual improvement in the quality of care. But this requires recognizing the value of multiple perspectives. I suspect the biggest problem of all, though, is the desire to control.
I have suggested repeatedly that my goal is to promote clinical leadership, which is about setting directions and boundary conditions, and about responsiveness to multiple pressures, not least those of the patient and family members. While we would all like to have “control,” the sooner we realize that the only thing we can control is our personal reaction to what is going on around us, the better off we are going to be. Most of the quality measures out there are beyond our personal control. Patients are not rats in a cage—they are “free-living” experiments of n=1. I often tell the medical young that as physicians we do what we do. Sometimes it works and sometimes it does not. If we don’t take too much credit when it works, we won’t have to take as much blame when it does not. Quality consists of “doing what we do” conscientiously, with recognition of what we do and don’t know about proper treatment, and doing so “care-fully.” We can, and should, work to improve all of these aspects of our care. We just have to be aware of the limitations of our models for measuring these things.

5 July 2015

[1] Swinglehurst D, Emmerich N, Maybin J, Park S, Quilligan S. Confronting the Quality Paradox: Towards New Characterizations of “Quality” in Contemporary Healthcare. BMC Health Serv Res 2015;15:240. doi: 10.1186/s12913-015-0851-y. Accessed at http://www.biomedcentral.com/1472-6963/15/240, 21 June 2015.

[2] Pflueger D. Accounting for Quality: on the Relationship Between Accounting and Quality Improvement in Healthcare. BMC Health Serv Res 2015;15:178. doi: 10.1186/s12913-015-0769-4. Accessed at http://www.biomedcentral.com/1472-6963/15/178, 25 June 2015.
Further Reading
Confronting The Quality Paradox - Part 1

Confronting The Quality Paradox - Part 3

Confronting The Quality Paradox - Part 4: There will never be authentic quality within healthcare unless the word explicitly accommodates the truth that a human being is simultaneously both a subject and an object.

Confronting The Quality Paradox - Part 5

The Center Effect: Some dialysis units have consistently better performance than others, even after adjusting for individual patient variables, which is termed the center effect. This has important implications for hospitals and health care organizations as they respond to public reporting of data.