I once interviewed a founder of my medical group who was then in his early nineties. When asked what were the greatest advances he had seen in his years in medicine, he replied promptly: “Penicillin and refrigeration.” He went on to explain that, when working as a pathologist in New York, he would do autopsies on babies and children who had died of bacillary dysentery, a problem that virtually disappeared once refrigeration made it possible to keep milk from spoiling in the summertime.
I have been thinking about how I would answer my own question. My “outside the box” answer would be the insight that the quality and safety of medical care are as much about system design as they are about human performance. Certainly, this insight has motivated a lot of my work in clinical leadership and guided my approach to improving quality and safety, particularly in dialysis units. But recent developments threaten the value of this insight, particularly the notion that physicians and health care systems need to be held financially accountable for patient outcomes. The quality paradox is that as money and the accounting culture have invaded medical practice, the focus has often shifted from the patient to the numbers, to the detriment of the patient. Thus, it was with interest that I read a recent opinion piece by Baker and Chassin of the Joint Commission proposing four accountability criteria for outcome measures.
Criterion 1 is that strong evidence should exist that good medical care leads to improvement in the outcome within the time frame of the measure. The issue is twofold. First, there needs to be a direct link between the medical therapy available and the desired outcome. This is easiest to see when considering procedural interventions, but less clear with most medical interventions, where the “effect size” of the intervention is more challenging to measure. The authors state:
“Measure developers, payers, and organizations advocating for outcome measures should explicitly examine the effect size versus the effect of other risk factors and question whether a risk-adjusted measure can truly distinguish quality of care across providers.”
Second, the time frame for measurement needs to accord with the available data. For instance, the heart failure studies showing large differences between medical or interventional strategies and usual care did not show divergence between the treatment arms until three or more months after the study intervention began. Thus, the 30-day readmission rate for congestive heart failure measures not just medical care during hospitalization, but also other factors, such as the accessibility of primary care in the referral area or the number of patients who have become “hospital-dependent.”
A third challenge, which the authors do not address, is attribution. Mehrotra and associates have published a critical analysis, noting that there are real problems with the more than 150 attribution models currently in use. Although data are limited, one study of physician cost profiling using claims data found that more than half of physicians were assigned to a different cost category when something other than the default model was used. Furthermore, physicians may be held responsible for care that is not a normal part of their practice. The authors cite models that attribute a patient to the physician with the most visits, noting that a dermatologist could be cited for failing to order a Pap smear, mammogram, or colonoscopy, none of which is a normal part of a dermatologist’s practice.
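The instability these authors describe is easy to reproduce. Here is a minimal sketch, with entirely invented visit data, showing how two plausible attribution rules can hold different physicians accountable for the same patient; the rules and numbers are illustrative assumptions, not the models from the cited study:

```python
# Toy illustration (hypothetical data): two common styles of attribution
# rule can assign the same patient to different physicians.
from collections import Counter

# Each tuple: (physician, cost of visit) for one patient's year of claims.
visits = [
    ("dermatologist", 150), ("dermatologist", 150), ("dermatologist", 150),
    ("internist", 400), ("cardiologist", 2500),
]

# Rule 1: attribute to the physician with the most visits (plurality rule).
by_count = Counter(physician for physician, _ in visits)
plurality = by_count.most_common(1)[0][0]

# Rule 2: attribute to the physician billing the largest share of total cost.
by_cost = Counter()
for physician, cost in visits:
    by_cost[physician] += cost
largest_cost = by_cost.most_common(1)[0][0]

print(plurality)      # dermatologist
print(largest_cost)   # cardiologist
```

Under the plurality rule, the dermatologist owns this patient's outcomes; under a cost-share rule, the cardiologist does. Neither answer is obviously wrong, which is exactly why switching models reshuffles so many physicians' cost categories.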
Finally, they note that for an attribution model to be fair, the provider must be able to influence the outcome. So, they ask, should a hospital be responsible for outpatient care in the community? Advocates of population-based care would say yes, but in the current system a hospital is not paid to support outpatient care, and doing so might reduce admissions, causing an additional loss of revenue. Said another way, the hospital would be taxed twice: on the outflow and on the inflow. Given this reality, why would any administration take on the challenge? This is certainly one of the biggest challenges in my area, where the primary care infrastructure is melting faster than the polar ice cap and the volume of hospital business is growing more than 10% annually, putting strain on physicians and hospitals alike.
Baker and Chassin had three other accountability criteria for outcome measures. The second was that the outcome should be measurable with a high degree of precision. It seems intuitive that death or readmission would be easy to count; the challenge comes in linking the events to clinical data. One study, for instance, found a range of 0% to 75% in the proportion of patients with pneumonia who were assigned a principal diagnosis of sepsis or respiratory failure, both of which result in higher payment. There is no simple way to distinguish between the severity of the patients’ illness and the aggressiveness (gaming?) of the diagnoses on the part of the hospital’s case management system. The truth is that there is probably some of both in every institution.
They note that CMS is moving to patient-reported outcome measures, known as PROMs, which may prove particularly challenging, mainly because of the low response rates seen in other patient-based initiatives. In their Chicago-area ZIP code, for instance, response rates to the current HCAHPS survey varied from 10% to 30%, and none are adjusted for patient characteristics. Whether they can or should be continues to be debated.
The third criterion is that the risk-adjustment methodology should include and accurately reflect the risk factors most strongly associated with the outcome. While easy to understand, risk-factor adjustment remains uncommon, partly because it is expensive to perform. The fourth criterion is that implementation of the outcome measure must have little chance of inducing unintended adverse consequences.
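To make the risk-adjustment criterion concrete, here is a toy observed-to-expected (O/E) calculation, the general form many risk-adjusted outcome measures take; the patients and their model-predicted risks below are invented for illustration, not drawn from any real measure:

```python
# Minimal sketch of risk adjustment via an observed-to-expected (O/E) ratio.
# A provider is compared against the outcome rate its own case mix predicts,
# rather than against a raw, unadjusted average.

# Each entry: (died, predicted probability of death from some risk model).
patients = [(1, 0.30), (0, 0.25), (0, 0.05), (1, 0.40), (0, 0.10)]

observed = sum(died for died, _ in patients)   # 2 observed deaths
expected = sum(risk for _, risk in patients)   # 1.10 expected deaths
oe_ratio = observed / expected                 # > 1 means worse than predicted

print(round(oe_ratio, 2))  # 1.82
```

The point of the third criterion is that this only works if the risk model behind `expected` actually captures the factors that drive the outcome; if it omits a strong risk factor, the O/E ratio penalizes whoever treats the sickest patients.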
So how is CMS doing by these criteria? Of the ten current measures, only two get an unequivocal A rating: coronary artery bypass grafting and the NSQIP surgical site infection measure. The NHSN central line-associated bloodstream infection measure gets a “probable” on the precision criterion, and the measures of change in physical function and joint pain after joint replacement, along with HCAHPS scores, also get a “probable” because low response rates may violate the precision criterion. The other five measures all have problems in more than one domain.
Of course, I don’t expect CMS to change its mind based on this analysis. There are two parts to the “value” argument: cost and quality. In my heart, I still believe the primary driver for CMS is cost. No matter what criteria are used, they always have the option to set the payment bar in such a way that many providers will see a reduction in payment. But it is encouraging that organizations like the Joint Commission are finally coming to grips with the challenge of defining accountability in a way that should improve medical care rather than just distort it. It may be a small ray of hope, but at least it is there.
26 September 2017
Baker DW, Chassin MR. Holding Providers Accountable for Health Care Outcomes. Ann Intern Med 2017;167:418-423. doi:10.7326/M17-0691.
Mehrotra A, Burstin H, Raphael C. Raising the Bar in Attribution. Ann Intern Med 2017;167:434-435. doi:10.7326/M17-0655.