Choosing Wisely
In this week’s New England Journal of Medicine, Dr. David Casarett has an editorial entitled “The Science of Choosing Wisely—Overcoming the Therapeutic Illusion.”[1] He begins by reviewing the “Choosing Wisely” campaign, an effort by the profession to identify procedures and tests that were once thought helpful, but where the evidence now shows they are not. “The success of such efforts, however, may be limited by the tendency of human beings to overestimate the effects of their actions. Psychologists call this phenomenon, which is based on our tendency to infer causality where none exists, the ‘illusion of control.’” In medicine this phenomenon appears as “the therapeutic illusion,” a belief that our medical interventions are more effective than the evidence supports. He calls for physicians to use methods of skeptical inquiry, and for more research, to reduce the impact of the therapeutic illusion on medical decision making.

I wonder if the notion that more research would improve the chances of choosing wisely is not also an illusion. What we can know from our science is what we can expect, on average, from a given intervention in carefully chosen patients. We cannot know with precision the outcome in a specific individual. In other words, I think there is an irreducible element of uncertainty when we apply statistical data to the individual. Dr. Casarett has assumed the primary goal of medical care is to improve the science. But what if our goal really ought to be improving the art? After all, at its most basic, all medical care is an interaction between two or more human beings—one hoping to obtain relief from disease and debility, and one hoping to provide it.

I agree the therapeutic illusion is alive and well, and has to be considered as we seek to provide optimal care to our patients. When young physicians join the practice, they are eager to get to work and apply the training they have received.
Almost invariably they take on cases more seasoned physicians will not, mainly because they overestimate the benefit to the patient and have not experienced all the costs first-hand. Of course, from the young doctor’s perspective, the old doctors are tired and out of date, and need to go ahead and retire. I usually try to phrase this notion thusly: “We do what we do, and sometimes it works and sometimes it does not. If we don’t take too much credit when it works, we won’t take so much blame when it doesn’t.”

A second way I have seen this play out repeatedly is with the introduction of new methods of treatment. At first, patients are carefully selected: efforts are made to restrict the therapy to those patients with the highest likelihood of benefit, and to restrict access for those with excess “co-morbidities.” When dialysis first became widely available, for instance, the consensus was that only young, healthy people would benefit. Now, of course, the median age for new dialysis patients is somewhere around 65 years, and about half have diabetes as the primary diagnosis. So what happened? Advances in treatment have made it less burdensome than before, and have had some effect on survival. In my practice, the highest-risk patients used to have a 50% survival of around six months, whereas now it is around 18 months. It is somewhat harder to detect this in national databases, but survival after the first year, which pretty well eliminates selection bias, has been improving.

So how does this play out in practice? This week I saw a woman whose renal failure had started getting worse fairly abruptly, and I told her she would likely become symptomatic in the next six months. We talked about her options, and she wanted to know two things: what was the longest time I had taken care of a single patient (30 years), and what could she expect (30 months, on average).
Of course, I told her I could only give her an average, and that roughly half the people do better than that and half do worse. She responded by saying that 30 months did not seem like very much time for all of the effort, but also said she would beat the average. Now this is a clear example of “the therapeutic illusion” at work, but is it right for me to knock her down even further by pointing it out to her? After all, denial is a standard defense mechanism—most dialysis patients will answer the SF-36 question “Do you expect your health to be better next year?” in the affirmative, even though there is no “objective” reason to expect this.

I saw the reverse this week as well. This patient has had slowly progressive renal failure, but has now gotten to the point where her GFR is consistently below 10% of normal. I visited with her again about what she was planning to do when she became symptomatic, and she said, “I don’t want to do dialysis.” As always, I tried to sort out whether this was not “wanting to” or not “going to” do dialysis, as these are different statements. She said, “I have seen too many people go through dialysis and they didn’t do well.” This is a clear example of the therapeutic illusion in reverse—after all, what happened to others has no direct bearing on what will happen to her. Should I accept this as presented and plan for palliative care, or should I encourage her to think about a trial of dialysis with an option to withdraw if it proved too burdensome? Or would that be an imposition of my own “therapeutic illusion”? In the end, I opted for the second option. Time will tell what we actually do.

If the treatments were not so expensive, I suspect we would not feel so pressured to make sure we are “choosing wisely.” But what does that mean when our technology, on average, produces some mix of benefits and costs? And why do we think more studies would make this dilemma go away?
I suggest more studies will move the break point for uncertainty, and that is useful as far as it goes. But I think it is an illusion to think we can elevate the science of medical practice to the point where there is no need for the art.

30 March 2016

[1] Casarett D. The Science of Choosing Wisely—Overcoming the Therapeutic Illusion. N Engl J Med 2016;374(13):1203-1205. doi: 10.1056/NEJMp1516803.
Further Reading
Medical Evidence
Medical evidence comes from four sources: guidelines, registries, data mining, and “in my experience.” Different clinical situations use different types of evidence and have different implications for provider behavior. These implications are considered in detail.

Patient-Centered Care
A consideration of the interactions of patient preferences, evidence-based medicine, and peer review.

Risk, Reward, and Other Reasons Patients Don’t Follow Medical Advice
Patients often don’t do what their doctors recommend. The problem is important and contributes to “bad” outcomes, yet we have little insight into it.

The Anchoring Heuristic
Businessmen and health policy experts fail to recognize the limits imposed by the experiential nature of medical practice, which impacts achieving the “triple aim.”