Authors: Jerome Groopman
The question that faced pediatric cardiologists was whether a baby's left ventricle was large enough for the child to undergo repair of the malformed wall and survive the procedure. When Lock was in his thirties, he reasoned that the decision to operate or not should be based on how much oxygen was in the blood leaving the heart. "I was young and made the argument at a national meeting," he said. "Everyone believed me. It was an exercise in pure logic, an exercise that was, at some level, unassailable." Lock reasoned that if the oxygen level in the blood pumped into the aorta was within normal range, then the left ventricle was sufficiently well formed to receive oxygenated blood from the lungs and pump it out to the body; this showed that enough heart muscle existed to allow recovery after a wall repair. The high oxygen level, Lock also deduced, meant that there was no significant shunting of blood from the right ventricle, and thus that the left ventricle was strong enough to keep pressures high on the left side of the heart. "On the face of it, it was intellectually correct. It just happens to be wrong."
It turned out that the oxygen content in the blood leaving the baby's heart could be nearly normal even when a significant amount of blood, some 20 percent, was shunted from the right ventricle. "Impeccable logic," Lock said, "doesn't always suffice. My mistake was that I reasoned from first principles when there was no prior experience. I turned out to be wrong because there are variables that you can't factor in until you actually do it. And you make the wrong recommendation, and the patient doesn't survive.
"I didn't leave enough room for what seems like minor effects," Lock elaborated, "the small fluctuations in oxygen levels, which might amount to one or two or three percent but actually can signal major problems in the heart."
Lock recalled a second example of this type of mistake, of relying on strict logic to answer a clinical question in the absence of empirical data. "I said there were patients with severe narrowing of the mitral valve who would always be better if the hole between the left and right atrium were closed. I reasoned that the body would get more blood if the hole was closed. You would get high enough pressure to fill the left ventricle through the narrowed mitral valve." To translate what Lock was saying: you want to maximize the pressure in the left atrium to force as much blood as possible through a narrowed mitral valve into the left ventricle, so the left ventricle can pump adequate amounts of blood out to the body. "It has to be right, correct?" Lock asked. I nodded in agreement. "It is very sound logic. But it's wrong."
After having surgery to close the hole, some children got sicker. This was ultimately found to result from an unexpected chain of events: even modest increases in pressure in the left atrium rippled back and caused higher pressure in the vessels of the lungs, or pulmonary hypertension. The right heart, forced to pump against this higher pressure, weakened. "These children developed right-heart failure, and clinically they became worse," Lock said. Again, what seemed to be a rational approach resulted in harm. "There are aspects to human biology and human physiology that you just can't predict. Deductive reasoning doesn't work for every case." Sherlock Holmes is a model detective, but human biology is not a theft or a murder where all the clues can add up neatly. Rather, in medicine, there is uncertainty that can make action against a presumed culprit misguided.
Lock didn't immediately learn that it was a mistake to use logic alone. "Twenty-five years ago, when I asserted that oxygen levels should be sufficient to make the decision to repair or not repair a malformed wall between the left and right heart, and it didn't work out, I thought I should just have been smarter." His second mistake, though, about closing the hole between the left and right atrium, was more troubling to him. Lock averted his gaze and his face fell; to be wrong about a child is a form of suffering unique to his profession. "I learned that I need to be more circumspect about making these predictions. I have to be more clear to myself that even though the reasoning seems extremely tight, I am still making it up. And you absolutely have to recognize that what you think you know can have limitations."
Physicians, like everyone else, display certain psychological characteristics when they act in the face of uncertainty. There is the overconfident mind-set: people convince themselves they are right because they usually are. Also, they tend to focus on positive data rather than negative data. Positive data are emotionally more appealing, because they suggest a successful outcome: apparently normal oxygen levels or higher pressures in the left atrium mean surgery will succeed. Lock's errors pivoted on the power of positive numbers: the near-normal amounts of oxygen in the blood, the high pressure in the left atrium. Each of these positive numbers seemed to predict a good outcome. Such data have a powerful effect on our psyche, particularly in settings of uncertainty; they appear to be safe harbors in a storm, places to firmly dock our minds and point us to the next leg of our journey. But biology, particularly human biology, is inherently variable. Those variations, at times very small and easily ignored, can prove important. They reflect significant differences that our most refined measurements fail to capture. Lock is also concerned that many physicians assume all numbers have equal certitude or validity. "People don't throw in specific gravity," Lock said, meaning that not all results should be given equal weight in making decisions. You learn which numbers to respect and which to discount.
Specialists in particular are known to demonstrate unwarranted clinical certainty. They have trained for so long that they begin too easily to rely on their vast knowledge and overlook the variability in human biology. This is why Lock's epistemological focus is so important. He is constantly trawling his mind, reminding himself that the situation is uncertain and acknowledging that necessary actions and decisions made with the best intentions may not apply to every patient.
It is very difficult to do what Lock does: always to reflect rather than tacitly act on scant precedent. In their book Professional Judgment: A Reader in Clinical Decision Making, Jack Dowie and Arthur Elstein assemble articles from experts with contrasting opinions about physician cognition and how to improve it. Many of the contributors are from the Bayesian school of decision making, invoking "expected utility theory." The theory holds that the utility of each possible outcome should be multiplied by its probability; the sum of these products is the expected utility of a course of action in the face of uncertainty. The calculation, based on axioms, has the doctor choose the path with the highest number emerging from the formula. Of course, much of what doctors like Lock deal with is unique; there is no set of published studies from which decision-analysis researchers can derive a probability.
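The arithmetic behind expected utility theory is simple enough to sketch. The following is only an illustration of the calculation the decision-analysis school prescribes; the treatment names, probabilities, and utility values are all invented for the example, not drawn from any clinical study.

```python
# Illustrative sketch of an expected-utility calculation.
# Each option lists (probability, utility) pairs for its possible outcomes;
# the expected utility is the probability-weighted sum of the utilities.

def expected_utility(outcomes):
    """Return sum of probability * utility over all possible outcomes."""
    return sum(p * u for p, u in outcomes)

# Hypothetical numbers: utilities on a 0-100 scale, probabilities sum to 1.
options = {
    "operate now":      [(0.70, 90), (0.30, 10)],  # 70% good recovery, 30% poor
    "manage medically": [(0.90, 60), (0.10, 30)],  # safer, but a lower ceiling
}

for name, outcomes in options.items():
    print(f"{name}: expected utility = {expected_utility(outcomes):.1f}")
# operate now: 0.70*90 + 0.30*10 = 66.0
# manage medically: 0.90*60 + 0.10*30 = 57.0
```

The formula would direct the doctor to operate, since 66.0 exceeds 57.0. The catch, as the text notes, is that in a unique case there is no database from which to derive the probabilities in the first place.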
Some experts contend that it is not only the unique case that makes a Bayesian approach untenable in many clinical settings. Donald A. Schön at MIT has written extensively about how professionals think, and he differs sharply from the decision-analysis camp, which relies on applied mathematics to model diagnosis and treatment—mathematics originally developed to optimize submarine searches and bomb tracking, and fostered by the advent of computers. The physician in the trenches, Schön emphasizes, faces "divergent situations where ... relying on a large database to assign probabilities to a certain diagnosis, or the outcome of a certain treatment, completely breaks down." Lock sees himself as a rational thinker, a physician who looks to logic and makes deductions. But he also understands the limits of that logic, an understanding gained from hard experience.
Schön could be describing Lock when he writes: "Because of some puzzling, troubling, interesting phenomena, a physician expresses uncertainty, takes the time to reflect, and allows himself to be vulnerable. Then he restructures the problem. This is the key to the art of dealing with situations of uncertainty, instability, uniqueness, and value conflict."
Yet it is not only in Lock's world where the specter of uncertainty shadows decisions. David M. Eddy, a professor of health policy at Duke University, says: "Uncertainty creeps into medical practice through every pore. Whether a physician is defining a disease, making a diagnosis, selecting a procedure, observing outcomes, assessing probabilities, assigning preferences, or putting it all together, he is walking on very slippery terrain. It is difficult for non-physicians, and for many physicians, to appreciate how complex these tasks are, how poorly we understand them, and how easy it is for honest people to come to different conclusions."
Jay Katz, a physician who teaches at Yale Law School, has examined the defenses that physicians deploy against awareness of uncertainty. He looks to the earlier work of Renée Fox, who identified three basic types of uncertainty. The first results from incomplete or imperfect mastery of available knowledge. No one can have at his command all skills and all knowledge of the lore of medicine. The second depends on limitations in current medical knowledge. There are innumerable questions to which no physician, however well trained, can provide answers. A third source of uncertainty derives from the first two: this is the difficulty in distinguishing between personal ignorance or ineptitude and the limitations of present medical knowledge. Fox observed physicians on a ward struggling with uncertainty, and their numerous psychological mechanisms to cope with it, including black humor, making bets about who would be right, and engaging in some degree of magical thinking to maintain their poise and an aura of competence in front of the patients while performing uncertain procedures.
Katz lumps Fox's three categories together under the rubric "disregard of uncertainty." He believes that when physicians shift from a theoretical discussion of medicine to its practical application, they do not acknowledge the uncertainty inherent in what they do. Katz argues that while uncertainty itself imposes a significant burden on physicians, the greater burden is "the obligation to keep these uncertainties in mind and acknowledge them to patients." He observes that "the denial of uncertainty, the proclivity to substitute certainty for uncertainty, is one of the most remarkable human psychological traits. It is both adaptive and maladaptive, and therefore both guides and misguides." As a law school professor, Katz knows that witnesses at scenes of accidents "unwittingly fill in their incomplete perceptions and recollections with 'data.'" There is a "pervasive and fateful human need to remain in control of one's internal and external worlds by seemingly understanding them, even at the expense of falsifying the data ... Physicians' denial of awareness of uncertainty serves similar purposes: it makes matters seem clearer, more understandable, and more certain than they are; it makes action possible. There are limits to living with uncertainty. It can paralyze action." This is a core reality of the practice of medicine, where—in the absence of certitude—decisions must be made.
Another defense against uncertainty is the culture of conformity and orthodoxy that begins in medical school. This is inherent in the apprentice process. For example, in Katz's first year of medical school, the faculty of one distinguished university hospital taught his class that thinning the blood with anticoagulants like heparin or Coumadin was the treatment of choice for a threatening pulmonary embolism and that using any other therapy constituted unprofessional conduct. At another equally distinguished hospital, the students were told that the only correct treatment was surgically tying off the inflamed veins. "One could use such an exposure to controversy as training for uncertainty." In neither setting, Katz recounts, was the divergence of opinion made a teaching exercise. "Nor were we encouraged to keep an open mind. In both we were educated for dogmatic certainty, for adopting one school of thought or the other, and for playing the game according to the venerable, but contradictory, rules that each institution sought to impose on staff, students, and patients." Katz's observation, made two decades ago, still holds true.
One would think that primary care physicians, such as general practitioners, internists, and pediatricians, grapple most with uncertainty. But Lock opens our eyes to the truth that specialization in medicine confers a false sense of certainty. Recall how Shira Stein was cared for by teams of specialists in one of the world's best pediatric hospitals. Yet a series of cognitive errors went unrecognized. Confirmation bias, the attention to data that support the presumed diagnosis and minimizing data that contradict it, was prominent. Specialists, like Shira's doctors in the previous chapter, are also susceptible to diagnosis momentum: once an authoritative senior physician has fixed a label to the problem, it usually stays firmly attached, because the specialist is usually right.
Specialization can persuade the expert that the treatments his fellow specialists prescribe are superior. For example, in the case of prostate cancer, surgeons, radiation therapists, and chemotherapists frequently disagree about the respective merits of their treatments, often without sufficiently doubting the effectiveness of their own approach. So a patient's chance first encounter with one specialist may guide that patient to choose the therapy of that discipline—but that is not a true choice. If instead the patient meets with several specialists and is informed of each approach without bias, he might choose another option.
Ideally, as Lock said, we could perform large clinical trials to remedy the differences in opinion among specialists. This seems simple but in fact ignores the complexity of human biology and patients' needs. Says David Eddy:
In theory, uncertainty could be managed if it were possible to conduct enough experiments under enough conditions, and observe the outcomes. Unfortunately, measuring the outcomes of medical procedures is one of the most difficult problems we face. The goal is to predict the use of a procedure in a particular case and its effects on that patient's health and welfare. Standing in the way are at least a half dozen major obstacles. The central problem is that there is a natural variation in the way people respond to a medical procedure. Take two people who, to the best of our ability to find such things, are identical in all important respects, submit them to the same operative procedure, and one will die on the operating table while the other will not. Because of this natural variation, we can only talk about the probabilities of various outcomes—the probability that a diagnostic test will be positive if the disease is present (sensitivity), the probability that a test would be negative if the disease is absent (specificity), the probability that a treatment will yield a certain result.
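Eddy's parenthetical definitions of sensitivity and specificity are conditional probabilities, and they can be made concrete with a small count-based sketch. The two-by-two counts below are invented purely for illustration; they stand in for the empirical data a real study would collect.

```python
# Hypothetical 2x2 counts for a diagnostic test (numbers invented for
# illustration): rows split patients by disease status, columns by test result.
true_pos, false_neg = 90, 10    # disease present: test positive / negative
false_pos, true_neg = 5, 95     # disease absent:  test positive / negative

# Sensitivity: probability the test is positive given the disease is present.
sensitivity = true_pos / (true_pos + false_neg)

# Specificity: probability the test is negative given the disease is absent.
specificity = true_neg / (true_neg + false_pos)

print(f"sensitivity = {sensitivity:.2f}")  # 0.90
print(f"specificity = {specificity:.2f}")  # 0.95
```

Even with such clean-looking numbers, Eddy's point stands: they are only probabilities, and natural variation means they cannot say which outcome any single patient will have.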