Imagine you’re responsible for planning a public health response to a virulent disease that is expected to kill 600 people. You have to choose between two management programs:
- If Program A is adopted, 200 people will be saved.
- If Program B is adopted, there is a one-third probability that all 600 people will be saved, and a two-thirds probability that no one will be saved.
Once you’ve made your choice, consider the same scenario, but with the following options:
- If Program C is adopted, 400 people will die.
- If Program D is adopted, there is a one-third probability that nobody will die, and a two-thirds probability that 600 people will die.
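Before reading on, it's worth confirming that the two pairs really are statistically equivalent: each gamble has exactly the same expected outcome as its sure-thing counterpart. A quick sketch of that arithmetic (the variable names are mine, for illustration only):

```python
from fractions import Fraction

# Probabilities and counts taken directly from the scenario above;
# Fraction keeps the arithmetic exact (no floating-point rounding).
ONE_THIRD = Fraction(1, 3)

# Program B: one-third chance all 600 are saved, two-thirds chance none are.
expected_saved_b = ONE_THIRD * 600 + (1 - ONE_THIRD) * 0

# Program D: one-third chance nobody dies, two-thirds chance all 600 die.
expected_deaths_d = ONE_THIRD * 0 + (1 - ONE_THIRD) * 600

print(expected_saved_b)   # 200 saved -- identical to Program A's sure thing
print(expected_deaths_d)  # 400 dead -- identical to Program C's sure thing
```

So, on the numbers alone, A is equivalent to B and C is equivalent to D; any difference in how people choose between them has to come from somewhere other than the arithmetic.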
This test was developed by psychologists Amos Tversky and Daniel Kahneman and published in Science in 1981. It’s also described in Kahneman’s remarkable 2011 book Thinking, Fast and Slow. Kahneman won the 2002 Nobel Prize in economics; Tversky had died in 1996, and Nobel Prizes are not awarded posthumously.
It’s a test of willingness to accept risk, and part of their larger body of work on decision-making behaviour. Each pair of options requires the participant to choose between a sure thing (options A and C) and a gamble (options B and D). Options A and C are factually identical (in each, 200 people live and 400 die), but differ in that one is expressed in positive terms (A) and the other in negative terms (C). If our risk-taking behaviour were consistent, the choices would be the same for the two pairs. However, given the options outlined above, a significant majority of respondents (72%) chose Program A over B in the first selection, but then rejected the sure thing, favouring Program D (78%) over C in the second.
The differences relate to how the options are expressed, or “framed”. In the first pair, the choices are designed to focus on saving people. In the second, the focus is on how many people will die. When the intention is to save lives, it appears we are risk averse. When the situation is seen to be bleak and inevitably fatal, we are much more willing to engage risk. Put another way, when given choices that result in fundamentally identical outcomes, most people will avoid risk when they perceive potential gain, but are willing to engage risk when confronted with potential losses. “Framing” of our choices is therefore hugely influential.
Kahneman and Tversky describe this as the “psychophysics of value” and distinguish two ways of thinking. “System 1” thinking is automatic, involuntary and intuitive. It’s also easy, requiring very little effort – “lazy” thinking, one might say. “System 2” thinking is computational, requiring attention, time and effort. We have to actively decide to undertake System 2 thinking. These are the “fast” and “slow” options referred to in the title of Kahneman’s book. The thought experiment described above shows System 1 thinking very much in action. World economies, stock markets, politics, advertising and consumerism all depend on how System 1 thinking can be promoted and manipulated. Hence, the Nobel Prize.
We don’t have to go far these days to find an example of this principle in action. Donald Trump’s bombastic rhetoric in the American presidential primaries seems, at least in part, an attempt to “frame” the dialogue in negative terms (American weakness, vulnerability and multiple perceived foreign threats). By doing so, he develops a sense of fear for the future that he hopes will encourage the electorate to engage a risky, non-conventional alternative (i.e. him). Since Mr. Trump has basically no experience in elected office, foreign affairs or any of the expected concerns of a potential president, he needs to steer people away from System 2 thinking. He’s been doing a remarkably, frighteningly, effective job to date.
If you think physicians are above such influences, you’d apparently be mistaken. Tversky and his colleagues carried out a study at Harvard Medical School in which physicians were given information about the expected outcomes of surgical versus radiotherapy approaches to lung cancer (McNeil et al. New England Journal of Medicine 1982; 306:1259-62). The five-year survival rates favoured surgery, but with greater short-term risk. Half the participating physicians received information framed in terms of survival (a one-month survival rate of 90%), whereas the others were given mortality rates (10% mortality in the first month). Guess what? Eighty-four percent of the Harvard physicians favoured surgery given the first description, compared with only 50% given the second. System 1 thinking dominates when we focus on bad outcomes.
So how is all this relevant to medical students and practicing physicians? There are clear implications for our understanding of the concept of informed consent. In Ontario, this comes under the Health Care Consent Act of 1996, which reads, in part:
Consent is not valid unless it is informed. A physician must provide a patient with information about the nature of the treatment, its expected benefits, its material risks and side effects, alternative courses of action and the likely consequences of not having the treatment.
A subsequent clause defines the term “information” as follows:
The information provided to a patient must be information that a reasonable person in the same circumstances would require in order to make a decision about the treatment.
All, seemingly, very reasonable. However, given what we now understand about the power of framing, making a critical choice clearly involves much more than the factual content of the information provided. The manner in which options are offered and the way outcomes are presented will strongly influence the response. Imagine an elderly patient with coronary and mitral valve disease who is highly symptomatic and considering surgery for both symptom relief and prolonged hospital-free survival. Consider the following three presentations, all of which are factually true:
- “There’s a 90% chance that you’ll survive and be home within two weeks.”
- “The surgery carries a 10% risk of dying either in the operating room or within two weeks after.”
- “There’s no chance you’ll be alive within two years without surgery.”
Do you think there’s a difference in how patients and their families will respond to those three statements?
Do you think the person delivering those options has capacity to manipulate the decision?
Is this a problem?
I think we’d all agree that the answer to the first two questions is unquestionably “yes”. The third is far more contentious.
Legislation is important and necessary to ensure protection of the public, but it will always be limited in its ability to penetrate the individual relationship between physician and patient. Its role is to balance the need to ensure rights that protect the vulnerable, while not handcuffing or interfering with the delivery of individual care. One can only respect the intention and great challenge of lawmakers who struggle to achieve that balance.
Physicians will therefore always bear a heavy responsibility in counselling patients about medical decisions. While it’s true that the patient and chosen advisors always have the “final say”, it’s both disingenuous and irresponsible for physicians to suggest that informed consent begins and ends with the provision of factual information. There’s no escaping the responsibility that goes with advising. There’s no shortcut. Truly informed consent can only be facilitated by someone who truly understands the patient’s full history, personal situation, wishes and ability to process information, and who also has the patient’s best interest as their primary goal. In our increasingly busy, hospital- and service-centred approach to acute care, all this is becoming more difficult to provide.
Are there solutions? Increasing involvement of primary care physicians or non-physician providers, advanced directives and enhanced access to all medical records will all help. At the heart of the matter, however, must be a recognition that the process of deciding to undertake a particular treatment or procedure is at least as important as its actual provision, and should be recognized as such.
Physicians can’t, and shouldn’t, avoid being influential in patient decision-making. Is that a problem? Not if that influence is rooted in a truly caring relationship, informed by a deep understanding of the patient’s full situation, wishes and aspirations.
Anthony J. Sanfilippo, MD, FRCP(C)
Undergraduate Medical Education