The alarming rise in health care costs haunts our society. The United States now spends $2.6 trillion per year on health care,1 and the spiraling costs are placing unsustainable burdens on employers and workers, Medicare and Medicaid, state and local governments, and American families. A growing proportion of Americans are now forgoing health care to pay for other household needs or are facing bankruptcy.2 A variety of strategies have been proposed to slow medical cost inflation, such as realigning financial incentives to discourage costly procedures, establishing accountable care organizations, adopting the patient-centered medical home, and reforming malpractice law. Evidence that any of these ideas will bend the cost curve remains limited.
A more basic but possibly neglected strategy for reducing demand for health services is to confront unrealistic beliefs about their benefits. Health care expenditures ultimately begin with a decision to use the service, a decision that may rest on false expectations—among patients, clinicians, or both. Removing the need for the service by correcting such misperceptions may curb costs more effectively than many current reforms. Financial incentives are important, but they are weak when pitted against core beliefs. If patients and clinicians widely hold that a procedure is life-saving and harmless, no reform is likely to curb demand until those misconceptions are addressed.
Studies suggest that patients, clinicians, and society often hold unrealistic expectations about the effectiveness of tests and treatments. Two articles in this issue add to that literature. In New Zealand, Hudson et al3 surveyed 977 primary care patients and found that many overestimated the benefits of cancer screening and chemopreventive medications. The minimum benefit respondents said they would require before deeming screening acceptable was lower than the tests' known benefit. The survey had a modest sample size and low response rate (36%), and its findings might not be fully applicable to other countries, but US studies have reported a similar problem. For example, a variety of studies document Americans' appetite for procedures of dubious effectiveness and their overestimation of benefits.4,5 Many Americans underestimate the probability of harms and are quite willing to receive false-positive results and unnecessary biopsies for the chance to detect cancer.6,7 Public complacency about the safety of health care is only occasionally shaken, as when a conspicuous tragedy or disclosures of industry wrongdoing draw attention to specific dangers.
Physicians are not immune to false beliefs about clinical efficacy or complication rates.8 Correcting such misperceptions has always been part of the impetus for the evidence-based medicine movement and its promulgation of systematic evidence reviews, practice guidelines, and other tools that present the facts on benefits, safety, and scientific uncertainties. Even these tools, however, can reflect the misconceptions of those who produce them. The specialists who serve on expert panels derive much of their clinical case knowledge from the patients with advanced disease who fill their clinics. Having seen the worst of the worst, they are less sympathetic to expressions of concern about the potential harms of interventions or imperfections in efficacy studies.9 Whereas epidemiologists consider the population denominator to put the numerator in perspective, the world of specialists is confined to the numerator, giving them a skewed basis for judging the population prevalence of diseases or benefit-risk ratios. Were this not enough, the preeminent scientists who often serve on guideline panels bring additional biases, such as being the authors of key studies under review or having financial ties to industry.10
Guideline panels composed of generalists tend to produce recommendations that are more conservative than those dominated by experts,11,12 in part because they are chosen for their skills in critical appraisal and because they have little to gain from the recommendations. In an essay in this issue, Hoffman et al cite this phenomenon in explaining why guideline panels dominated by cancer specialists advocate prostate cancer screening beginning at age 40 years, despite evidence that the lifetime benefit of an earlier starting age is 1 averted death per 1,000 men.13 Even guideline bodies harbor unrealistic expectations of efficacy.
A seemingly simple solution is to arm patients and clinicians with more realistic data, the very motive behind the production of evidence-based decision support tools for clinicians and decision aids for patients. Large initiatives in comparative effectiveness research are now underway to assemble such data for patients,14 and research in decision science and risk communication is seeking the best formats and framing for explaining likely outcomes and scientific uncertainty.15 Information technology and innovative infographics are helping to address challenges with health and numeric literacy.
These important efforts can help only to the extent that people make choices through the cognitive act of weighing benefits, risks, and scientific uncertainty. In real life, decisions are shaped by affective influences: beliefs and fears; vulnerability; faith and trust; long-standing routines; personal experiences; messages conveyed by advertising and media; and the advice, testimonials, and transmitted knowledge imparted by trusted sources. Patients' explanatory models of illness may clash with scientific data but represent a form of “evidence” that must be respected. Fact sheets and bar charts exert marginal influence if they ignore this larger context.
If people are widely convinced that a screening test or drug is beneficial, confronting these beliefs can, if anything, engender suspicion about the messenger's veracity and motives. Whether the messenger is one's physician, a health plan, or a government task force, attempts to set more realistic expectations about benefits, risks, and scientific validity are often taken as insensitivity to suffering, discrimination, or a pretext for cutting costs, rationing health care, or threatening personal autonomy. In today's media environment, the political narrative these ideas feed lends itself to viral dissemination of distorted characterizations through websites, talk shows, blogs, and social networks. Ours is an era of “death panel” debates in which facts are swept aside by political agendas and talking points. It is an increasingly difficult environment for the American public to receive, let alone absorb, undistorted scientific information from reputable bodies.
Unrealistic expectations therefore persist, surviving not only on misinformation but also by serving other purposes. For example, false beliefs meet the psychological needs of patients for hope and safety, as well as for action, agency, and a sense of control. They enable clinicians to feel they are making a difference; even physicians who know better order unnecessary tests to please their patients.16 False expectations fuel market demand for products, industries, and health delivery systems and can be fomented by misleading advertising. Confronting these expectations can not only dash hopes but also threaten profits, shareholders, clinical practices, industries, legislation, and political careers.
But good news on the horizon hints at a shift in societal attitudes. Increasingly, overutilization of medical services, overdiagnosis, and profligate use of screening tests are being covered by major newspapers and magazines17-20 and are the subject of books in the popular press.21,22 The American Cancer Society has adopted more rigorous methods for developing screening guidelines, and in broadcast appearances its chief medical officer has openly discussed the limitations of screening.23,24 Although the US Preventive Services Task Force recommendations about the starting age for mammography famously sparked public outrage in 2009, the same group's recommendations against prostate-specific antigen screening met with softer criticism when proposed in 2011, and its 2012 recommendation to delay the starting age and reduce the frequency of cervical cancer screening—first issued by the American College of Obstetricians and Gynecologists25—raised no tempest.
Equally encouraging is the Choosing Wisely Campaign, organized by the American Board of Internal Medicine Foundation.26 In April 2012, 9 medical specialty societies—from primary care to oncology and nuclear cardiology—each released a list of 5 tests or procedures that their specialists commonly use and “whose necessity should be questioned and discussed.”27 Consumer Reports and 11 other organizations are helping these medical groups relay these messages to large audiences in consumer-friendly language.28 For example, the material on antibiotics for sinusitis, cobranded by Consumer Reports and the American Academy of Family Physicians, uses plain-spoken headings: “the drugs usually don't help,” “they can pose risks,” and “they're usually a waste of money.”29 Organized medicine appears to be embracing this movement: the foundation's website now lists 25 specialty societies that have joined the initiative and will be releasing their own lists of questionable procedures in late 2012 or 2013.
Time will tell whether such efforts succeed and whether the medical profession will emerge as the change agent that brings more realistic expectations to patient care. Regardless of whether physicians or other stakeholders ultimately take the lead, the power of this strategy should not be overlooked by government, businesses, or others who urgently seek solutions to the health care crisis. The best way to reduce wasteful spending is to convince the purchaser that the product is not worth buying. It is a straightforward economic argument, but it can also save lives.
Footnotes
- Conflicts of interest: none reported.
- To read or post commentaries in response to this article, see it online at http://www.annfammed.org/content/10/6/491.
- Received for publication September 16, 2012.
- Accepted for publication September 27, 2012.
- © 2012 Annals of Family Medicine, Inc.