
Assuring quality health care outcomes: lessons learned from car dealers?

Authors Dimsdale JE

Received 8 July 2016

Accepted for publication 2 November 2016

Published 7 January 2017 Volume 2017:8 Pages 1–6

DOI https://doi.org/10.2147/PROM.S116766

Checked for plagiarism Yes

Review by Single anonymous peer review

Peer reviewer comments 3

Editor who approved publication: Dr Liana Bruce



Joel E Dimsdale

Department of Psychiatry, University of California, San Diego, CA, USA

Abstract: Health care systems want quality but struggle to find the right tools because, typically, they track quality in only one or two ways. Because of the complexity of health care, high quality will emerge only when health care systems employ multiple approaches, including, importantly, patient-reported outcome perspectives. Sustained changes are unlikely to emerge in the absence of such multipronged interventions.

Keywords: quality assurance, patient-related outcomes, patient satisfaction, hospital accreditation

 

Introduction

Medicine has always focused on high quality of care, but the more we study it, the more we see the challenges. If quality is our destination, it seems that the journey is interminable. Despite journals and academic societies dedicated to this area, despite federal reports and data mining, decades of hospital accreditation reviews and lawsuits, high quality is as close as the nearest horizon; we just cannot seem to get there.

Part of the problem is that “quality” refers to diverse things: clinical outcome, process of care, societal access, and financial cost. This paper addresses the first two components of quality (outcome and process) by drawing on clinical vignettes that point out the problems with achieving quality. Because the quality assurance (QA) movement had its origins in the automotive industry, this commentary contrasts how quality is monitored by health care systems and car dealers.

Vignettes #1 and #2 got me thinking (Box 1). How well do we do with our patients? What are our thresholds for satisfactory clinical care? Do we ever communicate to our patients that anything less than “completely satisfied” is not “satisfactory” to us? Why can we not do this better? Can car dealerships teach us to practice better medicine? Is it possible that “quality” is tracked more precisely in industrial settings than in our hospitals?

Box 1 Hospitals vs car dealers, part 1.

This paper describes contemporary efforts to improve quality of care and their limitations. It is written from the perspective of a physician and a sometimes patient, from decades of practice at diverse hospitals, and as a longtime member of committees dealing with QA, risk management, and credentials.

Practicing medicine is far more complicated than repairing cars; there is more uncertainty in diagnosing and treating medical issues than fixing a suspension. Furthermore, for most cars, replacement parts are available, and the cars do not have to return for multiple follow-up service calls. In medicine, however, we frequently cannot fix patients (we sometimes do not know what is wrong, and we certainly do not have “the parts”). That said, how is quality of care monitored in medicine?

Ratings of quality such as those performed by US News & World Report rely heavily on peer reputation, whereas ratings by agencies such as the Joint Commission rely on process measures to track quality of routine care (eg, whether the charts are up-to-date). Do these ratings of quality do the job?

It is interesting that much of Deming’s landmark work on quality in large organizations focused on contrasting American and Japanese automobile manufacturing practices. He “wrote the book” on quality; it is fair to say that management executives practically genuflect to his writings.1 However, genuflection does not guarantee behavior outside of church, and when it comes to health care, one must assume that leaders have not fully absorbed his work.

Was patient Vignette #1 an anomaly (Box 1)? Consider the following three vignettes, which point out other types of quality concerns (Box 2).

Box 2 Hospital nuisances not reported.

Nothing terrible happened in these instances. The outcomes were favorable, but the processes were deficient. It is safe to say, however, that if this were a car dealership and not a hospital, the dealership would lose the customer. If the hospital were seriously interested in delivering excellent care and in containing costs, it would want to know about such things and improve its efficiency. We have an enormous QA industry in health care, but there is room for improvement. Can car dealerships teach us a thing or two?

This paper outlines the benefits and pitfalls of the diverse approaches used to track quality. All too often, organizations focus on only one or two of these methods of tracking quality. True and lasting improvements in quality are unlikely to emerge unless all of these approaches are pursued.

Major negative reinforcers (aka punishment)

The earliest and most pervasive efforts at tracking quality involve responding to huge negative reinforcers. Fortunately, sentinel events such as operating on the wrong limb are rare.2 They are “never events”, events that never should have happened, but they do occur, and thus the Centers for Medicare and Medicaid Services and other insurers deny reimbursement and/or impose steep fines when these “never events” are discovered.

Hospitals expend enormous efforts preparing for site visits by the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) and other regulatory agencies. While preparations for these evaluations can be helpful in improving quality of care, the site visits are typically on a 3-year cycle. Thus, they elicit Herculean but intermittent efforts to address quality of care. Preparing for such visits is akin to cramming for a final exam. Hospitals quiz their employees: “Quick, what is our mission statement? What is a ‘code Adam?’ Define the meaning of the acronym RACE.” These accreditation activities focus mainly on documentation, which means that institutions deploy massive efforts on policies, procedures, and charting, as if these factors were the most important metrics of quality of care. Preparations for accreditation visits can be so distracting and expensive that some have wondered if they actually detract from quality care.3 After all, who got the quality care – the patient or the chart?

Lawsuits are also part of the mix of negative reinforcers, although they reflect only a minuscule sampling of patients’ care. Another limitation of lawsuits as a driver for quality of care is that health care systems expend untold dollars on defensive medicine in an effort to forestall liability, thereby detracting from quality care.4 On the plus side, effective risk management programs not only respond to lawsuits but also examine whether hospital procedures should be adjusted to forestall future lawsuits.

Health care systems shun adverse publicity regarding quality deficiencies. These relatively rare but profoundly negative reinforcing events can lead to public humiliation and loss of market share. Such publicity is costly in terms of money and reputation, and because it is so costly, hospitals track some metrics of quality. The problem is that these QA efforts are only the tip of the iceberg, and they do not address more systemic problems.5 Addressing only major deficiencies is a far cry from asking the customer: “Are you completely satisfied?”

Tracking patient complaints

Another way to improve quality involves responding to patient complaints. The limitation of this approach is that very few patients complain.6 Those who do are frequently construed as “complainers”, and as a result, their information is treated dismissively. There is a whole dimension of complaining behavior that is readily observable in everyday contexts. For instance, some people routinely send back food at restaurants, while others never complain about atrocious food. Many people find it difficult to strike the right balance between acquiescence and assertiveness. The same phenomenon is readily apparent in health care settings. Some patients cannot be satisfied, no matter what they are offered; the system treats them as nuisances. Other patients endure things that are unacceptable, and because they do not speak up, we cannot correct the problems.

Hospitals do set up “complaint offices” to review such matters, but they focus on ad hoc responses to a specific complaint. Their principal focus is to placate the patient or family rather than to modify the care delivery for future patients.

Many very telling complaints are never reported to the hospital (Vignettes #1, #3, #4, and #5, Boxes 1 and 2) but are instead relayed to friends and family or posted on Internet blogs. Patients do not want to complain lest they get someone in trouble or the staff retaliate at their next visit. They do not want to be “a bother”.

The venerable institution of the comment card is widely employed in multiple industries, including hospitals. All too often, however, the cards are buried in the admission paperwork or displayed in an inaccessible location on the ward, and they are rarely completed. Patients say they were unaware of such programs, did not want to be a bother, or felt that their comments would not be taken seriously.

Internet postings comprise the 21st century equivalent of comment cards. While sites such as Yelp or Angie’s List invite comments, there is no guarantee that the poster is an “authentic” customer; he might be the restaurant owner himself or an underhanded competitor. Some patrons rave about how good one dish is, while others harshly criticize it. In other words, such anonymous postings have problems with both veracity and the idiosyncrasies of the reviewer. They are at their best when multiple reviews are available, but, as mentioned above, the “accuracy” of these postings is always in doubt, and it is hard to get a nuanced evaluation beyond a “thumbs up” or “thumbs down” opinion.7

QA monitoring

The QA movement follows closely on the heels of similar practices in the automotive industry. This effort can be enormously productive because it brings together many segments of the health care team. Instead of monitoring one’s own behavior, a group now performs that monitoring, and there is a commitment on the part of the group to change behavior in the health care system accordingly.

However, there are downsides to QA efforts. QA committees can readily be derailed and sabotaged. The QA ideology, with its focus on monitoring what is measurable, can be trivialized. Instead of focusing on something important, committees frequently measure something that is both easy to measure and clinically insignificant or noncontroversial. Health care systems brag about the number of QA projects they have undertaken, as if that number defined quality.

It does not have to be that way. Coordinated efforts to observe, track, and change behaviors (eg, handwashing for infection control) are real success stories of the QA effort. They have targeted truly important issues and have led to lifesaving changes in policies. On the other hand, QA committees all too often track feckless quality indicators with marginal beneficial results. It is hard to imagine that QA committees will suffice. After all, the Inspector General’s Office study of 130,000 patients found that hospital employees report only 15% of the errors and accidents that harm hospitalized patients.8

Industry regularly sets up “tiger teams” to address quality improvement aggressively. The sad reality is that medicine’s tiger teams are more reactive than proactive. If hospital QA committees are tiger teams, they are usually composed of toothless, tired tigers.

Mining databases

Large health care agencies examine medical records and billing data in order to make inferences about quality. Variables such as the number of coronary artery bypass graft procedures/year, length of stay, and mortality are obtainable from such databases.9 However, their interpretation is difficult. Did hospital X have a high mortality rate because their team was inexperienced, or were they attracting sicker patients? While proxy measures for such complex information are obtainable, the fact is that it is difficult to abstract any but the coarsest information from reviewing large numbers of health records. That information must of necessity be both concrete and quantitative (eg, dollars spent on a procedure, whether the patient was readmitted within 48 hours). While these “numbers” can provide an indication that something is amiss, they do not guarantee that a hospital identifies the source of the problem.

Secret shopper

Department stores sometimes employ “secret shoppers” to assess the quality of service from the customer’s perspective. Similarly, some health care systems hire people to pretend to be patients and see how well the system responds.10 How much time did it take for someone to answer the phone? Were they courteous? How long does it take to schedule an appointment? While this information can be helpful, there are limits to how far a secret shopper can probe the system and how appropriate it is to increase the workload of employees who have to interact with these pseudo patients. A secret shopper approach may have helped the patient in Vignette #6 (Box 3), but the hospital has to have some inkling that there is a problem with their patient scheduling before they can mount a secret shopper analysis. How do they find out if patients do not speak up?

Box 3 Hospitals vs car dealers, part 2.

Positive reinforcers

Increasingly, health care systems reinforce providers for meeting certain benchmarks for quality. Behaviorally, such approaches make good sense, but they are limited by a number of factors.11 Does the reinforcement reach the right person? For example, in patient Vignette #1 (Box 1), if fighting postoperative infection is the goal, which person should get the bonus and recognition for fixing the plumbing – the surgeon, the custodian, or the hospital CEO?

Another limitation of monetary incentives is that the dollar amount of the reinforcement may be trivial. Performance awards are certainly appreciated, but how large should an award be to change behavior? Is a quality performance award effective if it amounts to only a small percentage of salary? There is a joke about a man who adds a 15% tip to his lunch bill and is surprised by the waiter’s brusque response: “Do you know by the time I split your tip with the busboy, the maître d’, and the kitchen staff, I get exactly 35 cents. You can keep your damn tip!”

My economist friends say, “If you fix the incentives, the quality will follow.” I suspect they are right, but I worry. I worry that the incentives are too small and come too late to genuinely change behavior. In a system as complex as a hospital, the problem is usually with a weak link. How do we get hospitals to pay attention to weak links? How do the incentives get to the right people?

Seeking patient reports

Many health care facilities track quality with various generic anonymous questionnaires (eg, Press-Ganey inventories).12 By relying on a common metric, hospitals can compare themselves with peers in terms of patient satisfaction. There are two limitations to this approach. One is a sampling problem: it is hard to know whether the patients who respond to the surveys are representative of the patients treated. More problematic is that generic questionnaires focus on gross measures (“wait time in the examining room”, “satisfaction with nursing care”). How does a patient rate satisfaction when she had two terrific nurses, three satisfactory nurses, and one terrible nurse? Does one number convey this information in any useful manner that can assist a hospital in improving care? Does “wait time” equate with the doctor’s ability to listen and to treat? Patient reports of global satisfaction are helpful, but their relationship to clinical outcomes is surprisingly tenuous.13

Finally, there is the car dealership model. Can we take the time to survey patients by phone a day or two after discharge and ask them questions that will address all aspects of their hospital experience? This approach is considerably more detailed than a simple analysis of satisfaction, and thus there is considerable pushback against it.14 Nonetheless, useful information is likely to emerge from semistructured interviews that include questions like:

  • Was there anything you especially liked or were impressed by?
  • Was there anything you did not like or thought should be improved?
  • What was the admissions experience like?
  • What was the nursing care like?
  • What were the doctors like?
  • Were you completely satisfied? If not, we want to know about it.

Note that these questions are more open-ended and more likely to reveal specific deficiencies. Obviously, institutions need to look for patterns rather than isolated responses, which is why data mining is also important.

When I discuss this idea with hospital administrators, I get interesting pushback. “This would be a lot of work. It would be costly. It would violate HIPAA (the Health Insurance Portability and Accountability Act). We would have trouble with the unions.” Hospitals may be resistant to this approach because it requires time and careful interviewing. Furthermore, the ideal interviewer would be a physician or nurse who was regarded as an independent reviewer. This individual would be charged with filtering the patient reports, looking for common threads, and feeding back the information to the health care system. Given what is at stake in medical care, a 5- to 10-minute patient interview after each discharge (or even after every “X” discharges) would seem eminently justifiable.

The costs for such programs would be determined by the size of the health care system and its responsiveness to feedback. Some health plans might interview a smaller percentage of their discharged patients. Other plans might interview all discharged patients and/or meet with patients before their discharge. The personal meeting would probably yield a larger percentage of participating patients; it may also encourage patients to keep better track of variations in quality.

Would health care systems act upon the information they receive? I went grocery shopping on Memorial Day. The checkout clerk asked, “Did you find everything you needed?” Actually, I did not; the produce section was half empty. When I mentioned this to the clerk, she said, “Oh, you should never shop on a Monday holiday; we do not get produce shipments on holidays.” In other words, the clerk felt it her duty to ask about the shopping experience but did not try to do anything about it.

Tracking quality care is not for the faint of heart; achieving high-quality health care requires an institutional culture change. Long-practicing physicians and nurses make ideal interviewers: they understand the nuances of their own health centers, know whom to call to remedy a problem, and are less likely to be perceived as “JCAHO-like outsiders”.

Regardless of the cost of the program or the size of the health center, these interviewers would feed back to the system the sorts of problems encountered. In Vignette #1 (Box 1), for instance, the interviewer would review with the nursing supervisor how training on the patient-controlled analgesia (PCA) devices might be improved and would talk with housekeeping or facilities engineering about how to maintain shower drains. If the interviewer heard of similar problems subsequently, he or she would escalate the information higher in the chain of command.

Conclusion

In the US, health care expenditures currently amount to 17% of gross domestic product. The fact that some of these expenditures do not even yield quality outcomes is deeply troubling.15 If health care systems want traction in improving quality of care, they must commit to tracking it constantly, from multiple vantage points; this process of tracking quality and moving toward improved patient care never ends. The odds of any single quality failure may be small, but failures emerge from the concatenation of multiple interactions and processes. Searching for problems proactively is no more a panacea than the other quality management efforts described in this paper. However, quality is more likely to emerge when those efforts include seeking comments from our patients.

In health care, we could do far worse than to imitate the practices of our neighborhood car dealer (Vignette #7, Box 3 and Vignette #8, Box 4). The dealer feels it is important to track quality for a car repair. Do we in medicine feel the same way?

Box 4 Car dealers' quest for quality.

Acknowledgments

The author thanks the many colleagues whose vignettes about dilemmas of improving quality shaped this paper, and the anonymous reviewers for their suggestions.

Disclosure

The author reports no conflicts of interest in this work.

References

1. Deming WE. Out of the Crisis. Cambridge, MA: MIT Press; 1982.

2. Johnson F, Logsdon P, Fournier K, Fisher S. Switch for safety: perioperative hand-off tools. AORN J. 2013;98:494–507.

3. Morey T, Sappenfield J, Gravenstein N, Rice M. Joint commission and regulatory fatigue/weakness/overabundance/distraction: clinical context matters. Anesth Analg. 2015;121:394–396.

4. Jena A, Schoemaker L, Bhattacharya J, Seabury S. Physician spending and subsequent risk of malpractice claims: an observational study. BMJ. 2015;351:h5516.

5. Bloche M. Scandal as a sentinel event – recognizing hidden cost-quality tradeoffs. N Engl J Med. 2016;374:1001–1003.

6. Hemenway D, Killen A. Complainers and non-complainers. J Ambul Care Manage. 1989;12:19–27.

7. Roberts D. Yelp’s fake review problem. Fortune. September 26, 2013.

8. Department of Health and Human Services, Office of the Inspector General. Hospital incident reporting systems do not capture most patient harm. OEI-06-09-00091; January 5, 2012.

9. Calderwood M, Kleinman K, Huang S, Murphy M, Yokoe D, Platt R. Surgical site infections: volume-outcome relationship and year-to-year stability of performance rankings. Med Care. 2017;55(1):79–85.

10. Bryant A, Levi E. Abortion misinformation from crisis pregnancy centers in North Carolina. Contraception. 2012;86:752–756.

11. Caveney B. Pay-for-performance incentives: Holy Grail or Sippy Cup? N C Med J. 2016;77:265–268.

12. Press Ganey Associates, Inc. Patient experience – patient-centered care. Available from: www.pressganey.com. Accessed October 7, 2016.

13. Wright J, Tergas A, Ananth C, et al. Relationship between surgical oncologic outcomes and publically reported hospital quality and satisfaction measures. J Natl Cancer Inst. 2015;107(3):dju409.

14. Manary M, Boulding W, Staelin R, Glickman S. The patient experience and health outcomes. N Engl J Med. 2013;368:201–203.

15. Moore B, Levit K, Elixhauser A. Costs for hospital stays in the United States, 2012. Statistical Brief #181. Healthcare Cost and Utilization Project, Agency for Healthcare Research and Quality; October 2014.
