Periop Proceedings

    This blog is intended to encourage open dialogue and learning among members of the perioperative and periprocedural communities, and other interested parties, with the aim of envisioning and encouraging higher-performing systems.

About Our Bloggers

    Dan Krupka

    Dennis Fowler

    Warren Sandberg

Blog Archives

The charade of hospital rankings – Revisited

Posted at 5:32 pm on Mar 31, 2015

In August 2013, inspired by an article by Elisabeth Rosenthal in the New York Times, I wrote about the charade of hospital rankings. Ms. Rosenthal had reported that Dr. Nicholas Osborne of the University of Michigan had found a large discordance (a fancy technical term for lack of agreement) between the rankings published by U.S. News & World Report and HealthGrades. My point was more fundamental: I suggested that the rankings were not helpful to a person seeking care for a particular condition because they were not condition- or specialty-specific.

A broader discordance among ratings is reported in the March 2015 issue of Health Affairs. The authors include researchers well known for their work in health care quality and safety. Having analyzed the ratings produced by Consumer Reports, HealthGrades, the Leapfrog Group, and U.S. News & World Report, the authors reach a conclusion elegantly summarized by their title: “National Hospital Ratings Systems Share Few Common Scores And May Generate Confusion Instead Of Clarity.” Exhibit 1 illustrates some important differences.

Exhibit 1 – Hospital rating systems used by four organizations

Rating organization | Focus of ratings | Scoring system
Consumer Reports | Safety – a “hospital’s commitment to the safety of their patients” | 0–100 scale
HealthGrades | Consistent performance on risk-adjusted mortality and complication rates for 27 conditions and procedures | Lists of Top 50 and Top 100 hospitals (no ranking; no list of low performers)
Leapfrog Group | Safety – freedom from harm | Letter grade (A–F)
U.S. News & World Report | Best medical centers for the most difficult patients, based on 16 specialties, of which 12 are “data-driven,” not reputation-driven | 0–100 for the hospital and each rated specialty

It is evident that the differences among the scales alone – never mind the focus of the ratings and associated details – complicate the process of making comparisons. The challenge is compounded by differences in the criteria for selecting hospitals to be rated. When all is said and done, only 83 hospitals were rated by all four organizations. The absence of agreement was striking: no hospital was rated as a top performer on all four lists, and only three hospitals appeared among the top performers on any three lists. No hospital was rated as a low performer on all three of the lists that rate a full range of performance.
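To make the comparison problem concrete, here is a minimal sketch in Python – with invented hospitals, scores, and cutoffs, and emphatically not the Health Affairs authors’ methodology – of the normalization step any such analysis requires: each system’s scale must first be reduced to a common yes/no notion of “top performer” before agreement can even be measured.

    # Toy illustration (not the Health Affairs methodology): the hospitals
    # and cutoffs below are invented for illustration only.
    ratings = {
        "Hospital A": {"consumer_reports": 72, "leapfrog": "A", "usnews": 81},
        "Hospital B": {"consumer_reports": 55, "leapfrog": "C", "usnews": 88},
        "Hospital C": {"consumer_reports": 90, "leapfrog": "A", "usnews": 64},
    }

    def is_top(system: str, value) -> bool:
        """Map each system's scale to a yes/no 'top performer' flag."""
        if system == "consumer_reports":   # 0-100 safety score
            return value >= 70             # illustrative cutoff
        if system == "leapfrog":           # letter grade A-F
            return value == "A"
        if system == "usnews":             # 0-100 hospital score
            return value >= 80             # illustrative cutoff
        raise ValueError(f"unknown system: {system}")

    for hospital, scores in ratings.items():
        flags = {s: is_top(s, v) for s, v in scores.items()}
        verdict = "agreement" if len(set(flags.values())) == 1 else "discordant"
        print(hospital, flags, verdict)

Even in this toy example the cutoffs are arbitrary, which is precisely the point: two systems can “disagree” simply because they draw their lines in different places on different scales.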

The authors conclude that “[w]hile the lack of agreement among these rating systems is largely explained by their different foci and measures, these differences are likely unclear to most stakeholders. The complexity and opacity of the ratings is likely to cause confusion instead of driving patients and purchasers to higher-quality, safer care.” They suggest that these organizations should help the public and other stakeholders interpret their results and provide detailed information regarding the methods used to arrive at the ratings.

Although it is difficult to argue with these suggestions, I believe it is futile to create ratings at the hospital level. Such ratings do not help answer the question I raised in my previous blog post on ratings: “where [do I] find the best care in my region for my condition, covered, of course, by my insurance plan?” Nor do they help hospitals target their improvement programs.

I believe that all stakeholders will remain ill served by the majority of current rating methods until medical specialties screw up their courage to develop meaningful ratings. Because such ratings could be applied uniformly to all hospitals, useful comparisons could be drawn. Sadly, no specialty has followed the example of the Society of Thoracic Surgeons.

Perhaps it’s time for the prominent authors of the article and other leaders in health care quality and safety to use their influence to mobilize specialty societies to develop rating systems. Perhaps they could also encourage state regulators to apply pressure to specialties’ local chapters.

Posted in Quality and safety

Consumer Reports shoots itself in the foot

Posted at 1:39 pm on Sep 30, 2013

Just after I referred to hospital rankings as a charade, Consumer Reports jumped into the fray with ratings of 2,463 hospitals, including specific ratings for five common procedures: knee replacement, hip replacement, back surgery, coronary angioplasty, and carotid artery surgery (Consumer Reports, September 2013, pp. 31–41). The ratings are based on billing claims for Medicare patients; they are risk-adjusted; and they use the death of a patient undergoing elective surgery, as well as extended length of stay, as quality measures for 27 different procedures.

I am deeply disappointed. Here’s why: In September 2010, Consumer Reports took a giant step forward by making public the ratings for Coronary Artery Bypass Grafting (CABG) developed by the Society of Thoracic Surgeons (STS). This was correctly referred to as a watershed event because a medical society, not a third party, had developed the rating. The STS had come up with a three-star index for service/hospital combinations using clinical, not administrative, data. The index was based largely on outcomes – risk-adjusted mortality and morbidity – with a component that accounts for conformance to best practice. By contrast, Consumer Reports now takes a step back by (1) rating hospitals rather than a service at a hospital; (2) using administrative data rather than clinical data; and (3) publishing ratings not created by clinicians themselves. Why would any intelligent patient requiring a particular procedure rely on the Consumer Reports rating when making an important decision?
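For readers curious about the mechanics, here is a minimal sketch in Python of how a composite, risk-adjusted star index might be assembled. It is emphatically not the STS methodology – every weight, threshold, and function name below is invented for illustration – but it shows the general shape: observed-to-expected ratios for mortality and morbidity, blended with a best-practice component, mapped to stars.

    # A minimal sketch, not the STS methodology: all weights and thresholds
    # below are invented for illustration only.

    def oe_ratio(observed: int, expected: float) -> float:
        """Observed-to-expected event ratio; below 1.0 beats the risk model."""
        return observed / expected if expected > 0 else float("inf")

    def star_rating(deaths, expected_deaths, complications,
                    expected_complications, best_practice_adherence) -> int:
        """Composite: lower O/E ratios and higher adherence earn more stars."""
        # Invert the O/E ratios so that higher scores are better.
        mortality_score = max(0.0, 2.0 - oe_ratio(deaths, expected_deaths))
        morbidity_score = max(0.0, 2.0 - oe_ratio(complications,
                                                  expected_complications))
        composite = (0.4 * mortality_score
                     + 0.4 * morbidity_score
                     + 0.2 * best_practice_adherence * 2.0)  # adherence in [0, 1]
        if composite >= 1.3:
            return 3
        if composite >= 0.8:
            return 2
        return 1

    # Example: fewer deaths and complications than the risk model predicts,
    # with 95% adherence to best practice.
    print(star_rating(8, 10.0, 40, 50.0, 0.95))  # -> 3 under these toy thresholds

The point of the exercise is that the inputs are clinical – risk-adjusted outcomes and adherence to best practice – which is exactly what administrative billing data cannot supply.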

To give Consumer Reports its due, the article includes some excellent points:

  • They are open about their objectives: to shed light on hospital quality and to push for greater transparency.
  • They observe that current incentives give hospitals little reason to improve.
  • They indirectly criticize NSQIP (the American College of Surgeons’ National Surgical Quality Improvement Program) for not allowing its ratings to be made public.
  • They acknowledge that a hospital can excel at one procedure yet lag on others. For example, they mention that the Massachusetts General Hospital excels at CABG but performs less well on hip replacement.
  • They commend STS for making its ratings public.

Many readers who examine the Consumer Reports ratings will question the methodology behind them: many hospitals with excellent reputations, e.g., Memorial Sloan-Kettering Cancer Center, Johns Hopkins Hospital, the Cleveland Clinic Foundation, Mayo Clinic – St. Mary’s Hospital, and Virginia Mason Medical Center, receive an “average” rating. However, Consumer Reports does not even report whether it tried to calibrate its procedure ratings by comparing its scores for CABG with those created by the STS.

In summary, I am saddened because Consumer Reports, which is highly regarded for its product ratings, could have – in time – developed credibility in health care ratings. Instead, it jumped the gun and shot itself in the foot.

Posted in Quality and safety