In August 2013, inspired by an article by Elizabeth Rosenthal in the New York Times, I wrote about the charade of hospital rankings. Ms. Rosenthal had reported that Dr. Nicholas Osborne of the University of Michigan had found a large discordance (a fancy technical term for lack of agreement) between the rankings published by U.S. News & World Report and HealthGrades. My point was more fundamental: I suggested that the rankings were not helpful to a person seeking care for a particular condition because they were not condition- or specialty-specific.
A broader discordance among ratings is reported in the March 2015 issue of Health Affairs. The authors include researchers well known for their work in health care quality and safety. Having analyzed the ratings produced by Consumer Reports, HealthGrades, the Leapfrog Group and U.S. News & World Report, the authors reach a conclusion elegantly summarized by the title: “National Hospital Ratings Systems Share Few Common Scores And May Generate Confusion Instead Of Clarity.” Exhibit 1 illustrates some important differences.
Exhibit 1 – Hospital rating systems used by four organizations

| Rating organization | Focus of ratings | Scoring system |
| --- | --- | --- |
| Consumer Reports | Safety – “hospital’s commitment to the safety of their patients” | 0–100 scale |
| HealthGrades | Consistent performance on risk-adjusted mortality and complication rates for 27 conditions and procedures | List of Top 50 and Top 100 hospitals (no ranking; no list of low performers) |
| Leapfrog Group | Safety – freedom from harm | Letter grade (A–F) |
| U.S. News & World Report | Best medical centers for the most difficult patients, based on 16 specialties, of which 12 are “data-driven,” not reputation driven | 0–100 for the hospital and each rated specialty |
It is evident that the differences among the scales alone – never mind the focus of the ratings and associated details – complicate comparisons. The challenge is compounded by differences in the criteria for selecting which hospitals are rated. When all is said and done, only 83 hospitals were rated by all four organizations. The absence of agreement was striking: no hospital was rated a top performer on all four lists, and only three hospitals were rated top performers on any three of the lists. No hospital was rated a low performer on all three of the lists that rate the full range of performance.
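To make this kind of overlap analysis concrete, here is a minimal sketch in Python, using invented hospital names and top-performer designations (not the study’s data), of how agreement across rating lists can be checked with simple set intersections:

```python
from itertools import combinations

# Hypothetical top-performer lists for illustration only; these are NOT
# the hospitals or designations analyzed in the Health Affairs article.
top_performers = {
    "Consumer Reports": {"Hospital A", "Hospital B", "Hospital C"},
    "HealthGrades": {"Hospital B", "Hospital D"},
    "Leapfrog Group": {"Hospital C", "Hospital D", "Hospital E"},
    "U.S. News & World Report": {"Hospital A", "Hospital D"},
}

# Hospitals named a top performer by all four raters (set intersection).
agreed_by_all = set.intersection(*top_performers.values())
print("Top performer on all four lists:", agreed_by_all or "none")

# Hospitals named a top performer by at least three of the four raters.
agreed_by_three = set()
for trio in combinations(top_performers.values(), 3):
    agreed_by_three |= set.intersection(*trio)
print("Top performer on at least three lists:", agreed_by_three or "none")
```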
The authors conclude that “[w]hile the lack of agreement among these rating systems is largely explained by their different foci and measures, these differences are likely unclear to most stakeholders. The complexity and opacity of the ratings is likely to cause confusion instead of driving patients and purchasers to higher-quality, safer care.” They suggest that these organizations should help the public and other stakeholders interpret their results and provide detailed information regarding the methods used to arrive at the ratings.
Although it is difficult to argue with these suggestions, I believe it is futile to create ratings at the hospital level. A hospital-level rating does not help answer the question I raised in my previous blog post on ratings: “where [do I] find the best care in my region for my condition, covered, of course, by my insurance plan?” Nor does it help hospitals target their improvement programs.
I believe that all stakeholders will remain ill served by most current rating methods until the medical specialties screw up their courage and develop meaningful, condition- and specialty-specific ratings. Because such ratings could be applied uniformly to all hospitals, useful comparisons could be drawn. Sadly, no specialty has followed the example of the Society of Thoracic Surgeons.
Perhaps it’s time for the prominent authors of the article and other leaders in health care quality and safety to use their influence to mobilize specialty societies to develop rating systems. Perhaps they could also encourage state regulators to apply pressure to specialties’ local chapters.