How to Hit a Moving Goal Post? Play a Different Game.
By: Vivian S. Lee, M.D., Ph.D., M.B.A. | Aug 19, 2013 10:00 AM

Keeping score in sports is easy. A touchdown is 6, a grand slam is 4 and a three-pointer is self-explanatory. Ranking professional teams in their divisions and leagues is just as straightforward. Whoever wins the Super Bowl, World Series or NBA Championship in a particular year is number 1.
We use scores and player statistics, metrics of performance, to measure a player’s or a team’s performance over a particular period of time. Then we compile these scores and statistics to derive ranking systems. The ranking systems are transparent—just pick up a copy of any major newspaper or go on-line. Teams and players are motivated; they know who they have to beat to move up, they know what statistics they have to improve (completed passes, batting average, etc.), and they can focus their days on improving those skills that are critical for improvement. In turn, athletes and their teams are rewarded accordingly—whether in championship prize money, lucrative endorsements or level of public adulation.
Health care rankings also matter. They matter a lot. Why? For one, we are undergoing a seismic shift in health care payment reform—shifting from a fee-for-service model to a value-based system, where our physicians and hospitals are being compensated for the value they deliver to patients. Measuring that value accurately, clearly and consistently is vital to successful transformation.
For most patients, health care options are local. But nationally recognized rankings in health care and academia are important to some consumers—including corporations like WalMart and Pepsico, who fly their employees across the country to get the best value for health care—as well as to prospective hires, medical students, philanthropists, and others.
As drivers of change, we should also acknowledge that like professionals in the sports industry, the professionals running medicine are highly competitive. They (we) are motivated in part by rankings. We want to win.
So why aren’t metrics and rankings in health care working?
The problem in health care is two-fold, both with the metrics themselves and with the translation of those metrics into ranking systems.
First, we’ve confused metrics with indicators. Presently, over one thousand quality measures are considered important by insurance companies, CMS, medical specialty groups, and others. This would be the equivalent of defining hundreds of performance metrics to rank individual players in the NFL. Rather than a fixed set of stats like touchdowns, yards, interceptions, and overall rating, a player’s performance would be measured by the number of pull-ups, push-ups, times for the 10 m sprint, the 100 m sprint, and so forth, as indicators of ability. All of that information would be distracting and somewhat subjective. To be useful, every ranking system in health care needs to select a reasonable number of the many available metrics—measures that serve as accurate reflections of the processes, performance, values, and culture desired.
Second is the issue of translation—what health care ranking systems do with these metrics. Presently, there are at least a dozen national organizations that publish ranking systems of “best hospitals” or the like. Not surprisingly, given the plethora of metrics available, they are proliferating—California has at least 12 of its own, for example.
All health care ranking systems use different formulas, most of which are kept secret. To make matters worse, some, including U.S. News and World Report, rely heavily on reputational surveys, which many in academic medicine consider no more than a popularity contest. In this age of electronic medical records and abundant public quality/safety metric reporting, we agree with Ezekiel J. Emanuel and Andrew Steinmetz that it is irresponsible to build ranking systems on reputational scores alone.
Further, as Elisabeth Rosenthal pointed out recently in her New York Times analysis, ranking systems that rely only on reputation move organizations in the wrong direction. They encourage investments in marketing and advertising campaigns that add little value to our health care systems and contribute to the estimated $750 billion wasted on health care per year in the U.S. (IOM report).
Is there hope for health care ranking systems?
There are a few remarkable examples of ranking systems that meet the criteria for driving the kind of change our country’s health care system needs. One of the best, though limited to academic medical centers, is the University HealthSystems Consortium (UHC), which includes 118 academic medical centers from across the country. UHC methodology evolves every year (to counteract gaming), and the metrics used to rank academic hospitals include robust measures of patient safety (e.g., mortality and other safety indicators), patient centeredness, quality/effectiveness, equity, and cost. Transparency is core to this system: all the data from the 118 academic medical center members are pooled and publicly accessible to benchmark and validate conclusions.
For the past decade, we at the University of Utah have focused on UHC metrics and rankings to drive institutional change. UHC ranked us #1 in quality in the nation in 2010—and we confess we touted that result, as well as being ranked in the top 10 each year since.
While academic medicine comprises 2% of acute health care and 23% of uncompensated care in the country, the lessons learned from this system are generalizable. We, as leaders in health care, must continue to focus on transformation. We will accelerate our progress by agreeing upon the right indicators and systems of benchmarking, rather than by chasing a moving goal post.
It’s time for health care metrics and rankings to start playing on the same field.