Nov 14, 2014 10:00 PM
One of the things that piqued my interest about coming to Utah 3 years ago was a book by Richard Bohmer called "Designing Care." There's a section in that book about Intermountain Healthcare and its "culture of learning" organization—a key component of health care reform success and something our own Wendy Chapman is championing at UUHS. When I was at New York University Medical Center I purchased a copy of the book for everyone in senior leadership, and was trying to arrange a field trip to Utah the week that a headhunter called about the job here at the University of Utah.
Intermountain's success at building a culture of learning owes a great deal to Dr. Brent James, the health system's Chief Quality Officer and an early champion of reducing variation––or differences in how doctors practice––as a means to improve the quality of health care. Provocative and hard for some physicians to accept at first, his ideas are now widely embraced by health industry leaders around the globe.
An alumnus of the University of Utah four times over, including having received his medical degree here in 1978, Dr. James is on the U.’s faculty in the departments of Family and Preventive Medicine and Biomedical Informatics Research. Recently, he agreed to share insights and tales from the trenches with some of our medical students as a Dean’s Round Table guest. I was able to ask him about his career path, which he describes as “a series of accidents,” and his predictions for the future of health care. What ensued was a fascinating discussion about transparency in health care, its risks and benefits.
Q: How did you get interested in quality and safety?
A: It was a complete accident, in truth. I was trained as a clinical researcher. I come from the Dana Farber Cancer Institute, the largest multi-center clinical trials group in the world. I was over GI tumors. I’d typically have about five or six big randomized trials running at any given point in time. I did that for years [and]…suddenly became a single parent and moved back to Salt Lake, primarily to get closer to family. I took the job at Intermountain pretty much sight unseen. I had no idea. I mean, I was a fairly traditional academic researcher and I got here and discovered that I’d fallen in among administrators. There was a guy named Steve Busboom, who was President of Finance at Intermountain, and he had heard about the variation literature––variation in clinical practice. …Steve wanted to drive it a level deeper. Up to that point everybody had looked at hospitalization rates––how often were patients treated for a particular problem with a particular treatment––and it showed massive variation geographically. Steve wanted to say, “OK, imagine we’ve got these patients in our facilities, in our hospitals, do we treat them the same for the same disease?” I jokingly say, all these years later, that Steve has forgiven me for what I did to him. What was going to be simple electronic data analysis––oh, at the time this was 1987––cost about $50,000. I insisted that we track every co-morbid illness on admission to the hospital, and individually stage them [including data] on every complication, long-term outcomes, etc.
Q: You didn’t have electronic medical records back then, so these all had to be manually recorded for every patient?
A: Yes, manually recorded. What we found was massive variation. Eventually we carried it to the level of the individual physician.
Q: Give us some examples of variations in practice, and tell us why it matters.
A: One of the first things we studied was a surgical procedure called a transurethral prostatectomy, [removal of part of the prostate gland through the urethra]. We looked at 16 urologists doing high volumes of this surgery and asked how long [was each patient in the operating room]? And we looked at the grams of prostate tissue removed. The fellow who was removing the most was removing 42 grams of prostate tissue and [those] at the median were removing 13 grams. So 13 compared to 42 on very carefully balanced patients, who, so far as we could tell, were identical. Surgical times ranged from a low of about 38 minutes to a high of 90 minutes. What we really did was take the tools of clinical research, swing them over, and focus them on care delivery performance. Up until that point, everyone assumed if you were a trained physician, you were a master. The fact that you did it meant it was quality. What we were doing was completely destroying that idea.
Q: How did the urologists react?
A: Their first response was they challenged the data. My response was, “I’ve never seen a perfect set of data. Let’s go through these cases, your cases, doctor, and see if we’ve got it wrong or right.” After a little bit of that we decided the data were pretty accurate. They [also] challenged my access to the data. …No one is really enthusiastic about getting examined, in fairness. But that didn’t stand up very well, as you might expect. Truth is that the way I approached it was I said, “You know, every physician commits to study the care they give to their patients and the outcomes that occur with an aim to improve your care.” That’s why we call it the practice of medicine. It’s part of our core ethical commitment. I said, “That’s what this is all about. I’m not interested in whether you’re a good doctor or a bad doctor. I’m not sure that I could tell statistically. On the other hand, you guys are really different from each other. Why?” What that does is stimulate thought and creativity. …Some of the administrators wanted to say: Good doctor, bad doctor. But mathematically, statistically, your ability to do that is extremely limited.
Q: Why is that? If you have outcomes data, why can’t you rank?
A: There are a whole series of problems. David Eddy, the guy who was the father of evidence-based medicine, did a lot of work in this area. He took the example of prenatal care for identifying birth outcomes. He identified in the literature all the factors that contribute to good birth outcomes; there are seven big ones. Prenatal care is one of them. It’s roughly equivalent to the other six. But that’s the one that physicians could control; it’s small, it’s about 5 percent. The trouble was, if you then modeled it and asked, of all the science we know for predicting a good birth outcome, what portion of the variability in outcomes does [all of that science] explain? It’s about 25 percent. Seventy-five percent of the factors that determine good birth outcome are unknown to current medical science. And that means if I do comparisons, I’m making the implicit assumption that those 75 percent are equivalent across whomever I’m assessing. [In addition] there are mathematical problems; you get these very wide confidence intervals. And there’s a set of documentation problems about did they measure it, did they record it accurately, etc.
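The statistical point about wide confidence intervals can be illustrated with a short simulation (the numbers and scenario here are invented for illustration, not from the interview): give a group of physicians the exact same true complication rate, observe each over a modest number of cases, and a ranking by observed rate will still produce an apparent "best" and "worst" doctor, even though every difference is pure noise.

```python
import random

random.seed(42)

# Hypothetical illustration: 16 physicians with the SAME true complication
# rate (10%), each observed over 60 cases. All figures are invented for
# this sketch.
TRUE_RATE = 0.10
N_CASES = 60
N_PHYSICIANS = 16

observed = []
for doc in range(N_PHYSICIANS):
    # Simulate each case as an independent coin flip at the true rate.
    complications = sum(random.random() < TRUE_RATE for _ in range(N_CASES))
    rate = complications / N_CASES
    # Approximate 95% confidence interval (normal approximation).
    se = (rate * (1 - rate) / N_CASES) ** 0.5
    observed.append((doc, rate, rate - 1.96 * se, rate + 1.96 * se))

# Ranking by observed rate still crowns a "best" and a "worst" doctor...
observed.sort(key=lambda t: t[1])
best, worst = observed[0], observed[-1]
print(f"'best'  doctor: rate={best[1]:.2f}, 95% CI=({best[2]:.2f}, {best[3]:.2f})")
print(f"'worst' doctor: rate={worst[1]:.2f}, 95% CI=({worst[2]:.2f}, {worst[3]:.2f})")
# ...but the intervals are wide and tend to overlap: with small case counts
# the ranking mostly reflects noise, since every physician here has the
# identical true rate.
```

The same logic scales up to the birth-outcomes example: when 75 percent of the variance is unexplained, even much larger samples cannot justify confident good-doctor/bad-doctor rankings.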
Q: Is this true for other areas, too, that we only know about a quarter of the factors leading to outcomes and that the other 75 percent are beyond our control?
A: It depends on what you’re looking at. The strongest models you get are looking at things like open-heart surgery. The portion of outcome variability that you can predict [in health care] ranges from about 10 percent to 85 percent. It depends on what you’re looking at. But sometimes it’s quite low. It’s interesting: people who try to evaluate us from the outside don’t evaluate that. They don’t ask themselves, how good is the science?
Q: After explaining the data to urologists, did they work together, and did you see variability decline and outcomes improve?
A: What came out of that was a dramatic reduction in variability, associated with significant improvements in outcomes and reductions in the cost of care. The funny thing was, as you’re looking at the detailed data, there was never an instance where one physician was consistently good or consistently bad. We were tracking about 90 individual factors. I’d have the guy who appeared to be the best guy in the group––he was best on grams per minute, for example––and, on average, he had a lot to teach. The trouble was that there were things where he was the worst guy in the group. And if you looked at the data for any length of time at all, you walked away convinced that it wasn’t a matter of picking the best physician. Best care was scattered across that group and every one of those physicians had something to teach and something to learn. The group knew more than any individual.
Q: That’s fascinating, because there’s so much talk about transparency and about making everything public, from outcomes and costs to charges. But if you were to say to surgeons, “We know all these metrics, such as your grams per minute, and we’re going to post them online,” that has such a gotcha feel to it, and immediately the public wants to rank. Then you lose the learning opportunity. So how do we balance consumers’ desire to know with the need to create a safe space for our providers to learn and improve?
A: What we learned from this is that you don’t focus on the person. You focus on process. …A TURP [transurethral prostatectomy] is a process. And when we looked at it that way, we started to say, how can we best execute this process? It’s the idea that people can change. These person-focused––we call them judgment systems––provoke a very predictable pushback by whomever’s judged. You stand those in contrast to learning systems [which] focus on process. …We studied this twice at [the Institute of Medicine] and our definition of transparency is a process-based definition. When the government does it, their definition of transparency is what they call accountability. But it turns out the accountability approach is very seriously methodologically flawed.
Q: Give us an example of how health centers are being measured by the government, and what would be the ideal measurement, instead.
A: When [the Centers for Medicare and Medicaid] started to collect process data and publish them on a website, Hospital Compare, one measure for patients with heart failure [was]: Did you educate them about their medications? The original work was done here in Utah; if you educate them well, it’s associated with a fairly significant drop in the mortality rate. A particular hospital on Hospital Compare was in the 47th percentile, and their Board of Trustees had come back and said, “Get that ranking up. We want you in the 90th percentile, at the top of the class.” What you just saw was goal displacement. The goal wasn’t, keep my patients alive. The goal had shifted to rank high. The regulation actually says the nurse has to document in the medical record that meds education was done. That’s the measure. …So they had printed up some really nice, double-sided, glossy meds education sheets. And sometime in the middle of the night when they had slack time, the nurse would grab a stack of these and go by the patient rooms and drop them off––the entire hospital, they couldn’t distinguish who had heart failure and who hadn’t. Then they would come back to their workstation and…document them en masse. This meets the requirement of the regulation. They went to the top of the class, 100 percent. They kind of got the wrong process, but in terms of managing that process, boy did they ever nail it. And the trouble is this could make you deeply cynical, couldn’t it?
Q: So what’s the solution?
A: The aim is to identify processes that improve your practice of medicine, then systemize them and measure against them. You can always cheat. But good physicians don’t cheat. What is your purpose, to look good where you can manipulate these systems like crazy, or to be good in terms of what really happens when people come to seek your help in their hour of need? And it’s a choice you’ll have to make. Most of us, when faced with it, make the right choice.
My favorite way to do it is to take a care delivery group, a clinic or a hospital, and identify…clinical processes [that] have a [large] impact. We call it standard work, and you design a standard approach to this particular process. Labor and delivery, for example: It’s the biggest single process that Intermountain operates. It’s 11 percent of our total system volume. We deliver about 34,000 babies a year. Well, we hammered out an evidence-based practice protocol for it, and you blend it into [the] workflow. The aim is to reduce the burden on clinicians. Here’s the problem. I can demonstrate scientifically that, with very rare exceptions, I can’t write a protocol that perfectly fits any patient. The people who come to us for help are different from one another. But it’s a way of dealing with complexity. Medicine has become complex, and in a complex environment, if I standardize my work, it means I don’t forget things and it makes me faster.
Q: When you applied to medical school, did you know you wanted to go into research?
A: I was on a Ph.D. track for physics and I was going to be a physics researcher. By real happenstance, the University of Utah was a hotbed of computer science, and the physics department had gotten really heavily involved in it. We were writing computer programs to do symbolic calculus, and we were hammering this stuff out. And I was talking to a postdoc at Columbia University, the finest physics program in the world at that time, and he was in his second postdoc and he couldn’t get work. That was news to me, so I said, “What should I do?” and he said, “Medicine. It has really interesting research problems and pretty easy money.” I got into medical school and I encountered patient care, and the next thing I knew, I was in surgery residency. They were such interesting problems, you know, you’re problem solving. It was physically challenging. It was mentally challenging. I would probably still be out somewhere practicing surgery except for the fact that early in medical school I’d set up to do a fellowship at the National Cancer Institute, and it reminded me of the research side. I worked my way through medical school as a systems programmer, so I came out of medical school $3,000 in debt. That was common back then. The short answer is I’ve always been able to get into whatever I’m doing and get lost in it. You just develop skills and that’s when opportunity knocks. I didn’t have a plan. My career was a series of accidents.
Q: Share some predictions for the future of health care.
A: I think it will be better. When people ask, “Do you think I should go into medicine,” I say, “Definitely.” If your interest is in helping people when they need help, it’s the best profession the world has ever seen. The difference will be complexity. David Eddy, again, said, “The complexity of modern medicine exceeds the capacity of the unaided, expert mind.” It’s forced medicine to team-based care. So your skills at leading an effective clinical team are going to become more and more important. One of the ways you simplify is you go to standard work…so you can focus your attention on the stuff on the edges. It’s a way of dealing with complexity. Some of you will get into the job of building the standard work to free up your colleagues so they can think. Every generation faces a challenge of making it better than we are today. I see the rate of improvement accelerating. To be on that ride––I wish I was 24 again. Get lost in it, though. Master it. Enjoy it. Don’t learn the tricks of the trade. Learn the trade. Learn every chance you have and when you do that, you’ll find that opportunity will knock.