QI: Value in Imaging 2: The Business Case for Qual ...
MSQI3218-2022
Video Transcription
Okay. So why do we ignore quality? And I think if we ask this question, everyone in the audience is going to say, well, I know why we ignore quality. It's because we don't get paid for it. So if we were being paid to improve the quality of care we provided, I'm sure lots of practices would be more interested in it. But I think that's a little bit simplistic, actually. It sort of implies that we're only interested in money and that we're inherently materialistic and evil in a way, that we're prioritizing money over patient care. And I think that's a little bit of an oversimplification. Let's look at some basic data on how we're doing quality-wise. This is a graph plotting life expectancy at birth against cancer mortality per 100,000 in the population. And because we happen to be talking in the United States, I'll just point out how the United States is doing. There's the arrow right there. So you can see if you are born in the United States, you are going to live on average a couple years less than if you live in some of these other countries over here. In terms of our cancer mortality per 100,000 population, we're kind of around the average there. This next graph is infant mortality, deaths per 1,000 live births, using 2015 or later data. It's a good way to see how you're doing from a healthcare standpoint. One thing to note is that not all the bars are the same height, so different countries are performing differently in this respect. Here's the United States right there. Okay. Here's a graph looking at per capita spending. How much money are we spending on healthcare? It's divided into government and compulsory care, which is the dark blue, and private or voluntary spending, which is the light blue. How much money are we spending? There's the United States right there. So the outcomes are not commensurate with the money we're putting into this. And if you look at, again, forget the USA for a second. Just look at how much variation there is across these different countries.
If you look at the far right-hand side of the graph, which happens to be Mexico, and the far left-hand side of the graph, which happens to be the United States, we're talking about a nine-fold difference in how much money is being spent on healthcare. And there's really no or little relationship between the amount of money being spent and the actual outcomes we're looking for. So radiologists will say, well, that's not our problem. How can we control infant mortality? That's kind of outside our scope, maybe. We're amazing. And in fact, if you do RADPEER scoring and you advertise that to other people, I suspect that your accuracy is going to be very high, particularly if you're using that data to influence people to purchase your services. And many of these RADPEER-based scoring mechanisms almost encourage you to have very strong concordance rates with each other. Let's look at some data on how radiologists do. What does our quality look like? This study here looked at concordance between two different radiologists looking at the same abdominal CT. And it also looked at concordance between the same radiologist looking at the same abdominal CT separated in time. Those are not very good numbers, I would offer you. If you look there at the discrepancy rates, we're talking about 25 to 40% or so discrepancies for major findings. This is a meta-analysis that was done by Andrew Rosenkrantz's group looking at second interpretation. So an imaging exam is done and then another person interprets it later at another institution. This is a meta-analysis, and this is a plot here showing the different rates in different studies. And look at this. This is the best estimate here, about 20% major differences. Now ideally we'd like it to be somewhere over here, right? We'd like it to be, well, sometimes we see some different things, but it doesn't happen that often.
Well, these studies are showing you that we have major differences of opinion looking at the exact same studies, and it's very common. So from a quality standpoint, I would argue this strong variability is not a good thing for our specialty. So we've talked about money. We've talked about the fact that maybe we could do some things to improve, but I'd like to start really by telling you a little bit of a story. And this story is about disease and stench and neglect. And I'm going to show you some pictures. This is a real story that happened in London in the mid-1800s. So I'm going to take you back now. It's 1848. It's 19th century Victorian England. And you're walking along the Thames, which is a river that supplies much of London's drinking water. And you can barely breathe. There are pieces of fecal matter bobbing along the water amid clouds of burgeoning industrial waste, making slow transit on the way to the sea. There are few others walking along the shore, and those who do are holding flowers or rags to their faces so they can breathe. The city has been gripped by a cholera epidemic for more than 15 years, resulting in the deaths of thousands of people. It's going to be another 13 years before Louis Pasteur introduces the germ theory, and in the meantime, people are literally dying for answers. Most people blame something they call a miasma, which is a theoretical invisible cloud of poisonous air that's believed to emanate from decaying matter. And the only way to escape when you see this happening, people dying around you, is to flee as fast as you can so you can escape the miasma, the invisible cloud of particles. So now it's 1849. And the physician John Snow publishes a paper called On the Mode of Communication of Cholera. And he actually argues that cholera is a human-spread contaminant in water. It's not particles in air.
And the revolutionary thing he's saying, if you think about it, is that if you get sick from cholera, the reason why is because you ate somebody else's fecal matter. You ate somebody else's human waste. And that thought is so horrifying, he's basically ignored. So now it's 1854. And the cholera epidemic has raged for five more years. And there are thousands more people who have died. And the city smells horrifying. Snow, who's desperate to convince his skeptics, maps the outbreak at a street level and finds that many of the cases are centered at a very specific intersection, Broad Street and Cambridge Street, indicated by those blue dots there. And coincidentally, right at that spot, there's a water pump. So he says maybe the water pump is the source of this cholera epidemic. So he recruits the help of a friendly priest. And this scientist and this priest work together to convince the city that the pump is the problem. So he walks up to the pump and he looks around a little bit, makes sure no one's looking at him. And he takes the pump handle and steals it. Okay? Steals the pump handle. And what follows is the cholera epidemic drops to almost zero. Did that encourage them to fix the water problem? No. This is 1855. And a famed scientist, you may have heard of him, Michael Faraday, and these are some quotes from a letter he wrote to the citizens of London in the newspaper. The whole of the river was an opaque, pale brown fluid. If we neglect this subject, we cannot expect to do so with impunity, nor ought we be surprised if many years later a hot season gives us sad proof of the folly of our carelessness. The reason I'm telling you this story is for you to think about living in a city where your main supply of drinking water is so contaminated and it smells so horrible and there are people dying everywhere. Why aren't they taking more action? Why isn't anyone doing anything? I mean, this has been going on for years and years. But Faraday, like Snow, is largely ignored.
So three more years pass and now it's 1858 and it's getting hotter. And although the outbreaks have slowed, disease continues to recur, and cartoons depicting this Father Thames offering pestilence to the city are commonplace. So you open the paper now, it's 1858, and you find death rolling down the river. Dead animals are floating alongside and the stench is unbearable. And Faraday had foreseen this was going to happen. He knew that heat would bring evaporation, and it would expose the miles of excrement lying just beneath the surface, laying it bare to be baked under the summer sun. This is the Great Stink of London. And this actually happened in real life. And the people there wrote stories saying that they couldn't breathe, and that for miles in all directions, if there was a sudden change in the wind, it would make people throw up because it was so horrifying. It was over 100 degrees, and all the water was exposing all of this waste that they had tolerated for years and years. Okay, so what was their solution to the problem? We have people dying in countless numbers. It smells so bad you can't breathe for miles around. So rather than fix the problem, they decided to cover up the problem. They took chloride of lime and dumped it in the river, and they actually soaked the window curtains that hung in Parliament so they couldn't smell it as much. They covered up the problem. But it didn't work. The smell was so overpowering, they were forced to act. And so finally, pushed beyond all measure, the legislature said, okay, fine, we'll fix it. So they hired this guy, Joseph Bazalgette, a civil engineer, and he fixes the problem. It took lots of money and lots of effort, but he fixes it. The stench goes away. Waterborne disease is nearly eradicated. There are a few more details to this story, but that's kind of the summary version.
So over a many-year process, we have something that is quite visible to the public and is literally killing tons of people. There's a mixture of ignorance in there about what's actually causing the problem, but it's affecting their daily life on an extreme level, and yet no one's doing anything. So what's the problem? Can we conclude here that the people didn't care? I would say no. There were thousands of people who were dying, and there was literally a river of excrement flowing downtown. So what got Parliament to finally do something to fix the problem, which may have some implications for what we're trying to do in the quality space? It wasn't facts, and it wasn't people dying, and it wasn't pestilence. It smelled so bad they could no longer cover up the problem. That was the final motivator that got them to do something. How did Snow convince the people to stop drinking the water? Did he go door to door and say, I really think there's a problem, we need to stop drinking the water? No. He stole the pump handle. So instead of making it a people problem, he made it a system problem. So this is an interesting book here called Switch, with three pearls for enacting real change. What looks like resistance to something is often a lack of clarity. If you're living in London, and it's 1850, and someone tells you that the river's a problem, and it smells really bad, and everybody's dying, and the average citizen doesn't know anything about germ theory, they say, what do I do to fix the problem? How can I, the individual citizen, fix this massive problem? I can't. I don't know what to do, and because of that, I'm gonna resist whatever you tell me to do. And what looks like laziness and inability to fix a problem is often exhaustion. I've complained about this for so long, I don't have any energy anymore. It's not that I don't want to fix it, it's that I've had no clarity on what I'm supposed to do, and I'm just exhausted from the process of thinking about it.
And what looks like a people problem is often a situation problem. The pump was the issue here. Getting the people to believe that germ theory was a thing wasn't the solution here; it was stealing the pump handle. I'm gonna go through a couple of different things that summarize why I think we have some challenges in healthcare in communicating effectively what's going on, and then in taking steps to effectively fix it. The first is we have a problem of noise. Now if you work in the airline industry, and a plane blows up, that's gonna be all over the news. If you work in the steel industry, and you have too many people get hit by steel beams and crushed to death, that's gonna be a public event. In healthcare, if you have a patient who comes to your hospital and dies, there's a question mark. Was that a normal death? Was it an expected death? Were they sick? So there's noise that's cloaking what we're doing. If we look at our best estimates on medical error and how it contributes to patients dying, the best estimate is that it's the third leading cause of death in the US as of 2013 data, an estimated 251,000 people. Compare this to the IOM's To Err Is Human report, which came out in 2000 and estimated about 44,000 to 98,000 people dying from medically preventable problems. So like London in the 1800s, death is common, it's unexplained, and it's very difficult to understand. If I go to a group of people who are physicians and say, you are killing lots of people, what is that gonna be met with? I don't know what to do with that information. What is my specific, clear outcome that I'm supposed to be pursuing? This is too vague. It's too big, and it's too difficult to understand. If you plot unpreventable deaths against preventable deaths on an annual basis, this is kind of what that plot might look like based on census data and that BMJ data I showed you earlier. Most of the deaths that occur are unpreventable. They're people who get sick and they die, but some of them are preventable.
How do you figure out how to make the blue very clear and visible to the people who want to solve the problem? It's hard. So quality has a visibility problem. If we can't see it, it's easy to ignore. So in radiology, if someone comes to me and says, you must do X to change your practice, and it's hard for me to exactly understand why I'm doing that or to draw a direct link between what I'm being asked to do and how that affects patient care, I might be resistant to it or ignore it. I'll give you a little anecdote here. My 74-year-old grandmother got pneumonia. She was admitted to the hospital. Unfortunately, she did not recover and she passed. Okay, fine. My 74-year-old grandmother got pneumonia. She was admitted to the hospital and then the ICU, and on day eight, she developed sepsis and she passed. Okay, that seems reasonable. That could happen. My 74-year-old grandmother got pneumococcus. She was admitted to the hospital, then the ICU, and on day eight, she developed MRSA sepsis, and then she passed. Now, the medical people are gonna be like, oh, that's a little weird. That sort of doesn't link together exactly well, but the patient's family are like, well, grandma is grandma. She got pneumonia. She died. What about if on day six, she was looking great and left the ICU, and then on day eight, she got the MRSA sepsis and then she passed? And what if treatment wasn't initiated for 32 hours? So the simple version of the story seems like, okay, that could happen, but once you start peeling off all the little individual details, you're like, well, maybe there's an opportunity there for us to improve our care patterns. So in this anecdote, my 74-year-old grandmother got pneumonia, she was admitted to the hospital, and she died, and if this is a story, no one's gonna bat an eye. This is normal. So-called old people die. It's normal for people to get bloodstream infections, and you can't expect everyone will survive. 
You could make that argument, and of course, we know it's false to assume that all these cases must be okay. Simple steps to save grandma are to wash your hands, use full barrier precautions for central lines, clean with chlorhexidine, avoid femoral access, and take out catheters as soon as you're able to. And if you do these simple, specific, clear steps, what happens? You have a massive decrease in the number of central line infections that patients get, and you have a massive decrease in central line morbidity and mortality. In this particular study, which was in the New England Journal of Medicine in 2006, up to a 66% sustained reduction across over 100 ICUs. So those simple steps, like wash your hands, et cetera, were published in 2006. How are we doing now in 2018? Do you think every hospital has 100% compliance with hand washing? I don't think so. So why is that? Why are we having a hard time getting facts to translate into actual care? Now in this study here, did they tell the sites, you know, stop killing patients? No, that would not have been very effective. Instead, they provided specific CDC-advocated, evidence-based steps that were simple to follow and could be widely adopted. So clarity around the objective, something that's very clear-cut, and simple steps. Did they tell people to try harder? No, they provided clarity through specific objectives and turned the people problem into a situation problem. Instead of saying, Dr. Smith, you are a problem and are causing too much patient harm, they said, group of doctors, let's work on these simple steps to solve the problem. What about some antagonistic pressures in healthcare, and radiology in particular? There are some books that talk about confronting the brutal facts. And you may disagree with these, you may say these are opinions, and that's okay. I think there are some elements of truth to the things I'm gonna share with you. This is the tripartite mission of academic medicine.
We have research, clinical care, and education. And maybe in clinical care you can kind of embed a subpart called quality. But the question is, if you do have a tripartite mission in academic medicine, you will always have an internal dynamic where one is pulling on the other. What if you have a question that comes up where they say, well, this is good for education but bad for patients? Or this is good for the research mission but kind of bad for patients? Where is that tug-of-war going to land? Where is our dipole moment going to end up here? So I'm gonna share with you some facts as I see them, and you might consider them opinions, and that's okay. On the academic side: in 2006, we learned that we can do simple steps to massively reduce bloodstream infections. If another academic person replicates the same work, that's gonna end up in a lower-tier journal, it's gonna require a lot of their work, and they're gonna get less notoriety. So in academics, people are incentivized to innovate; they're not incentivized to replicate. Long, high-risk, high-reward projects actually make it harder for people to get promoted. If you tell a young person, do this really high-profile, long-standing thing that's gonna materialize in six years, they're never gonna get promoted. People in academics are rewarded for grant money, even if it's not necessary for the work they're doing. And academic success is defined by titles and talks and papers and positions, and really not by patient advocacy. And these things keep people in academics from really totally buying in to what we're doing on the quality side. What about education? We learn our trade by practicing on patients. Now there's some oversight involved in that, but a lot of the time things are being done by inexperienced people. And that's true even once we become attendings. How many times have you been asked to pick up a new thing you didn't know you were going to have to learn?
Or, hey, I need you to kind of start reading these knee MRIs because the practice needs your help. I haven't read those in six years. How often does that happen? We do many things for the first time without oversight. And when we talk about independent trainee call at night, this is sort of an evolving landscape. It pits these different things against each other. It pits the educational mission against patient care, and against attending physician lifestyle, actually. Now let's say we work outside of academic environments, where education and academics aren't as important. I think these facts apply across our whole discipline. We're paid to interpret studies, not to do so correctly. We get paid to scan regardless of the indication, although that's changing. We're incentivized to emphasize efficiency over accuracy, to go as fast as possible. And we're graded on compliance, not on patient outcomes. So if we put all these things together, how do we think this affects the quality of care we provide? I would argue the incentive structures that drive our behavior are really misaligned. So I'm going to give you a thought experiment to consider. On one side of the room, there's a giant pile of money. And on the other side of the room, there's a complex, opaque patient care challenge with no clear solution and no direct line of sight regarding whether it's even solvable. Okay, where are you going to go? Grab the big giant pile of money? I suspect maybe. Okay, let me reframe this a little bit. Now the big pile of money is sitting opposite a drowning stranger. And you have a life preserver sitting right next to your lounge chair. What are you going to do in that case? Are you going to go get the money? Are you going to save the person who's drowning with your life preserver? I suspect you're going to go save that person. So it's not that money trumps all of our priorities.
It's that the challenging complexities of health care promote sustained tolerance of the status quo, because we have unclear solutions, and that begets this decision paralysis where we don't know what to do. And in such an environment, the money is going to override the moral imperative, because that imperative is cloaked in noise and uncertainty. Decision paralysis and intellectual exhaustion are common things that result from this environment. I want to improve, but I don't know how. And sometimes when we try to measure the quality we provide, we try to measure each other's performance. Here's how this conversation usually goes. Your RADPEER results are in. Look here, you got three 4s. Okay? And this person says, great, I'm awful. And I hate you. So we have this unhappy marriage between quality assurance and quality improvement. And because of this, it's made us learn that quality is almost like a bad word. They're sitting at opposite ends of the table and they're not too happy with each other. So my OPPE, what measures my performance, is: did I get my TB test, am I compliant with CME, are my RADPEER errors statistically acceptable, did I click that box in Radiant like I was supposed to, and are my ER turnaround times fast enough? That's how I'm measured. So quality is being treated synonymously with compliance. And that's a problem, I think. So here's what goes through my head when I look at that OPPE. Why am I even being tested for TB? Most of the time I'm sitting in a room. Didn't the ABMS get sued over the whole CME thing? RADPEER errors are really biased and unusable data. I can click the box, but is that me practicing at the top of my license? And the ER dispo time is five hours. Yeah, I can make my turnaround time one hour, but I'm not meaningfully affecting their dispo time. When compliance equals quality, I think people get demoralized and they become skeptical of the whole issue of quality in the first place.
And there's a branding problem here, because real QI, real meaningful quality work, ends up in the same thought basket and is viewed as a giant waste by the average person who's trying to navigate this. So here's a little quote from a senior radiologist. If you're going to make me check the boxes, I'll check them, but I'm going to do it with the minimum amount of work so I can get back to things that actually matter to our department. They're failing to see the connection between what they're doing and how it actually benefits the patient. And we have to do a good job of drawing that line of sight for them so the objective is clear. So I don't think we ignore quality because we're evil or we don't care. I can't speak for all of you, I guess. But most people, I don't think, are. I think we want to do the right thing, but our system is designed to give us bad results. So here are some reasons why I think we ignore quality. Calamity in health care is normal. People get sick and die. So problems hide in that noise, because we have a hard time seeing them. Our incentive structures are horribly misaligned if we're trying to actually improve patient care. We pay for volume, and quality work in many academic circles is unglamorous. We don't know what to do. I would love to save somebody's life. Just tell me how to do it. There's no training on this, and direction often lacks specifics. Go improve. And we've learned that quality is an enemy, because it's historically punitive and arbitrary, and QA is not QI. So how can we fix this? On the calamity side, we need to make the hidden visible. We need to use big data, statistics, and wisdom to unveil things which were previously hidden so we can see what's going on. This is important. We need to make sure that our incentive structures feed the improvement mechanisms we're trying to generate. We don't know what to do, so we need to train our teams, and we need to speak specifically about things, not in vague terms.
We don't want to overload people with the 50 projects that are ongoing. Give them three things that you think are going to really improve care today, and then align people's interests. And we need to focus on improvement, stop arbitrary measurements, and avoid, in my opinion, peer-to-peer grading, because it's demoralizing. So how are we going to pay for this? We're going to listen to the next talk and he will tell you. My name is Lane Donnelly and I'm going to talk to you about accounting and costing related to quality. I don't know if I'll exactly answer the question that you posed, but hopefully this will be helpful. So as you know, there are test questions embedded in this, and this is a promotional photograph from the old horror movie Creature from the Black Lagoon. To make the test questions less horrifying, whenever you see the Creature from the Black Lagoon show up on a slide, there's information on that slide that will be helpful to you in answering your questions. So as we know, there's movement toward at-risk, population health management-related payment models, which we'll hear about later in the session. And because of that, it is becoming a core competency for health care organizations and radiology departments to understand and decrease their costs. As we make that movement, radiology moves from being a revenue center in a fee-for-service setup to a cost center in value-based care. Related to that, we'll talk about costing and also emphasize the hidden cost of poor-quality care. Much of this information is in a recently published RadioGraphics article. So I thought I'd start with a scenario, and this is a capital budgeting scenario. I'm a pediatric radiologist by background, so I'll use a children's hospital in this scenario. The radiology department in this children's hospital has put forth a business plan to buy a new additional MRI scanner.
It's going to go into an already existing shelled space, and it's estimated to cost $3 million to do this. At this particular hospital, 100% of their outpatient business is fee-for-service, and all of their commercial contracts for inpatients are still fee-for-service. So MRI is their highest-margin modality, and they make money when they do MRIs. They have a huge problem with backlog. It's 65 business days to get a pediatric cardiac MRI with general anesthesia and 48 business days for a pediatric neuro MRI. They are in a very competitive environment, so when they can't get patients in, there is competition those patients can go to, so they lose business. The number one ranked dissatisfier in their Press Ganey surveys is dissatisfaction around access. The return on investment in the proposed business plan is 1.8 years, so it'll take 1.8 years to pay for the magnet, and then they'll essentially be printing money. The construction can also be used to improve some MRI safety zone issues that they have in their MRI area, and some of the new MRI features would help improve the quality of some of the tertiary and quaternary services that they provide. That same organization has recently opened a new building as part of their hospital, and they had to staff up immensely, physician- and nurse-wise, to open that new space, and the patient volumes are lagging behind what they've staffed for. In addition, the billion dollars of depreciation to build this building is now hitting their budget. Again, they're in a competitive marketplace, so they have some fiscal constraints. They've had a long-standing AAA bond rating, which makes borrowing money cheap for them, but the rating agencies are concerned and potentially going to downgrade them related to decreased cash on hand from the issues above. So there's a lot of competition for very restricted capital, and the organization elects to decline the MRI purchase, mainly related to the bond rating issue.
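As an aside, the 1.8-year figure in a plan like this is simple payback-period arithmetic. A minimal sketch: the $3 million capital cost comes from the scenario, but the scan volume and per-scan contribution margin below are invented purely for illustration.

```python
# Simple payback-period estimate for the hypothetical MRI purchase.
# capital_cost is from the scenario; scans_per_year and
# margin_per_scan are assumed values for illustration only.

capital_cost = 3_000_000      # purchase plus construction, dollars
scans_per_year = 3_300        # assumed added MRI volume per year
margin_per_scan = 500         # assumed contribution margin per scan, dollars

annual_margin = scans_per_year * margin_per_scan
payback_years = capital_cost / annual_margin

print(f"Payback period: {payback_years:.1f} years")  # → Payback period: 1.8 years
```

After the payback point, each additional year of that margin falls to the bottom line, which is what the speaker means by "printing money."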
So the questions are: when they made this decision, did they take into account the missed revenue opportunities, the loss of business, the effect on patient satisfaction, the ripple effects on subspecialty referrals, the potential adverse effects related to MR safety in the safety zones, and stagnant quality? And how does their accounting system take these things into account? So with that, we'll move on, and we're going to talk first about costing in medicine. There's an excellent article in Radiology by Dr. Rubin on costing in radiology and health care that I recommend, from which much of this information comes. So the bottom line is, historically, our understanding of cost in radiology has been incredibly rudimentary. We mix up the terms charges, payments, costs, and fees when we're discussing these things. And as we alluded to in the beginning, it's becoming increasingly important to understand the cost of the services that we provide, related to us moving to a cost center in value-based care. In situations where there's a single bundled payment, the radiology department needs to document the related costs and demonstrate the value that they provided, so that the appropriate amount of funds is allocated to them through funds flow. And then there's also dealing directly with patients. Many patients, as you know, now have high-deductible plans. There are large out-of-pocket payments. And when they receive a bill for $3,000 for an MRI or something along those lines, they demand justification for why they're being charged that much. One thing worth mentioning is that, like other things related to relativity, there's relativity of costs. To the patient, cost might be related to the price, what their copay is, and other losses like time and travel. To the provider, it is predominantly related to the resources required to provide the service and the desired margin. And to the insurers, it's the amount that they have to pay plus their desired margin.
For our purposes today, we're going to be talking about costs relative to the provider. So we've talked a little bit about costs. What are the costs that we allocate to a particular service that we provide? These are broken into direct costs, which are items allocated to a specific procedure. Within direct costs, some are variable, dependent upon the volume of procedures provided, such as contrast or supplies; they go up as you do more studies. And some are fixed; these are costs not dependent on the volume of procedures provided, predominantly equipment depreciation and most staffing-related expenses. And then there are indirect costs, also referred to as overhead. These are items that cannot be allocated to a specific procedure, usually things like electricity, facilities, environmental services, shared services, and things along those lines. And here is the creature. And you can see his hand is pointing at the fact that items that cannot be allocated to a specific procedure are referred to as indirect costs. Now that we've talked about some of the basics of costs, what are we trying to determine the cost of? What type of procedure? Usually that's related to CPT codes, which are Current Procedural Terminology codes. There are about 8,500 of these as defined by CMS, and about 840 of those, or about 10% of them, are related to radiology. The issue is that for the level of complexity of the imaging services that we provide, sometimes the CPT codes are not adequate to meet that level of complexity. So organizations can define their own imaging codes, also sometimes referred to as IMG codes. And they use these codes to route examinations between information systems, to define what the referring physicians see as choices of imaging exams to order, and to perform administrative tasks, such as linking a standardized report template to a specific imaging procedure. And the relationship between CPT codes and imaging codes can be confusing.
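The cost taxonomy above (direct variable, direct fixed, indirect/overhead) can be sketched as a simple cost function. This is not from the talk; all dollar figures are invented for illustration, and the point is only that the variable component scales with volume while the fixed and indirect components do not.

```python
# Sketch of the cost taxonomy described above. Direct costs are
# allocated to a specific procedure: variable ones (contrast,
# supplies) scale with volume, fixed ones (equipment depreciation,
# most staffing) do not. Indirect costs (overhead: electricity,
# facilities, shared services) cannot be tied to any one procedure.
# All dollar figures are assumptions for illustration.

def total_cost(n_exams: int) -> float:
    direct_variable_per_exam = 60.0   # contrast, supplies, per exam
    direct_fixed = 900_000.0          # equipment depreciation, staffing
    indirect_overhead = 250_000.0     # electricity, facilities, shared services
    return direct_variable_per_exam * n_exams + direct_fixed + indirect_overhead

# Doubling the volume only raises cost by the variable component.
print(total_cost(5_000))   # → 1450000.0
print(total_cost(10_000))  # → 1750000.0
```

Doubling the volume here adds only $300,000, which is why per-exam cost falls as fixed capacity is used more heavily.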
In some scenarios, one IMG code may represent multiple CPT codes. For example, if your organization decided to have an IMG code for MRI of the brain plus total spine, for neurosurgeons to order for follow-up of brain cancer patients, that would have four CPT codes associated with it. Much more commonly, one CPT code may be associated with multiple IMG codes. Take pediatrics as an example: ultrasound abdomen limited is the CPT code, but there are multiple IMG codes that might map to it — ultrasound for hypertrophic pyloric stenosis, appendicitis, intussusception, rule-out abscess — and those are all very different ultrasound procedures with very different reports associated with them. And finally, there are scenarios where one CPT code equals one IMG code, such as MRI of the brain. Again, here's the creature: CPT codes and IMG codes can be related in multiple different ways. So why are we talking about this? When you're trying to determine costs, IMG codes may be a more accurate unit to cost than CPT codes, particularly in scenarios like these. Now that we've talked about basic costing and what we're trying to cost, what are the ways we can determine the costs? There have been two buckets of approaches: the top-down approach of ratio of cost to charges, and the bottom-up approach of activity-based costing. Ratio of cost to charges, or RCC, assumes that costs are proportional to charges. It identifies the total amount of charges and then multiplies the charges by a calculated factor, the RCC rate, to estimate the costs. The benefit is that this takes very little effort and the necessary data are readily available. The disadvantage is that it's completely inaccurate and, in that sense, relatively useless. RVU-based top-down costing works similarly.
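To make the top-down RCC approach concrete, here is a minimal sketch. The department figures and the function names are invented for illustration; the only real logic is the one described above — a department-wide cost-to-charge ratio multiplied by an exam's billed charge:

```python
# Top-down ratio-of-cost-to-charges (RCC) costing -- a hypothetical sketch.
# The RCC rate is total department cost divided by total gross charges;
# the "cost" of any one exam is then assumed to be its charge times rate.

def rcc_rate(total_costs: float, total_charges: float) -> float:
    """Department-wide cost-to-charge ratio."""
    return total_costs / total_charges

def rcc_estimated_cost(charge: float, rate: float) -> float:
    """RCC-estimated cost of a single exam."""
    return charge * rate

# Hypothetical department: $8M in costs against $20M in gross charges.
rate = rcc_rate(8_000_000, 20_000_000)
mri_brain_cost = rcc_estimated_cost(3_000, rate)  # roughly $1,200
```

The weakness the talk points out is visible in the code: two exams with the same charge always get the same estimated "cost," no matter how different the resources they actually consume.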
Concerning charges: every industry uses costs to determine its charges or pricing — except health care, in which the estimated costs are based on the charges. The other way to do it is the bottom-up approach, or activity-based costing (ABC). This assigns resource costs to discrete tasks: you create process flow maps showing all the steps in the process and assign costs to products based on their use of those activities. It's more accurate, and it assigns overhead expenses according to utilization. The most accurate form is time-based ABC, where — either through electronic data or by actually following people around with a stopwatch — you determine how long each aspect of the delivered product takes, and then multiply that time by what it costs per unit time. So essentially, in ABC costing, you take all of the variable, fixed, and indirect costs and attribute them to whatever you're trying to cost — probably, most accurately, an imaging code. So we've talked about costing in medicine; now let's talk a little about accounting and budgeting. If you were to look at your organization's budget, or your own radiology department budget, there are probably either line items that pertain to quality and safety, or a cost center where all the quality-and-safety line items sit. What you'll probably find there are the costs of a number of quality-related employees — quality specialists and folks like that. You might also have a physician leader — a medical director for quality, a vice chair for quality, or a chief quality officer — with a percent of their time protected, so you would see that as well. There might be some costs related to ACR accreditation or ABR certification, but that is probably the bulk of what you would find in such an accounting unit.
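The time-based ABC calculation described above boils down to a sum over process steps of minutes spent times the per-minute cost of the resource performing each step. Here is a minimal sketch; the activities, durations, and rates are all invented for illustration:

```python
# Time-driven activity-based costing (TDABC) -- a hypothetical sketch.
# Each step in the process flow map gets a duration and the per-minute
# cost of the resource performing it; the exam cost is the sum.

steps = [
    # (activity, minutes, cost per minute of the resource)
    ("scheduling/registration",     5, 0.60),   # clerk
    ("patient prep",               10, 0.90),   # nurse
    ("scanner time",               30, 7.50),   # MRI room incl. equipment
    ("technologist time",          35, 1.20),
    ("radiologist interpretation", 15, 4.00),
]

exam_cost = sum(minutes * rate for _, minutes, rate in steps)
# Overhead can then be layered on in proportion to use (e.g. per minute
# of room time) rather than spread uniformly across all exams.
```

Unlike RCC, the estimate here changes when the process changes — shave ten minutes of scanner time off the protocol and the cost drops accordingly, which is exactly the property you want when using costing to drive improvement.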
Another thing about accounting and the cost of quality: the typical mindset is that higher quality requires higher costs — that better requires more — and that pertains to materials and supplies, machines or imaging equipment, and labor. Economists have said things like "productivity isn't everything, but it is almost everything." This mindset ignores the cost of poor quality, and if you have limited costing information, it can lead to bad budgetary decisions. As an example, go back to that hospital in our scenario: they were having fiscal challenges related to the opening of the new building and other issues, and they decided they needed a 10% labor cost cut across the entire organization. They don't have the time to say, "your area makes money, so you don't have to cut as much; yours doesn't, so you do." It's much easier to impose a 10% cut across the board. What this typically does is eliminate lower-level positions and transfer their work up into higher-paid positions — which, in that respect, often doesn't make financial sense. The industrial approach, total cost of quality, is different. First described in 1956, it looks at the cost of quality in terms of two factors: the cost of control, also referred to as the cost of conformance, and the cost of failure of control, or the cost of non-conformance. Each of these two buckets has two major subparts. Under cost of control there are prevention costs — costs that arise through efforts to keep defects from occurring at all — and appraisal costs — costs that arise from the process of detecting defects via inspection, tests, and audits. Under failure of control, or poor quality, you have internal failure costs.
These are costs that arise when a defect is caught internally and dealt with before it reaches the customer — or patient, in our case. And there are external failure costs, where the defect actually reaches the customer or patient. We'll talk about each of these areas. Prevention costs, again, are the costs that arise from efforts to keep defects from occurring. In industry, they talk about quality planning, statistical process control, investments in quality information systems, quality training and workforce development, et cetera. In radiology, we have things like protocol standardization, standard report creation, radiology dashboards and scorecards and the time it takes to create them, daily management systems and huddles to ensure we're ready to take care of the patients we're seeing today, laboratory readiness, trainee work-hour restrictions, critical results notification and the effort put into it, and annual evaluations, or OPPE. And here is the creature, showing that OPPE and annual evaluations are a type of prevention cost. Appraisal costs in industry relate to material tests and inspections, acceptance testing, equipment testing, quality audits, et cetera. In radiology, we have new equipment testing, protocol development, machine calibration and phantom studies, time for random peer review, and things along those lines. Looking at the costs of failure of control, or poor quality: internal failure costs in industry relate to waste, scrap, rework, material and procedure costs, and failure analyses. In radiology, we have costs related to repeat imaging studies, near-miss-related improvement projects, root cause analysis team and action plan formation, process redesign, education and re-education, service delays, and loss of confidence from hospital administration or internal referring services related to issues they see in our services.
And again, here's the creature: RCA team and action plan formation is a form of internal failure cost. Finally, external failure costs: in industry, these are complaints in or out of warranty, product service, product liability, and returns. In radiology, we have patient and family dissatisfaction, referring physician dissatisfaction, lost referrals, ease-of-access challenges leading to loss of business, malpractice costs, third-party payer refusal to pay for certain avoidable complications, at-risk contracts, increased length of stay, distracted staff, and poor morale and disengagement — which is probably our biggest form of waste related to these things. And again, patient and family dissatisfaction is an external failure cost. We see the visible costs of poor quality, but there is a large hidden cost related to distracted staff, unhappy customers, and things along those lines. And obviously there's a balance to strike between how much we invest in prevention and how many errors we can thereby avoid. There are many examples of these failures in radiology: post-contrast MRI images not obtained in a sedated brain tumor follow-up case, so the child has to come back and be re-sedated; a positive ultrasound for appendicitis where the communication didn't reach the ED and the patient was sent home; an outside head CT that arrives with a patient but isn't read — and it's a child abuse case and the child is discharged from the ED; MR schedule delays and the dissatisfaction related to them; delays in your schedule because it takes so long for patients to navigate the parking and elevator systems to get to the MRI scanners; a standardized report mix-up where the impression was meant to say abnormal but "normal" was left in; an MRI of the wrong knee that has to be repeated; et cetera.
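The 1956 framework above is, in the end, just an accounting roll-up: total cost of quality = (prevention + appraisal) + (internal failure + external failure). A sketch with invented dollar figures shows how the buckets combine:

```python
# Total cost of quality: cost of control (conformance) plus cost of
# failure of control (non-conformance). All dollar figures hypothetical.

cost_of_quality = {
    "prevention":       120_000,  # protocol standardization, huddles, training
    "appraisal":         80_000,  # equipment testing, calibration, peer review
    "internal_failure": 150_000,  # repeats, RCAs, rework caught internally
    "external_failure": 250_000,  # lost referrals, malpractice, denied payments
}

cost_of_control = cost_of_quality["prevention"] + cost_of_quality["appraisal"]
cost_of_failure = (cost_of_quality["internal_failure"]
                   + cost_of_quality["external_failure"])
total_cost_of_quality = cost_of_control + cost_of_failure
```

The budgeting point is that most departmental budgets show only the first two buckets as line items; the failure buckets are real money too, just scattered and unlabeled.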
Organizations pay a price for this nonconformity, or poor quality, and these costs are often significant and underrepresented when budget decisions are made. Looking at companies: in a thriving, well-run company, an estimated 10 to 15% of total business goes to expenses related to poor quality; on average it's about 15 to 20%; in poorly run companies it's as high as 40%; and for us in health care, it's probably closer to the higher number. So the mindset of cost-of-quality accounting can be a powerful tool. Just a couple of closing comments. It's interesting that we even have to ask ourselves the question, "what is the cost of poor quality?" On ownership of quality: most of our administrative structures are set up so that our quality and safety folks are separate from our operational folks, but it is the operational leadership — and obviously the frontline operations people — who have to feel ownership and empowerment over quality for you to improve quality at all. The other thing, as I mentioned — and a good segue into the final portion of this session — is that with value-based payment models and at-risk populations, radiology becomes a cost center; with bundled payments, you need to document your cost and value and take into account the costs of poor quality. Thank you very much for your attention. I'm Andy Rosenkrantz from NYU, and I'll be speaking about MACRA and Medicare's Quality Payment Program. As background on what motivated this program: in recent decades, since its inception, there has been unsustainable growth in Medicare spending, greatly outpacing growth in wages as well as the GDP, such that in 2015, U.S. health care spending was estimated at about $3 trillion — about $10,000 per person, approaching a fifth of the U.S. economy — with the Medicare trust fund projected to be depleted by 2029.
Around this time, the Secretary of Health and Human Services published an editorial in the New England Journal of Medicine setting a goal to address the issue: 90% of Medicare payments were to be value-based by 2018, with about half flowing through alternative payment models. It was then, in 2015, that Congress passed legislation titled the Medicare Access and CHIP Reauthorization Act, or MACRA. It was a striking piece of bipartisan legislation at the time, passed by large majorities of Republicans and Democrats in both the Senate and the House — here we see the 92-to-8 vote. To contrast it with the Affordable Care Act: the ACA really targeted how patients interact with the health care system; it reformed patient insurance and improved access through the private insurance marketplace. MACRA relates to us. It aims to be seminal legislation reforming how physicians practice day in and day out, and it does this through Medicare: Medicare is the single biggest payer of health care services in the U.S., so Medicare reform can really drive physician practice. It repealed the earlier SGR formula to stabilize physician payments, and it consolidated and streamlined prior federal quality programs into a new, legislatively mandated, overarching program to reward value over volume and to promote alternative payment models. The law itself is brief, only about 98 pages; implementing its overall framework then fell to Medicare, which had to enact regulations to make it a reality. This was its first set of rules implementing MACRA, and it's quite detailed — about 2,200 pages — so you're welcome to read through the whole thing. The program Medicare created to implement it is called the Quality Payment Program; the QPP is basically the enactment, the fulfillment, of the statutory elements of MACRA.
Through it, there are really two pathways by which we get paid: the Merit-based Incentive Payment System, or MIPS, and Advanced Alternative Payment Models. This is really viewed as a spectrum, going from fee-for-service to MIPS, and eventually — where the program wants us to be — to the APMs. MIPS, which at least at the beginning covers the large majority of physicians, is a modified fee-for-service system. Each year — an annual performance period — we receive a score from zero to 100, and that score determines our fee-for-service payment adjustments with a two-year offset, because Medicare needs time to look back, see how we did, and calculate our score. We're in MIPS now; it's live. The first year was 2017, so it's over — there's nothing we can do to change our score from that year. Medicare has already calculated it for all of us and our practices, and it will adjust our payments next year. The adjustment is a percentage that grows over time: in 2019 it starts at plus or minus 4%, going quickly to plus or minus 9%. So in a given year — say 2019 — each payment you receive from Medicare will be adjusted by that amount. And it actually gets a bit more complicated. The 4, 5, 7, 9 percent is fixed as a floor on the negative side, but the program has to be budget neutral. It's basically putting all physicians into a kind of tournament where we're competing against each other and the losers are paying off the winners, and on the positive side it can scale the adjustment up or down to maintain that neutrality. The reason it does this is that there might not be a balance of winners and losers; there may be a large excess of one or the other. Say there are very few losers: then they won't be paying much in penalties into the pot, there'll be very little bonus money to pay out to the many winners, and the plus side will be scaled down to something smaller.
The way it determines this balance between winners and losers is through the performance threshold. This sits on our 0-to-100 score scale, and each year Medicare, through its own process, will set the threshold at a certain level, so it can titrate it up or down. There's a zero adjustment wherever the threshold sits, and as the threshold gets lower, there are fewer losers paying the maximum negative adjustment, and a much larger pool of winners who have to divvy up the smaller pot among themselves. Separately, there is a $500 million pot for exceptional-performance bonuses. This is not budget neutral — the law set it aside as a separate pot, with its own separate threshold. To see how this played out in 2017, which was intended as a transition year: a score of zero means you really didn't participate at all, and you get the maximum negative payment adjustment. The performance threshold was set at three, so if you got three points, you have a neutral payment — no bonus or penalty. Anywhere from four to 69 gets the positive adjustment, and from 70 up to 100, you get the positive adjustment plus a share of the exceptional-performance bonus. So really, Medicare created this big comprehensive program and then, right before it went live, at the last hour, dropped the bar all the way to the floor, at least for the first year. With a threshold of three, all that was required was submitting one quality measure for one patient in the year. If you did anything at all, you avoided the penalty. They really wanted to get physicians and practices used to participating and knowing what it was all about.
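The 2017 transition-year rules just described amount to a simple tiering of the 0-100 final score. Here is a sketch; the tier boundaries come from the talk, while the within-tier scaling that CMS actually applies (the budget-neutrality factor) is omitted:

```python
def mips_2017_tier(final_score: float) -> str:
    """2017 transition-year payment-adjustment tiers, as described in the talk.

    0 ......... maximum negative adjustment (-4%); didn't participate at all
    up to 3 ... negative adjustment, up to the neutral performance threshold
    3 ......... neutral (the threshold CMS set for 2017): no bonus or penalty
    4 to 69 ... positive adjustment (scaled for budget neutrality)
    70 to 100 . positive adjustment plus exceptional-performance bonus share
    """
    if final_score >= 70:
        return "positive + exceptional bonus"
    if final_score >= 4:
        return "positive"
    if final_score >= 3:
        return "neutral"
    return "negative"
```

For example, the year-one median score of 89 lands in the top tier, which is why the talk notes that 71 percent of participants received the positive adjustment plus the exceptional-performance bonus.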
The problem, at least at the beginning and the way it's playing out for now, is that because so few are paying the penalty and so many are dividing the bonuses, the bonus is very small, even with a perfect score of 100. In terms of how the score is derived: they call it the final score, and it comes from four separate categories. There's quality, which replaces the earlier PQRS system; improvement activities, which is a new program; the advancing care information category, to be renamed next year as promoting interoperability, which replaces the earlier certified electronic health record technology, or meaningful use, program; and the cost category, which replaces the value modifier. So it's basically sunsetting earlier programs into this new comprehensive initiative. There's a weighting between the categories: each has a certain percent, and they get added up into the overall score. Medicare has its own targets for the percentages it wants to assign to each, with the bulk eventually coming from quality and cost. Now, one thing you may have heard: this is a program that acknowledges non-patient-facing physicians, in contrast to Medicare's earlier programs. More and more, they want to recognize how much physician practice varies among specialties — we're not all the same in what we do; what an anesthesiologist does versus a pathologist versus a surgeon versus a radiologist is really different. They want to make the programs meaningful to each specialty and have us do what's most relevant to us. So for physicians who don't have much direct face-to-face patient contact, if you qualify as non-patient-facing by Medicare's definition, you have adjusted scoring criteria to meet.
And here's the definition: you qualify if you bill up to 100 patient-facing encounters within the year, and the overwhelming bulk of diagnostic radiologists — and even a lot of interventional radiologists — will, at least by the way Medicare has defined it, still qualify. This is a good thing. It doesn't mean we're not patient-centered or that we're not impacting patient management; it just means we're not directly with the patient, and it lets us score better. So for most radiologists, at least in year one, the advancing care information category is reweighted down to zero as a direct result of being non-patient-facing. Then there's the cost category. This one isn't reweighted to zero just for being non-patient-facing, but for most of us it'll be reweighted to zero for other reasons. It's a complicated category: you have to meet certain thresholds to be scored in it, and it has to do with how much Medicare spends on your patients — you have to have patients for whose encounters you're the physician billing the plurality of their services. The reality is that very few of us will meet those thresholds. So most of us will be scored in only two categories, and overwhelmingly quality. For quality, we have to report at least six measures from a list of about 300 MIPS quality measures, including at least one outcome measure, and they're graded on performance relative to national benchmarks using unique measure sets that Medicare has crafted for each specialty. At the outset, it just carried along the earlier PQRS measures — so if you were involved in PQRS for your practice, at least at the beginning, it's the same measures. They've been around for years; Medicare didn't create new ones just for the start of MIPS. These are all process or structural measures, having to do with dose reporting, and there are a number of mammography measures.
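The category weighting and reweighting just described is a weighted average in which a zeroed-out category's weight is redistributed to the rest. This sketch uses the 2017 weights (quality 60%, advancing care information 25%, improvement activities 15%, cost 0%) and redistributes proportionally, which is a simplification of the actual CMS reweighting rules:

```python
# MIPS final score as a weighted average of category scores (each 0-100).
# Categories reweighted to zero (e.g. ACI and cost for many radiologists)
# are dropped and their weight spread proportionally over the remaining
# categories -- a simplification of the actual CMS reweighting rules.

def final_score(category_scores: dict, weights: dict) -> float:
    active = {c: w for c, w in weights.items() if c in category_scores}
    total_w = sum(active.values())
    return sum(category_scores[c] * w / total_w for c, w in active.items())

# Year-one weights: quality 60%, ACI 25%, improvement activities 15%, cost 0%.
weights = {"quality": 0.60, "aci": 0.25, "improvement": 0.15}

# A non-patient-facing radiologist: ACI reweighted away, so quality and
# improvement activities carry the whole score.
score = final_score({"quality": 80, "improvement": 100}, weights)
```

With ACI dropped, quality carries 60/75 of the remaining weight — which is the talk's point that most radiologists end up scored overwhelmingly on quality.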
There are also specific measures having to do with how you read bone scans, one on carotid Dopplers — you can go to their website and get the full list. Now, it's more than just dollars and cents; it also stands to influence your public reputation. This is built into the law itself: Medicare is required to make this information publicly available. So they have the Physician Compare website. The website is live now, but it will be ramped up more and more, with more information added. They'll be taking your MIPS scores, the measures you reported, and how you did, and making those available — there's a limited amount of information there already. In terms of improvement activities — the other 15% for most of us — this is a little different. It isn't something you get scored on; it's binary. You simply attest that you did it: you did or you did not. A lot of these are newer concepts, things that Medicare wants to encourage — kind of a testing ground. They're not fully vetted and developed to the point where you could be scored, say, a 42%; they're newer things that might one day become full-fledged measures. If you're non-patient-facing, you have to do one high-weighted activity or two medium-weighted activities — they're all high or medium; there are no low activities. There's a list of over 90 improvement activities, and if you go through it, some are radiology-relevant. These focus more on care coordination and population management, so there are some activities relating to appropriate imaging; there's one on patient satisfaction; and some you can fulfill through certain types of collaboration with your referrers. One thing that will have a big impact on how MACRA influences many of us — maybe one of the key points of this talk — is group reporting.
You can participate in MIPS as an individual or as a group. This has to do with how you report data to Medicare and get paid by Medicare — whether you do it at the level of the TIN, the taxpayer identification number. So it's not just a question of whether you're employed by a group; it's how Medicare views you, and whether you're billing under the same TIN as your practice. This will probably become the dominant participation method in the future: from a financial and administrative perspective, it's much easier to do everything at the group level than to interact with Medicare separately for each of the 100 or 500-plus people in your practice. The issue is that everybody in the group gets a single performance score across all the categories and a single collective payment adjustment, even if the submitted data don't reflect everyone in the group. So you have to think about what that means. Say we pick our six quality measures; maybe only a small fraction of the group is affected by those measures. There could be people in the group who contributed no data to them — the measures have nothing to do with what they do. Maybe we take some mammography measures, a nuclear medicine measure, an IR measure on complications from biopsies, and nothing that relates to what I did as an abdominal imager. We'll all get the exact same score, and then we'll all get the exact same payment adjustment — that same plus 2%, minus 2%, whatever it is — even though a very small number of physicians may be carrying the whole group.
If we take this to an extreme — say a large multispecialty practice like mine at an academic medical center, NYU, with over a thousand physicians all participating in Medicare as one large TIN — this isn't even just different physicians within a specialty; it's across specialties. We're in the same TIN as our surgeons and primary care docs and our cardiologists and pathologists and ER docs. We could ask, where are our six measures? And say, well, the cardiologists are great — they've been doing registries for years and are real leaders in this — so we'll do a number of cardiology measures, they'll carry the whole university system, and we'll all get payment adjustments based on just what that small group did, even for physicians in entirely different specialties. Now, I alluded at the beginning to a second pathway, the advanced alternative payment models. Not many physicians are in this, but I think that's where the program wants us to be eventually, so I'll speak about it as well. The idea of alternative payment models has been around — it's not new in MACRA — but what the law did was recognize a subset of these, termed advanced APMs, which is what it really focuses on. You have to be in an advanced APM to get out of MIPS, and the law gives three criteria: it's a payment model in which the physician bears more than nominal financial risk for monetary losses, for which it gives strict criteria; you need quality measures comparable to the MIPS quality category; and you have to use certified electronic health records. There are benefits to being in an advanced APM — extra incentives beyond any you're already getting from the APM itself, so if there are shared savings coming from your model, you get these from Medicare in addition. For one, you're excluded from all the nonsense of MIPS.
You don't need to worry about those categories and scores and measures; you're relieved of the whole MIPS reporting burden. You get an automatic 5% bonus payment through 2024, on top of the bonus payments from your APM. And beginning in 2026, you get a 0.5% higher annual increase in Medicare payments compared to anybody in MIPS at that time. In terms of these models and where they come from: the Affordable Care Act created the Medicare Innovation Center, or CMMI, explicitly for this purpose — the Innovation Center was supposed to develop these models, test them, implement them, and work with physicians on moving into them. The issue is that, if we look at what was recognized as an advanced APM in the first year of the program in 2017, these largely relate to primary care. There aren't really models here for specialists, let alone radiologists. One of the challenges is that Medicare didn't really know how to create models specific to every individual specialty, or understand their practice well enough that, say, a radiologist could have a non-fee-for-service-based model. This is something they had been struggling with, and what MACRA did to get around it — this was built into the 2015 legislation — was recognize a new mechanism called the PFPM, or Physician-Focused Payment Model. The idea is that specialties, societies, or other entities can develop their own APMs and submit them to a new body created to evaluate, critique, and approve them, and then actually work toward developing and testing them. So we as a specialty could say, here's a model we think will work well for radiology, because we understand what we do; then we work with this new body to do some tweaking and actually start pilot testing.
There's a lot of interest in this, and if you go to their website, all the models that have been submitted are available for public review and commentary. Lots of groups have been submitting them — the surgeons, the GI docs; urology has one on prostate cancer — and there's strong interest in radiology as well: for instance, models around breast cancer screening and incidental findings management. The challenge is that these models have been submitted, and there's a body set up to review them on a rolling basis, and so far it has approved zero of them. So it hasn't worked out quite the way the law intended in setting it up. There are some legitimate reasons they've been held up — some have had strong commercial interests or other problems — but hopefully some will begin to be approved. Now, just very recently, Medicare received all the data from year one, 2017, evaluated it, saw where the benchmarks were, determined where the threshold fell and how everybody did — who the winners and losers were and what the adjustments were — and released a summary. I think it's very telling, so I'm going to walk through their summary in more detail. Over 90% of QPP participants were in MIPS. The median score was 89 on the 100-point scale — people did really well. I think one of the biggest factors predicting how people did was the size of their group. The larger groups had the ability to invest in the infrastructure and resources, develop the IT systems to really track these measures and activities, see how they were doing, pick the right ones, and submit the data the right way. Individuals had a median score of 76; groups, 91.
But if you look at what they define as small groups, they struggled: small groups had a median score of 38, large groups 90. What this really means, I think, is that this whole program has been a different reality for different physicians. There are some people I know in very small practices — say a solo single-specialty practice — with a lot of their payments coming from Medicare, and for some of them this is really scary. It has led to questions like: can they still see Medicare patients? Do they need to consolidate and join a larger group? Could this even threaten their existence? Could that plus-or-minus percentage be their entire operating margin? For those of us in large groups, we may not even notice — it's just something happening in the background. Overall, five percent received a negative adjustment, but 19 percent among small practices; two percent had a neutral adjustment; 22 percent a positive adjustment; and 71 percent a positive adjustment plus the exceptional-performance bonus. And if you had a perfect score — you did everything right, all the measures, all the activities, 100 points on your final score — you got, at the end of the day, a plus 1.88 percent bonus. So some people are asking, was it worth it? Was the money they put into it more than that bonus? And I think in year one, it was. On the flip side, that threshold of three to avoid a penalty was truly meant as a transition, and they can start moving it up; the law actually said that by year three, it had to be the mean or median of the prior year. So this can very quickly get much more difficult, with real dollars at stake.
And I think the groups that did well at the beginning, that got used to the system and familiar with it, will be better positioned to do well later, versus those ignoring it now and then scrambling to catch up once it's for real. But at least at the beginning, it's been kind of underwhelming. So, looking ahead at where this is going in the coming years: they've actually already passed a law to change the law. The Bipartisan Budget Act of 2018 was a major spending bill, and buried in it was text that actually changed MACRA. It slowed down its implementation. The cost category, which I haven't really focused on, was delayed somewhat. In addition, it delayed by three years the requirement to eventually move that threshold to the mean or median of the prior year; now that doesn't have to happen until year six. This is something Medicare has a track record of doing: creating programs that can seem very scary and intimidating, and then, when they see the reaction from the physician community and people panicking, backing off a little. So where will we be in the next couple of years? The threshold was three points in year one, 15 points in year two, 30 points in year three. So yes, it is being ramped up, but very slowly. And they added five bonus points for small practices, since those were the ones most likely to get the penalty; so now even fewer practices will be hit on the negative side. In addition, they keep excluding more and more physicians from the program entirely. These thresholds have increased from earlier iterations: you're excluded if you have under 200 Medicare beneficiaries or under $90,000 in Medicare billing, and starting next year, under 200 covered professional services. And as they exclude more doctors, note that a lot of these were the ones who ended up paying into that penalty pool.
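The low-volume exclusion logic cited above is a simple any-of-three check, which can be sketched as follows. The thresholds (200 beneficiaries, $90,000 in billing, 200 covered professional services) are the ones quoted in the talk; the function name and parameters are illustrative, not an actual CMS API.

```python
def excluded_from_mips(medicare_beneficiaries: int,
                       medicare_billing_usd: float,
                       covered_services: int) -> bool:
    """Low-volume exclusion check, per the thresholds cited in the talk.

    A clinician is excluded from MIPS entirely if they fall under ANY of:
    200 Medicare beneficiaries, $90,000 in Medicare billing, or (per the
    talk, starting the following year) 200 covered professional services.
    """
    return (medicare_beneficiaries < 200
            or medicare_billing_usd < 90_000
            or covered_services < 200)


# A small practice below the beneficiary threshold is out of the program,
# even if its billing and service counts are above theirs:
print(excluded_from_mips(150, 120_000, 400))
```

Because any single criterion triggers exclusion, each new threshold Medicare adds can only shrink the pool of participants, which is the talk's point about the penalty pool drying up.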
So MedPAC is a very influential agency that releases reports to Congress advising it on the Medicare system. Medicare isn't required to do what MedPAC advises, but it does need to consider it, and it is often influenced by it. MedPAC had an initial report on MIPS last year, and if you read it, it was fairly scathing. They said it's a complex system that will not identify or appropriately reward high- and low-value clinicians: a massive reporting effort, conflicting signals, basic aspects that will make it difficult to succeed in later years. They said it was based on a small number of patients per measure, so small differences in performance may lead to big differences in payment, and that the prospect of large bonuses eventually available in MIPS may keep us out of APMs. Then later last year, they came out with an even stronger statement: MedPAC urges repealing MIPS. They said there should be an immediate repeal and replacement. It's really burdensome, there's not enough incentive, it's not rewarding high value, the process measures are severely flawed, and since it's self-reported, we can all just cherry-pick the measures we like and all do well. And it's costly: they estimated over a billion dollars in 2017 just to do the reporting, which is more than what they felt the program saved Medicare. The alternative they proposed is more direct and tied to fee-for-service: if you don't participate, you lose 2 percent of your payments; you're scored only on outcomes such as mortality and spending; and Medicare does the tracking rather than having docs do it themselves. Medicare came out and said, we agree with these complaints about MACRA: there's too much emphasis on process measures, we need to focus more on outcomes, we need to simplify, it's too costly, and it's not motivating APMs.
But despite all this, they said they don't support full repeal; that would need new legislation, which is unlikely to happen in the current political climate, so it's still a requirement. They said it could be improved with better outcome measures, which are hard to develop, but they would work with specialty societies and seek input to try to get better measures, which, in all fairness, they are trying to do. Some related points: this national physician survey just came out, and it's actually pretty well done, asking what physicians think of the whole system. It found a variable sense of control, a lot of concern about unintended consequences, a lot of docs who just weren't familiar with the details, concerns about gaming the system, disagreement with the weighting of the categories, and a sense that we need more education and better alignment with our own views and what we do. And I thought this was very interesting: just in the last couple of weeks, a headline came out that Medicare would hire a translator to demystify the MACRA models. They were going to hire a third-party contractor to help physicians better understand this and improve compliance, putting a lot of IT into it. They said doctors are viewing it as a checklist of must-dos rather than a motivation to improve quality, and they want to remove the shroud of mystery around these complex algebraic measures and scoring. So, just reflecting on MACRA: I think Medicare was very well intended and this had a lot of potential. What they wanted when the law came out was for physicians to be engaged and aware, thinking about the measures from a quality perspective, embedding them into day-to-day practice. They really wanted this to change and revolutionize what we do and make it more value-based.
And I think that maybe had the potential, but I don't know if it's worked out that way, and now I don't know if it's really all that different from fee-for-service. We still need better measures; more and more doctors are being left out; with group reporting, it doesn't reflect many of us individually; the thresholds are still so low that we can avoid the penalties; and it's not really pushing us into new models. So, my final slide. As I was putting this talk together, a headline came out from my own practice, in the hospital's internal brochures and email blasts to all the docs of the faculty group plan: we got a perfect score for quality, a 100 percent score in MIPS. They said this was great news. But as I was thinking about it, I feel the reality is more like this: the docs were really just ostriches with their heads in the sand. If we were to poll the docs in the hospital across all the specialties, I don't know that many of us could say what our measures were, what our scores were, what the benchmarks were, or how we fared. So I'm not really convinced it's ushered in a new era of quality. It's still the law of the land, and it's here to stay for now. I don't have all the solutions; it's not an easy problem. I think it has a lot of potential, but it's still a work in progress. The story's not finished, and we'll see where it goes in the coming years. Thanks.
Video Summary
The discussion revolves around the challenges and misconceptions surrounding quality in healthcare, particularly in the U.S. While it's oversimplified to say that quality is ignored because it's not incentivized monetarily, the discussion highlights systemic issues and misaligned incentives that contribute to this problem. The data shows the U.S. having lower life expectancy and roughly average cancer mortality rates despite high healthcare spending. This disconnect suggests that spending doesn't equate to better outcomes, raising questions about quality efforts in healthcare. The story of John Snow's discovery during the cholera epidemic in Victorian London is used as an analogy to illustrate how people often resist change when solutions lack clarity, and how situational problems are mistaken for people problems. The presentation also touches on the complexity of translating basic health practices into impactful outcomes, using handwashing and infection control measures as examples. Furthermore, a distinction is drawn between quality assurance (QA) and quality improvement (QI), suggesting QA is often punitive and compliance-driven rather than genuinely improving patient outcomes. The talk advocates for making quality issues more visible and clear, aligning incentives with quality goals, and focusing on improvement rather than compliance. It concludes with a discussion of MACRA, a legislative attempt to shift Medicare payments from volume to value, highlighting the challenges and mixed results of implementing such systemic changes in healthcare payment structures.
Keywords
healthcare quality
U.S. healthcare
systemic issues
life expectancy
cancer mortality
spending outcomes
John Snow
infection control
quality improvement
MACRA
Medicare payments
Copyright © 2025 Radiological Society of North America