QI: Low Quality Radiology Requests and the Effect ...
R4-RCP09-2021
Video Transcription
Welcome to the RSNA Quality Session entitled Low-Quality Radiology Requests and the Effect on Radiology Output in Clinical Care. My name is Frank Rybicki from the University of Cincinnati, and my email is listed on this title slide. I'd be happy to answer any questions by email. This is one of many lectures in a series on clinical decision support, science, quality implications, and economics. One of the purposes of this series, and of this quality session, is to start to tackle some of the challenges listed on the right-hand side of this slide, which were taken directly from this free download. We have six lectures: Dr. Thorwarth, CEO of the American College of Radiology, followed by Dr. Gaskin from the University of Virginia; then a talk by Ben Gold from Change Healthcare; following that, a talk from Dr. Ueda from Brigham and Women's Hospital; next, a lecture from Dr. Shepala from the University of Cincinnati; and finally, a lecture from Dr. Flug from the Mayo Clinic in Scottsdale. Special thanks to Dr. Flug, chair of our quality group, who has supported this effort and worked diligently in this space for the annual meeting. It's been terrific working under his leadership. With that, we will begin our presentations.

Good afternoon. My name is Bill Thorwarth. I'm the chief executive officer of the American College of Radiology, and I appreciate the opportunity to talk with you this afternoon about clinical decision support, the implementation of the PAMA legislation, and the potential implications for imaging costs and quality. My objective is to provide you with a foundation, but I must also disclose that I believe utilization management will be an integral component of value-based healthcare. We've heard a lot about that at this meeting, and clinical decision support provides an evidence-based, transparent alternative to the prior authorization commonly used by insurance companies today.
So my objectives for the next 10 minutes are to briefly discuss the need for radiologists to take responsibility for the value of the care we deliver, describe the origins and evolution of the ACR Appropriateness Criteria, discuss the goals of the PAMA legislative mandate, and illustrate the potential impact of implementation as demonstrated by the R-SCAN project, which I'll describe in some detail. So let's look ahead. What will radiology look like in 2025, and how will we be integrated into value-based healthcare? I have to say that radiology, and all of healthcare, is entering an era of accountability. To whom are we accountable? Clearly the patients first and foremost, our referring physicians and other providers (and increasingly, studies are ordered by other providers), the payers who reimburse us, the employers who pay the premiums to those payers, and of course future radiologists, to be sure we've got a viable specialty going forward. And what are we accountable for? We have to accept the fact that we're accountable for the entire patient experience. And again, we've heard an awful lot about that at this RSNA. We're accountable for access to care, geographically, temporally, and across all populations in this era of health equity. But here we're going to be talking about the appropriateness and the quality of the care delivered, and of course we're also responsible and accountable for safety and clinical efficacy. Now there's a lot of public pressure and increased consumerism. Patients are paying a lot more out of pocket, and therefore they're paying a lot more attention to the imaging they're getting and whether it's necessary. The current rate of spending is unsustainable, and this curve developed by the Robert Graham Center shows that the average healthcare insurance premium could intersect with the average household income (that is, 100% of household income going to insurance premiums) by that year 2025.
This really doesn't add up to high value under the value equation you've commonly seen, outcome over cost, with money going down the drain, particularly when studies that are not appropriate are done. So how do we establish higher value? We have to add to that formula. We have to put the appropriateness of any procedure, whether imaging or any other healthcare procedure, into the definition of value, because if the appropriateness of whatever is done is zero, there is no value regardless of the outcome or the cost. Here, in this article by Richard Heller back in 2014, he defines the radiologist's interaction with the imaging process. A patient sees a referring physician, and the first step is determining whether the scheduled examination is appropriate for the patient's clinical condition. Again, the total value that we add is a combination of the interpretive value and the non-interpretive value, and we'll talk about that some more. In the 2012 Institute of Medicine report, Best Care at Lower Cost, they recognized that clinical decision support tools and knowledge management systems can be included routinely in healthcare delivery to ensure that decisions are informed by the best evidence. In what we call the imaging 2.0 paradigm, an order came into the radiology department, but the radiologist didn't really interact with it until it was time to protocol the study. Images were then obtained, the radiologist interpreted those images, a report was issued, and then it was on to the next case. I would suggest to you that this isn't taking care of the complete cycle of that patient's care. So let's get to the concept of clinical decision support. There is a huge spectrum of possible tests that physicians and other providers can order, really leaving them in a quandary. So how do we get them to the thumbs-up mode of deciding on the appropriate test to do, if any test is necessary at all?
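The amended value equation described above can be written out as a tiny function. This is a sketch of the talk's argument only, not an ACR formula; the function name and numbers are illustrative.

```python
def care_value(appropriateness: float, outcome: float, cost: float) -> float:
    """Amended value equation from the talk: appropriateness multiplies
    the familiar outcome-over-cost ratio, so an inappropriate study
    (appropriateness of zero) has zero value regardless of outcome or cost."""
    return appropriateness * (outcome / cost)

# A technically excellent, cheap study that was never indicated:
print(care_value(appropriateness=0.0, outcome=9.0, cost=1.0))  # 0.0
```

The multiplicative form captures the point that appropriateness is a gate on value, not just another additive term.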
This is a picture of our former ACR president and chairman of the board, K.K. Wallace, testifying before Congress in 1993 that the ACR would develop guidelines as to what appropriateness criteria could be used to assist in that selection. If any of you want to see the full recording, use this website to view it. As a result, the ACR launched into the development of the Appropriateness Criteria. So it's now almost 30 years old, with 24 years of work in developing it. Hundreds of clinical experts, primarily in radiology but on multi-specialty panels, were assembled, and 6,000 literature references have been used to build these appropriateness criteria, which are continuously updated by our panels. So how does this play into what we call imaging 3.0? Many of you have heard this definition before, but the critical component of imaging 3.0, as it says in the center here, is that our goal is to deliver all the imaging care that is beneficial and necessary and none that is not. And that's really what clinical decision support is designed to do. If we fill in the top of that circle, we have clinical decision support right off the bat, helping the referring provider determine whether any imaging is necessary and helpful, and if so, what might be the best test to do. Filling out the final component of the circle is structured communication and reporting and appropriate follow-up recommendations, so that there's actionable reporting to the referrer who brought that patient to you. Now, let me go to the Radiology Support, Communication and Alignment Network, the so-called R-SCAN project, which the ACR got funded through the Centers for Medicare and Medicaid Services.
This was a very large project, funded over multiple years, such that we had over 100 practices participating, with over 3,000 radiologists who worked with their referring populations to try to promote high-value improvement activities under the so-called MIPS criteria. This was also accepted by the American Board of Radiology and the American Board of Internal Medicine for Part IV credit toward certification. In R-SCAN projects, where the radiologists worked with a group of referring providers, baseline studies were performed to see ordering patterns prior to education, education on appropriateness was delivered, and then follow-up was done to see whether ordering improved. The results have been reported, as I'll describe in just a moment, but this was one of four support and alignment network projects recognized by the Centers for Medicare and Medicaid Services and presented at their February 2019 Quality Conference for generating results that showed a reduction in unnecessary testing. Here are a few of the articles that spun out of this, and there are many more on individual clinical circumstances, but this is an overall article on the assessment of the R-SCAN network in reducing medical imaging overutilization; again, it was a multipractice cohort, as I described. The conclusion was that R-SCAN participation was associated with a reduced likelihood of inappropriate imaging and is thus a promising tool to enhance the quality of patient care and promote wiser use of healthcare resources. A second publication related to this had to do with cost containment and the cost savings achieved by the R-SCAN network by reducing this inappropriate utilization.
And again, this graphic in the Journal of the American College of Radiology shows that just on these few clinical indications, imaging for pulmonary embolism, low back pain, and ovarian cysts, there was an estimated $433 million per year saved with clinical decision support aiding more appropriate imaging utilization. Now, we know that the PAMA legislation was passed in 2014 and was initially to be implemented in 2017. It has taken multiple years for CMS to get the infrastructure in place, but as of right now, per this CMS.gov issuance, it is scheduled to be implemented January 1st of 2023, presuming the current public health emergency has ended by that time, so that a consultation of appropriate use criteria would be required in the ordering process for CT, PET scanning, nuclear medicine, and magnetic resonance. Now, we'd like to make clinical decision support part of medical training, and the ACR, in collaboration with multiple organizations and driven by Marc Willis, has developed Radiology-TEACHES; 57% of the medical schools in the country are now using this teaching tool, so that medical students come out with a better knowledge of how to use clinical decision support. So, my thanks to you. We have multiple other speakers to add their perspectives, but it's been a pleasure to talk with you, and I hope you'll look into this in order to understand the value of clinical decision support in appropriate imaging use. Thank you.

My name is Cree Gaskin. I'm a radiologist and administrator at the University of Virginia. I'm presenting a case study about our use of quality metrics to expedite prior authorization at our institution.
Prior authorization is one method intended to control healthcare costs, but it carries its own burdens, costing time and money for multiple parties: the institution or department providing care, the patient's healthcare insurance company, and the radiology benefits management company, or RBM, managing the program. Prior authorization programs also impact individual physicians and other providers, as well as their patients. The 2020 American Medical Association Prior Authorization Physician Survey asked, how often does prior authorization delay access to necessary care? The great majority, 94%, responded that it does cause some degree of delay. While this survey is not specific to imaging, it makes the point that prior authorization generally does cause delay in care. In the specific case of outpatient advanced imaging, there are intentional, systematic delays in imaging appointment times to allow the prior authorization process to be completed in advance of the appointment. When peer-to-peer reviews are required, these delays become even greater. Naturally, delayed imaging appointments lead to delayed imaging results and thus delayed diagnosis and treatment. The same survey also inquired about the impact of prior authorization on physicians: 85% reported the burden to be high or extremely high. Again, this survey result is not specific to imaging, but it makes the general point that prior authorization places a significant burden on healthcare providers, who often find the process time-consuming and frustrating, especially when peer-to-peer reviews are required. Clinical decision support tools are another method intended to help control healthcare costs and ensure the right treatment or diagnostic test is performed.
These tools are a mechanism to insert professional guidelines or appropriate use criteria into the process of order entry in order to influence the selection of imaging tests. For illustration, a provider places an order for a head CT and indicates that headache is the reason for the exam. The system presents more detailed varieties of headache to choose from, and after the provider makes a selection, the tool provides feedback to influence the selection of an imaging test. In this example, the best practice advisory that popped up indicated that the proposed head CT order received an appropriateness score of 5. This score is tagged as yellow, reflecting an order that may be appropriate, or in other words, is of intermediate appropriateness. The pop-up also provided alternative exams and their appropriateness scores, falling into green, yellow, and red categories. Green scores of 7, 8, or 9 are good, reflecting tests that are usually appropriate for the chosen indication; red scores of 1, 2, or 3 are bad, reflecting orders for tests that are usually not appropriate for the given clinical scenario; and yellow scores of 4, 5, or 6 are in between. The goal of our program was to leverage a clinical decision support tool integrated into the order entry process of our EHR to reduce the burdens associated with prior authorization of advanced imaging tests. We partnered with a private healthcare insurance company, Aetna, and their RBM, eviCore, to successfully reduce the overall work and time costs of prior authorization. So what is our solution? A provider places an order into the EHR, which has a clinical decision support mechanism integrated into the order entry process. A decision support score of 1 through 9 is generated. On a scheduling work list, the scheduler is able to see the decision support score and that the relevant insurance provider is involved.
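The 1-through-9 color bands described above map directly to a small helper. This is a minimal sketch; the function name is mine, not part of any CDS product.

```python
def score_category(score: int) -> str:
    """Map a 1-9 appropriateness score to its color category as described
    in the talk: 1-3 red (usually not appropriate), 4-6 yellow (may be
    appropriate), 7-9 green (usually appropriate)."""
    if not 1 <= score <= 9:
        raise ValueError(f"score must be 1-9, got {score}")
    if score <= 3:
        return "red"
    if score <= 6:
        return "yellow"
    return "green"

print(score_category(5))  # yellow -- the head CT example in the talk
```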
If the score is high, meaning green or 7 through 9, then they schedule the next available imaging appointment at the convenience of the patient, even if that's the same or next day. This works because there's an existing agreement that prior authorization will be issued and payment made in cases where the decision support score has already confirmed the appropriateness of the order. Any order that did not achieve a high score is scheduled with a standard delay to allow time for conventional prior authorization to be completed. Finally, our prior authorization team accesses the website of the RBM and enters our institutional tax ID, prompting a question asking whether the decision support score is high (7, 8, or 9). If yes, then the submission process is truncated and prior authorization is granted instantly. Here are more details about our agreement. The program involves outpatient advanced imaging orders that typically require prior authorization, including MRI, CT, nuclear medicine, and PET. The orders must be placed by UVA providers using the decision support tool integrated into our EHR. Eligible orders require a high decision support score of 7, 8, or 9. The imaging study must be performed at our institution. Even though authorization is guaranteed, we still must submit it through the RBM website, albeit through the truncated process. We are able to schedule and perform the exam right away, knowing prior authorization will be granted and payment made. UVA agreed to pay back any payment if an exam was later deemed to be inappropriate. We published our initial experience earlier this year in the Journal of the American College of Radiology. We reported approximately 1,000 exams that underwent expedited prior authorization between 2018 and 2019. We observed reduced administrative burden on our staff for all relevant orders deemed likely to be appropriate by the decision support mechanism.
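The scheduling decision described above reduces to a single eligibility check. Here is a minimal sketch, with hypothetical names; the real work-list logic naturally carries more detail (payer, modality, site).

```python
def scheduling_path(score: int, placed_via_cds: bool, performed_in_house: bool) -> str:
    """Route an outpatient advanced imaging order under the expedited
    prior-authorization agreement: eligible orders (score 7-9, placed
    through the integrated CDS tool, performed at the institution) can
    take the next available slot; all others wait out the standard delay
    while conventional prior authorization completes."""
    if placed_via_cds and performed_in_house and 7 <= score <= 9:
        return "next available slot (expedited prior auth)"
    return "standard delay (conventional prior auth)"

print(scheduling_path(8, placed_via_cds=True, performed_in_house=True))
```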
We were able to fill some same- and next-day imaging appointment slots that otherwise would have gone unfilled. This in turn improved appointment availability for other orders, because these exams moved into earlier slots. And importantly, we eliminated peer-to-peer reviews for all eligible orders. Today, I can update our experience further. We now have several years of experience with thousands of orders, saving time, money, and frustration. We're scheduling our imaging appointments earlier: for CT, about two days earlier, and for MRI, about three days earlier. These earlier imaging appointments translate into earlier imaging results and thus the potential for earlier diagnosis and treatment. None of these orders underwent peer-to-peer reviews. There have been no requests for UVA to pay back any payments made under this program. All parties have remained satisfied with the program, so we've made no significant amendments. We have also upgraded our decision support mechanism to make it even more user-friendly for ordering providers. They can now initiate orders using a free-text reason for exam, and artificial intelligence can facilitate the decision support interaction. How did we implement this program? Our radiology leadership initiated a relationship meeting with a private healthcare insurance company and their RBM. We shared our institutional experience with decision support, including actual decision support scores for advanced imaging orders on their beneficiaries, so they could compare our results with theirs. They found that they approved all of our orders with green scores, and they sometimes even approved orders with red scores. They concluded that our decision support mechanism did a solid job of deciding whether an exam was likely to be appropriate, and that such orders were thus low yield for the effort involved in the traditional prior authorization process.
With this conclusion in mind, we reached an agreement on how to implement a program for mutual benefit. Implementation required minor configuration in our EHR to make it convenient for our schedulers to see the healthcare insurance provider and the decision support score associated with each outpatient advanced imaging order at the time of scheduling. The RBM had to modify their prior authorization request website to intake the decision support score and, in return, issue expedited prior authorization. Finally, we used email to notify our ordering providers of the new program and provided basic training to our relevant staff. We held recurring monthly meetings with the vendors to get the program started; eventually these switched to a quarterly and now annual cadence. In summary, our academic medical center reached an agreement with a private healthcare insurance company and their RBM to reduce the burden of prior authorization of advanced imaging tests by leveraging a clinical decision support tool integrated into the EHR. A key component of the program is that eligible orders require a high decision support score, reflecting that the exam is likely to be appropriate for the clinical scenario. For eligible orders, we have successfully reduced our time and money spent on prior authorization, prevented peer-to-peer reviews, and reduced our time to advanced imaging appointments and results. As for the future, there is potential to reproduce this program at many institutions, because most already have clinical decision support technology in place in order to be compliant with the advanced imaging requirements of the Protecting Access to Medicare Act. There is excellent potential to expand the program to include additional willing private healthcare insurance companies and RBMs. And finally, there is potential to expand the model beyond advanced imaging to include other diagnostic tests and treatments that also require prior authorization.
Thank you for your attention.

Hello, my name is Ben Gold. I'm a healthcare technologist. Today I'm going to be discussing electronic prior authorization in the present and future. First, let's talk about prior authorization and the health plan. From the highest level, we can summarize the health plan expense model into two big categories: medical loss ratio and administrative loss ratio. MLR is what is spent to pay providers for reimbursement of services, engage members, and fund risk adjustment, utilization management, and quality programs; make sure to keep utilization management in the back of your mind. Administrative loss ratio is pretty much everything else; it's the operations of the business. As a member, and as a purchaser of healthcare like your employer, or you yourself if you're on the exchanges, we tend to want as much money as possible put into MLR and as little money as possible put into ALR, because MLR is what benefits us as recipients of a benefit. So let's talk about the health plan business model, and this is a pretty simplistic view of how it works. In short, the health plan generates revenue from premiums, which is money charged to the purchaser of healthcare; for the most part in the U.S., that is employers. It might be an academic medical center for some of the folks working here, or universities, or private companies, or it could be the state. Those premiums, minus the expenses we just covered, the medical loss ratio expenses (money spent on care) and the administrative expenses, equal your EBITDA, or your profit. So whether you're a non-profit or a for-profit firm, you generally seek to improve your operating margin and run a healthy business. From the firm's perspective, if you're a health plan, you basically have three choices to change profit: you can increase premiums, you can decrease expenses, or you can do both. So let's talk about what either raising premiums or decreasing expenses entails.
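The revenue-minus-expense model above can be made concrete with a toy calculation; all figures here are made up for illustration, not from any actual health plan.

```python
# Health plan model from the talk:
# EBITDA (profit) = premium revenue - MLR expenses - administrative expenses.
premiums = 1_000_000_000        # revenue charged to purchasers of coverage
mlr_expenses = 850_000_000      # medical loss ratio: care, UM, quality programs
admin_expenses = 100_000_000    # administrative loss ratio: business operations

ebitda = premiums - mlr_expenses - admin_expenses
print(ebitda)  # 50000000

# Cutting utilization (an MLR expense) by 2% improves profit by the same
# dollar amount as raising premiums by that amount, without inviting
# competitive pressure on price:
ebitda_after_um = premiums - mlr_expenses * 0.98 - admin_expenses
print(ebitda_after_um)  # 67000000.0
```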
If a health plan increases its premium, it can invite competitive pressure from other health plans, because we have to remember who buys health insurance: it is employer groups for the most part, about half of all medical benefit purchasing in the country. When your employer looks on a yearly basis for insurance carriers or health plans to deliver medical benefits, they do care about price, and they do care about the benefit design and what value the employer gets for that price. So merely continuing to pass along costs, or to raise prices, will invite other insurance companies or health plans to undercut and compete in the market, which is why all the firms will attempt to drive price down toward the inflection point. The second thing a health plan can do is decrease expenses, and there are really two levers for that: you either decrease utilization or you decrease reimbursement. We're not going to cover the latter, decreasing reimbursement, because that is a different set of activities and mechanisms; we're going to focus today on utilization. And I want you to keep in mind that both of these activities are part of MLR, the part of healthcare expense that we generally like to see in a disproportionate ratio to the administrative loss ratio. So let's talk about utilization management. Broadly speaking, when health plans look at utilization as the expense lever they're trying to minimize, there are basically three broad categories those MLR expenses may fall into: fraud, waste, and abuse. You also hear this a fair amount when you're talking about government and federally sponsored health plans. The key activity to curb waste is utilization management via prior authorization. There are other programs to handle fraud and other activities to handle abuse, but waste is typically constrained and affected by prior authorization.
So I'm going to give you two definitions of prior authorization: one that is probably more legalistic and textbook, but isn't that useful unless you're in the industry, and a second that is probably a more common or layperson explanation. Prior authorization is a business process that the health plan uses to confirm member eligibility, benefit design and benefit eligibility, medical necessity for the proposed or requested benefit, and that the furnishing of that benefit meets the parameters of network design and the contractual obligations between the provider and payer. Okay, that probably didn't mean a lot to most of the folks watching this presentation, so let's try to translate what it means. Really, what utilization management via prior authorization is trying to determine is, for any service the health plan decides to subject to prior authorization: is it medically necessary, and will it be performed within the contractual expectations between the health plan and the provider? We can distill that even further: does the member need it according to evidence-based medicine, and are we going to do it in the way we agreed to when we established our contract as a health plan and a furnishing/performing provider entity? Okay, so let's talk about the market that is driving prior authorization, using a PEST analysis. There are four dimensions through which we're going to look at this: political, economic, social, and technological. On the political front, there are two large pieces of legislation, in proposal and in execution, called the Improving Seniors' Timely Access to Care Act and the 21st Century Cures Act, with specific focus on the information blocking rule.
Taking these two in conjunction: the Improving Seniors' Timely Access to Care Act will attempt to reveal all of the rules and expectations, whether medical necessity or network design rules, that a prior authorization is subject to, and the 21st Century Cures Act is going to use the advances in the technology bucket to make those revealed rules automatable, or at least potentially automatable. With the unsustainable growth of healthcare expenditure, it almost doesn't matter who's president; it is always at the forefront for Congress and lawmakers to attempt to constrain healthcare expense. This dovetails nicely into the economic analysis, which comes down to two key things to keep in mind. From a peer-reviewed literature perspective, Dr. Shrank wrote an article in 2018 outlining that 30% of healthcare expenditure is waste, i.e., types of services that could be eliminated or constrained via utilization management programs such as prior authorization. Second, there is financial upside for both health plans and providers in adopting electronic prior authorization. On the social front, there is another set of dimensions we need to pay attention to. Physicians are displeased due to the administrative burden related to prior authorization, and that needs to be balanced against the increasing year-over-year healthcare expense. And providers and payers are known to have a sometimes abrasive and acrimonious relationship. Lastly, when we look at what might cut this Gordian knot of increasing expense with frustrated parties on both sides, it's going to be technology piggybacking off the political regulations. On the technology front, there is a series of standards and technologies being promulgated. The first is FHIR via HL7, which is just a more scalable and advanced means to move data in and out of electronic health records.
The second is the United States Core Data for Interoperability, which is going to ensure that certain data elements, and how they get moved, are standardized. The third is the Da Vinci Project, which focuses on payer business process automation use cases like prior authorization. And then there is growing investment and market maturity from venture capital firms in robotic process automation. What this results in is that we are seeing a market ripe for disruption. We have political levers that are basically redefining the rules of how prior authorization is adjudicated. We have mounting economic pressure suggesting that prior authorization may not abate, but that we have to do something to curb the medical loss ratio, or healthcare expenditure. And we know that the current process as it works today, with postal mail and fax, is contributing to physician burnout, and providers and payers generally are not finding creative means around it. The best path to disruption is for technology advances to redefine these business processes and create new event spaces. I want to thank everyone for listening today. This was Ben Gold, presenting on electronic prior authorization, the present and future.

Hi, my name is Jennifer Ueda from Brigham and Women's Hospital in Boston, Massachusetts. Today we will be talking about an introduction to clinical decision support mechanisms. I would like to thank Dr. Frank Rybicki and the RSNA for the kind invitation to speak today. Over the next 10 minutes or so, we will review a brief introduction to clinical decision support mechanisms in medical imaging. We will also go over quality theory, including the Deming cycle, and look at theories of human behavior modification in a clinical setting. We'll very briefly look at the historical development of CDSM quality projects and imaging requisition quality benchmarking, and we will follow up with the current state of quality projects implementing CDSM.
The overall objective of clinical decision support mechanisms is to improve patient outcomes and provide cost-effective healthcare by supporting active implementation of evidence-based guidelines for effective use of medical imaging resources. This is quite a mouthful of an objective; it is based on the American Board of Radiology non-interpretive skills. What's really important to glean from it are four things: we are improving patient outcomes and cost-effective healthcare, using evidence-based guidelines, for effective use of medical imaging. So how do we go about doing that? Well, in 2014 the Protecting Access to Medicare Act was passed, and Section 218(b) created a new program to increase the rate of appropriate advanced diagnostic imaging services, focused on CT, PET, nuclear medicine, and MRI. For these advanced medical imaging services, ordering providers must consult a qualified clinical decision support mechanism to evaluate their orders against appropriate use criteria. So what exactly are appropriate use criteria? They consist of a set of specific clinical scenarios and a non-comprehensive list of imaging tests that may be considered in those scenarios, along with ratings for each imaging test in the context of the various clinical scenarios, supported by evidence in the current literature. To date, the largest collection of imaging appropriate use criteria has been developed by the American College of Radiology, and it is known as the ACR Appropriateness Criteria. The ACR-AC scores are based on a one-to-nine scale: scores one through three are designated red and usually not appropriate; scores four through six are yellow and may be appropriate; and scores seven through nine are color-coded green and deemed usually appropriate.
There are specific examples where matching clinical scenarios are not available for certain patients, and these are color-coded as gray. So what does overall success of a clinical decision support mechanism look like? Here's our starting point on the left side of the graph. After successful implementation of clinical decision support, our desired endpoint is that all of the ordered studies are green, meaning they are all usually appropriate based on evidence in the current literature, and the remaining yellow, red, and gray studies are minimal or even zero. The path to our desired endpoint certainly is not linear, but this, again, is our desired endpoint. Qualified provider-led entities, or QPLEs, are institutions that apply for this designation, which is granted by the Centers for Medicare and Medicaid Services. Once designated as a QPLE, an institution has the ability to create appropriate use criteria. As of 2019, there are 21 institutions that are QPLEs, and they are listed here. So what does a clinical decision support mechanism look like in the clinical setting? The patient arrives and is usually seen by a provider. This ordering provider enters the desired imaging modality along with the clinical scenario, and if the desired imaging modality is deemed not usually appropriate, the ordering practitioner will receive a best practice alert. An example of this: if a patient presents for breast cancer screening and an FDG PET of the breast is ordered, a best practice alert will be sent, as this study is usually not appropriate for breast cancer screening. The ordering provider will be asked whether they would like to proceed or modify their order. The ordering provider will then reassess the patient and subsequently enter additional computerized orders, including further imaging, medications, or consultations.
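The order-entry flow above can be sketched as a small decision rule. This is a hedged illustration of the concept only; `check_order` and its return shape are invented for this example and do not correspond to any real EHR or CDSM API:

```python
def check_order(score):
    """Illustrative CDS gate at computerized order entry.

    Given the appropriateness score (1-9, or None when no clinical
    scenario matched), return a best practice alert when the requested
    study is rated 'usually not appropriate' (scores 1-3).
    """
    if score is not None and score <= 3:
        return {
            "alert": True,
            "message": "Usually not appropriate. Proceed or modify order?",
        }
    # Usually/may-be appropriate studies, and unmatched (gray) scenarios,
    # pass through without a best practice alert in this sketch.
    return {"alert": False, "message": None}
```

In the breast-screening example above, an FDG PET order would carry a low score and trigger the alert, prompting the provider to proceed or modify.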
So how do we go about improving requisition quality? Let's take a look at Dr. Deming's work. We're all familiar with the Deming cycle, which is the fundamental basis of all quality improvement projects: plan, do, check, and act. The planning stage is when we plan a quality project; the do, or implementation, phase is where we actually implement a change; the check phase is where we check on the effectiveness or success of the implementation; and the act phase is where we act on what we found and adjust the quality intervention. Over time, projects usually undergo multiple Deming cycles to achieve increasing quality improvement. On modifying and tracking behavior, let's go back in time and look at Dr. Pavlov and his dogs in classical conditioning. This is where a conditioned stimulus, such as a sound or a light, is combined with a potent unconditioned stimulus, such as taste or smell, which results in an unconditioned response, such as salivation. Pairing of the unconditioned and conditioned stimuli over multiple interactions, as with Pavlov's dogs, results in the development of a conditioned response when the subject is exposed only to the conditioned stimulus. Pavlov's work laid the foundation for the scientific study of behavior modification. This was further advanced by Dr. Skinner in operant conditioning, where, unlike classical conditioning, the strength of a behavior is modified by an associated outcome that is either rewarded or punished. Thorndike developed animal learning curves quantifying the effect of reinforcement on the speed of behavior learning, and Skinner took it to the next level with the Skinner box and the cumulative recorder. Here's an example of the Skinner box and the cumulative recorder: the rat hears a sound emanating from the speaker, and if the appropriate light is turned on and the rat pushes the lever, it is positively rewarded with food.
On the other hand, if the wrong light goes on and the rat proceeds to push the lever, the rat will actually receive an electric shock rather than food. The cumulative recorder records all of these behaviors over time. Another success of operant conditioning is a study performed by Levenson and their group, focusing on pigeons and their ability to evaluate pathologic and radiologic images. These pigeons could actually distinguish benign from malignant human breast pathology after training with food reinforcement, and they could apply what they learned to new and novel image sets. On the radiology side, the pigeons could detect cancer-relevant microcalcifications on mammograms. As for the evidence to date on clinical decision support mechanisms, I would like to highlight two papers. The first is a paper published in JAMA in 2015, which showed that there was indeed a high level of gray studies, where there was no matching clinical scenario. After the implementation of a clinical decision support mechanism, the number of red, or usually not appropriate, studies decreased and the number of green, or usually appropriate, studies increased. They found overall that clinical decision support is beneficial. The second paper is by Doyle, published in 2019. They showed that the number of targeted, or low-appropriateness, scans significantly decreased by 6%, and the proportion of red and yellow orders also decreased. However, there was no statistically significant change in the total number of scans. There are limitations to both of these studies, which will be discussed in the subsequent talk. So in summary, we've gone over an introduction to clinical decision support mechanisms in medical imaging. We've looked at quality theory, including the Deming cycle, and gone over theories of human behavior modification in a clinical setting.
We've also looked at the historical development of CDSM quality projects and imaging requisition quality benchmarking. And we've looked at the current state of quality projects implementing CDSM. With that, I thank you very much for your attention. Hello, my name is Leo Shepelev and I'm a radiologist at the University of Cincinnati. Today, I'm here to talk to you about the novel methods we have developed to evaluate imaging appropriateness, as well as some results of our large-scale analyses. The basic methodology presented here follows our Journal of Digital Imaging paper published earlier this year. Briefly, the major departure of this work from previous work was a move away from time on the X-axis toward a more requisition-provider-centric approach to assessment, where we track provider interactions with clinical decision support tools over time. We integrated data from 288 different United States institutions, incorporating over 7 million requisitions from over 244,000 providers. To assess requisition appropriateness, we followed the American College of Radiology rating scheme, where green requisitions are usually appropriate, yellow requisitions may be appropriate, and red requisitions are usually not appropriate. In most settings, the clinical decision support system would alert the user that a study they're requesting is not appropriate, or would provide feedback on the score, which forms a learning interaction. An additional category that can help gauge provider engagement with clinical decision support tools is the gray rate, or the fraction of requisitions where a provider, for whatever reason, could not find a matching clinical scenario. This gray rate sat at approximately 14.5% in our data and was historically as high as 66.5% in other studies.
So when you look at the changes in the fraction of appropriate green, maybe-appropriate yellow, and not-appropriate red requisitions over the index of the requisition rather than over time, it turns out that on a macroscopic level, following the beginning of a user's interaction with a clinical decision support system, these change in generally favorable directions. For example, on average, across all providers' first 10 requisitions, which fall into bin one on this plot, the green rate was 6% lower than across the 10 requisitions in the last bin. However, this data cannot be directly interpreted, as the total number of requisitions in each bin varies over the index of the bin, as you can see in the total plot. This means that some providers drop out of the population, changing the population in the process, which should be controlled for in a rigorous assessment. Here, we control for this issue by looking only at providers submitting at least 200 requisitions. In essence, this is the average of all providers' learning trajectories, as they experience more and more interactions with, and presumably feedback from, their clinical decision support system. Clearly, statistically significant improvements are observed across the board, with the green rate improving by 3% and the red rate dropping by 3%. The dotted lines that you see next to each of the solid lines indicate the error estimate at each point. If, instead, we simply rank providers by their career total number of requisitions, we also see a significant difference in overall green rates between the providers who submitted 200 requisitions and fall into bin 20, and those who submitted up to 10 requisitions and fall into bin one. If the nuances of this analysis, and the switch from a time domain to an interaction domain, are a little too much to grasp at this time, please see the series of blog posts at this address for a more in-depth clarification.
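The binning-by-interaction-index idea described above can be sketched as follows. This is not the authors' published code; the function name, the input shape, and the defaults are assumptions chosen to mirror the description (bins of 10 requisitions, population fixed to providers with at least 200 requisitions):

```python
from collections import defaultdict

def green_rate_by_bin(provider_requisitions, bin_size=10, min_requisitions=200):
    """Compute the green rate per interaction-index bin across providers.

    provider_requisitions: dict mapping provider id -> chronological list of
    color labels ('green', 'yellow', 'red', 'gray').
    Returns a dict: bin index (1-based) -> fraction of green requisitions.
    """
    counts = defaultdict(lambda: [0, 0])  # bin -> [green count, total count]
    for reqs in provider_requisitions.values():
        if len(reqs) < min_requisitions:
            continue  # fix the population to control for provider dropout
        for i, color in enumerate(reqs[:min_requisitions]):
            b = i // bin_size + 1  # bin 1 holds requisitions 1-10, and so on
            counts[b][1] += 1
            if color == "green":
                counts[b][0] += 1
    return {b: green / total for b, (green, total) in counts.items()}
```

Plotting these per-bin rates against the bin index, rather than against calendar time, yields the kind of learning trajectory the speaker describes.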
We have since updated the data to include over 20 million requisitions, with data up to the end of 2020 and early 2021, and a similar positive trend was observed overall. This trend held regardless of the various sub-analyses we performed, including stratification on age for adults, which we defined as 18 to 65 years old; the same was observed for the elderly, those more than 65 years old. The same trends were observed regardless of the specific body parts evaluated. For example, body requisitions assessing the abdomen and pelvis, chest requisitions, neurological structure requisitions including head and spine, and even more specialized and noisier data sets, like breast imaging, all showed similar trends. Again, similar trends held for individuals specified as male in their charts, as well as those specified as female. Stratification based on modality showed similar results: CT, with a 5% improvement in green rate here, and MR, albeit the MR data is a little noisier, with wider error bars and smaller absolute differences; overall, the trend was preserved. This held regardless of clinical setting as well. For example, here are the changes for emergency patients, and inpatients also showed a similar beneficial trend in their requisition quality, with the improvement in appropriateness for the MR data also observed among the inpatients. Non-MD and MD requisition providers likewise experienced similar changes, and despite some small differences at the outset of the assessment, in their first initial interactions, both the non-MD and MD providers, after using their clinical decision support systems, ended up in a similar spot in terms of their overall green and red rates.
A specialty-specific analysis, mapped to 49 major specialties recognized by the American Medical Association, allowed us to compare red rates and green rates across the specialties, with one caveat: some specialties had far fewer requisitions, and thus exhibited noisier data than others. Additional nuances could be gleaned from this data. For example, looking at the intersections of the general sub-analyses, we could make statements like: specialists are more likely to request appropriate studies, and less likely to request inappropriate studies, in their own specialty domain. As you can see here, for example, emergency medicine physicians requesting musculoskeletal studies had a generally lower green rate, and a higher red rate, than orthopedic surgery physicians. In addition, we were also able to look at the impact of clinical decision support on the overall inappropriate radiation administered to patients, using the American College of Radiology relative radiation level ranking scheme, focusing specifically on the three-badge level of 1 to 10 millisieverts in adults, where most CT scans currently sit. Here, we can see that the amount of inappropriate radiation administered on a population level decreases following clinical decision support implementation. To give you a preview, the fraction of inappropriate three-badge requisitions decreases with the number of provider interactions with the CDS system. We performed numerous analyses; however, none of them alone provides a definitive answer. At the end of the day, decision support is all about the efficient use of resources, and in order to assess that, we have to be able to apply our newly proven analysis methods in a way that translates to money saved.
The first way to do so is to look at revenue stratified by payer and additional factors, including length of stay and rolled-up hospital charges for specific types of interactions. This is a glimpse of some of that data, albeit stratified by time, which, as we said, is an inaccurate and suboptimal assessment; however, all our data begins as time-driven for a general quality check and progresses from there. As you have heard in earlier presentations, there are more local strategies to enhance quality and reimbursement, as well as more global approaches. The ensemble of the presented analytical methods and data can be used to evaluate the quality of medical imaging requisitions in both scopes and potentially impact requisition quality. In conclusion, based on our national-scale analysis, clinical decision support improves the quality of medical imaging requisitions, regardless of data stratification. Of note, our stratification allowed us to identify that United States patients 65 years old and older, and thus covered by Medicare, exhibited corresponding improvements in appropriateness after clinical decision support implementation, with decreases in the fraction of inappropriate studies. In the future, we'll be able to zoom in on specific individual interactions and identify things like the top 10 most inappropriate and most appropriate studies for each specialty, for example, to improve patient care. I believe that future sub-analyses using this methodology will uncover more useful insights into medical imaging appropriateness and will help precisely target interventions to improve the quality of care. Thank you very much for your attention. Hello, my name is Dr. Jonathan Flug. I'm an Associate Professor of Radiology at the Mayo Clinic in Arizona and also the current chair of the RSNA Quality Improvement Committee, which is relevant to today's talk. I just want to thank Dr. Rybicki and Dr.
Mahoney for inviting me to speak as part of their course and to talk about the overall impact of low-quality radiology requests on quality. I have no financial disclosures, and as I mentioned, I am currently serving as the chair of the RSNA Quality Improvement Committee. I would just mention that the content delivered in this short talk is not the official position of the RSNA, but my own opinions and thoughts in this role. So I wanted to start by defining quality in healthcare and specifically what that means in radiology. The AHRQ defines quality healthcare as care that is safe, effective, patient-centered, timely, efficient, and equitable. If we're thinking about the role and impact of the radiology requests that we receive and how they affect the quality of our work, I think it really boils down to our ability to provide effective and patient-centered care. This is a helpful quality map, if you will, looking at what we do in radiology from a 30,000-foot view. As this quality map shows, everything starts, from a radiology perspective, with us receiving the request for the order, and that's where the appropriateness of that request comes into play. We can do a lot to try to overcome what we think might be an inappropriate or suboptimal request, but everything we do downstream really depends on the information that we receive when that order is placed. And ultimately, thinking from a patient-centric point of view, the global outcome for the patient really depends on that input coming to us correctly for us to do our job. There's been a lot of discussion here about clinical decision support, so I just wanted to give my quick take on what I think is known, and there's an excellent paper by Dr. Rybicki, screenshotted below, that provides some background on this topic.
Basically, what we know from studies that have been done is that baseline appropriateness from our referring clinicians is probably not perfect and leaves room for improvement. Clinical decision support improves imaging appropriateness, but even with that, there's still room for us to improve. So this tool, in and of itself, just like any tool, won't solve our problem; we need to figure out how we can use it to improve quality and improve patient care. When I think about what we still don't really know: number one, what is the full impact of inappropriate imaging? We see lots of numbers about the cost of incorrect imaging, but what does that really mean to an individual patient? How does that inappropriate imaging contribute to a delay in diagnosis, or potentially to wrong treatments? What does it do to patient anxiety? The full impact is not known and may never be fully known. On the other hand, what we also don't really know is what changes in the patient's outcome and their global perspective when we improve the appropriateness of our imaging. Think about that. What does that mean for the patient and for patient care? What does that mean for the radiologist? Because I think we would all be delighted to have the correct information as we're doing our work and to feel that what we're doing is meaningful. And what does that mean for payers and costs? There's also been some talk in the research about gaming of the system: even when we see what looks like improved appropriateness of imaging, is that real, and how do we tease that out? I like to think about waste, which is really what we're dealing with here, from a lean perspective, that is, lean process improvement. Lean is defined as delivering quality products and services that are just what the customer needs, when the customer needs them, while using a minimum of materials, equipment, space, labor, and time.
From an imaging perspective, what we're really dealing with here is waste. We are not giving the customer, the patient or the referring physician, what they need; we're giving them a suboptimal product. It's not our fault, but it's what we're doing. Several of the categories of waste that lean describes come into play here. Number one is motion: a lot of times we have to do a lot of work to correct these inappropriate examinations and requests. Waiting: patients probably have delays in their care related to the wrong study being done. Overproduction could occur if we're using incorrect protocols, and the same goes for over-processing. And at the end of the day, if we're really thinking about patient-centered care, the product we're delivering may ultimately be defective if it's not what the patient needs to obtain their diagnosis or the treatment they need. I think another interesting perspective on the impact of low-quality radiology requests is burnout. In the research, there's been some discussion of burnout and some relevant fundamental dimensions of burnout that come into play here. Number one is a lack of accomplishment and a sense of ineffectiveness, and second is depersonalization, with associated cynicism toward the job. I can't really say that I practiced before the days of PACS, but there's a lot of discussion about how, back in those days, referring clinicians would come to you, tell you about the patient, and you would know that you were doing the right study, the right test. Now we see a lot of requests come through that may seem inappropriate, and that definitely leads to cynicism in some individuals and can contribute to burnout. Other contributors are poor communication, which is really at the base of this low-quality radiology request problem, the EHR, and increasing isolation.
As much as CDSM is tied into the EHR and is helpful, the electronic health record is really at the base of this problem, and oftentimes many of us feel that we don't have the appropriate input and involvement in the decision-making process when it comes to what's right for the patient. So the types of infrastructure solutions we need are going to be tools that not only drive physicians to order what we think are appropriate studies, but that truly support collaboration with our referring clinicians, along with ways to measure appropriateness and provide feedback to those individuals so they can improve. Here are some future directions I'd like to see from my perspective as the Quality Improvement Committee chair. I'd like to see us finding ways to use clinical decision support in quality improvement more directly. Right now, there's a lot of talk about how to implement and use this tool, but how do we really use it to improve overall quality? How do we get across-the-board adherence to the best recommendations in imaging while minimizing waste? How do we not only use clinical decision support tools as part of our operations, but really plug them into our quality improvement infrastructure and think about them from that quality improvement perspective? And then, as I said, how do we work with our referring and ordering physicians on these projects and improvements? Because working in a silo, we will never be able to fully influence and improve outcomes here. And lastly, how do we use clinical decision support tools to reduce burnout? How do we reduce the clerical, financial, and cognitive burden that results from these errors? Here are my references, and thank you very much.
Video Summary
The session on "Low-Quality Radiology Requests and Their Effect on Radiology Output in Clinical Care" highlights key discussions on the improvement of radiology practices through clinical decision support (CDS) systems. Frank Rybicki introduces the session, outlining its aim to address challenges in clinical decision support, imaging appropriateness, and quality. Notable speakers, such as Dr. Bill Thorwarth, emphasize the importance of integrating value-based healthcare frameworks to enhance imaging quality and economic efficiency. Dr. Thorwarth argues that radiologists must take responsibility for the entire patient experience to improve accountability and quality in healthcare delivery.

The session delves into the implications of the Protecting Access to Medicare Act (PAMA) and its impact on radiology practices. Discussions focus on how clinical decision support tools can prevent unnecessary imaging and promote cost-effective healthcare. The R-SCAN project, supported by CMS, exemplifies initiatives that have successfully reduced inappropriate imaging, saving millions of dollars annually.

Dr. Cree Gaskin presents a case study on using quality metrics to expedite prior authorization at the University of Virginia, highlighting the burdens and delays caused by traditional prior authorization processes. Ben Gold discusses electronic prior authorization, framing it within the health plan business model and exploring its potential to streamline healthcare delivery.

Ultimately, clinical decision support mechanisms are advocated as vital tools for enhancing the appropriateness and efficacy of radiology services. Speakers emphasize the integration of these tools into medical training and ongoing clinical practice to improve patient outcomes while reducing administrative burdens.
The presentations underscore the need for innovation, collaboration with stakeholders, and continuous quality improvement to adapt to changing healthcare environments.
Keywords
Radiology Requests
Clinical Decision Support
Imaging Appropriateness
Value-Based Healthcare
Protecting Access to Medicare Act
R-SCAN Project
Prior Authorization
Electronic Prior Authorization
Quality Metrics
Healthcare Innovation
Copyright © 2025 Radiological Society of North America