Quality, Patient Safety and Performance Improvemen ...
R3-CIN24-2021
Video Transcription
Good afternoon, everybody, and thank you for joining us in today's session. My name is Christine Burke, and I'm a radiologist at Brigham and Women's Hospital in Boston, Massachusetts. I'm going to be talking to you this afternoon about improving radiologists' performance by incentivizing quality and patient safety metrics. After setting the stage of why quality metrics matter, I'll tell you about our quality and safety performance improvement program and describe how it has evolved over time. But first, why do quality metrics matter at all? Beginning with the Institute of Medicine's seminal report, Crossing the Quality Chasm, leaders in medicine have been advocating for a shift from rewarding volume-based care to rewarding quality-driven health care for the last decade. Efforts to achieve this are ongoing and accelerating as value-based payment models are gaining traction. In the field of radiology, this is exemplified by the ACR's Imaging 3.0 campaign, which aims to help radiologists design strategies and implement tools to improve imaging appropriateness, quality, safety, efficiency, and patient satisfaction. A systematic review of the literature for radiology quality measures identified as many as 75 unique measures, such as patient access times, report communication timeliness, procedure complication rates, and patient and referring physician satisfaction. However, there are relatively few published examples of how to effectively use these measures in a sustainable quality improvement effort. So today, I will describe how we at Brigham and Women's Hospital use quality and safety measures to improve radiologists' performance, and do so via a multifaceted, department-wide pay-for-performance initiative, which we call Radiology Performance Outcome Measures, or RPOMs. This program was published earlier this year and features five quality metric targets listed in the table on the right. These metrics were designed by our departmental leadership team and informed by previously published quality improvement projects performed at comparable institutions across the country. Before implementation, each RPOM measure was tested to ensure it accurately reflected radiologist performance. Ample time was also allotted for the collection of and response to feedback from the quality and safety team and departmental leadership before the launch of the program. Importantly, the measures chosen were ones completely within the control of individual radiologists. Performance in each of these five domains is measured daily. Five percent of each radiologist's salary is withheld and redistributed on a quarterly basis in one percent increments for meeting each of these five quality metric targets. So let's take a deeper dive and better understand what these five measures are. The first three pertain to timeliness of report generation and are a mix of individual radiologist and division-wide targets. These include preliminary to final report signature turnaround time, which was designed as an individual radiologist incentive. The target was initially set to achieve an 80th percentile time of less than or equal to six hours. The second measure is the number of individual reports with a markedly delayed signature time of greater than or equal to eight days. This was designed as an individual radiologist incentive with the target initially set as fewer than four reports delayed per quarter. 
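To make the incentive arithmetic concrete, here is a minimal sketch of the withhold-and-redistribute scheme described above: five percent of salary withheld and returned in one percent increments per target met, with the six-hour 80th-percentile turnaround target as one example check. The function names, data layout, and percentile method are illustrative assumptions, not the department's actual implementation.

```python
# Sketch of the quarterly payout arithmetic described in the talk: 5% of salary
# withheld, returned in 1% increments per metric target met. Field names and
# the data layout are illustrative assumptions.

def percentile_80(hours: list[float]) -> float:
    """80th-percentile turnaround time (nearest-rank method)."""
    ordered = sorted(hours)
    rank = max(0, round(0.8 * len(ordered)) - 1)
    return ordered[rank]

def quarterly_payout(quarterly_salary: float, metrics_met: dict[str, bool]) -> float:
    """Withheld dollars earned back this quarter: 1% of salary per metric met."""
    withheld = 0.05 * quarterly_salary
    per_metric = 0.01 * quarterly_salary
    return min(withheld, per_metric * sum(metrics_met.values()))

# Example: prelim-to-final turnaround times (hours) for one radiologist's quarter.
tat_hours = [1.2, 3.5, 5.4, 2.0, 6.8, 4.1, 0.9, 5.0, 3.3, 2.7]
metrics = {
    "prelim_to_final_80th_pct_le_6h": percentile_80(tat_hours) <= 6.0,
    "fewer_than_4_reports_delayed_8_days": True,
    "division_unread_exams_lt_4_pct": True,
    "critical_results_closed_ge_90_pct": True,
    "peer_learning_target_met": True,
}
print(quarterly_payout(100_000, metrics))  # 5000.0 when all five targets are met
```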
The third measure is the proportion of unread exams at the end of each clinical workday, designed as a division-wide incentive with an initial target of fewer than four percent of exams left unread at the end of each weekday. The fourth measure relates to closed-loop communication of critical results and is designed as an individual incentive. The target was initially to document closed-loop communication within the parameters set by our institutional policies greater than or equal to 90 percent of the time. The final metric features our department's peer learning program, entitled Worth Another Look, or WAL. The initial target was set as a division-wide incentive, asking each division to create 90 peer learning submissions per clinical quarter. Performance in each of these five domains is measured daily and is displayed transparently on a web-based dashboard that's very easy to find on every hospital computer, with a direct link to the dashboard in each computer's start menu. This dashboard is built, maintained, and funded by our department using QlikView software and is viewable by every faculty radiologist in the department. It refreshes daily and displays updated performance metric results for all faculty radiologists in a non-anonymized fashion. This enables timely and transparent performance monitoring, both for individuals wanting to see how they're doing and for departmental and divisional leadership teams. Here's an example of the preliminary to final target measurement dashboard seen in a department-wide view comparing division to division, and here in a divisional view comparing individual radiologists to individual radiologists. In this way, it's easy for one radiologist to compare their performance to that of their peers. Here's an example of the peer learning and critical alerts dashboard in the divisional view, and here showing my individual performance last quarter. Importantly, at the top of the screen, we transparently tell radiologists what the target is and make it very transparent exactly which alerts weren't closed on time and how many estimated clinical days per quarter they've worked so they know how many peer learning submissions they need to put in to meet the target. When we studied the impact of this financial incentive program, we condensed the first three report timeliness metrics into a single measure for simplicity. We termed this the examination complete to report finalized time. This time decreased significantly across the department by nearly four and a half hours after the financial incentive was introduced, corresponding to a 21% relative decrease. Comparing subspecialty division by division, we found that of our 12 divisions, seven demonstrated significant improvement after the financial incentive was launched. Of the five that did not show improvement, all were already performing better than the department average at baseline. We similarly found increases in rates of peer learning program usage at the divisional level as a result of the financial incentive, with upwards trends in the number of alerts submitted seen in nearly all divisions, and no divisions missing our 90 alert per quarter target after the second quarter post-intervention. In the end, 64% of a single radiologist's salary was withheld across the entire department in the first quarter post-intervention, and this decreased to 13% of a single radiologist's salary withheld across the department in the last quarter of the study. 
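For the condensed study measure described above, the following is a rough sketch of how an examination-complete-to-report-finalized time and its relative change across periods might be computed; the column names, toy timestamps, and the choice of the mean as the summary statistic are assumptions, since the talk does not specify them.

```python
# Sketch: the condensed "examination complete to report finalized" measure,
# summarized per period so a relative change can be reported.
# Column names, timestamps, and use of the mean are illustrative assumptions.
import pandas as pd

reports = pd.DataFrame({
    "exam_complete": pd.to_datetime(["2020-01-02 08:00", "2020-01-03 09:00",
                                     "2021-01-02 08:00", "2021-01-03 09:00"]),
    "report_finalized": pd.to_datetime(["2020-01-03 06:00", "2020-01-04 04:00",
                                        "2021-01-02 22:00", "2021-01-03 23:30"]),
    "period": ["pre", "pre", "post", "post"],
})

reports["hours_to_final"] = (
    reports["report_finalized"] - reports["exam_complete"]
).dt.total_seconds() / 3600

by_period = reports.groupby("period")["hours_to_final"].mean()
relative_change = (by_period["post"] - by_period["pre"]) / by_period["pre"]
print(by_period)
print(f"relative change: {relative_change:.0%}")
```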
One of the most critical features of our program is its flexible nature, allowing us to change the metrics and incentives as time goes on to continue to drive improvement. We initially chose liberal performance targets that most radiologists were already meeting in order to allow our faculty time to observe the program's compatibility with our departmental culture and values, to try out the program in a non-threatening way, and to observe the success of the change early on. Once people got used to the program and the RPOM quality compensation framework was widely accepted, iterative evolution of the targeted benchmarks occurred and has allowed us to achieve continued gains in radiologist performance. Examples of these changes include updating the report signature 80th percentile target from six hours to five hours, increasing the target percentage of critical alerts closed on time from 90% to 92%, and changing the peer learning engagement metric from one in which we asked every clinical division to submit 90 per quarter to one where we ask every individual radiologist to submit one peer learning case per clinical day. We changed the peer learning incentive to one that affected radiologists individually because we found considerable variation in how radiologists used the peer learning program when the incentive was division-wide. The graph shown here demonstrates a bimodal distribution of submitters, with some very frequent users on the right-hand side of the graph and other radiologists using the program very infrequently shown on the left. We studied and subsequently published the effect of the change of incentive from division to individual on our peer learning program engagement. In this study, we demonstrated that peer learning tool usage continued to increase after the incentive was changed from a division to an individual incentive, from 12.6 per thousand reports during the division-wide period to 25.2 per thousand reports during the individual incentive period. This amounted to an 11.5-fold increase in peer learning program usage compared to before we applied an incentive at all. Considering individual radiologists' use of the peer learning program, the individual incentive led to an increase in the number of alerts submitted per clinical day for 92 percent of radiologists in our department. The number increased from 0.6 alerts per clinical day to 1.2 alerts per clinical day. Altogether, 38 percent, 73 percent, 93 percent, and 89 percent of radiologists met the individual target during each quarter of the individual incentive study period. And we haven't stopped there. Other metrics our quality and safety group has considered include a one-hour turnaround time for STAT and outpatient urgent examination reports, use of a standard diagnostic certainty scale to communicate uncertainty in our reports, and increasing the quality of peer learning program submissions after evaluating whether the submissions offer a clinically meaningful learning opportunity using natural language processing algorithms. So long as the metrics we change to are accurate, transparent, and have leadership support, the possibilities are endless. So in conclusion, quality measures play an important role in healthcare and radiology today, particularly on the eve of value-based payment model dissemination. A pay-for-performance strategy can be successfully applied to quality and safety initiatives to improve performance. 
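A small sketch of how the two peer-learning engagement rates quoted above, submissions per clinical day and submissions per 1,000 reports, could be tallied per radiologist against the one-per-clinical-day individual target; the record structure and numbers are illustrative assumptions.

```python
# Sketch: per-radiologist peer-learning engagement expressed the two ways used
# in the talk, submissions per clinical day and per 1,000 reports.
# The record structure and numbers are illustrative assumptions.
from collections import defaultdict

submissions = [  # (radiologist, clinical_day)
    ("rad_a", "2022-04-01"), ("rad_a", "2022-04-02"), ("rad_b", "2022-04-01"),
]
clinical_days = {"rad_a": 2, "rad_b": 1}      # estimated clinical days worked
reports_signed = {"rad_a": 180, "rad_b": 95}  # reports signed in the same span

counts = defaultdict(int)
for rad, _day in submissions:
    counts[rad] += 1

for rad in clinical_days:
    per_day = counts[rad] / clinical_days[rad]
    per_1000 = 1000 * counts[rad] / reports_signed[rad]
    meets_target = per_day >= 1.0   # individual target: one submission per clinical day
    print(rad, round(per_day, 2), round(per_1000, 1), meets_target)
```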
When developing a quality and safety measurement program, key features include leadership commitment, validated performance targets, transparent reporting via a dashboard, and measures that are within each individual radiologist's control. Finally, program flexibility is critical, as this allows for metric evolution and continued incremental improvement moving forwards. Thank you so much for listening to my presentation today. I hope you've been inspired by hearing about our program at Brigham and Women's Hospital, and I encourage you to reach out to me if you have any questions as you develop these programs at your own institutions. Thank you so much, and have a great afternoon.

Thank you so much for coming. I'm going to be talking to you today about a mouthful, which is reducing ordering errors by simplifying the radiology ordering process in the electronic health record. And I think it's important to start our conversation by talking about jam. And yes, I do mean that fruity spread that's delicious on toast, because a study on jam has given us insights into how we should be approaching radiology informatics. In 2000, psychologists Sheena Iyengar and Mark Lepper published a study that seemed counterintuitive at the time. When shoppers were presented at the door with jam samples, on the day there were 24 jam samples, there was lots of interest in the jam display, more than on a different day when there were only six flavors of jam offered. However, when it came time to buy, it turns out that those who were shown fewer jams were ten times as likely to make a purchase. This phenomenon is called choice paralysis, which is a reduction in satisfaction with increased choices. Choice paralysis reduces people's satisfaction with their decisions, even if they make good ones. This phenomenon of decreased satisfaction with increased choices has been proven in numerous settings, including convenience store snacks, the number of retirement investment options, ice cream flavors, and even jobs. It even has made its way into a best-selling book about sparking joy by removing the excess things from your life, only keeping the things that make you happy. Of course, having no choice at all does not make people happy. The actual relationship between choice and happiness, like many things, is not linear, but rather an upside-down curve, with peak happiness achieved with relatively few choices. So the solution to choice is to curate the number of options that people need to choose from, but making that happen is really, really hard. That's because this bumps up against another phenomenon, which is it's harder to take things away from people than to add, because people anchor to what they already have and have an aversion to loss. But you know that after the initial pain of loss, fewer choices will make folks happier. So your job as a change leader is to help people realize that that momentary sense of loss is worth it in terms of long-term happiness. I'm going to start with a little detour to explain how I got on the journey of procedure code simplification. Our department was working on a separate problem of delays in exam authorization of scheduled patients that, on analysis, was due to inconsistent workflows in the inpatient and outpatient setting. In the inpatient setting, radiologists were asked to finalize their protocols, while in the outpatient setting, they were asked to route their protocols to the technologist, who would then be responsible for finalization. 
Exam finalization was the trigger for the authorization team on the outpatient side to start the preauthorization process. Since radiologists rotated between the inpatient and outpatient settings all the time, invariably something would be routed when it should be finalized, or vice versa. And so the routed protocols ended up in different worklists from the finalized protocols, and if the error was not caught in time, a patient might get preauthorized for the wrong exam, which would then be caught too late, which might then require the exam to be rescheduled if the wrong authorization was obtained. The solution? Removing the likelihood of error by simplifying workflows and removing choices. Radiologists were told to route, not to finalize, for all settings, inpatient and outpatient, and more importantly, the option to finalize was just completely removed from their protocol window. Technologists had only one worklist in which radiologist-protocoled studies would appear, which would be the routed studies, and this one little change reduced the cognitive load and workload for radiologists, technologists, schedulers, and the authorization team, and because of that, improved the care for the patients. Removing this finalize button was a small IT change that made a huge difference, and this was my first taste in IT of how less could be so much more. Then the problem of procedure codes came along. In my role, I hear when referring providers are not happy with how radiology intersects with their work. At one point, a neurologist reached out to me because there was something wrong with the radiology order she was using, which sometimes led to ordering the wrong type of scan. I said, I know, right, because, you know, I'm an interventional radiologist, and I too am a consumer of radiology orders, and this is my world when I would search for CT abdomen. Notice from the slider bar that the choices actually go well beyond this one screen. And then she told me that, for her, it was even worse than that. When she is looking for an MR brain, she gets multiple differently named orders for just MR brain, and one of them was even wrong. The name would say MR brain without IV contrast, but then it would actually order an MR brain with IV contrast. Clearly we had a problem that was bigger than we thought, because we actually had two problems. The overall problem was that referring providers are seeing too many choices when trying to order exams. And there were two components to this. First, there are too many similar-looking procedure codes in our procedure code system. Second, it appears that the way some specialty preference lists were built, one exam code would appear under multiple different names. So I decided to tackle the first component first. My first step was to understand the extent of the problem. I asked the IT team to export a spreadsheet of all the diagnostic radiology procedure codes, and over a year ago, we had 2,215 separate diagnostic radiology procedure codes. Even worse, I was being asked to add new codes every week. Drilling down on the data, I found that over a quarter, the left-hand bar, of these procedure codes weren't even used over the last year, and over half, which were the first two bars, were used 10 times or fewer per year. From this quick and dirty look, it seemed that at least half the procedure codes probably weren't necessary. So how did we end up with so many procedure codes? 
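The "quick and dirty look" described above can be reproduced with a few lines of analysis; this sketch uses a tiny inline table, and the column names and counts are illustrative assumptions rather than the actual export.

```python
# Sketch: the "quick and dirty look" at procedure code usage, i.e. what share of
# codes went unused in the past year and what share were used ten times or
# fewer. The inline table and column names are illustrative assumptions.
import pandas as pd

codes = pd.DataFrame({
    "procedure_code": ["IMG1001", "IMG1002", "IMG1003", "IMG1004", "IMG1005"],
    "orders_last_year": [0, 3, 0, 5200, 7],   # would come from the IT export
})

total = len(codes)
unused = int((codes["orders_last_year"] == 0).sum())
rarely_used = int((codes["orders_last_year"] <= 10).sum())   # includes unused codes

print(f"{total} codes; {unused / total:.0%} never used; "
      f"{rarely_used / total:.0%} used 10 times or fewer")
```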
When a procedure code is ordered correctly, it provides a lot of granular detail that ensures that the downstream processes will happen automatically. For example, different procedure codes have been used for different PET tracers. Adding pulmonary embolism to a chest CT angiogram communicates the timing of a bolus. A procedure code can be offered so that only certain imaging sites can be selected, for example, a CT endorhagraphy. The problem is that we have placed the onus of selecting the right procedure code from a large and growing menu onto the referring providers, folks who are not in radiology and who are probably the least equipped to make that decision. How often are those original procedure codes being ordered correctly? It turns out that technologists were changing an average of 1,341 procedure codes a week, or over 10% of our orders. In order to do that, they had to double check all procedure codes because there was no trust that the procedures were even being ordered correctly to begin with. And if we missed that error, if that code went through, exams may be incorrectly protocoled, patients may be scheduled for the wrong exam, particularly if that exam was scheduled urgently before protocol even happened, and patients might even be billed for the wrong exam. This creates a system that no one is happy with. There's a lot of rework involved at many levels when a wrong procedure code is ordered. And having so many procedure codes also puts a disproportionate burden on our IT teams, who must link all of these codes to the right places in different applications, including reporting templates, clinical decision support, display protocols, and work lists. Drilling down on the codes, there seemed to be some low-hanging fruit that would allow us to retire some procedure codes with minimal pain. That would include retiring codes that we never used, such as MR with contrast codes, since we always used with and without contrast sequences in our MRs if we were going to use contrast, or codes with unnecessary descriptors like pediatric that we never actually used, or codes for clinical trials that actually had already ended. In order to start clearing the codes, we created a coalition of stakeholders within radiology and set up a work group. We started by setting some achievable short-term goals. For example, we used the goal of procedure code reduction as a value-based care project metric to identify 10% of our procedure codes to retire within three months. We achieved that goal and, in fact, identified 15% of codes within four months. We used the value-based care project to create processes for code retirement, which included identifying whether these procedure codes exist on clinician preference lists so that we can work with a specialty group to replace that code with the appropriate substitute procedure code. 10% is a nice proof of concept, but that will not get the referring providers to where they want to be, as that still leaves almost 2,000 procedure codes in our system. So how to make a large, dramatic change? This is where I connected with Ramin Khorasani and Tarek Alkasab from Mass General Brigham, a combined health care system that not only managed to unify the ordering practices of two highly complex and renowned hospital networks, but also was able to reduce referring provider orderables to the range of slightly over 200. That would be one-tenth of the orders that our providers saw. That would be a noticeable difference to them. 
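As a rough illustration of the "low-hanging fruit" screen described above, this sketch flags codes that were never used, carry an unused descriptor such as pediatric, or belong to an ended clinical trial; the inline data, column names, and ended-trial list are all assumptions.

```python
# Sketch: flagging "low-hanging fruit" retirement candidates by the rules named
# in the talk (never used, unused descriptors such as pediatric, ended trials).
# The inline data, column names, and ended-trial list are assumptions.
import pandas as pd

codes = pd.DataFrame({
    "procedure_code": ["IMG1001", "IMG1002", "IMG1003", "IMG1004"],
    "name": ["MR BRAIN W CONTRAST", "CT CHEST PEDIATRIC", "CT TRIAL XYZ", "CT ABDOMEN PELVIS"],
    "orders_last_year": [0, 3, 0, 5200],
})
ended_trials = {"IMG1003"}   # assumed list of closed clinical-trial codes

candidates = codes[
    (codes["orders_last_year"] == 0)
    | codes["name"].str.contains("PEDIATRIC", case=False)
    | codes["procedure_code"].isin(ended_trials)
]
print(f"{len(candidates)} of {len(codes)} codes "
      f"({len(candidates) / len(codes):.0%}) flagged for retirement review")
```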
So how did they make that change? First, they made the orderables as generic as possible, removing details like the laterality, which would actually cut certain procedure code counts to a third if you have left, right, and bilateral, and contrast, which also would cut procedure codes to a third because you have without, with, and with and without from the orderables. These details instead could be requested through selectable choices within the order, which then drives radiologist protocols and the creation of performables by the technologists. Those performables are then what allow for correct scheduling, scanning, and billing. This may seem like additional work for the technologists, but it places the work in the hands of those with the right expertise. And some of that work was actually already being done by the technologists, since the technologists were already double-checking each procedure code that went through under the current system. Nonetheless, this is a big change. So to get buy-in on how to make this kind of process work at Stanford, we started with a very small pilot in breast MRI, which, for one type of exam, had 11 orderable exams. We mapped the information that we need to protocol a breast MRI order, and the information we would need to get authorization, which is very specific for breast MRI, into selectable questions within the order, and found that this workflow worked well and would allow us to very comfortably reduce down to one to two breast MRI orders. We're prepared to reduce in other divisions, but before we pull the trigger on that, we're learning from Mass General Brigham how they have managed to incorporate CareSelect clinical decision support into their reduced orderables. And once that's been determined, we plan to continue on a division-by-division basis to reduce and streamline our orderables and update our clinicians' preference lists with those streamlined orders. Speaking of preference lists, when we were looking into that separate issue of procedure code duplication in specialty preference lists, we realized that the broad freedom that our Epic team was giving to the specialties in how they created their preference lists was causing a huge problem. If one provider wanted a brain MRI ordered a certain way, he or she would add that version of brain MRI with certain selections picked and comments entered into the specialty preference list, and then everyone in that specialty login would see that as a separate order on their search for brain MRI. No one, not even our IT teams, had realized that this was the outcome of this really liberal preference list strategy. So now there is a separate effort paired with the ambulatory Epic team's specialty workflow optimization workgroup to clean up how radiology orders are offered on preference lists while making it easy for our subspecialty partners to order the specific exam protocols they need through selectable choices within the order and personal preferences. The strategy will probably be that there is only one order within a specialty preference list, for example, brain MRI, and if individuals want their own individual preferences, they can set those and not result in duplication in other people's preference lists. It actually took several meetings and a bit of finger pointing to figure out why so many extra orders were appearing on our specialty preference lists because this problem crossed multiple domains within Epic. 
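A minimal sketch of the generic-orderable idea described above: the referrer-facing order carries laterality and contrast as selectable answers, and a lookup resolves them to a specific performable for protocoling, scheduling, and billing. All names and codes here are illustrative assumptions, not the Mass General Brigham or Stanford build.

```python
# Sketch of a generic orderable plus selectable questions resolving to a
# technologist-built performable. All names and codes are illustrative
# assumptions, not the actual Mass General Brigham or Stanford build.
from dataclasses import dataclass

@dataclass
class GenericOrder:
    orderable: str     # e.g. "MRI BREAST", the only thing the referrer searches for
    laterality: str    # "left" | "right" | "bilateral"
    contrast: str      # "without" | "with" | "with_and_without"

PERFORMABLES = {
    ("MRI BREAST", "bilateral", "with_and_without"): "MRBRSTBILWWO",
    ("MRI BREAST", "left", "without"): "MRBRSTLTWO",
}

def resolve_performable(order: GenericOrder) -> str:
    """Map the referrer-facing generic order to a performable for scheduling and billing."""
    key = (order.orderable, order.laterality, order.contrast)
    return PERFORMABLES.get(key, "NEEDS_TECHNOLOGIST_REVIEW")

print(resolve_performable(GenericOrder("MRI BREAST", "bilateral", "with_and_without")))
```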
This emphasized to me that in order to really be able to make changes through IT, you really need to understand what it can do, not just what you think it can do or what you think it does. This usually requires reaching outside the organization to the vendor or another institution to see if it's being done differently elsewhere. Right now, we're slowly climbing the right side of the happiness curve, but as people see the improvements with the pilots and the small wins, we'll get to the point where we can reach the peak happiness with the choices offered. I plan to keep learning from others along this journey, and I hope this talk inspires you to identify a peak happiness target for your department too. I look forward to networking with you so that we can learn from each other as we go on our IT journey. Thank you.

My name is Dr. Neena Kapoor, and I'm from the Department of Radiology at Brigham and Women's Hospital. I'm here to discuss how we can ensure the timely performance of clinically necessary radiologist follow-up recommendations to reduce diagnostic errors. I'd like to give you an overview of my talk today. So first, I'd like to talk about the many challenges regarding radiologists' recommendations for additional imaging. This phrase, recommendations for additional imaging, is something I'm going to say a lot throughout this talk, so I've shortened it to the acronym RAI, which has also been used in the literature to refer to this idea. I'd like to also discuss the tools that Brigham has used to address these challenges, specifically in the Department of Radiology, and how informatics tools and data visualization have been used to improve our practice and what everyone can learn from some of our experiences. I'd like to begin by talking about a hypothetical case. Unfortunately, there's a case like this maybe once or twice a year at most institutions across the country, where a patient has an incidental finding on a radiology exam. The radiologist makes note of this incidental finding, let's say a lung mass, and recommends follow-up, let's say a follow-up chest CT, to get a better idea of what that opacity is. For various reasons, medical care is complicated, many teams can be involved, the follow-up is not obtained, that imaging follow-up is not obtained. And this can lead to missed or delayed diagnoses that can have very important clinical ramifications for our patients. And it leads us to have a lot of questions at the end of the day when something like this happens. Is the radiology report enough? Should the radiologist have made additional communication to the ordering provider to alert them that there was a clinically significant incidental finding? Do our patients have access to our radiology reports? Can they read them? Is the language understandable to a patient? Is it clear that there is a recommendation being made by the radiologist? And also, is there a way to track these follow-up recommendations to completion? Meaning that the radiologist makes a recommendation, can we track those recommendations to see if those patients come back for that repeat imaging? What happens to those patients? These are all questions we're left with when cases like this unfortunately arise, and how do we answer them? That's what I'd like to get at today in this talk. There are also some bigger questions about standardizing follow-up imaging care that I'd like to address. So one can say, is there variability in the kinds of recommendations that radiologists make? 
Maybe we're all consistent, and in a perfect world, if we see an entity, we all have the same follow-up recommendations. Can we use guidelines to standardize our radiologist recommendations so that if there's a certain size pancreatic cyst, there's a certain recommendation that always follows that? And if that's not possible, if we can't use guidelines, and if there is variability between our radiologists, can we establish what's called a collaborative care plan between radiologists and ordering providers? When I say collaborative care plan, I mean that a radiologist makes a recommendation, and the ordering provider agrees to that follow-up imaging recommendation. So you have two providers who agree on a plan. That is the collaborative care plan. So to answer that first question, do radiologists vary in the type of imaging recommendations that they make? We studied this at our institution, and the answer is yes. Looking at thousands and thousands of reports, and using natural language processing to find follow-up recommendations, we've seen that there's up to a seven-fold variation between radiologists in the probability of RAI. In other words, the probability of making a follow-up recommendation. So radiologist A can see a specific finding, and radiologist B can see that same thing on the same image, and they both will have different management recommendations about what the next step is. This is important for various reasons. Number one, our ordering providers don't pick the radiologist that will ultimately read the patient's study. So they don't really have a say in which radiologist picks up the study or which radiologist to listen to, and it can lead to a lot of confusion about what the next step should be if you feel like, as an ordering provider, you're not getting consistent recommendations. And this also leads to an additional challenge in which of these recommendations are clinically useful? Which of these should be monitored to resolution? Which of these should we really target the patients to say, you know what, your radiologist found this, you need to come back for these imaging recommendations? And which can we say, well, that's a soft call, and maybe we don't have to track the patients down to make sure they come back for that repeat imaging? And finally, what's the correct level of RAI, or what's the correct probability of making a follow-up recommendation? Perhaps 30%, I think many of us can agree, is too high; a third of your studies likely should not have follow-up recommendations. But does that mean 10% is right, or 12%? It's really hard to know the ideal follow-up recommendation rate. What's the best thing to do in this situation? But what our study shows is that there's variability between our radiologists. To address the issue of variation between radiologists and the recommendations that they make, many have suggested using imaging-based guidelines. And guidelines are an excellent way to help manage patients. But there are also some problems with guidelines. For instance, there can be multiple different guidelines that can exist for the same entity. In the example of pancreatic cysts, you have guidelines from the American College of Radiology, the European Study Group, the ACG, and the American Gastroenterological Association. So it can be challenging as a radiologist or an ordering provider sometimes to pick which of those guidelines you want to adhere to, and there are some differences between the guidelines in terms of what they recommend. 
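As a stand-in for the natural language processing used in the study, here is a crude keyword sketch of flagging recommendations for additional imaging and computing each radiologist's RAI rate; the real system used a trained NLP model, so the pattern list and report snippets below are assumptions for illustration only.

```python
# Sketch: a crude keyword stand-in for the NLP used to flag recommendations for
# additional imaging (RAI), then each radiologist's RAI rate. The real system
# used a trained NLP model; this pattern and the snippets are assumptions.
import re

RAI_PATTERN = re.compile(
    r"\b(recommend|follow[- ]?up|further evaluation with)\b.*\b(CT|MRI|MR|ultrasound|PET)\b",
    re.IGNORECASE,
)

reports = [
    ("rad_a", "1.2 cm lung nodule. Recommend follow-up chest CT in 6-9 months."),
    ("rad_a", "No acute cardiopulmonary process."),
    ("rad_b", "Indeterminate renal lesion; further evaluation with MRI is suggested."),
]

totals, flagged = {}, {}
for rad, text in reports:
    totals[rad] = totals.get(rad, 0) + 1
    if RAI_PATTERN.search(text):
        flagged[rad] = flagged.get(rad, 0) + 1

for rad in totals:
    print(rad, f"RAI rate: {flagged.get(rad, 0) / totals[rad]:.0%}")
```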
And we know that this problem of guidelines and guideline adherence is not specific to just radiologists. It's physicians in general, they don't always follow established guidelines. So again, sticking with the example of pancreatic cysts, we've seen that fewer than half of radiologists' recommendations are consistent with the 2010 ACR guidelines. And for another entity, pulmonary nodules, 34% of recommendations for pulmonary nodules were adherent to Fleischner Society guidelines. Only a third of the recommendations were consistent with well-established guidelines for pulmonary nodules. We've also seen in our institution that experts don't always agree. So we did a case study where we created an expert panel of physicians, including radiologists, pulmonologists, and thoracic surgeons, and we asked them to evaluate the 2017 Fleischner Society guidelines for pulmonary nodules. We broke down the guidelines into their 18 discrete recommendations and we asked this expert panel, do you agree with these recommendations and do you agree with each other? The short version is there was an average agreement of only 48%. So only about half of the time did the expert panel agree with the Fleischner Society guidelines. And the goal was to try to create a local best practice, and we realized that even with a really excellent, well-established guideline like the Fleischner Society guidelines, there's still disagreement within our local experts with guidelines and with each other. So it can be challenging to use guidelines as a way to address this problem of variation amongst radiologists and the follow-up recommendations that we make. So it can be difficult to determine at a departmental level which follow-up recommendations are clinically necessary and which imaging guidelines, if any, should be enforced. And that leaves us with what's the next best step? What's the next thing that we can do to help our patients if that's too difficult? And we've decided to focus in our department on creating collaborative care plans between radiologists and ordering providers regarding follow-up imaging. So again, what is a collaborative care plan? It is an explicit recommendation made by the radiologist for follow-up imaging that an ordering provider agrees is clinically necessary. So two physicians, the radiologist and the ordering provider, both agree that follow-up imaging care is needed for the patient. We ask our radiologists to document this recommendation separate from and in addition to the radiology report. And we've created a quality and safety infrastructure to make sure that these recommendations are tracked to resolution so that we ensure that patients are coming back for clinically necessary follow-up imaging. We've called this program ARC, which for short stands for Addressing Radiology Recommendations Collaboratively. We have a specific IT tool that we use to track these recommendations. And it's a web-based application that interacts with our PACS, our hospital paging system, and our EHR. And it's an automatic system that helps facilitate communication between our ordering providers and our radiologists. We use different colors to signify the severity of the finding. So red would mean urgent, life-threatening. Orange would require urgent face-to-face (preferable) or telephone communication. And yellow represents a more subacute critical result. And then we've created the color blue to represent these non-critical follow-up imaging recommendations that we've been talking about. 
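One way to picture the color-coded tiers and the contents of an alert described above is a simple data model; the enum values, field names, and example identifiers below are illustrative assumptions about how such an alert might be represented, not the actual ARC schema.

```python
# Sketch of the color-coded severity tiers and the fields an alert might carry.
# The enum values, dataclass layout, and example identifiers are illustrative
# assumptions, not the actual ARC schema.
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    RED = "critical, life-threatening"
    ORANGE = "urgent; face-to-face or telephone communication"
    YELLOW = "subacute critical result"
    BLUE = "non-critical follow-up imaging recommendation"

@dataclass
class FollowUpAlert:
    mrn: str           # populated automatically from the opened study
    accession: str
    severity: Severity
    modality: str      # specific recommended modality, e.g. "CT chest"
    timeframe: str     # specific window, e.g. "6-9 months"
    recipient: str     # ordering provider, PCP, or safety net team

alert = FollowUpAlert("00012345", "ACC987654", Severity.BLUE,
                      "CT chest", "6-9 months", "ordering_provider")
print(alert.severity.name, alert.modality, alert.timeframe)
```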
These recommendations for incidental and other types of imaging findings that are not critical, but still need to be addressed. We've streamlined this process for the radiologists as much as possible. So as I said, this tool is linked with our PACS system. So when you open an imaging study, with one click, it opens this tool that immediately populates all of the patient information, the medical record number, the accession number of the study. And then once you've opened the tool, you click that you would like to make a follow-up recommendation. And then we ask our radiologists to be very specific in terms of how they fill out these radio buttons and make a specific recommendation for a modality and also be specific on the timeframe. So we're really asking our radiologists to get away from phrases like follow-up as clinically indicated or this can be monitored with short interval follow-up, and instead say, I'm recommending a CT in six to nine months for this finding. And that really gives the ordering provider an actionable recommendation that they can interact with and say, I agree with this recommendation. And then we can go on to make sure that that recommendation gets followed through. Once the information has been entered in terms of what the recommendation is with a specific timeframe and modality, we have told our radiologists to send the alert to the ordering provider because they are responsible for the study that they ordered and any imaging findings or imaging recommendations that result from the study. But there are some instances, let's say in the emergency department, where it may make more sense to send the alert to the primary care physician. So we've also made it very easy, let's say if you're dealing with inpatients or an ED setting, to find the responding clinician, to find the ordering slash authorizing provider, and also find the PCP for the patient if we have one in our system. And so you can enter all the alert details and then send the alert to the PCP if you feel like that's more clinically appropriate than sending it to the ordering provider. And if for some reason, and this is particularly helpful in our ED setting, the ordering provider is an ED provider and the patient does not have a PCP in our system, but this isn't really something to be dealt with in the ED setting and is really something to happen on an outpatient basis, we've also set up a safety net team so that the radiologist, if there is no PCP listed in the system, can send these alerts to our safety net team. And I'll get into briefly what the safety net team does, but basically they help track these patients. And if it seems like there's no provider in our system, at the very least, they will send a letter to the patient and the patient's outside network provider, notifying them of the radiology results and the need for follow-up imaging care. Once the alert has been sent by the radiologist, the ordering provider is notified by two mechanisms. One is an email in Outlook, and the other is an Epic In Basket message. So the provider will click on the link in the email and they're taken to our tool where they can interact with the recommendation and say, do they agree with the recommendation? Would they like to modify the recommendation? Maybe they think that six to nine months is too aggressive. They'd like to change the follow-up time interval to nine to 12 months, for instance. They can do that. 
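The routing choices just described, default to the ordering provider, optionally redirect to the PCP for findings best handled as an outpatient, and fall back to the safety net team when no PCP exists in the system, can be sketched as a small decision function; the function and field names are assumptions.

```python
# Sketch of the alert routing described above: default to the ordering
# provider, optionally redirect to the PCP, and fall back to the safety net
# team when no PCP exists in the system. Names are illustrative assumptions.
from typing import Optional

def choose_recipient(ordering_provider: str,
                     pcp: Optional[str],
                     care_setting: str,
                     defer_to_pcp: bool = False) -> str:
    """Return who should receive the follow-up alert."""
    if care_setting in {"ED", "inpatient"} and defer_to_pcp:
        # Finding is best handled as an outpatient rather than by the ED/inpatient team.
        return pcp if pcp else "safety_net_team"
    return ordering_provider

print(choose_recipient("dr_ed_attending", None, "ED", defer_to_pcp=True))  # safety_net_team
print(choose_recipient("dr_ortho", "dr_pcp", "outpatient"))                # dr_ortho
```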
Maybe they know that that follow-up imaging is not necessary because that patient has already gotten an ultrasound for that ovarian cyst and everything's turned out to be benign. They can also transfer the recommendation. So again, in the example of an ovarian cyst, let's say I'm a musculoskeletal radiologist. I see something, I send it to the orthopedic provider after they got a hip CT for the patient. And the orthopedic provider says, I really think that this ovarian cyst would be better managed by the patient's in-network primary care physician. I'm gonna transfer this to the primary care physician. The ordering provider can do that as long as the primary care physician accepts the recommendation. And all of that functionality has been built into our system. And so with this, this is how we establish our collaborative care plan. We've not only had the radiologist make a very discrete, specific recommendation, but we've also enabled our providers to let us know, do they agree with the recommendation or do they think that it's not necessary? In terms of scheduling the patient, the provider has a few options. So if they agree with the radiologist's recommendation, they can actually send the recommendation electronically through our system to our Radiology Central Scheduling. And our Radiology Central Scheduling office will contact the patient and also place the order in the Epic in-basket for the ordering provider's co-signature. So just saving a little bit of time and some clicks for the ordering provider. The benefit of our Central Scheduling office calling the patient is that we have a robust mechanism to make sure that there are multiple attempts to call the patient or outreach the patient in another way. And we schedule the patient in our imaging center, so we know that the patient came back because we can confirm that the imaging happened in our center. If the ordering provider does not want to involve radiology scheduling, that's completely acceptable. They also have the option of deciding that they're gonna enter the order in Epic and schedule the patient themselves. And they can also say, in some situations where the patient, let's say, lives in another state and would like to get that imaging in their home state, that they will hand off the recommendation to the non-BWH provider. So those are various options that our ordering providers have. An important point to make is that our schedulers aren't calling the patient and relaying news of a lung nodule or any kind of other undefined lesion or malignancy. There is a delay built in of several days, one to seven, that the ordering provider can click off on, and we've communicated to our ordering providers so that they know they need to contact the patient, let them know of the findings, and pick the appropriate lag. Let's say they're gonna get to it tomorrow so the scheduling office can call the patient in three days. They're gonna click the three-day option that you can see. So we've communicated rather extensively with our ordering providers so that even though the system is in place, they still have to contact the patient, let them know of the findings. And then our schedulers will reach out to the patient and work through on scheduling the patient and follow up in that way. After the ordering provider receives the alert, they have 30 days to acknowledge the alert. 
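A rough sketch of the response options described above, agree with an optional one-to-seven-day contact delay before schedulers call, modify the interval, deem not necessary, transfer to the PCP pending acceptance, or hand off to an outside provider; the action names, fields, and state layout are illustrative assumptions.

```python
# Sketch of the ordering-provider responses described above. The action names,
# fields, and state dictionary are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ProviderResponse:
    action: str                  # "agree" | "modify" | "not_needed" | "transfer" | "external_handoff"
    new_timeframe: str = ""      # used when action == "modify"
    contact_delay_days: int = 3  # 1-7 days before schedulers call the patient

def apply_response(alert_state: dict, resp: ProviderResponse) -> dict:
    if resp.action == "agree":
        alert_state.update(status="scheduling",
                           scheduler_call_after_days=max(1, min(7, resp.contact_delay_days)))
    elif resp.action == "modify":
        alert_state.update(status="scheduling", timeframe=resp.new_timeframe)
    elif resp.action == "not_needed":
        alert_state.update(status="closed", reason="not clinically necessary")
    elif resp.action == "transfer":
        alert_state.update(status="awaiting_pcp_acceptance")
    elif resp.action == "external_handoff":
        alert_state.update(status="closed", reason="handed off to outside provider")
    return alert_state

print(apply_response({"status": "sent", "timeframe": "6-9 months"},
                     ProviderResponse("modify", new_timeframe="9-12 months")))
```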
And as I mentioned previously, if they send it to our central scheduling team, the central scheduling team will place an order, transcribe an imaging order for co-signature by the ordering provider, saving them a few clicks and making it a little bit easier for them. And then also contact the patient and schedule the exam so we ensure that the patient comes back. And there's a set process and multiple outreach attempts that our central scheduling team does to make sure that there's a very well-established workflow for these recommendations. Once the patient is scheduled, the IT tool that we have also checks our imaging repository or PACS to make sure that the follow-up imaging test has been performed. And if it has not been performed, there are monthly reminders to the ordering provider to indicate that this patient is still due for a follow-up examination. Now that ordering provider can at that point say, yes, I still need this recommendation and follow up with the patient, or they can also deem that follow-up recommendation not necessary anymore. Let's say, you know, they know that the patient already got their imaging in another place. So these alerts can end in one of multiple ways. One, the ordering provider says that it's no longer needed or the patient eventually comes back for their follow-up imaging. There's also a safety net team so that if there is no longitudinal provider in our system, or let's say that longitudinal provider leaves our system from the time of the initial acknowledgement to the follow-up time period, the safety net team will send a letter to the patient and that outside hospital provider, let's say the outside hospital primary care physician, alerting them of the finding and the radiologist recommendation. So in other words, there's a team of people to make sure that every recommendation is closed and either the patient gets the follow-up imaging or it's deemed not necessary, or at the very least, a letter is sent to the patient and their outside hospital PCP saying, this is a recommendation that was made for follow-up imaging and we'd like you to follow up with your doctor. In order to make sure that the ARC initiative is going well, we use various data visualization tools and data informatics tools to monitor the program. So here, what you see is the number of ARC alerts created by each division per month. The reason why you see predominantly blue on the left-hand side of the graph is because we rolled this out in our thoracic imaging division, specifically for pulmonary nodules, as a pilot project. And then we took that feedback and then incorporated it for the rollout in various other divisions. And as you can see with the most recent months, we have about 1,300 or so of these alerts created every single month. And we're on track to have about 14,000 or more of these recommendations annually. Now, all of those benefits to the patient cannot happen unless the radiologist uses the ARC tool, unless they input their follow-up recommendation into the ARC tool. So in order to encourage radiologist usage, we've created these feedback reports using all of our data monitoring and informatics to really show the radiologists how often they're making follow-up recommendations and how often they're using the tool, to identify room for improvement. So what you can see here are snippets from our feedback report that we create for each division in our department. So here's an example of the feedback report for the abdominal radiology division. 
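The tracking loop described above, check PACS for the completed exam, remind the ordering provider monthly, and close or escalate to the safety net team, might look roughly like the following; the helper names and alert fields are assumptions standing in for the PACS and EHR integrations.

```python
# Sketch of one pass of the monthly tracking loop for an open follow-up alert:
# close it if the exam shows up in PACS, close it if the provider marked it not
# needed, escalate to the safety net team if no longitudinal provider exists,
# otherwise send another monthly reminder. Field names are assumptions.

def monthly_check(alert: dict, exam_found_in_pacs: bool) -> dict:
    if exam_found_in_pacs:
        alert["status"] = "closed_completed"
    elif alert.get("provider_marked_not_needed"):
        alert["status"] = "closed_not_needed"
    elif alert.get("longitudinal_provider") is None:
        alert["status"] = "safety_net_letter_sent"   # letter to patient and outside PCP
    else:
        alert["reminders_sent"] = alert.get("reminders_sent", 0) + 1   # monthly reminder
    return alert

open_alert = {"mrn": "00012345", "longitudinal_provider": "dr_pcp", "status": "acknowledged"}
print(monthly_check(open_alert, exam_found_in_pacs=False))   # reminder count increments
```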
Each bar represents an individual radiologist in that division. In the top left, you can see, of all the reports that they read, what percentage has follow-up recommendations. And we determine if the report has a follow-up recommendation using a natural language processing tool that we've trained and validated that picks up this follow-up language. The bottom right shows the raw numbers of all of the reports that they read. Again, the green corresponds to the raw number of reports that had follow-up recommendations. We're also showing, for each radiologist, for all of the follow-up recommendations that they make, again using the natural language processing tool to figure out which reports have follow-up recommendations, how often they are using the ARC tool, which is color-coded blue and which we strongly recommend they use, and how often they are using another critical notification tool that we have, which is color-coded yellow. So if it's a follow-up recommendation, but it's more associated with a critical finding, that's when it would be appropriate to use that other tool. And then how often are they making a follow-up recommendation but neither tool is used, and that's the black color-coded area. So we've set a very low, very achievable threshold in our opinion: 40% of the follow-up recommendations should use some additional report notification tool besides the radiology report, and we strongly encourage the radiologists to use the ARC tool for their follow-up recommendations. And we're monitoring usage, so again, each bar represents a radiologist in the division and we're monitoring them and encouraging usage and encouraging them to meet this minimum threshold. We also show this information in a slightly different way, with the pie chart on the left. So what we're seeing here is we're identifying the radiologists who have the greatest room for improvement. We show basically who are the radiologists that make the largest number of follow-up recommendations without an ARC alert. So again, if you're trying to target the individuals to meet with to champion this program or figure out what potential roadblocks there are, you would know that you would potentially wanna meet with the individuals in green, gray, and brown, and these are the individuals with the highest raw number of follow-up recommendations made without using the tool. And each radiologist is also given clear examples, using their own report text, of all the times they made a follow-up recommendation but did not use the ARC tool. And these discrete recommendations specifically taken from their individual report text, I think, are really helpful because they show the radiologist, this is what we consider a follow-up recommendation. And this is potentially where there can be room for improvement: can the recommendation be tightened up, made more specific, and then be applicable to the ARC tool? So I think including these individual, specific report texts really helps to make the feedback reports more actionable. So in conclusion, ensuring timely resolution of clinically necessary follow-up imaging is important to patients, radiologists, and ordering providers because it can decrease delayed and missed diagnoses, which can have significant patient consequences. 
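A small sketch of the feedback-report arithmetic described above: each radiologist's share of reports with a follow-up recommendation, and of those, how many used any notification tool, compared against the 40% threshold; the record layout below is an illustrative assumption.

```python
# Sketch of the per-radiologist feedback-report numbers: share of reports with
# a follow-up recommendation, and of those, how many used any notification
# tool, against the 40% threshold. The record layout is an assumption.
records = [
    # (radiologist, has_rai_per_nlp, tool)  tool: "arc" | "critical" | None
    ("rad_a", True, "arc"), ("rad_a", True, None), ("rad_a", False, None),
    ("rad_b", True, "critical"), ("rad_b", True, None), ("rad_b", True, None),
]

THRESHOLD = 0.40   # share of RAI reports that should use some notification tool

for rad in sorted({r[0] for r in records}):
    rows = [r for r in records if r[0] == rad]
    rai = [r for r in rows if r[1]]
    with_tool = [r for r in rai if r[2] is not None]
    rai_rate = len(rai) / len(rows)
    tool_rate = len(with_tool) / len(rai) if rai else 0.0
    print(f"{rad}: RAI in {rai_rate:.0%} of reports; "
          f"tool used for {tool_rate:.0%} of RAI (target >= {THRESHOLD:.0%})")
```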
We can use technology to support quality and safety initiatives to improve communication between radiologists and ordering providers, establish collaborative care plans regarding clinically necessary follow-up imaging, track these plans to resolution, notify providers if there is a delay in imaging care, and also improve radiologists' reports and recommendations using data visualization and feedback reports. Thank you, everyone.
Video Summary
In a session led by Christine Burke, a radiologist at Brigham and Women's Hospital, emphasis is placed on improving radiologists' performance through quality and patient safety metrics, shifting focus from volume-based to quality-driven healthcare. At Brigham, a program named Radiology Performance Outcome Measures (RPOMs) was introduced. This pay-for-performance initiative incentivizes radiologists based on five quality metrics, including report turnaround timeliness, closed-loop communication of critical results, and peer learning engagement, monitored via a transparent dashboard. The program's study demonstrated significant timeliness improvement after financial incentives were introduced.

A second speaker discusses challenges in simplifying the radiology ordering process by reducing the complexity of choices, using lessons from the "choice paralysis" phenomenon. Simplifying procedure codes has been a priority, as unnecessary options burden both healthcare providers and IT systems. Simplified processes have shown improved efficiency and patient care outcomes.

In the final talk, Dr. Neena Kapoor focuses on ensuring follow-up imaging recommendations are effectively communicated and implemented, using informatics tools like ARC at Brigham to track and prompt necessary follow-up imaging. These initiatives highlight the critical role of technology in improving healthcare processes and patient safety.
Keywords
radiology
quality metrics
patient safety
performance improvement
Brigham and Women's Hospital
Radiology Performance Outcome Measures
choice paralysis
informatics tools
healthcare efficiency