QI: Performance Metrics: Must Haves | Domain: Qual ...
M1-RCP0821-2025
Video Transcription
Welcome to this morning's session. We're so happy that you are here with us, and also those who are joining us virtually. It's a bit of a different meeting this year. This session is Performance Metrics: Must Haves. I want to remind you to scan the QR code and answer the questions that come up, and with this you can get credit towards the QE Quality Essentials certificate. We have interesting topics in the lineup of wonderful speakers, and the very first person we have is Dr. Bettina Seawood, coming to us from Beth Israel Deaconess, Boston, Massachusetts, and she'll be talking to us about new metrics.

Thank you very much, Janelle. It's a great pleasure to be here. In this presentation, I would like to take a look at our current metrics with you, then briefly reflect on the metric selection process, and then make a suggestion for nine new metrics that we may want to consider. At the recent ACR quality meeting, there was a workshop with 77 participants, and together we collected these 52 metrics that are used throughout the country; departments use between eight and 20 metrics to ensure that they are performing well. But when we're thinking about metrics, we obviously need metrics so that we can improve, because if we don't measure things, we don't know where we're actually performing. We also need to take into account, particularly with the time constraints that we're now under with our short staffing, that we really only measure things that we can make use of and improve. And then lastly, we have to make sure that we measure things that we care about, because every metric is a mission statement. We are declaring to the world, to our department, to the people we work with: this is what is important to us. And the opposite may also be true to some extent, that if we do not measure something, people may misinterpret that as something we don't care about.

So let's reflect again on the radiology imaging map and see whether there are steps where we can potentially fill some gaps. When a study is ordered, we really want to make sure that the study is appropriate, and also that we include considerations of equity in ordering. Similarly, when a study is scheduled, timeliness is very critical; I think we're already taking great care of that and doing an excellent job. When the patient is imaged in the department, we want to make sure that the correct examination is done on the correct site and that the correct protocol is used. At the time of interpretation and reporting, we really care that the correct diagnosis is made and also that the report contains the information the referring physician needs. The result communication step, again, I think we're doing an excellent job there, but patient engagement would also be very beneficial. And lastly, and very critical, is follow-up management: we want to make sure that all of the recommendations that we make are actually put into practice.

So let's take a look at all of these, starting with imaging appropriateness. Inappropriate imaging increases healthcare costs, morbidity, and mortality, so this is important to us. It is a little bit challenging to measure, and I think we've been in somewhat of a holding pattern for many years now because decision support tools are expected to address this, and they will be mandatory maybe as early as January of 2023, so this has been a little bit on the back burner. Hopefully this will be addressed soon. Equity is critical.
It's one of the six goals of high-quality healthcare of the Institute of Medicine, and it is critical to high reliability organizations. Many studies have shown, for example this one in lung cancer screening patients, that African-American patients experience lower rates of screening when they are referred to a screening center, and also a longer time to follow-up. So if we have a lung cancer screening program, that may be something we want to measure, or we could also look at the percentage of safety incidents in patients with limited English proficiency, because that number will be increased in that patient population.

Correct patient, site, and examination is something that we really want to take care of, because a wrong-patient or wrong-site procedure is actually considered a serious reportable event (SRE), and there are risks associated with that. We also have to take into consideration here patient dissatisfaction and loss of trust when we do make an error. This occurs about one to two times a week in our institution, and there is a very simple remedy for it, which is a two-step verification process with two people looking at the patient's ID badge and at the requisition. This only takes 12 seconds to do and has been shown to be very beneficial.

In terms of our reporting, what has been very well accepted by referring clinicians is the contextual structured report, which is a dedicated report template for a specific disease entity. The advantage here is that these are oftentimes developed together with the referring clinicians, so by definition the template will have all of the information in it that the referring clinician needs. And these templates for us are really checklists, so when we are using a template, we are ensuring that all of the information is in there.

And then there is diagnostic accuracy, which is linked to outcomes, and we really have very little data on how accurate our reports are. There is some data in screening programs because they are monitoring that, but we would benefit a lot, in terms of our programs and our provider learning and improvement, from knowing how we are actually doing. We could consider looking at this in the setting of critical diagnoses where the radiologist's input is really mandatory. For example, ischemic bowel in abdominal imaging is something that may only be brought up by the radiologist in patients who present with abdominal pain, so we want to make sure that we are really the people who bring this up, and we may want to take a look at that.

And then there is patient engagement. Many of us have worked on patient engagement. It is known to improve outcomes, and there are various ways in which we can do this. In-person or virtual consultation visits have been shown to be beneficial, as has including our email address or telephone number at the end of a report so that the patient can contact us. This helps patients understand their medical condition better, it decreases their anxiety, and in general patients enjoy the contact with us and with our expertise very much.

And then, of course, there is follow-up recommendation management, which is critical. It has been shown that a high percentage of follow-up recommendations are actually not performed, as high as 53% in a study by Ben Bonke. And what may be even more surprising is that in patients who undergo lung cancer screening, where they specifically come for a screening protocol that involves many follow-up examinations, a third of those are actually not followed up.
Also, 40% of patients who present with a ruptured triple A (abdominal aortic aneurysm) knew about the aneurysm before, but they were lost to follow-up imaging. There is a big initiative by the ACR at the moment where departments can participate in an initiative to make this a reality in our daily work, and if you go to their website, you can sign up to become a site. In interventional radiology, we have an opportunity to bring our expertise to bear when we review pathology results and then make recommendations for either a repeat biopsy or for repeat imaging, follow-up imaging, in a certain time period. This is also helpful for patients, because by doing that we are able to decrease the time to diagnosis, and about two-thirds of discordant biopsy results are eventually shown to be malignant.

One other thing we may want to consider in our current environment is a metric on the radiologist's work experience. 79% of radiologists reported one symptom of burnout before the pandemic, and of course currently this will be even higher, because we are experiencing a short-staffing crisis that we have not seen before, combined with increased workloads. This has already been shown at many local levels and also nationally to increase the number of SREs and incident reports that hospitals are seeing. So we know that the wellness of radiologists is critical to keeping our quality high.

So in conclusion, metrics are mission statements, and they are critical in leading change. Please consider that if we're not measuring something, that may be misunderstood as meaning it's not important to us. New metrics can help address currently little-monitored steps in the radiology workflow. And we can further improve patients' outcomes by becoming more involved in patient-facing domains, such as patient engagement and patient management. I would like to thank you for your attention. I'd also like to thank my wonderful colleagues who contributed to this presentation and the participants of the workshop at the ACR meeting this year. Thank you very much.

Thank you, Janelle. Good morning, everyone. I get the pleasure of presenting the visual side of metrics, and I have a few disclosures here. My goal is to take the typical indices, which were beautifully laid out by Bettina, and discuss the various ways they can be displayed and how to monitor and promote operational strategies. These are the KPI process steps, and I wanted to highlight the data sources. This is the marriage of quality and informatics: bringing in the important information to determine and display the correct KPIs. The frequency, so how often do we display this? Is it real time? Is it synchronous or asynchronous? Obtaining the data, is it a push or a pull? And also developing methods for these data presentations.

Now, what do we measure? It's really important to look at the known gaps, strategic goals, and compliance requirements. There are so many things that you can look at, financial included; I didn't even include academic and trainee types of dashboards, but these are all very important for the service line of imaging and to move things forward. Now, the categories, as you can see here: quality and safety, stakeholder satisfaction, so surveys to patients, employees, and referring physicians; operations, efficiency, utilization, timeliness; and obviously you have to recoup return on investment. Now, the strategy again for looking at this is breaking it into what you want to measure. You know, there's structure, there's process.
I don't want to go into the details of these; they were already talked about and will be talked about by others in this session. And then outcomes. As you can see here, the electronic health record and the RIS are among the most important systems in medical imaging; we get a lot of information, including wait times, volume, utilization rates, and unread case volumes. That might be a question that you might see on the QR code. Then the PACS, per-event reporting and direct observation, so things like hand hygiene and hand washing. Surveys, the hospital reporting system; so we're inundated with multiple, disparate sources of this information. How do we put this together and make some sense out of it?

So these are the presentation methods that I'd like to cover. You've seen some of the charts and spreadsheets and graphs, very basic; you can do those in Excel. Heat maps give you a quick visual glance of what's going on both globally and locally. Then there are dashboards, and we'll also talk about balanced scorecards. Trends and color coding, you know, green, yellow, red; the traffic light is a common way of displaying when things are going awry and can quickly tell the department and leadership how things are going.

And then the reporting cycle and trending, as I talked about earlier: do you want this in real time? So in real time, unread case volumes by division or modality and time to next exam; daily, case volume by type and appointment availability; monthly, you know, financial indicators, compliance with hand hygiene and universal protocols, and case complications; and then yearly, financial results, physician productivity, and patient satisfaction. The value and use of this data is highly influenced by the timing of its availability. So for many of these things, whether you have a database that collates everything every 24 hours versus real time is very important. There's a lot of software; I will not get into that, but there are a lot of tools your business analytics team will use, and that's really up to the institutional level.

So these are examples. This one, for example, is from my children's hospital, October to November. This is a monthly dashboard looking at everything in care management, in the ED setting. You can see different things are used here: trending in graphs, turnaround times for things like ED admissions, the stays, the long stays versus short stays. And then you can see length of activity here, and also the heat maps of weekly activity and when it's busy. So at a quick glance, you have multiple streams of information presented very nicely here. And then these are, again, transport, so patients that are either brought in or sent out, and the reasons for these transfers. It's very, very helpful. I'm sorry that it's small print, but I'd be happy to elaborate further in the Q and A.

This again is just trends; we've seen these trends, and I'll show you some examples. Obviously we're still in the middle of a pandemic, unfortunately, but you can see very quickly whether we're successful at vaccinations, positivity rates. And, you know, this is real-time, asynchronous and synchronous data. This is again COVID hospitalizations and COVID cases here in Chicagoland, and you can see here this is up to November 13th. The traffic-light coding I mentioned earlier is sketched below.
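For illustration only, here is a minimal Python sketch of the green/yellow/red traffic-light idea described above. The KPI names, values, targets, and the 10% warning margin are all hypothetical examples, not values from any actual departmental dashboard.

```python
# Minimal sketch of traffic-light KPI color coding, as described above.
# KPI names, values, targets, and the warning margin are hypothetical.

def kpi_status(value, target, higher_is_better=True, warn_margin=0.10):
    """Classify a KPI as green/yellow/red against its target.

    A result within `warn_margin` (10%) of the target on the wrong side
    is flagged yellow; anything worse is red.
    """
    gap = (value - target) if higher_is_better else (target - value)
    if gap >= 0:
        return "green"
    if abs(gap) <= warn_margin * target:
        return "yellow"
    return "red"

# Hypothetical monthly values: (KPI, measured value, target, higher_is_better)
kpis = [
    ("ED report turnaround time (min)", 52, 60, False),
    ("Hand hygiene compliance (%)",     84, 90, True),
    ("Unread case volume",             140, 100, False),
]

for name, value, target, higher in kpis:
    status = kpi_status(value, target, higher)
    print(f"{name:36s} {value:>6} vs target {target:>6} -> {status}")
```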
And in those COVID trends, we're seeing the ebbs and flows, and very quickly we can understand fully vaccinated versus unvaccinated and the trends there. Heat maps: global, nationwide, understanding US COVID rates. These are displayed by the CDC and other organizations. So does your institution, or does your radiology department, have the ability to do that? You can see here an example from MGH looking at a heat map of equipment age and useful life. So very quickly you can understand which systems, by location and modality, will go out of service or have operational needs that must be addressed over time.

Reading rooms: many of us have these dashboards or screens in the reading rooms that tell you how many unread studies there are, wet reads, x-rays, bone CT, bone MR, things like that. So these are nice real-time dashboards in the reading room that give physicians and radiologists real-time understanding. Predicted outpatient wait times: this serves both a staffing need and also patient satisfaction, understanding what the next availability is. And now you're seeing people build in AI programs that take into account multiple parameters to right-size or right-fit the department to the metrics. This is another example, looking at nuclear medicine: days to 40%, 60%, 80% booked, so what are the free slots looking like over time? All of these are very important to understand what the utilization rate is in the department. How are you getting people into the department? Are you serving patients well? And what areas need more staffing, more open slots, or protocol changes? So many things come out of this.

And I'll talk briefly about the balanced scorecard. This is a management concept and data presentation method. It's not just another dashboard; it's a dashboard that brings in the institutional strategy and goals, so you have the ability to drill down with these systems. These KPIs reflect the success factors of the business that link to strategic decisions. Like I said, it marries your strategy, getting you gap-to-goal data at a glance. Now, this is how the balanced scorecard gets synthesized. Why do you exist? What are the values, the vision? So doing things like SWOT analyses, the strategy, what's the game plan, the strategy map, translating the strategy into action. And then that develops the balanced scorecard. So you can see here the financial, the operations, the quality and safety, and the stakeholders. All of this gets blended into the balanced scorecard, which eventually gets you to this perspective and then to looking at the KPIs. This is the ultimate goal of what we're trying to achieve using these dashboards: bringing in the vision and strategy of the department and looking at what is important and what will move the bar forward over time.

So just some parting advice on these dashboards and KPIs: you want to provide a clear picture using visualizations. You want to promote user involvement, including user input, and today that even includes patient input on how they're treated, through surveys. We're seeing a lot of that, with the additions of burnout, of understanding staffing and how people are feeling, and understanding the throughput on a day-to-day basis, combining relevant data from multiple sources. The hour-by-day heat map idea is sketched below.
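As a rough sketch of the "weekly activity" heat maps mentioned above, the following Python snippet bins exam volumes into day-of-week by hour-of-day counts. The exam timestamps here are synthetic; an actual version would pull them from the RIS or EHR.

```python
# Sketch of the hour-by-day "weekly activity" heat map idea:
# counting exam volumes into day-of-week x hour-of-day bins.
# Timestamps are synthetic; a real version would query the RIS/EHR.

from collections import Counter
from datetime import datetime, timedelta
import random

random.seed(0)
start = datetime(2021, 11, 1)

# Synthetic exam start times over four weeks, weighted toward weekday daytime.
exams = []
for _ in range(2000):
    t = start + timedelta(minutes=random.randint(0, 28 * 24 * 60 - 1))
    if t.weekday() < 5 and 8 <= t.hour < 18:
        exams.append(t)          # keep all weekday-daytime draws
    elif random.random() < 0.3:
        exams.append(t)          # keep a fraction of nights/weekends

counts = Counter((t.weekday(), t.hour) for t in exams)

days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
print("hour " + " ".join(f"{d:>4}" for d in days))
for hour in range(24):
    row = " ".join(f"{counts.get((d, hour), 0):>4}" for d in range(7))
    print(f"{hour:>4} {row}")
```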
So combining relevant data from multiple sources is a huge challenge, especially coming from an informatics background, as are selecting the correct metrics and enabling secure and convenient access to your dashboard. We didn't talk about the whole privacy side of things, but that is a very important piece: where you're displaying this, who you're sending it to, and who the appropriate consumers are. So thank you for your time.

My name is Dr. Stein, and in this talk I'm going to be talking about peer learning metrics. Peer learning has gained significant traction in radiology practice as an alternative to traditional score-based peer review. Recently, there was formal recognition of peer learning by the ACR as an alternative approach to meeting the physician quality assurance program requirement of accredited facilities. Although for several years the ACR has accepted active, documented peer learning programs as meeting requirements for physician quality assurance, the language of the requirements until now did not specifically allow for a peer learning program. This was big news over this past summer: a new peer learning pathway was approved and minimum requirements were put into place. This milestone has finally been reached through the dedicated work of many leaders in the quality space in radiology over many years, and more recently through the formation of the ACR Peer Learning Committee under the able guidance of Dr. Jennifer Broder. As a member of that peer learning committee, I've had the unique opportunity to lead the effort in collating committee opinion and formulating the minimum requirements and metrics required for documentation to meet the new pathway requirements for accredited facilities. I thought for this talk on peer learning metrics it would be useful to review the new ACR peer learning minimum requirements released over the summer, describe their derivation, and then provide some examples of peer learning metrics from practices that transitioned successfully to peer learning.

Let's look at the new peer learning pathway requirements. There are two categories of requirement for the peer learning pathway: a written peer learning policy and annual documentation. Looking at the written policy requirement, the beginning is culture. The written policy should be devised with an emphasis on supporting a culture of learning and minimizing blame. The explicit goal of the peer learning program must be improvement of services, relying on the establishment of trust and the free exchange of feedback in a constructive and professional manner. The policy must also define peer learning opportunities and how they are identified. The cases should address actual or potential performance issues, including both discrepancies and great calls, and the cases should be identified during routine work, case conferences, event reports, or other sources, rather than through a review of randomly selected cases. A description of the program structure and organization is required, and this can be done by defining the roles of the physicians and program leaders and by describing their responsibilities, along with the amount of time or FTE dedicated to managing the peer learning program. It also requires you to define the workflow of peer learning opportunity submission, review, communication with the interpreting radiologist as appropriate, and designation of the peer learning submission for group sharing. Defining targets, and we'll talk about this and then come back to it a little more deeply in just a moment.
The written policy must define targets, including expectations for minimum participation by radiologists in peer learning submissions and in learning activity participation. It should also set for itself minimum standards for peer learning program activities that ensure enough opportunity for practice members to review and learn from the content. These activities can be online, in person, or other virtual learning formats, and we'll go into that more fully in just a moment. The written policy should outline a process for coordination with appropriate practice and administrative personnel to translate findings from the peer learning activities into dedicated quality improvement efforts. And lastly, the policy should include a statement of commitment to sequester peer learning activity content from individual practitioners' performance evaluation. You may include participation data from the peer learning program in your evaluation of professionalism, but performance data must not be created out of peer learning data.

In addition to the written policy, the peer learning program accomplishments should be documented annually, and the annual summary should include the total number of case submissions to the peer learning program, the number and percent of radiologists meeting the targets as defined by your practice policy, a determination of whether the peer learning activities met the minimum standard you defined in your practice policy, and a summary of related QI efforts and accomplishments. We'll talk more about the annual documentation shortly.

Let's look carefully again at the targets that need to be defined in the new practice policy. This box is the language taken straight from the new ACR peer learning pathway policy, which can be found posted online at acr.org. Targets need to be defined for minimum case submissions by radiologists, radiologists' participation in learning activities, and minimum available learning activities for your program. The language here is deliberately broad, and the targets for case submission and learning activity participation by radiologists can be defined by your practice according to your minimums, as straight numbers or as a percentage; that's up to the practice to define. The available learning activities are also up to the practice to define in terms of frequency (monthly, quarterly, subspecialty or not subspecialty) and format (in person, virtual, or online modules that can be done on your own time), with the targets again defined by the practice. The reason for this large amount of latitude is to allow practices of all types, academic, private, large, small, subspecialized or general, to be able to adopt peer learning according to what is valuable for their practice and in accordance with the culture of their practice, in order to facilitate learning and growth.

Now let's look carefully at what's required to be measured and documented annually in the ACR peer learning pathway. Again, this box is the language taken directly from the new policy, which can be found online. The annual documentation should include the total number of case submissions to the peer learning program overall and the number and percent of radiologists meeting the targets that you defined in your practice policy. Part three is a general determination of whether you have actually met the requirements or standards that you defined in your practice policy, and part four is a summary of the related QI activities translated from peer learning cases.
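The annual tallies described above amount to a few simple counts. The following Python sketch shows one way to compute them; the per-radiologist submission counts and the target of two cases per radiologist per year are made-up examples, since the actual targets are defined by each practice in its written policy.

```python
# Hedged sketch of the annual documentation tallies described above:
# total case submissions, and the number and percent of radiologists
# meeting a practice-defined submission target. All numbers are examples.

submissions_per_radiologist = {
    "rad_A": 14, "rad_B": 6, "rad_C": 0, "rad_D": 3, "rad_E": 1,
}
annual_target = 2  # hypothetical target defined by the practice policy

total_submissions = sum(submissions_per_radiologist.values())
meeting = [r for r, n in submissions_per_radiologist.items() if n >= annual_target]
pct_meeting = 100 * len(meeting) / len(submissions_per_radiologist)

print(f"Total case submissions: {total_submissions}")
print(f"Radiologists meeting target: {len(meeting)} of "
      f"{len(submissions_per_radiologist)} ({pct_meeting:.0f}%)")
```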
There is no minimum requirement for QI projects emerging from peer learning. This is deliberate, so that there is not too high a barrier for all types of practices to shift into the peer learning pathway, but certainly translation into quality improvement efforts will be performed by many practices to tally the system-level wins, and it is really considered elementary and fundamental to peer learning. Now that I've reviewed the new minimum requirements and explained why there is so much latitude in the requirements, the metrics, and the language, I will move on to share the metrics of several peer learning programs that transitioned successfully.

In this example of a peer learning rollout in an integrated health system, Dr. Richard Sharp and colleagues shared the metrics they obtained with the rollout of their peer learning program as compared to the traditional score-based peer review which preceded peer learning at Kaiser. You can see in the graph that after implementation of peer learning, there was a statistically significant increase in the average number of learning opportunities submitted per month, with submissions increasing from approximately three discrepancies per month to 36 learning opportunities per month, which was a 12x increase. 86% of radiologists submitted one or more learning opportunities, and the mean number of radiologists participating monthly increased from 5 to 35. A tally or a minimum count of improvement projects is not required by the ACR, but many programs will try to translate peer learning activities into quality improvement, and this program demonstrated the value added of their peer learning program through increased translation into quality improvement initiatives, from a baseline of 5 improvement projects to as many as 61. This slide summarizes some of the systems improvements that were achieved, which, if available, should be included in the annual documentation for your peer learning pathway requirement. Although no minimum requirement is in place, and having no documented translation into quality improvement efforts is not a problem for the ACR peer learning pathway, translating into quality improvement is fundamental and elementary to peer learning. Lastly, the mean monthly learning opportunity distributions to radiologists significantly increased from 18 to 352, and unsurprisingly, radiologists earned significantly more CME credits after the implementation of peer learning, with peer learning online modules shown here in gray and CME conferences shown here in yellow.

In the last minute or so, I'll share just one more example. This one is from a large academic medical center, 100% subspecialized, with 100 faculty, a large residency, and multiple fellowships. In 2018, the shift to peer learning meant a requirement of two submissions per radiologist per month; that's how they defined it. And similar to the prior example, the switch to peer learning yielded a large increase in case submissions that offered learning: the number of meaningful cases entered averaged 355 per year over the initial four years of score-based peer review, versus over 1,600 cases counted in the first year of peer learning, which constituted a 453% increase. The department also classified learning opportunities by division and type of event (detection, image technique, communication, etc.), which may be helpful to find patterns that can be intervened on.
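The fold-increase figures quoted above come from simple ratios. The sketch below uses the submission rates cited for the integrated-health-system rollout (roughly 3 per month before, 36 per month after) and shows the distinction between a fold change and a percent increase; the helper functions are only illustrative.

```python
# Simple arithmetic behind the "12x increase" quoted above.
# Input figures are the approximate monthly submission rates from the slide.

def fold_increase(before, after):
    return after / before

def percent_increase(before, after):
    return 100 * (after - before) / before

before, after = 3, 36
print(f"{fold_increase(before, after):.0f}x the baseline "       # 12x
      f"({percent_increase(before, after):.0f}% increase)")      # 1100%
```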
In summary, in this talk I reviewed the new ACR peer learning minimum requirements, focusing on the metrics required for accreditation, and I showed two examples of metrics gathered on the rollout of peer learning in comparison to traditional score-based peer review at those practices. Thank you kindly for your attention.

This will be a little bit of a combination of the prior talks, especially the first two. What I hope to do today is describe the differences between an operational plan and a strategic plan, describe some of the differences between scorecards and dashboards, terms we've heard used throughout the talks today, and provide an example of how to operationalize a strategic goal. At the start, strategic plans should be forward-thinking. They're usually informed by the mission and vision of the organization. This is defined by stakeholders, but it is usually carried out and finalized by the executive leadership of an organization; they're really setting that future-forward direction. Strategic plans typically last for five to ten years, although there has been some trend in the industry to move those even faster, to have shorter horizons. Each strategic plan should answer three questions. The first one is: where are we now? Unfortunately, many times that represents that volcano that's erupting. But we want to answer the next question: where do we want to be? And then finally: how do we get there? So those are the questions that help to move us forward. This is an example of the glossy-print strategic plan from my organization. It's built around four pillars: care, community, cure, and culture. And we'll use this a little bit as a framework for how we move through the rest of the talk.

Strategic plans are a little bit different from operational plans. Operational plans are more tactical, in that day-to-day or year-to-year sense: how do you get things done? The operational plan's goal is to enable the strategic plan, though this usually goes through a multiyear process. The operational plan is determined by managers, and it's completed by frontline staff. So let's look at our organizational strategic plan and how it moves to a departmental operational plan. We have different operational plan goals in each pillar of the strategic plan; I'll highlight a couple of them. Under our care pillar, a goal in our department is to improve and standardize portable imaging processes in radiography. In our community pillar, it was to open an expanded outpatient location. In the cure pillar, we wanted to develop and optimize new imaging protocols. And in our culture pillar, to increase the engagement and quality of peer interactions.

So as we move to the operational plan, each operational plan will have detailed goals. These are, again, glossy-print goals, and most of the operational plan goals will have some sort of SMART aim. I want to point out, you can spell SMART with an M, not an O, for all my Ohio State fans out there. SMART goals are specific, measurable, attainable, relevant, and time-based. However, many times you'll find the goal answering slightly different questions: what, who, when, and how? And sometimes that how question is how much; sometimes it's how do we get this done? Often the what, the who, and the when are easy, and those are the things that are written out in that detailed aim. And so, an example: one of those operational plan goals was to increase engagement and quality of peer interactions.
So if we move to our more detailed goal, it's to increase the percentage of excellent and very good interactions between radiologists from 66% to 90% by June 30, 2021. So the what is in there: increase the percentage of excellent and very good interactions between radiologists from 66% to 90%. The when is in there: by June 30, 2021. Often the who is not defined in the goal; it's made up of the team of people putting it together. In many of our instances, we'll build a team around the project, and so those become defined. In this example, this is the team of people doing the work. And then the how is going to be the rest of the talk.

So the principles that we as a department espouse as we're building our operational plans and building our strategy are transparency, a data-driven approach, and an inclusive team. We try to have those principles really permeate everything we do. I think it's highlighted on our website. So this is our hospital website, but the departmental-facing portion of that website, and all of our metrics are available and visible to our staff members throughout the department. Down at the bottom, we have our operations tab; right next to it, we're going to have all of the data, and I'll highlight some of that a little bit later. If we move into that, we publish our operational plan, as well as the progress that we're making on that operational plan. We have our measures and our aims defined in there, as well as the team of people doing the work. All of that's there.

As we move through our operational plan, you'll see that there are often four different types of projects: operational planning projects, infrastructure projects, improvement projects, and implementation projects. So going back to our operational plan and putting those icons next to each operational plan goal, again you'll see a mix of these things. Executing the FMEA mitigation plan for our brand-new tower in the hospital was an implementation-type project. Again, the outpatient location is an infrastructure project. Developing and optimizing imaging techniques was an improvement project. And finally, developing an action plan is a planning-type project.

To do these types of projects, we have a number of tools. These are things like project plans, run charts, and data reports; some of the tools are more observational, and I'll highlight a couple of those; and then surveys. A project plan differs from the other types of plans that we're talking about in that it is project-focused and very detailed, and it becomes a list of tasks. These tasks have time estimates for their completion, and we often highlight key milestones. Those key milestones act as gateways to help us make sure we're staying on track, on target. The project plan enables the operational plan, and it's created by the project leader. So a project plan often looks something like this, if you roll it out, where you have time going across the chart and your different tasks are identified by bars. The width of the bar describes how long that task will take. These tasks can be overlapping; I'm showing them in order, but they don't have to be in order. Many times, multiple tasks are happening at once.

The next type of tool is a run chart. You've seen a couple of run charts today already. Run charts have time on the x-axis, the measure that you're tracking on the y-axis, and then you have lots of measured data points that you're seeing over time and tracking. There are a number of other items on a run chart.
We have a direction in which we want to see the change happen. We will have a goal line; for us, those goal lines are a solid green line that goes across the chart. We have a median line that tells us whether a change is happening or not, and that determination is dictated by process control rules. And oftentimes you'll have another couple of dotted lines; these are control limits, which make it a control chart, so you know when your process is in control or out of control. We do publish our run charts on our website, whether it's for KPIs, quality improvement projects, or operational goals. Those are all available for, again, everyone to see.

The next type of tool is reporting. Reports tend to be data that you need to answer a specific question or to measure some task that you're not really building a run chart around. We have a lot of reports that are available on demand through a tool that we've built internally. This is an example showing data for all of our radiologists: study volume, with turnaround times as that green dotted line going across. All of our data can be exported to Excel. It's all anonymized depending on the user, so in a radiologist volume report you see your own data, but you don't see other people by name. As Safwan mentioned, we also use a lot of heat maps. Heat maps tend to be a great tool for us to visualize volumes over time, and so this example shows volumes at different hours of the day and days of the week, over the time period indicated. Again, these help us to answer questions in real time and make decisions that help drive our improvements.

Some of the observational tools: one of the first tools to use is a process map. Process maps identify the different steps of a process. Starting points and end points are ovals, the different steps are listed as rectangles, and decision points are diamonds. Most process maps are going to be a lot more complex than something like this. We use this type of tool when we're trying to declutter a process or make it simpler; it helps to figure out where we have decision points that are unneeded or steps that are wasted. Spaghetti diagrams are a little bit more difficult to build and can require some advanced tools, but they help us to identify flow within specific confines. Survey-based tools are great for understanding perception or people's feelings, and we use them quite a bit to measure how we're doing, especially on those types of items.

And so, getting back to this operational plan goal to increase the percentage of excellent and very good interactions between radiologists from 66% to 90% by June 30, 2021, the how is through a survey. We send a survey every several weeks to the radiologists within our department asking them a few questions, five or fewer, but the key one is: how would you rate your interactions with other faculty radiologists in the last two clinical working days? That's where our metric comes from, and that's where our data comes from, a survey like this. We also have free-text response questions in there. We use those free-text responses to give us data on what the problems are, why people are having poor or good interactions. We then are able to use tools like a Pareto chart to break that down into buckets, as sketched below. In our department, some of the things that lead to negative interactions are the schedule, workload distribution, interpersonal interactions, interruptions, and IT issues.
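A minimal Python sketch of that Pareto breakdown follows: free-text comments bucketed into the categories named above and sorted with a cumulative percentage so the biggest drivers stand out. The counts per bucket are hypothetical.

```python
# Sketch of a Pareto breakdown: bucketed free-text comment counts sorted
# descending, with a cumulative percentage column. Counts are hypothetical.

from itertools import accumulate

bucket_counts = {
    "Schedule": 31,
    "Workload distribution": 24,
    "Interpersonal interactions": 9,
    "Interruptions": 22,
    "IT issues": 14,
}

total = sum(bucket_counts.values())
ordered = sorted(bucket_counts.items(), key=lambda kv: kv[1], reverse=True)
cumulative = list(accumulate(n for _, n in ordered))

print(f"{'Category':28s} {'Count':>5s} {'Cum %':>7s}")
for (name, n), cum in zip(ordered, cumulative):
    print(f"{name:28s} {n:>5d} {100 * cum / total:>6.1f}%")
```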
If we focus on one bucket like interruptions, the radiologists were frustrated that we have lots of ways to be communicated with within the department. We have telephones, cell phones, secure text messaging, PACS instant messenger, pagers. All of those methods meant that they were constantly checking something instead of doing their work on the PACS. And so we were able to take that recommendation and get rid of all of our communication methods except for the two that we thought we couldn't get rid of: the telephone and secure text messaging. The radiologists also complained that we had a lot of spam phone calls within the department asking for political contributions or telling us we were going to go to jail if we didn't pay people money. And so we tried to get rid of those types of spam calls by changing phone numbers and implementing a phone tree, so that all of those types of phone calls coming into the department could be screened by a reading room assistant, hopefully eliminating unneeded phone calls. We can measure our progress on all of the buckets on one of those run charts, where we're looking at our change over time based on that survey metric.

The final part of that strategy is how do you make things stick? We have a couple of tools that can help us make things stick, or see whether the changes that we have made do stick. These include things like audits, scorecards, dashboards, and turning things into a system. An audit is an inspection of a process, and it serves as a system check. These tend to be things we do occasionally; it will often be a quarterly audit or a yearly audit just to make sure our process is on task. It's usually pretty manual, someone gathering and reviewing data or manually doing a check, though you can do some of that through data pulls. Scorecards represent a static snapshot. They're used to monitor key performance indicators and compare performance to a target. These tend to be things that are measured infrequently. Something like our balanced scorecard is displayed on our website, again available transparently. We have our desired direction and our color indicating whether we're meeting or exceeding our targets or, at times, not meeting them. They represent those key things that Safwan mentioned earlier, and the data is there for everyone to see. We also have stoplight-style dashboards for other key performance metrics, and if we click into those, we'll see the run chart that goes behind them. A dashboard is real-time data, so these are things that help us monitor our operational performance or that day-to-day decision making, and these are things that we're updating continuously. This is the dashboard that's hanging in our reading room when the monitor is not dead, which it is now. But it shows us things like our current unread radiographs, pulled from our PACS, so we can see at any second how much work is going on in the department or needs to be done. Other elements like where people are scheduled go on that, the types of conferences, and some of the other metrics like our turnaround times. Finally, to systematize: those are things that we want to add to a standard process. Adding things to our electronic medical record is often preferred, but we can't do that first. Other electronic systems, for example the phone tree, would be a way to systematize a change.
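Measuring progress on one of those run charts, as described earlier, comes down to a median center line plus control limits. The sketch below uses the common individuals-chart rule (mean plus or minus 2.66 times the average moving range) for the limits; the weekly survey values are made up for illustration and are not the department's actual data.

```python
# Sketch of run-chart elements: median center line and control limits
# using the standard individuals-chart rule (mean +/- 2.66 x avg moving range).
# The weekly "% excellent/very good" values are invented for illustration.

from statistics import mean, median

values = [66, 64, 70, 68, 72, 69, 75, 78, 74, 80, 79, 83]

center = median(values)
moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
avg_mr = mean(moving_ranges)
ucl = mean(values) + 2.66 * avg_mr   # upper control limit
lcl = mean(values) - 2.66 * avg_mr   # lower control limit

print(f"median line: {center:.1f}")
print(f"control limits: LCL {lcl:.1f}, UCL {ucl:.1f}")
print("points outside limits:", [v for v in values if v < lcl or v > ucl])
```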
And so, in conclusion, we try to implement our strategic goals in a transparent, data-driven, and inclusive manner through planning, infrastructure, improvement, and implementation projects, using tools like project plans, run charts, data reports, observations, or surveys, and then we try to make those changes stick through auditing, scorecards, dashboards, and systematizing our solutions. Thank you.

Now I will present on interpretation turnaround time. My name is Janelle Scott. I'm coming from Brooklyn, New York: Kings County Hospital, SUNY Downstate. My short talk today is going to focus on three main objectives: to discuss the importance of benchmarking in quality and performance improvement in radiology, to report the factors influencing read or interpretation turnaround time in 2020 according to the GRID, and to discuss additional data points that may transform data into actionable information.

Before we go any further, I just want to describe a scenario that I encountered, and I'm sure many of you may have encountered the same scenario. Our new ED chief was really concerned that our turnaround time was negatively impacting patient satisfaction, her left-without-being-seen rate, and the ED dwell time. These are very important KPIs in emergency medicine. The left-without-being-seen rate is the percentage of patients who leave the ED after being triaged without being seen by a physician, a physician assistant, or a nurse practitioner. The ED dwell time is the length of time a patient sits in the ED waiting to either be discharged home or to be admitted to the hospital. So she was saying that we were taking a bit too long. At a prior institution, it was about two hours, and at that time, we were sometimes taking upwards of three to four hours.

So we wanted to know: how can we deliver more value to our patients? And as radiologists, we really need to be asking ourselves this question every day. How can we deliver more value to our patients? Particularly as we shift from volume- to value-based care, this means that as radiologists we may have to re-engineer some of our processes and how we actually do the work to deliver more value; ultimately, our value is timely, actionable information that may impact the patient's outcome. So this is the question that we had, and we wondered how we compare to other institutions. We certainly had her perspective, but that's not really objective, and we wondered how we would compare to other institutions of similar size and scope. We really wanted a benchmark. Benchmarking is more than just a comparison of outcomes. It's a process of comparing a practice's performance with an external standard, hopefully to encourage discussions among organizational leadership and frontline professionals and to stimulate the cultural and organizational changes that are needed to improve efficiency, quality of care, patient safety, and, of course, patient satisfaction. And for this, the GRID, which is the General Radiology Improvement Database administered by the American College of Radiology, is quite useful. There are over 600 participating facilities that vary by location, practice type, teaching versus non-teaching hospitals, and many other variables, and participants are able to compare themselves to other facilities of similar characteristics. A small sketch of that kind of comparison follows.
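For illustration, the following Python sketch places one facility's median ED report turnaround time within a distribution of peer facilities and reports a rough percentile. Both the peer values and "our" value are invented; a real comparison would come from GRID survey data.

```python
# Sketch of percentile benchmarking against a peer group.
# Peer turnaround times and our value are invented for illustration.

from bisect import bisect_left

peer_tat_minutes = sorted([28, 35, 41, 44, 47, 52, 55, 60, 63, 71, 80, 95])
our_tat = 58

rank = bisect_left(peer_tat_minutes, our_tat)   # peers faster than us
percentile = 100 * rank / len(peer_tat_minutes)
print(f"Our ED turnaround time of {our_tat} min sits at roughly the "
      f"{percentile:.0f}th percentile of this peer group (lower is better).")
```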
So the GRID committee in 2020 looked to see if there were any significant relationships between the turnaround time per modality and the following variables. We looked at study volume: participating facilities with less than 50,000, 50,000 to 150,000, and over 150,000 studies. We also looked at the number of radiologists. We looked at the location: urban, rural, or suburban setting. We looked at the trauma level designation, if there was one; the patient type, emergency, inpatient, or ambulatory; and also multiple versus single reader. Multiple reader means, of course, that at a teaching institution the resident does a pre-read and the attending finalizes that read, versus an attending reading a case on his or her own. So we were looking to see which of these factors, if any, impacted the turnaround time. And what we discovered is that the only variable that really impacted the turnaround time was the patient type, meaning whether the patient was coming from the emergency room, the inpatient setting, or the ambulatory setting. The emergency room patients had significantly lower turnaround times compared to the other patient classes. And that's not really a surprise because, like I said, emergency room efficiency is really tied to our efficiency. And if you look at trauma level designations or stroke centers, they're mandated by regulatory agencies to have certain turnaround times for accreditation and certification.

So turnaround time has been given a lot of emphasis over several years. Back in 2002, there was a survey of academic emergency medicine chairs, and a whopping 49% of the respondents were dissatisfied with their turnaround time. Only 39% reported a turnaround time of less than four hours, and only 2% of daytime reports were completed within one hour. Because of the increasing emphasis on this metric and the availability of PACS and voice recognition, we see that between the years of 2009 and 2012 this dropped precipitously, by 54.5%. The ED turnaround times dropped from two to four hours to about half an hour to two hours, and the inpatient and outpatient times dropped from 24 hours to about four to eight hours. So we really had a significant improvement in turnaround time. However, the question still remains: is this really the best metric to use to judge a facility's performance? As we can see here, there are a lot of steps in the imaging cycle, and focusing specifically and only on the turnaround time really places the value of the entire value chain on the reading radiologist. And as we can see, there are several other possible metrics that we can look at and interrogate, and there are more opportunities to improve the imaging efficiency, that is, the time from when a study is ordered to when it is actually performed, than the reading or reader efficiency, which is the time from when the study is performed to when the report is finalized.

So if we go back to our scenario, when we looked at our data, our turnaround time was actually within an hour, which is within the expected range for an institution of our size and volume, and we saw that the bulk of the patient wait was between the time the study was ordered and the time it was performed. So we formed a performance improvement team, including, of course, ED nursing and physicians, attendings and residents, radiology technologists, the lab, and also transportation. We looked at our entire process, and what we saw is that there were bottlenecks at almost every single step in the process. The sketch below shows how the cycle can be split into its order-to-performed and performed-to-final segments.
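A hedged sketch of that split of the imaging cycle follows: order-to-performed (imaging efficiency) versus performed-to-final report (reader efficiency). The timestamps are synthetic examples; a real version would pull order, completion, and finalization times from the RIS.

```python
# Sketch of splitting the imaging cycle into segments:
# order -> performed (imaging efficiency) and performed -> final report
# (reader efficiency). Timestamps are synthetic examples.

from datetime import datetime

studies = [
    # (ordered, performed, final report)
    (datetime(2021, 11, 1, 10, 5),  datetime(2021, 11, 1, 12, 40), datetime(2021, 11, 1, 13, 20)),
    (datetime(2021, 11, 1, 22, 15), datetime(2021, 11, 2, 1, 5),   datetime(2021, 11, 2, 1, 50)),
]

for ordered, performed, finalized in studies:
    order_to_perform = (performed - ordered).total_seconds() / 60
    perform_to_final = (finalized - performed).total_seconds() / 60
    print(f"order->performed: {order_to_perform:5.0f} min | "
          f"performed->final: {perform_to_final:5.0f} min")
```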
One particular bottleneck was that all of our patients were getting labs before contrast-enhanced studies, which was absolutely not necessary according to the American College of Radiology contrast guidelines and also evidence-based medicine. That was a significant bottleneck, because we had to wait for nurses to draw the blood, the sample has to go to the lab, and there are several bottlenecks within those subprocesses that were really negatively impacting our turnaround times. So that was a significant area for improvement and one that we focused on heavily.

So what happens is that how care is organized and structured influences the process, and these both influence the outcome. But a better state would be for the desired outcome to drive how we structure our care, how we organize our care, and what is done. And it would be great for facilities that are comparing themselves to other facilities to have more information on some of these top-performing institutions. How are they organizing their care? How are they delivering their care? Then maybe start to try to mirror some of those processes, instead of just looking at the outcome without necessarily knowing how they got their outcomes.

So in conclusion, the report turnaround time does remain an important KPI in radiology, but it is important for us to examine the entire value chain, to look for opportunities for improvement, and to better inform our performance improvement efforts. We also have to balance the burden of collecting additional data, that is, structural and process data from facilities, with the utility of having metrics that allow for benchmarking beyond basic demographic characteristics. I would like to thank the members of the ACR GRID Committee for this presentation and for the data analysis as well. Thank you very much.
Video Summary
The session focused on performance metrics in radiology, particularly highlighting the importance of benchmarking and redesigning processes for improved efficiency and patient outcomes. Dr. Bettina Seawood introduced new metrics to address gaps in the radiology workflow, emphasizing the importance of measuring aspects such as imaging appropriateness, patient engagement, and follow-up management. She shared a practical approach to ensuring accurate diagnoses and effective communication, while stressing the importance of equity in healthcare.

Dr. Stein highlighted peer learning metrics, which have gained recognition as an alternative to traditional peer review, showcasing increased participation and learning opportunities. He offered guidance for implementing effective peer learning programs.

Dr. Janelle Scott addressed benchmarking and read turnaround times (RTAT) by analyzing factors affecting RTAT across different patient types and suggesting a broader focus beyond mere turnaround time to include the entire imaging cycle.

The session underscored the need to drive culture change, utilize efficient data presentation methods, and adopt strategic goals that integrate patient satisfaction and operational efficiency. Emphasis was placed on utilizing tools like dashboards and balanced scorecards to monitor and adapt strategies, thus enhancing radiology's value and impact on patient care.
Keywords
performance metrics
radiology
benchmarking
process redesign
patient outcomes
peer learning
read turnaround times
data presentation
patient satisfaction