QI: Radiology Dashboarding 2.0 | Domain: Radiology ...
W3-CNPM09-2023
Video Transcription
This is the introductory lecture for dashboarding this morning. This is really an exciting area. There's a lot happening in our departments, and there's been a lot of expansion over the past couple of years. So why don't we go ahead and jump in. This talk is predominantly gonna focus upon key metrics to incorporate within your radiologist dashboards. And in addition to that, it's very difficult to talk about dashboards without talking about data flow from the underlying source to the ultimate visualization. I'm also gonna show examples of dashboards from two different health enterprises. So this talk is gonna cover not only the literature, but it's gonna encompass experiences from an academic health system with a partially integrated community radiology practice. And a lot of the examples are gonna be from a fully implemented dashboard. And the reason I am bringing up our data scientists' names right at the beginning is that radiologists have a tremendous amount of input. We decide what's gonna go on in the dashboard, but it's gonna be our informatics team and data scientists who are truly building this for us. So I'd like to thank Eric Gagnon and Anthony Arbiza at Penn State for their efforts, and Don Honehurst and Amanda Davenport at Nemours for their efforts. This is a gradually expanding topic. When I recently went back and did a PubMed search of radiology and dashboards and metrics, there are about 20 articles in the literature right now. And it almost comes in periodic waves: about every five years there's interest in this topic, and it keeps expanding. In 2023 we're in another one of these waves again, so here we are at RSNA discussing this. Just as an introduction, looking through the literature, one of the most difficult parts of dashboarding is that there are a ton of choices to decide what works best for your institution and what works best for your dashboard. Our group has focused on operational dashboards, but there are a tremendous number of quality and safety dashboards; hence, we're in the quality session here. There are also financial dashboards. Dr. Zygmont is gonna discuss educational dashboards. Dr. Davis is gonna discuss equity. And then the other thing which you realize is there's just a tremendous amount of overlap. So one of the larger problems is just choosing what you want. In fact, there was an article in 2016 which went through and looked at different KPIs for dashboards, and they discovered 92 of them in their practice. And I believe there are actually even more than that. Additionally, there was an article which discussed this about 10 years ago, so we have had a little bit of time with this. And the group discussion in the AJR article, the roundtable discussion, was that dashboards need to have essential information, reasonable information, and also actionable information. If you're collecting the data, you want to be able to act upon it. And from previously leading our data science team, the thing which I've discovered, and it's difficult to emphasize enough, is that not only does the data need to exist, we clearly need to be able to access the data. We need to have easy access to the data; it needs to be frictionless. And the underlying data needs to be reliable also. So before I get to some examples, let's talk about the literature a little bit more. Of those 92 KPIs which were identified by one group, by Karami's group, they chose to focus on eight of them.
In the AJR discussion, there's a large emphasis on pragmatic dashboarding. And some of the examples which were given under quality and safety were things such as infiltration rate, first available slot, callbacks, report turnaround times, and RVUs per FTE. There is also a survey which was published in 2013 amongst academic radiology departments which focused upon financial indicators, productivity indicators, and access indicators. And the things which came up most commonly within this context were examination volumes, examinations per modality, and report turnaround times. The other thing that they discovered was that in greater than 50% of radiology departments, dashboards were also looking at professional RVUs per FTE, and I'm sure that's even greater at this point. Additionally, there's a more recent paper which was published in 2017, and the most commonly utilized metrics in that series were hand hygiene, report turnaround time, RVU productivity, patient satisfaction, and peer review. So again, dashboards are all over the place; they cover lots of different areas. There is some consensus within the literature focusing upon key metrics such as report turnaround time and RVUs. What I'd like to do at this point is show some examples and talk about some institutional experiences to help put this all in perspective. This is a busy slide, and I just want to highlight a few key points on it. Over on the left-hand side, what we're demonstrating is our data sources. When we were building our dashboards, we brought in data from five different sources. We brought in data from the radiology reporting system, which was one of our most reliable sources. We brought it in from the radiology information system. We brought it in from the electronic medical record. We brought in financial data from the financial office within our enterprise. And we brought in radiologist scheduling data. So you have to decide what data you want to bring in. And then I'd like to thank again the people I recognized earlier, because they are the ones who handle what happens in the middle of the diagram. If you really want to get down into the weeds on this, there is an online supplement to a recently published JACR article which goes into this in greater depth, but that's beyond the scope of what we're gonna cover here. And then what we did is we put this in a central database. We used a Postgres database. And then this was displayed, as far as business intelligence graphics, within a Grafana dashboard. There are lots of different choices. You can use Qlik, you can use Tableau; there are lots of commercially available products. But that is what we chose within our institution. This is probably familiar to everyone in the room, but there are some things I wanted to highlight here. These are reported exam volumes, which are essential information. This is a COVID curve. What we did is we pulled our baseline data, we saw COVID go down, and then we saw the exams afterwards. The thing which I wanted to point out here was that we had this information not only in total, but we were able to break it down by modality. It's displayed in weeks here, but you can break it down by days, you can break it down by months. All of these are fully operational, interactive dashboards, and they really give you a good sense of what's going on within your department.
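As a concrete illustration of the kind of query that sits behind an exam-volume panel like this, here is a minimal sketch, assuming a hypothetical rad_exams table in the Postgres warehouse with illustrative column names rather than the speakers' actual schema; the weekly count per modality it produces is the series a Grafana or Tableau panel would plot.

```python
import pandas as pd
from sqlalchemy import create_engine

# Placeholder connection string; the actual warehouse DSN would differ.
engine = create_engine("postgresql+psycopg2://user:pass@dbhost/radiology")

# Pull exam date/time and modality from a hypothetical rad_exams table.
exams = pd.read_sql(
    "SELECT exam_datetime, modality FROM rad_exams WHERE exam_datetime >= '2019-01-01'",
    engine,
    parse_dates=["exam_datetime"],
)

# Weekly volume per modality: the series a dashboard panel would plot.
weekly = (
    exams.assign(week=exams["exam_datetime"].dt.to_period("W").dt.start_time)
         .groupby(["week", "modality"])
         .size()
         .unstack(fill_value=0)
)
print(weekly.tail())
```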
For the next slide, let me start off by recognizing that the RVU is certainly a very imperfect data source, and it does not necessarily capture all the efforts by radiologists. But from a pragmatic standpoint, this is something which hospital administration looks at, and we did incorporate it substantially within our dashboards. So I'm showing you two dashboards here. One is our standard dashboard. The way which we did this is that this data came directly from the financial office. And then what we did with the data from the financial office is we corrected it to basically the radiologist's clinical effort, their cFTE if you will, because we made the choice that we did not want to penalize the other academic missions within our institution, but you can decide that at your own institution also. So we would adjust it for the radiologist's time, we would adjust it for their clinical time, and we could look at radiologists individually. We didn't like doing that; preferably, what we did was judge the divisions within the department against national benchmarks. And the national benchmarks we used were both ARAD data and MGMA data. So we had a pretty good sense of what the divisions were doing and how these were falling within national benchmarks, and we wanted to keep them within our organizational goals. We didn't want people reading too fast, we didn't want people reading too few studies. There's a sweet spot, and we used this to keep things within our organizational sweet spot. The big problem with the top graph, or basically the top dashboard, was that our underlying data source was inherently delayed. It was coming from the financial office. So by the time you reached the end of the month, by the time the data was released, we could be running up to six weeks behind. And we wanted a better sense of what was going on in real time. So what we did is we developed an entirely new dashboard. We went into the radiology reporting system and we linked the exam code from the radiology reporting system to the CPT code, the CPT code to the RVU value, and then we brought the RVUs out and linked them to the radiologists and the radiologists' divisions also. So we actually developed the bottom dashboard. It was a predictive dashboard; it was predictive analytics. We knew, or we had a pretty good idea of, what was gonna be coming out of the financial office up to six weeks ahead of time. So those were metrics. Another key metric which is throughout the literature, and which we also used substantially, was report turnaround time. And obviously this is very necessary for our service line agreements. And again, we could adjust this by modality, we could adjust this by time of day, we could adjust the timeframe. But the area where this came up most often was, probably similar to many institutions, the emergency department. And this was particularly applicable to our overnights when we had our resident staffing model. And what we realized is we were very close to our service line agreements, but we weren't quite there. We were just looking at CT and MR. So as our health system expanded, as the academic system started acquiring some of the surrounding community hospitals, as we started acquiring radiology groups, we started bringing on additional faculty coverage overnight. And one of the dynamic dashboards which we had from the start of this was that we could see the reports which were signed between 10 p.m. and 7 a.m.; this happened to be thoracoabdominal CT, which I'm showing you here as an example.
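A minimal sketch of the overnight view described here, assuming an illustrative report extract with finalized_at and exam_code columns (not the speakers' actual field names): it counts reports finalized in the 10 p.m. to 7 a.m. window, by hour, for a single exam type.

```python
import pandas as pd

# Hypothetical extract of finalized reports; column names are illustrative.
reports = pd.read_csv("reports.csv", parse_dates=["finalized_at"])

# Keep reports finalized between 10 p.m. and 7 a.m.
hour = reports["finalized_at"].dt.hour
overnight = reports[(hour >= 22) | (hour < 7)]

# Count reports per hour of the overnight window for one exam type.
ct_chest_abd = overnight[overnight["exam_code"] == "CT_CHEST_ABD_PEL"].copy()
ct_chest_abd["hour_signed"] = ct_chest_abd["finalized_at"].dt.hour
print(ct_chest_abd.groupby("hour_signed").size())
```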
So there would be some outliers. Maybe people would stay a little past 10 or get in a little bit before seven. But as soon as we implemented the radiology group, we could see the number of reports which were being finalized overnight also. I'm gonna defer to Dr. Zygmont on education, but we did have full educational parameters built within our dashboards, particularly for resident volumes and how much they're reading. And we had this broken down by location also. We had it broken down by emergency, which is in blue, inpatient, which is in orange, and outpatient, which is in green. And the question is, just for example, why does row number three show so many fewer studies? This happened to be one of our very experienced senior integrated IR residents who did just a couple of weeks of diagnostic service per year. So the underlying data was validated. And I would like to just emphasize one more example with data; it gives us a lot of capabilities. As we were bringing in radiology groups, as we were bringing in our community radiology groups, it allowed us to see what they were reading, and it allowed us to create virtual divisions within the health system. So what we did here was another set of linking. We would link our exams to their respective divisions no matter who read them. For example, a head CT would automatically go to neuroradiology no matter who read it. An abdominal pelvic CT went to abdominal imaging no matter who read it throughout the system. So what this allowed us to do is start integrating the two groups. And our final results here demonstrated that we had approximately two neuroradiologists contributing to the academic core, one breast imager contributing to the academic core, and one abdominal imager. This actually corresponded very well to our staffing model, so we were happy with that. I'm running low on time, so I'd like to just get to one final thought. When you are developing dashboards, one of the key questions to ask yourself is where do the dashboards live? Do they live at the departmental level or do they live at the enterprise level? And there are trade-offs with this. At the departmental level, you're gonna have a high level of responsiveness and a high level of customization, but then it's coming out of your departmental budget. If it's at the enterprise level, you're gonna have a greater level of infrastructure support, but you might have variable responsiveness. On that note, here are some resources if you would like to learn more about this. And I'd like to thank you very much.

My name's Matt Zygmont. I'm an associate professor with the Department of Radiology at Emory. I used to be one of the associate program directors; we have a large diagnostic program. I chaired our clinical competency committee, and I'm now chief of service at our large level one training facility. I'm here today to talk to you about how we use our educational dashboards for improving performance in our education program. Over the next 15 minutes, I'm gonna break it down into four parts. I wanna spend just a little bit of time telling you about what some of the requirements are from the ACGME. Keep in mind that even if you aren't in an educational role or an educational leadership position, this is applicable to other areas of professional practice as well. You could use this for FPPE if you find yourself in a chief of service role having to evaluate your faculty. So it applies to trainees and faculty.
I'll cover a little bit what our dashboards, how do we collect the data. It's a framework very similar to what Mike showed. And so we can get through that pretty quickly. And then what I wanna spend the majority of the time on is looking at the case study of how we collect our resident physician activities at our largest training institution. You can follow along at the bottom here. I'll mark at what point we are. I'll wrap up hopefully if I have time with some of the lessons learned. So for ACGME requirements, as you guys know, accredited U.S. programs are required to collect data that track and show the development of your residents from the time they start to the time they finish. We have to demonstrate progress in six core competencies for diagnostic radiology. They've expanded sub competencies into 24 different categories. It's a large undertaking and it scales considerably when you have larger residency groups. Those of you looking at the questions, milestones are competency-based development outcomes and we can demonstrate them progressively. These are the six core areas. Top three are based around your medical practice. They involve patient care, medical knowledge, and practice-based learning. The bottom three are some of the more non-interpretative skills centered on professionalism, interpersonal skills, and communication. In addition to collecting data in those areas, as an ACGME program, we're required to give feedback. And the ACGME defines two areas of feedback, formative and summative. The way I think of these is formative is your daily, ongoing, immediate feedback. You have to have a way to tell residents how they're doing when they're on the clinical service. And then summative evaluations are assessing their learning and comparing the residents against the goals and the objectives, respectively, of both the rotation and the program. And you use these summative evaluations sort of in your annual reviews that you do to promote your residents to the next year. More immediate feedback, it allows residents to target areas of weakness or things that they need to work on, allows them to learn from their mistakes in real time, and the summative tells them how they're doing overall, more global perspective, and they get performance feedback less frequently there. So dashboards are a great way and an important tool to the education team to basically keep your finger on the pulse and know how your residents are doing. It's a quick visual way to display key performance metrics and to check in and make sure that the residents are on track. This is a great summary in radiographics on how this particular group set up their dashboarding infrastructure. And some of the key things that they said, and Mike covered this as well, it's important that your dashboard is automated. It has to be dynamic and has to have some degree of interactivity so that you can drill down into the details and make decisions from the data. The framework that they discuss, and this is very similar to Mike's slide, but it shows that you've got an activity or data source. It's translated in some way to a summary. It goes into a database, and then you're able to create your dashboards, look at reporting, and develop analytics. And I'll show you a little bit of how we do that in our case study. So what data is collected? When you think about the education team and how they collect data, I put it into three big buckets. Basically, you have one side, which is the interpretive side. 
How are the residents doing with their clinical skills? And that encompasses both diagnostic and procedural activities. And then you've got a whole host of sub-competencies that focus on non-interpretative skills. The matching rubric that we have for our clinical competency committee looks something like this table. Of course, it's much more expanded. So for example, you could have an activity where you have resident call performance. The data comes straight out of the electronic health record or whatever data source you have for tracking call performance. And it goes under the sub-competencies for medical knowledge and patient care. And you can assign that and grade the residents at your clinical competency committee evaluation. Similarly, if you have other areas that are providing data to your education team, for example, at Emory, we've got a contrast reaction simulator. It's run through our learning portal. And this is directly fed into our dashboard for our system-based practice number five. So this is just the framework that we use. I'd like to walk you through our case study. This was a real win for us. So a few years back, our baseline state was that we had a large training program still doing a lot of independent night work at our level one trauma center. It's also a comprehensive stroke center. We were upgrading the PACS. And we had a third-party piece of software that was supposed to be tracking feedback for our residents. And it was not working very well. It was a manual process to aggregate the data. The feedback was given out manually. Often there were delays to the residents. They weren't getting real-time feedback on how they were doing. And oftentimes you'd have a resident, they'd finish their night float rotation, and they wouldn't get feedback for weeks to months afterwards. So not a great example of a good system for formative evaluation. We also had limited operational oversight. And we really weren't able to verify what the demand on the overnight teams were. And our residents were spending a large percentage of their time at this institution. So what we decided to do was blow this up and start from fresh. We removed the third-party system. And we wanted an integrated process that basically stayed within our electronic health record and stayed within our dictation system. We put the feedback macro straight into our dictation system. It's at the bottom of every report. It can't be ignored. It'll protest if you don't fill it out. And then we had custom fields that were generated both in PACS and in the electronic health record that would tag the feedback so that the residents could basically see a work list of all their feedback in real time. So as soon as the report signed, it would pop up on this particular work list for them. And they would know if they had a significant miss. The nice thing about putting the custom fields into the electronic health record, we could query it all on the back end. And this was part of our operational data set. And we could distribute this data to whoever needed it where they could do analysis, create their own dashboards, and then act on what they had. So what I wanted was full visibility over everything our physician practice was doing. I wanted to know the time when the order was placed, who ordered it, where it was ordered from, what were the demographics of the patients. We collect all of the exam features. You know, where is the study done? What is the procedure that was done? 
All of the associated things that Mike was talking about, including the assigned CPT codes, the work RVUs that are expected so that you can do predictive billing. And then when the exam was started and ended, and then the whole process on the physician side. When your on-call resident signed the report, when they opened it, you know, how long it took them from the time the exam ended to when they generated their first prelim. We had other metrics. We know when the trainee the next day would pick up the report and dictate it. And then we knew when the final was out and what kind of feedback was given. So we basically, in one database, were able to capture everything that the residents were involved in and capture all those steps in the workflow. So from that, you could generate whatever key performance measures you wanted. You could look at turnaround time. You could look at how the system was performing at an individual level, but also the system level. You're able to quantify work effort, look at the intensity of certain shifts, and how the teams are doing. You can also look at how the residents are being supervised, what type of work they're doing, what type of study exposure they have, and then how they're doing when they're on independent call, what kind of miss rates they have, and how often you're calling back. Here's a dashboard that we use for the education team. This is the feedback that is given by the faculty. So the rows on the left are the individual faculty, and it allows for the education team to check the faculty and make sure that they're giving consistent feedback. And you can identify outliers, either faculty who call back major misses more often than most, faculty who have no callbacks for major misses, that's also a red flag, who may give incidental feedback more often than most, and then also faculty who may not give feedback as often as most. We, as an institution, we're able to give feedback about 95% of the time, which is really great. These outliers that I circled are actually some of our nocturnists, and their workflow creates sort of an anomaly in the data. At the same time, we can look at the studies that the residents struggle with the most, and we can target these for educational initiatives. We know that our residents, especially the R2s, when they get to grade E call, and they're doing spine trauma, they're not as comfortable as they probably should be. And so this is an area that the education team is putting together. We'll hit this harder in the curriculum. We'll make sure that those residents starting call have more exposure, and we can check their exposure to spine cases before they start independent overnight call. We use turnaround times not sort of as a punitive measure, but more as do we have the right coverage? And I like to look at the heat maps for our nighttime work, just to make sure that the teams aren't being overwhelmed, and that our patients are getting their final reads, or their first reads, in a timely fashion. 2022 was a bit of a slowdown for us, and we were covering everything really well. This middle box here, really there shouldn't be any independent call work, but sometimes studies roll over to the nighttime. You can see here in the third quarter of 2023, we're starting to get hotspots right after the end of the workday, and then in the early morning hours, especially on the weekends. And this all corresponds to the volume that we see. 
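For readers who want to reproduce a heat map like the one described above, here is a minimal sketch under assumed column names (exam_end, prelim_at): it pivots median minutes from exam end to first preliminary read into an hour-of-day by day-of-week grid.

```python
import pandas as pd

# Hypothetical overnight worklist extract with assumed column names.
df = pd.read_csv("overnight_worklist.csv", parse_dates=["exam_end", "prelim_at"])

# Minutes from exam completion to first preliminary read.
df["tat_min"] = (df["prelim_at"] - df["exam_end"]).dt.total_seconds() / 60
df["hour"] = df["exam_end"].dt.hour
df["dow"] = df["exam_end"].dt.day_name()

# Hour-of-day x day-of-week grid of median turnaround, the basis of a heat map.
heatmap = df.pivot_table(index="hour", columns="dow", values="tat_min", aggfunc="median")
print(heatmap.round(0))
```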
We have to start layering in additional coverage here, and this is just because we've had excess volume. We've added beds. We're now a 1,000 bed facility, and I installed four scanners on our team, so they're feeling it. So this helps us adjudicate what they're telling us in real time, and allows us to adjust accordingly. I can check work intensity, especially for our on-call residents. This is sort of the range of work that the residents do on-call. Keep in mind that this is on a per-day basis, so some of these shifts are longer than others. But our residents, for the amount of CT that they read, this is adjusted for work RVUs, which is not a perfect example of work intensity, but it's one of the easiest surrogates we have. You can actually see that the residents have been sheltered from a lot of our system expansion, and we've kept their call experience relatively stable over the last three years, which has been good. Here's a big win that we had. We started staffing our trauma service at night. This is the turnaround time from the exam end to the final report, and we now get 75% of our trauma studies finalized in about 120 minutes. That's not first report, that's final report. Trauma service is super happy with this. Residents have a lot more supervision, and they're gonna see those feedback marks in their work queue a lot sooner, rather than the next day it comes sort of on the same shift. So they get to see it in real time, and they can go back and look at it while it's fresh in their mind. A couple lessons learned, like Mike said, you have to identify your resources. There's a lot of pre-existing resources at institutions that you can use. We're using stuff that's already prepaid for. It's available to us. You just have to identify the data team. You have to have some specialists that can tag your stuff, and you have to create the right structure. Thanks very much. Well, thank you so much. I appreciate you all coming out today to talk about dashboarding. I think we've had some really great lectures around the structure of dashboards so far, so I'm actually not going to delve into that much in my talk. We're going to talk mostly about equity and equity metrics and how can we start to approach putting these types of metrics in our operational dashboards. A lot of times when we talk about equity, we talk about it as separate from other entities within the scope of practice that we have daily, but we believe, or I believe, that it really needs to be integrated into every decision that we make, especially when it comes to operations and education and health outcomes for our patients. So the way that I'm going to view this over the next couple of minutes is really as health equity as a systems problem and therefore needing systems solutions. I know that we spend a lot of time at this conference talking about equity and what that means, but I just want to spend the beginning of this lecture really talking about some of the definitions around health equity and what I'm seeing it as we approach this conversation. So when we talk about health disparities, what we're really talking about are our vulnerable patients, and what do we mean when we say how do we equitably address vulnerable patients within our population? 
Understanding how to approach this conversation really requires a nuanced understanding of what vulnerability actually means in the context of health care and in the context of the patient who's coming in to see you, and really that just requires a lot of deliberate appreciation of a situation, of a context, of an environment that creates a space where patients are not always in a position to fully care for themselves when they enter into a health care situation. And when I think about this, I really think about my grandfather at the end of his life, where he had a complete reliance on health care workers and the family around him, and I think about the deliberateness that my family had to enter into in order to make sure that as he navigated the emergency department, as he navigated outpatient appointments, as he navigated long-term care facilities and inpatient stays, that care wasn't dropped because of certain vulnerabilities that he had at that time. And so that's what we're really trying to address when we talk about health equity. And the drivers of vulnerability within our health care system are complex, and they're often interconnected, and they include things like socioeconomic status, language barriers, age, mental status, mental health, race, physical ability, veteran status, especially within the United States as well. And these risks to vulnerability can be serious and can range from anything from falls, delayed diagnoses, neglect, abuse, and in some cases, death. And so there's a lot of research that has come out really centered around this that I think gives us a good frame of reference to start to look at, how can we address our vulnerable populations within our systems? And a couple of those have been produced within our radiology literature. These are outside of radiology literature because I think that as radiologists, we have to be aware of the entire context of the health systems that we work in. But we know that PSA screening, for example, is lower in black men, even though the incidence and the mortality is higher in that population. We also understand that the research behind the treatment for prostate cancer largely excludes that population as well. So we have to think about it in those contexts. We also understand that pregnancy, morbidity, and mortality rates, especially in the United States, have not improved much for anybody over the last 30 or so years. But it is significantly worse for Hispanic, black, American Indian, and Alaska Native women, where the mortality rate in those populations is four times higher than a white or an Asian population. And there's an age component to that as well. So as these populations get over 30, that gap gets significantly worse. And we understand that education doesn't matter here either. If you are a black woman with at least a college degree, your mortality risk is 5.2 times higher if you are pregnant. And so a lot of what I'm trying to do right now is just drive awareness in the conversation. We even see this trend for lung disease and lung cancer, where black and indigenous populations are often underdiagnosed. They often do not undergo appropriate surgical treatment. And in the Latino population, they often lack treatment altogether. Specifically in the radiology literature over the last couple of years, we've seen a lot of outcomes research centered around how radiology can help focus on health equity. And so these are two articles that came out in a dedicated issue in JACR in 2022. Dr. 
DeBenedictis's team did a nice overview of health disparities in radiology and looked at a range of vulnerable populations as it pertains to things like breast cancer screening. lung cancer screening, colorectal screening, emergency imaging visits, and the utilization of emergency like CT scan, for example, in the ED, as well as advanced interventions. And then we also have work out of Vanderbilt, which focused on veteran care. So we understand that lung cancer screening is not great across the country. It's even worse for our veteran population. But it's even worse for veteran populations who live in rural areas. And so these are areas where radiology can start to insert ourselves as arbiters of health equity as well along the way. And as kind of a final definition, what we're talking about here really is equity, which means we're talking about outcomes. What are we going to see on the other side of that? And a lot of times there's some confusion between what equality means and what equity means, so I want to just put this here for that purpose. But equality is really treating everybody the same regardless of their specific needs, meaning individuals or groups are given the same resources, which is a good goal to come to, but we understand that's not always going to give us the appropriate outcomes at the end of the day that we're looking for. Whereas equity, on the other hand, recognizes that a person or a group has different needs, and we need to address those needs specifically in order to come to similar outcomes for everybody at the top of this. And so why does it matter? I think a lot of it matters because we care, right? We want a healthy society. We want an equitable society. We want to reduce our health inequities as much as we possibly can. But also it's important because the government says so. The Joint Commission says that this is part of what we have to be looking at going forward. In 2023, under the patient safety goals, they did put an improved health equity piece in here, and a lot of it centers around being able to identify somebody who is going to lead health equity work within your organization and then take that into an actionable step. I'm really just going to focus on the analyzing quality and safety data to identify disparities because that aligns with the dashboarding conversation that we're having today. And so my background is very operational, and it's in quality improvement as well, but I just want to take a moment to talk about what do we mean when we're saying measures, because a lot of times we're actually going to have to develop these measures internally for our populations because unlike turnaround times or volumes or things like that, they're not as standard within radiology. And so we have structural measures, process measures, and outcomes measures. And really when we talk about structural measures, it's what do you have in place in order to get to the goal that you need to get to. And when we talk about process measures, I think these are actually our most important measures because it's saying what are we actually doing to get to the outcome that we want. And eventually we'll get to that outcomes measure, and patient outcomes is what we're really targeting in a lot of these conversations. We also want to look at ourselves and our internal staff as well, so we're looking at things like hiring and retention rates. We're looking at things like promotion rates and patient returns and no-shows. 
We had a really great scientific session in this room right before this where they talked about patient no-shows and how those were stratified based off of the social factors that we see in here. The Institute for Diversity and Health Equity has created dashboard domains for equity, diversity, and inclusion in partnership with the American Hospital Association, and they include data collection and stratification, cultural competency, diversity and inclusion at the leadership and governance levels, and strengthening community partnerships. And so this is the way that they recommend we approach health equity dashboards going forward. For data collection, it's really looking at the percent of your workforce who is trained to collect this type of data, so REaL data, meaning race, ethnicity, and language preference data collection. And there are a lot of other things that we can collect in here that we've mentioned before as well. For cultural competency, we're looking at how many of our employees, how many of us, have actually completed this type of training, and what the impact of that training is on patients. So really looking at the percent of patient and family-related complaints as it relates to cultural competency, and also looking at our HCAHPS scores pre- and post-intervention around that. For diversity and inclusion at the leadership and governance level, it's really looking at how you are partnering with your community organizations to align strategic priorities as it pertains to what your community needs in terms of health equity, and then the percent of leadership that is diverse. And this says diverse and inclusive, but I just want to point out that just because you're diverse doesn't mean that you're inclusive. So the last point in this box is also important, meaning how many of your diverse leadership are actually engaging fully within the organization, and how are you going to measure that? And then finally, looking at your community partnerships: how many community partners do you have that align with your strategic priorities? A lot of these are very much structural measures, with a little bit of outcomes measures in here, but what can we do specifically as radiologists as we start to think about incorporating health equity measures into our daily practice? For us, I think we can have a big impact on screening examinations, ensuring that we are driving initiatives that actually capture those patients that we're potentially missing. Scheduling and no-show rates are also important. We understand that scheduling is difficult for patients. No-show rates happen for a reason. People aren't just not showing up; they're not showing up because they have work, because they have child care issues, because they're afraid they can't pay for the examination on the other side of it. And then emergency imaging, I think, is a good place to start around utilization in certain populations.
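As one hedged example of turning these ideas into a process measure, the sketch below stratifies outpatient imaging no-show rates by patient-reported race/ethnicity and preferred language (REaL data); the file and column names are illustrative, not a specific institution's schema.

```python
import pandas as pd

# Hypothetical appointment-level extract; one row per scheduled outpatient exam.
appts = pd.read_csv("imaging_appointments.csv")

# Flag no-shows and stratify by self-reported race/ethnicity and preferred language.
appts["no_show"] = appts["status"].eq("NO_SHOW")
by_group = (
    appts.groupby(["race_ethnicity", "preferred_language"])["no_show"]
         .agg(rate="mean", n="size")
)

# Suppress very small groups before comparing rates.
print(by_group[by_group["n"] >= 30].sort_values("rate", ascending=False))
```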
One of our trainees also talked this morning about emerging health inequities, and I think that's something we have to be very cognizant of, because we at this conference have talked a lot about artificial intelligence, for example, and what the impact of those types of tools is going to be on these populations going forward. And that's a personal interest of mine because, as Vice Chair of Informatics, we're very engaged in that conversation on a daily basis. This is a paper that came from Curt Langlotz's group where they looked at where these models are actually trained, and we know that most of these models are trained in California, New York City, and Boston, because that's where a lot of this research is happening. What do those populations look like? How do these models work on populations that don't look like those populations? As we're out checking out all the vendors and all that kind of stuff, how would a small rural hospital be able to retrain a model so it actually fits the patients that they're actually looking at, as opposed to an organization like ours, which has a lot more resources, so we can retrain if needed? So that's where inequities in this space can start to arise, and we have to just be very cognizant of that. The other thing that I want to point out is that large language models are very popular right now, and this is a paper that we just published in Radiology where we looked at simplifying reports. We know that because of the Cures Act, patients have access to their reports. Do they understand their reports? What do they mean? Are they able to access their reports? So we're able to leverage large language models to simplify reports to a reading level that makes sense for the general population. But then we asked the question, how are the large language models trained? And maybe they have some bias in them as well. So we actually asked, I am a blank patient. So I'm a black patient, I'm a white patient, I'm an Asian patient. These are just the major groups that the U.S. Census uses. We put them in there, and we noticed that ChatGPT actually changes the reading level based off of your race. Now, I'm not necessarily going to be using ChatGPT and telling it what my race is, but we also know that there's a lot of information about us on the Internet. Everything is kind of out there. So how can it actually start to use other information about us as we're using these tools as well? This is unpublished research, but it's just a question that we were asking. I'm not saying this is good or bad; I'm just saying that it's something that's out there. And so as we're starting to use emerging technology, we have to understand what the impact is, and we have to start to ask questions about equity and how we're going to track that within our organization. And this is just a tool that I've used in a lot of the nonprofit work that we've done, where, as you formulate questions, you want to understand who's involved. So what population are you actually trying to look at? Who's going to be impacted? Really thinking about what the intended outcomes are and what the unintended outcomes are, and then being able to create metrics to capture all of that so you can see it early. And then, in general, you want to make sure that everything that you're capturing is aligning with your organization. And all of this is about outcomes, so you want to align it with whatever change process you have and put a metric around that. So this isn't much different than regular quality improvement; it's just putting an equity lens on it to say this is what we actually want to look at.
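Going back to the report-simplification example, here is a rough, self-contained sketch of the kind of reading-level check one could run on original versus simplified report text; it uses the Flesch-Kincaid grade-level formula with a crude syllable heuristic and is not the method from the published study.

```python
import re

def count_syllables(word: str) -> int:
    # Crude estimate: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    # Flesch-Kincaid grade level: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (n_words / sentences) + 11.8 * (syllables / n_words) - 15.59

# Illustrative report sentences, not taken from the study.
original = "There is a 2.3 cm hypoattenuating hepatic lesion, statistically most likely a simple cyst."
simplified = "We saw a small fluid-filled spot on your liver. It is almost certainly harmless."
print(round(fk_grade(original), 1), round(fk_grade(simplified), 1))
```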
This is another paper that we put out last year, and it's really just about preparing, which I think a lot of us in this room are prepared for because we have that quality background. Equipping, which is a conversation that we've had earlier around dashboarding, how to use your dashboards, and how to pull that appropriate information in. A large barrier that we have is that we don't capture a lot of this information from our patients, so it's about understanding how to build that resource up, and then making sure that it's an organizationally supported effort going forward. So with that, I'm going to stop.

Okay, I'm going to try to clean up here and follow up on kind of what Mike, Matt, and Melissa have talked about. So more broadly, why do we use dashboards? My general thought on that is to raise awareness, to raise awareness about operations, education, equity, whatever it may be; a dashboard, when well designed, should help us with our situational awareness. They can occur at multiple levels. On a broad level, they can occur at the health system level, and we also use them for the department; I'll talk specifically about that. I'm also going to talk about a concept I heard when I met with an AI startup this week; he used the term cognitive burden, which I love. And I'm going to show you slides that I'm not really happy with, because I think there's a lot of information there, and I think that results in cognitive burden and hurts our ability to make decisions. So I'll talk a little bit about future directions, and I want to frame it in the following way. If you were to ask Jeff Bezos how many Xboxes were delivered to the eastern part of Kansas in the 2022 holiday season, I bet within seconds he could tell you that. Then he could also probably ask his team how many we expect to sell in the next quarter or this holiday season. So when we think about dashboards, they should be specific, they should be actionable, and that has been discussed, but they should also be able to predict what we're going to do: whether or not we're going to provide poor service to the African-American patient who needs to be screened with a PSA, whether we're going to understaff our large hospitals for a night when it's going to get busy. So let's talk about dashboards. Where does that analogy come from? A car. So here's an older car with an analog dashboard. You see a bunch of lights come up, and that might be a little bit frightening. And you're like, okay, well, what does that mean? So you focus on this particular one. That's a little bit better because it's in your face, and you can see what's going off. Well, what's even better than that? It tells you specifically which tire and what the problem is. And when a dashboard is done well, that's what it should do. All right, so health system-level dashboards: these are some screen caps from my institution, the University of Virginia. Again, the first reaction is, oh, my God, what's going on? Oh, but they've color-coded it, so it should be easy, right? Kind of. So, you know, you'll see a bunch of information here, a bunch of jargon that administrators use. I'll focus on one that we were asked about, one that I was asked to present on at a meeting, and I didn't really understand what it meant: provider-initiated cancellation bump rate, right? A lot of words there. But, you know, you can dial down into that, and you get even more extraneous data, which could be helpful, but again is a little bit overwhelming. I'll tell you, basically, it's when a doctor is going to a conference, doesn't realize it in advance, and says, okay, cancel all my patients next week, right? Obviously, that's not good care.
It's something that the health system wants to reduce, and then we can break it down within our divisions. Clearly, there are going to be some divisions that have direct patient care and clinics and others that don't. Another one: everyone loves to talk about our turnaround time, and I think it's because it's easy to measure. I don't think it's a good metric to say how we're delivering care, but my general story with that is, if you go to a fancy steakhouse in Chicago, you're willing to wait an hour for your steak, but are you willing to wait two or three days for that steak? Probably not. Your hunger will have subsided, and you will go elsewhere. But if you go to McDonald's and you wait an hour for a cheeseburger, you're going to be a little upset. There are different levels of quality. So imaging turnaround times don't really capture that to the degree that we want, but it's something that we can measure and we can work on. You can also see trendline data. We have a daily huddle at the health system level that has a bunch of roll-up metrics. This is nice, at least in terms of situational awareness, and it's a little bit cleaner. We also focus on something called our balanced scorecard, a bit of a business school buzzword here. There are three categories that we've been asked to generally comment on from a department of radiology, so I'll focus, as a quality and safety person, on what we call delivering the safest care possible. So we have three categories, a little bit cleaner here: patient falls with injuries, that makes sense, I can understand that; CLABSI rates, central line infections, I can understand that; and then the specter that's always in the room, report turnaround time. So when we talk about a metric like patient falls with injury, we try to show it graphed out, but what we really want to know is what is that information telling us, right? So my job as a person representing the data is to bring life to that information and to make sure that people aren't too lost in the graph and the slide but understand what we're trying to say. So not only is it telling us, okay, well, a couple of patients fell. Were they identified as a fall risk? No. And what can we do to improve upon that, and what have we done to improve upon that? I'm not going to read the bullet points to you, but that's part of what the discussion is. So CLABSI, another common one. It's very hard to get things to zero, but when you're in quality and safety talks, the classic question is, how many patients is it acceptable to have die from an infection that was acquired in a hospital? How many patients is it acceptable to have a hip fracture when they otherwise were not identified as a fall risk? The answer is always zero, and it should be zero, but we're humans and we're fallible, so we have to constantly focus on that. So here are some of the techniques that we've applied, even though these rates are very low, to improve upon that. So, report turnaround time. This one falls a lot on my own division. Everybody who's an abdominal or body imager knows the surges in volume that we've experienced, and we look at report turnaround time, final signature within 24 hours, as a metric that we focus on. It's nice to have data from prior years to see, is this a trend? Again, getting towards situational awareness, is this something that we need to get out in front of? And here are some of the techniques that we've used to try and help that.
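A minimal sketch of the "final signature within 24 hours" metric mentioned above, trended by month so the current year can be compared against prior years; the column names exam_end and final_signed are assumptions, not the institution's actual fields.

```python
import pandas as pd

# Hypothetical report extract; exam_end and final_signed are assumed column names.
rpts = pd.read_csv("body_ct_reports.csv", parse_dates=["exam_end", "final_signed"])

# Flag reports with a final signature within 24 hours of exam completion.
rpts["within_24h"] = (rpts["final_signed"] - rpts["exam_end"]) <= pd.Timedelta(hours=24)
rpts["month"] = rpts["exam_end"].dt.to_period("M")

# Percent meeting the target, by month, for year-over-year trend comparison.
trend = rpts.groupby("month")["within_24h"].mean().mul(100).round(1)
print(trend.tail(12))
```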
One thing that has been very valuable is to partner with a company that can help build specific dashboards for our department and provide more data analytics. These are also a little busy, and I'll work with our vendor on that, but it uses Tableau, and it allows us to bring in the information that we need. So RVU productivity, as Mike discussed earlier, is also kind of a confounded metric, but it is something that is important. I show my own information here, so I don't embarrass anybody. And seeing how you're comparing to the 60th percentile, which is what our health system and our dean focus on, is important. What's nice about this dashboard is that each one of our providers can log in individually to see exactly how they're doing, so they don't have to wait for their annual review. They can see in real time each month: how am I doing? Oh, I took a bunch of vacation last month. I went to a conference last month. But at a division chief level, I can pull that up one more level, and I can see how the entire division is doing. And then at a vice chair level, you can see how the department is doing. So you can expand or contract the data to be at the level that you want, and that's what an effective dashboard should do. Here again are report turnaround times across the department, and you can see from a leadership standpoint where we can see outliers, and then we can explore why it is that those individuals are outliers. And then the key, and this is more of a leadership point than a dashboard point, is not to just circle the outlier and say, okay, that's a problem, but to really inquire why that individual might be an outlier and whether there could be something that better explains it. And then finally, critical results reporting is a key metric that we are judged on. You'll see not only the key metrics but how we benchmark against the national average and what our own personal goals are. So I hope one thing that you've seen from these slides is, okay, that's a whole lot of information. How do I go about digesting that? How do I go about making sure this is specific, actionable, and predictive? Melissa touched on this a little bit, but what are the future directions? There's a lot of talk about large language models here at this conference, and understandably there's a lot of excitement across the country on how they can be applied in a variety of ways. I don't subscribe to the latest version of ChatGPT, but one of my residents does. So last week I said, hey, have ChatGPT generate an image of what an ideal dashboard would look like in radiology. And this is what it came up with. It says here's a sample dashboard designed for a radiology department. It can look at things like wait times, a pie chart of scans by body part, key performance indicators. It got all the buzzwords correct. And I think this is good, and I think this is going to be a future direction where companies employ these large language models to get real-time answers in a conversational fashion. I go back to that original question I posed that Jeff Bezos could ask: how many Xboxes did we sell in eastern Kansas, or the city of Topeka, or wherever? That kind of conversational information, rather than just being bombarded with a huge amount of information, is going to be the way that we use dashboards to raise awareness and make better real-time decisions. But also, that weekend we had an interesting case of a ruptured gallbladder. I'm an abdominal imager.
And I said to my resident Sam, I said, Sam, you know that the new version of ChatGPT can actually generate images. He said, oh really? Okay. Well, see if it will generate an image of a ruptured gallbladder on CT scan. So apparently when you put that prompt into this particular large language model, it returns it can't create medical images for you. I'm like, okay, just give me a schematic of what a ruptured gallbladder. I'd love to not have to draw this up myself and use it in a lecture or a short conference next week. And this is what it drew. So this is not a gallbladder. This is a stomach. And I love that it's trying so hard to try and come up with terms that sound very much like a physician or a doctor would use. The anatomist, a goblonic fissure. The intralumonic fruit, which is nice. The gasophromic tiscue. And you've always got to be careful about the pirifording fluid. Because when the pirifording fluid is excessive, it can really lead to bad consequences. So the reason why I show this tongue-in-cheek is if you ask these large language models questions and you don't have subject matter expertise in what they should be returning or likely returning, you could get taken for a bit of a ride. So buyer beware with that. So, yes, when dashboards are clean and clear, they can improve real-time decision-making. You've got to make sure that you're asking the question either at the individual, the department, or the health system levels. And the emerging tools that we are having, like the different types of GPT and all that, other kind of large language model, can really impact how we can use dashboards in the future. And that's all I have. Thank you.
Video Summary
The video discusses lectures on dashboards in radiology, touching on key aspects and challenges of integrating dashboards in healthcare systems. The focus is on using dashboards to track metrics such as report turnaround times and RVU productivity, emphasizing their significance in operational, educational, and equity contexts. Highlighting experiences from institutions like Emory and the University of Virginia, the talks explore data integration, effective visualization, and predictive analytics. Challenges include the cognitive burden of complex dashboards and the necessity for them to facilitate real-time awareness and decision-making. Discussions also extend to the role of dashboards in addressing health equity by integrating specific measures to monitor and improve care for vulnerable groups. The presenters emphasize that dashboards, when well-designed, should provide actionable insights into operations, education, and patient equity, thereby aiding health systems in making informed, real-time decisions. The potential future role of AI and large language models in enhancing dashboard effectiveness is addressed, albeit with caution regarding their current limitations in understanding complex medical queries.
Keywords
radiology dashboards
healthcare integration
report turnaround times
RVU productivity
data visualization
predictive analytics
health equity
real-time decision-making
AI in healthcare