QI: Satisfaction All Around! Providing a Better Ex ...
T1-CNPM14-2022
Video Transcription
Let's go ahead and get started. We're going to talk about patient satisfaction and what to target. Our objectives for this session are to understand potential practice implications of patient satisfaction, to get a little exposure to some of the methods and approaches we can use to identify predictors of patient satisfaction, and to discuss a few common issues and themes that have been found to be relevant to patient satisfaction in radiology.

All right, so why is patient satisfaction important? Well, we all care about patients, so we put patients first. There is good emphasis now on patient-centered care and getting patients more engaged in their care, and that can improve outcomes. Sometimes a practitioner might want to attract or retain patients. And whether you like it or not, satisfaction has been used as a measure of healthcare quality, and it can affect reimbursements. Certainly there are criticisms. For example, people have raised the question of whether patient satisfaction correlates with high-quality care. And there's always the concern that emphasizing patient satisfaction can incentivize undesired practice patterns. We've seen, for example, correlations between heavy opioid use and high patient satisfaction, which led CMS to take out the question on pain management. Similarly, an emphasis on patient satisfaction could disincentivize things that might be good for the patient but that make them uncomfortable or less satisfied with a visit. So when we're talking about what to target, it's important to keep in mind both the pros and the cons and the potential criticisms.

All right, so we can look at how consumer satisfaction is measured in general and take some examples from the business and marketing literature. You've probably encountered customer satisfaction scores. When I took an Uber here yesterday, ten seconds after I left the car I got a pop-up saying, okay, rate the service on a scale of one to five. And there is some evidence that companies track this because it has some correlation with business metrics like revenue. The conventional belief is that about a 1% change in this CSAT score, the percent of positive, satisfied customers, can lead to somewhere around a 4% increase in revenue. And some people swear by the net promoter score. There, the question asks the consumer how likely they are to recommend the company, service, or good. You take the portion of promoters, the high-rating consumers, and the portion of detractors, the low-rating ones, ignore the neutral passives in between, and the difference between those two percentages gives you the net promoter score. People in the business literature feel this correlates a little with how a company might improve market share. Healthcare is different, though; you have network effects, and it doesn't work as well. One meta-analysis concluded that you can look at the NPS in healthcare, and it is useful when combined with other metrics, but by itself it doesn't work as well. Still, if you take these examples from the business literature and do something similar in a healthcare system, you can ask the patient what they think after each encounter, or after a random selection of encounters, with a series of questions.
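To make the two business metrics just described concrete, here is a minimal sketch of how they are typically computed. The rating lists are made up, and the cutoffs follow the common conventions described above (CSAT counts 4s and 5s on a 1-to-5 scale as satisfied; NPS treats 9-10 as promoters and 0-6 as detractors on a 0-to-10 scale):

```python
# Minimal sketch: computing CSAT and NPS from raw survey responses.
# Ratings below are invented; cutoffs follow the common conventions.

def csat(ratings_1_to_5):
    """Percent of respondents rating 4 or 5 on a 1-5 scale."""
    satisfied = sum(1 for r in ratings_1_to_5 if r >= 4)
    return 100.0 * satisfied / len(ratings_1_to_5)

def nps(ratings_0_to_10):
    """Percent promoters (9-10) minus percent detractors (0-6)."""
    n = len(ratings_0_to_10)
    promoters = sum(1 for r in ratings_0_to_10 if r >= 9)
    detractors = sum(1 for r in ratings_0_to_10 if r <= 6)
    return 100.0 * (promoters - detractors) / n

print(csat([5, 4, 3, 5, 2]))      # 60.0
print(nps([10, 9, 8, 6, 3, 9]))   # 50.0 - 33.3 = 16.7
```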
And we have to talk about HCAHPS, since it is one of the first and more standardized instruments to gauge patient satisfaction. It's meant for hospital inpatients, but because there's been a lot of work on this survey tool to standardize it and make it transparent, publishing the results and making them publicly available for the past 15 years, it allows for some care transparency, and a patient can see which hospitals rank well on that metric. There are other CAHPS tools as well, some more focused on outpatients or ambulatory surgery, and there are various certified survey vendors that can assist with data collection and analysis and provide surveys for institutions. Some departments make their own custom surveys. Most of the data and literature use surveys; there are some studies that look specifically at complaints or specific feedback, or do qualitative studies of focus groups, but most of this talk will discuss surveys. Here's an example of an HCAHPS survey, which is all publicly available online. It asks a series of questions, about 29 in this survey. They usually reference a specific hospital encounter and ask the patient to rate various issues, usually on an ordinal Likert-type scale. And there are usually a few overall rating questions: what is the overall rating you assign to this hospital experience? And they'll ask something of a recommend question: would you recommend this hospital to your friends or family?

When you get to the question of what to target, it can be really difficult, and the short answer is: it depends. There are papers that report quality improvement findings, but a lot of quality improvement and local decisions have to be based on local data. You have to make sure the metrics are valid, understand the data, and understand the drivers of patient satisfaction. Sometimes you need to incorporate population-specific, patient-specific, or practice-specific considerations that might differ from the practices or populations covered in a benchmark. And you have to decide where patient satisfaction fits within the broader goals of the institution or practice.

In deciding what to target, there are often a lot of options. An operations engineering professor once shared an insight from his experience in management consulting: companies sometimes have a very hard time deciding which metric to focus on. Even if you had one value or metric you wanted to optimize, maximize, or minimize, it can be very difficult to pin down. A company might want to maximize profits, but it has to decide: do I maximize next quarter's profit, next year's profit, cumulative ten-year profit? There are so many ways to phrase the question. Or you might just minimize the risk of a catastrophic loss. Or you might not even care about profits; you might be in a growth phase where you focus on growing the customer base, on new subscribers or daily active users. So deciding what to target is a complex question. If you do a QI initiative and you want to target satisfaction, you have to decide: okay, what is the time horizon?
What do you want to focus on? Do you want to look at overall satisfaction scores, or do you have specific targeted domains you want to improve? You also have to consider the constraints. QI projects don't have an unlimited budget, and you want to improve patient satisfaction without compromising patient care or having a huge impact on revenue or earnings.

Once you have a metric you want to study, you can analyze performance, and there are several ways to do that. One of the most straightforward is to compare against some benchmark indicator. The advantage is that it's simple and allows a quick comparison of how well you're doing. But it has disadvantages: the ideal benchmark might vary with practice or population parameters. Say mortality might be a good quality benchmark to consider. If you look at the overall US population, the one-year mortality is about 1%, and you could say, okay, my practice's mortality rate is better than that, so I'm doing well. But a geriatric practice will naturally compare worse against that benchmark than a pediatric practice. So it's hard to say whether comparing to such a benchmark is really an effective way to decide how well we're doing, and it might not offer useful data on what to target.

So oftentimes I think it's helpful to try to understand the data: build a quantitative model, try to identify the explanatory variables involved in any satisfaction metric, and look for confounders as well, for example age in the example we just discussed. With a quantitative model, you can quantify the impact of various driving forces. There are various quantitative methods to do this, and there's a lot of literature; sometimes you might want to include some type of probability simulation to see how your metric improves, and there are linear and nonlinear models. I just want to have a brief methods slide here, because in a minute I'll describe the meat of the talk, which is: what are the factors that affect patient satisfaction? Some of this might be pretty basic for those of us who do number crunching on databases, but sometimes I have medical students or junior residents come in, and I want to explain what our purpose is in doing this. When we collect data, we collect some outcome variable of interest, in this case a satisfaction metric, and we also collect other data that might help explain it, the various covariates. You can think of these as factors or columns in your data set, and you can do some type of linear regression. Or you can have a binary outcome, satisfied versus dissatisfied, and do a logistic regression; it's basically the same idea, except the outcome variable is binarized. So when I explain this to my medical students or residents at the start of a project, I tell them it's helpful because we can look at the coefficients and say, okay, this factor is important, or it has this impact on the outcome. In a nutshell, those are the approaches we use in analyzing our satisfaction data set to answer: what's driving the data, and what can we focus on or improve?
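As a concrete illustration of the modeling approach just described, here is a minimal sketch assuming the survey responses sit in a flat table with one row per response; the file name and column names (age, wait_minutes, modality, overall_rating, dissatisfied) are hypothetical stand-ins:

```python
# Minimal sketch of the regression approach described above.
# Assumes one row per survey response; file/column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("satisfaction_responses.csv")  # hypothetical data file

# Continuous outcome (e.g., an overall rating): ordinary linear regression.
linear = smf.ols("overall_rating ~ age + wait_minutes + C(modality)",
                 data=df).fit()
print(linear.summary())  # coefficients show each factor's impact

# Binary outcome (satisfied vs. dissatisfied): logistic regression,
# the same idea with a binarized outcome variable.
logit = smf.logit("dissatisfied ~ age + wait_minutes + C(modality)",
                  data=df).fit()
print(logit.summary())
```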
And a tip here: make sure you don't have too many columns to look at. We run into this issue a lot with machine learning and deep learning in general, where you have millions of parameters and you can easily overfit the data and not get useful information. If that happens, you can do things like principal component analysis to reduce the number of factors to something manageable and easy to understand. And that's basically what I tell medical students and residents as they start some of their QI projects here.
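A minimal sketch of that PCA tip, again with hypothetical file and column names and an illustrative choice of component count:

```python
# Sketch of the dimensionality-reduction tip above: compress many survey
# columns into a few components before modeling. Names are hypothetical.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("satisfaction_responses.csv")
X_raw = df.select_dtypes("number").drop(columns=["overall_rating"])
X = StandardScaler().fit_transform(X_raw)  # put columns on a common scale

pca = PCA(n_components=5)         # keep a manageable number of factors
components = pca.fit_transform(X)
print(pca.explained_variance_ratio_)  # variance captured by each factor
```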
All right, so the meat of the talk is in the next few slides, where I'll discuss some findings from the literature as well as findings from our own experience and our own QI projects and papers on patient satisfaction. One of the things we did is take about the last nine or ten years of our patient satisfaction data and do a deep dive into it. We wanted to see what affects patient satisfaction, what we can do to model it, and the quantitative relationships between the various things that can influence it.

One of the things we found is that modality mix certainly affects patient satisfaction. In this graph of dissatisfaction rates by modality, you can see that mammography tends to have about half the dissatisfaction rate of MRI, radiography, or CT. That reflects underlying challenges: some modalities are simply more difficult for the patient. We had two whole special issues on the topic of MRI just discussing ways to improve the patient experience in MRI. Some modalities might just be intrusive and uncomfortable for patients, so we have to keep that in mind. And it might reveal that we want to focus a little more on the experiences that tend to lead to dissatisfaction, with more targeted initiatives for the modalities that tend to have more of it.

We found that age affects patient satisfaction as well. The dissatisfaction rate can be as high as 17% in young working-age patients. In the 20-to-40 age group, dissatisfaction rates are significantly higher than in, for example, the 70-to-90 age group, and the difference is still significant even compared to the 40-to-60 demographic. It could be due to a number of issues: generational issues, or the way we approach discussions with the patient. This is also reflected outside radiology, for example in hospital care in general. Even when you stratify by health status, separating those with good health status from those with poor health status, you still see this age dependence: satisfaction tends to be higher among older individuals and lower in younger adults. So how does this help in deciding what to target? Well, you don't want to exclude younger patients from the practice; that doesn't really help. But it does raise the question: are there certain communication strategies or other things we can do specifically to improve dissatisfaction rates in this demographic?

We also did various regression analyses to identify which questions tend to be most impactful for radiology patient satisfaction. We took a similar data set and looked at two outcome variables: the overall rating of care and the likelihood of recommending. These are the general, overall assessment-type questions with which the patient concludes how they liked their experience. We can look at each question, see how the questions rank, and quantify how much each contributes to these overall metrics. And we noticed that the top questions, the ones with the highest odds ratios, tend to be the ones related to the response to patient concerns and complaints and the sensitivity to patient needs. Those are the top two questions in driving whether the patient liked their experience and whether they are likely to recommend the practice to others.
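A sketch of that question-ranking analysis: fit a logistic regression of a top-box outcome on the individual question scores and sort by odds ratio. The question and outcome column names are hypothetical placeholders, not the actual survey items:

```python
# Sketch: rank survey questions by odds ratio against an overall outcome.
# File and column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey_questions.csv")       # hypothetical data file
question_cols = ["q_response_to_concerns", "q_sensitivity_to_needs",
                 "q_wait_time", "q_facility_comfort"]
X = sm.add_constant(df[question_cols])
y = df["would_recommend"]                      # 1 = top-box response

fit = sm.Logit(y, X).fit()
odds_ratios = np.exp(fit.params).drop("const")
print(odds_ratios.sort_values(ascending=False))  # most impactful first
```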
These findings are somewhat similar to what is seen in the medical literature in general. One particular study looked at various points of contact, ranging from how soon phones are answered to how easy it is to get an appointment, and tried to correlate which of these measures most strongly correlate with retention loyalty, how likely patients are to stick with the practice, and advocacy loyalty, how likely they are to recommend it. Not unexpectedly, the overall satisfaction score correlates really well with patient loyalty as a summary metric. But patients also tend to care about how thorough the exam is and whether the provider answers their questions and shows appropriate concern. Those are in the top four points of contact that correlate with how likely a patient is to stick with a practice, in medicine in general, not just in radiology.

Several studies have looked at empathy and communication and how they impact patient satisfaction. In the radiology-specific patient satisfaction literature, some studies found that when patients rate radiology experiences, they sometimes weight the factors related to empathy and communication more strongly than those related to clinical competency metrics like safety and quality, sometimes at around an 80-to-20 ratio. We did a study at our own institution where we implemented a staff training regimen: selected outpatient MRI technologists and other staff underwent a relatively rigorous course in advanced communication skills and in getting patients to be more relaxed. We found that in addition to improving business metrics like equipment utilization, it also improved patient satisfaction in the outpatient MRI setting. In that study, outpatient sites were randomized to receive the staff education regimen or not, and the sites randomized to receive the training improved their satisfaction scores, while the ones that did not showed a slow decline in both the scores and the percentile rankings.

So when you decide what to target, again, it depends. Sometimes it is worthwhile to look into improving staff communication, but it can be expensive. Is it worthwhile? You have to weigh the advantages and disadvantages. One of the things we found is that volume growth, which you might think correlates negatively with patient satisfaction, can in certain situations coexist with increasing patient satisfaction. Among our outpatient sites that were growing in volume, some also increased their satisfaction scores, so it's not necessarily a negative relationship.

Wait times also tend to impact patient satisfaction in radiology. Radiology-specific studies have found that shorter wait times correlate with higher patient satisfaction in outpatient MRI. When you look at free-text complaints, wait times account for close to 12 to 22% of them, so it does seem to be a major issue. When we talk about what to target, an easy way to target wait times is to just have fewer patients, but good luck getting an administrator to sign off on that. You have to strike a balance. But what we found is that it doesn't have to be that way: when we looked at outpatient sites that increased patient volume, their perceived wait times were sometimes, on average, five minutes less than those of sites with decreasing volume over the same period. So higher volumes might not necessarily increase wait times or perceived wait times, and there are a lot of ways to address this: you can keep the patient engaged, or provide educational materials during the wait to reduce the perceived wait time.

I'd like to conclude by saying that we've covered a lot of ideas and topics in the literature suggesting that patient satisfaction has real practice implications. We talked about techniques for quantitative modeling, regression analysis, and deep data dives to understand what could impact patient satisfaction and assist decision making. And we've looked at results in the literature showing that wait times, staff communication, and empathy are patient-centered items that can improve patient satisfaction in radiology. And with that, I will stop. Thank you for your attention.

Today I'm going to talk about what referring providers want from radiology, which was a pretty daunting topic to be assigned, so I thought the best approach was to tell you the lessons I've learned from the five years that I've administered a referring provider survey at my institution. So why did we administer a referring provider survey at Stanford? Well, let's go back to 2018, pre-pandemic. It was February, I was just starting in my role in performance improvement, and I was told: hey, in three months, we need to release a referring provider survey to clinicians. And I was like, why, what's going on? Apparently, unbeknownst to me, for the last few years we had actually had a requirement for funds flow that we get some metrics from a survey of physicians.
A survey had been created, but it was hard to get the data; it ran through an application the physicians didn't have easy access to, and I was told to find a way to make it easier for us to control the data flow coming in. So in May 2018, I ran my first referring provider survey. Over the last five years, we've continued to give the survey; the number of our referring providers has grown over the years, and the percentage who complete the survey has been pretty stable, even during the pandemic. From administering the survey, we've learned a few lessons, for those of you thinking of doing the same thing, about how to even offer a survey to referring providers.

Referring providers are not quiet about their opinions about the survey, which they will even offer in its free text. One thing we got, and this even came through email, was: don't send referring providers reminders if they've already finished the survey. The first time I sent this out, I had a common link, I sent it to a listserv, and then we sent reminders on maybe a weekly basis over a four-week period. This was infuriating to some providers. They would say: I already did the survey, why are you bothering me? There's an easy way to fix that, which is to have a database of providers; most survey mechanisms now allow you to create a unique link that you send to each provider, so you only send reminders to people who have not yet completed the survey (sketched below). Having that database also lets you pull demographic information from your medical staff office, so there are a lot of questions you don't need to ask in the survey because you already know those details about the person responding.

The other feedback we got was: this survey is too long. This particularly happened after 2019; you can see how long the median time to completion was in 2019, and I'll tell you soon why. Ideally, you want the survey to be under five minutes. How do you know how long it takes? I can tell you that in 2019, when I tested the survey, it took me less than five minutes to pretend to fill it out, but that's because I came up with the questions. It turns out the best way to figure out how long it takes is to have people who didn't write the questions test it, so I asked the Performance Improvement Committee at Stanford to test the survey as if they were providers and give me feedback on how long it took them to complete.
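As a minimal sketch of that unique-link reminder fix, here's the idea in code; the addresses, URL scheme, and send_email callback are all hypothetical stand-ins for whatever your survey tool actually provides:

```python
# Sketch: one unique survey link per provider, so reminders go only to
# those who haven't completed it. All names and URLs are hypothetical.
import uuid

providers = ["jdoe@example.org", "asmith@example.org"]

# One-time setup: mint a unique survey link per provider.
links = {email: f"https://survey.example.org/r/{uuid.uuid4().hex}"
         for email in providers}

completed = {"asmith@example.org"}  # filled in as responses come back

def send_reminders(send_email):
    """Remind only providers whose unique link has no completed response."""
    for email, link in links.items():
        if email not in completed:
            send_email(email, f"Reminder: please complete the survey: {link}")

# Demo with a stand-in mailer that just prints.
send_reminders(lambda to, body: print(f"-> {to}: {body}"))
```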
So what happened in 2019? Why did our survey take so long? I did something that I think you can only afford to do maybe once every five or ten years at your institution: I added questions to test a hypothesis I had, questions one would not necessarily need to ask of providers year on year. Our hypothesis was that depending on the practice type of the provider, providers had different needs from radiology.

For example, our hypothesis was that primary care providers would have a broad range of needs in terms of the timeliness of getting their results, that they might not need as deep specialization, so it might be okay to have a less specialized radiologist provide those reads, that they may not image as frequently, and that the breadth of imaging they order is wide, whereas specialists want specialists and are more focused. And for emergency care, of course, time is of the essence, but they may not care as much about how specialized the interpreting radiologists are. So we wanted to see: were these assumptions true? We asked some questions in our 2019 survey to answer them.

What we found, when we looked at the results batched by the declared roles of these providers, was that there is a significant difference in the timeframe in which exams need to be performed. Naturally, those in an emergency setting need their exams performed much more quickly than primary care. But primary care also felt a need for exams to be performed more quickly than the specialists; a lot of the specialists had scheduled, less urgent needs for their imaging. We also found a significant difference in the timeframe in which exam reports need to be generated after the images are acquired. Again, the emergency setting, not surprisingly, needs those reports generated quickly, more quickly than primary care and more quickly than specialists, with not much difference between primary care and specialists in that regard. Also, probably not surprising: specialty clinics require a higher level of radiologist specialization. They are already quite specialized, they want a specialist-level interpretation of their imaging, and they care significantly more about that than primary care or emergency providers.

We also found that in terms of the volume of exams ordered per day, emergency providers ordered more exams per day than primary care providers or specialists. But this is where specialists cannot be lumped into one big bucket. When we batched our specialists, we found certain groups that ordered more exams per day than other specialties; in particular, orthopedic surgery, medical oncology, and neurosurgery as a group ordered more exams per day than the bucket of other specialists. We also found that primary care and emergency providers order a greater variety of exam types, and from a broader variety of modalities. That is probably why, when people complain that they have a hard time finding an order, you tend to get those complaints more from primary care providers and emergency physicians, if they're not using a special workflow that pre-populates the appropriate imaging order.

In terms of exam interpretation: I'm sure you've all run into a specialist who says, I just need the images, I don't need you, let's just make sure I get the images on time. So we wanted to see how comfortable specialists are with just looking at the images themselves and acting on their own. We found that specialists and emergency providers were more comfortable interpreting their own exams. None of the groups trended toward saying they were very comfortable, but there was significantly more comfort in interpreting their own exams than among primary care providers. Primary care providers very much lean on us radiologists to provide them with those interpretations.
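The talk doesn't name the specific test used, but as an illustration, a nonparametric comparison like Kruskal-Wallis is one common way to check whether ordinal survey responses differ across provider roles; the response values below are invented:

```python
# Illustrative group comparison for ordinal survey responses by role.
# Values are invented; this is not the survey's actual data or method.
from scipy.stats import kruskal

# Hypothetical coded responses (higher = can tolerate a longer turnaround).
emergency  = [1, 1, 2, 1, 2, 1, 2]
primary    = [2, 3, 2, 3, 3, 2, 3]
specialist = [4, 3, 4, 5, 4, 4, 3]

stat, p = kruskal(emergency, primary, specialist)
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.4f}")  # small p: roles differ
```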
One thing that did not differ between any of these provider groups was the value of direct interaction with radiologists. Even the specialists and emergency physicians who felt they could interpret their own exams thought it was very important to be able to interact directly with radiologists.

So what was the take-home message from this little research study, from those extra five minutes of questions for our providers that year? We found that one size definitely does not fit all. Depending on your provider mix, and we all have a different provider mix, you may need to tailor the degree of specialization to accommodate provider needs, and you may need to tailor your imaging and reporting turnaround time metrics as well. I know that when we build our worklists, we build them based on specialty and exam type; maybe stat plays some role, and we definitely look at the source: ED, inpatient versus outpatient. But there is other information in the HL7 data that may be important, namely which clinic the patient came from; that may call for a different type of report or a different tempo of completing it.

So that was one time point. I think it's great fun to be able to ask these questions; I do not recommend doing something like that repeatedly with your providers. What else have we learned over the years from the surveys we have since launched? One is that we've made the survey more and more efficient year by year, narrowing the questions down to the ones we felt were actionable. We've looked at the surveys over the years, and for some questions we realized: you know what, there's not much we're doing with this information, so why are we even asking providers to take an extra five or ten seconds of their life to answer it? For the valuable questions, we try to keep them the same year on year so we can track progress over time. And we offer at least four areas for free text, because we find that free text is where people really reveal what matters most to them.

Here's a brief summary of how our survey looked in 2022. We know the specialty of the providers from the medical staff office. But because some providers practice in different locations, we don't know where they practice the most, or which imaging location within our environment they tend to send patients to the most, so we ask those as questions, and we find them valuable. We then ask for their opinion, on a five-point Likert scale, of overall radiologist services. These are services we think are universal across our department's radiologists, not division-specific: speed of access to results, critical results communication, the process for secondary interpretation, the courtesy of residents and fellows, and the knowledge and skills of our residents and fellows. We've been able to track those metrics over the years, and we found that we actually do pretty well on all of them, with one consistent area for improvement: our process for secondary interpretation.
So that became a major improvement project that we completed over the past year, and we're hoping that number bumps up in the 2023 survey. We also ask division-specific questions, in areas where we thought a division had more influence than the department's radiologists as a whole: the courtesy of the radiologists within the division, the knowledge and skills of the division's radiologists, the quality of the division's reports, the ease of contacting a knowledgeable radiologist, the technical quality of the studies, and the division's multidisciplinary conference participation. We thought those mapped better to divisions. By having providers pick the division they use most frequently, I could separate out the data and provide it to our division chiefs. In the past we would ask the same questions at both the department and division levels, and we found early on that there really wasn't any difference, so we stopped asking them at the department level and stuck to divisions. We also ask what their overall satisfaction is with the division and allow free-text comments about it. We also let them name the division they interact with second most frequently, but we don't make them answer the full set of questions for it; we just ask their overall satisfaction and request free text that we can then pass to that division.

What we've found helpful is to map out, year on year, which divisions are most frequently interacted with, because these divisions have a strong influence on how referring providers view our department. Body imaging wins by far and away, year on year, both as the division interacted with most frequently and second most frequently. Not surprising: they cover a broad range of exams. But you can see how things have shaken out among the other specialties over the years. We previously included interventional radiology among these divisions, because it is its own division within our department, but then we realized the questions made no sense for IR. So we created a separate section just for interventional radiology, and broke it out into adult, pediatric, and neuro interventional radiology, because these are three separate interventional practices. We offered questions specifically about IR and, again, the ability to leave free-text comments.

And then there's the most important question from a funds flow point of view, and a really important part of our survey: overall satisfaction and change from the prior year. We ask, on a five-point scale, how satisfied they are overall, and also how their experience has changed from the previous year. Previously, overall satisfaction was how our funds flow was targeted, but we pushed for change from prior year, because our argument was that we want to show we are improving year on year. It's challenging to do, but that's what we want to show. What we have found is that, with the exception of 2017, when we actually had to stop the survey early because a very bad IT crisis that year affected our ability to serve our referring providers, we've had pretty stable performance in overall satisfaction year on year. We've also found that, for the years we've tracked the metric, people have said we are improving year on year. We did some work with our statistician and found that if our score was 3.2 or higher, we are showing improvement year on year with statistical significance, because a straight 3.0 means no change. And so the providers are indicating that we are improving every year.
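A minimal sketch of the kind of check that statistician likely ran: a one-sample test of the "change from prior year" responses against the no-change midpoint of 3.0. The responses below are invented, and the exact 3.2 cutoff would depend on the real sample size and variance:

```python
# Sketch: test whether mean "change from prior year" responses (1-5 scale,
# 3.0 = no change) show significant improvement. Data are invented; the
# exact 3.2 threshold depends on actual sample size and variance.
from scipy.stats import ttest_1samp

responses = [3, 4, 3, 3, 4, 5, 3, 4, 3, 4, 3, 3, 4, 4, 3]

t, p = ttest_1samp(responses, popmean=3.0, alternative="greater")
mean = sum(responses) / len(responses)
print(f"mean = {mean:.2f}, t = {t:.2f}, p = {p:.4f}")  # p < 0.05: improving
```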
The most important, but probably the most difficult, information to digest, because you can't just run graphs off of it in Excel, is the free text, where we ask two questions. One is: what are you most satisfied with about Stanford Radiology? It was gratifying to see, because this wasn't always the case, that communication was the biggest source of satisfaction, followed by the expertise of the radiologists (not surprising to us), the quality of the reports, and reporting turnaround time. That wasn't always the case with us either, and it shows a lot of the work we've done over the last five years. Here are some of the comments we've gotten about what people are most satisfied with: they love working with the fellows, residents, and attendings, their knowledge, their approachability. These are comments that you feed forward to the people doing the work, because it's amazing, right? This is what people see and love when they work with our department.

And what would they most like to see improved about Stanford Radiology? Scheduling and access time. It's a huge problem for us, and probably a huge problem for you; we probably all share the same shortage of technologists, and if you don't, please send some our way. There are also still some issues with communication, and IR access is another issue. It was great to see that number four was "nothing": some people said they didn't want to change anything about us. We're still going to try to improve, but that was just an awesome comment to get.

These comments are drivers for improvement. For example, access: people want to image their patients within the desired timeframe, and we haven't been able to do that in certain modalities. They also want patients to not have to travel far to get our expert imaging. We have major projects this year to address both of those issues. Communication: a lot of this isn't necessarily "can I talk to a radiologist?" but just operations. For example, if there's a delay in the authorization process, or issues with scheduling or reaching the patient, they're saying: hey, we're not being told when there are issues upstream of the imaging being obtained or the report being generated. With some of the communication issues, it was clear those providers weren't aware of the reading room assistant program we've had available for a very long time; there were concerns about failure to communicate protocol changes, something we can definitely work on; and their preferred methods of communication weren't necessarily the ones radiologists were most familiar with.

We also found the survey valuable because there were things we thought we had solved where we found a definite gap between what we perceived as a solved issue and the providers' reality. For example, the lack of awareness of the reading room assistant phone number. It's at the very top of every report in our electronic medical record, in a different color, and we were like: how do you miss that? Turns out, that wasn't true if the providers accessed the report in their in-basket.
You actually have to separately put print groups into the in-basket; what shows up in the radiology tab doesn't necessarily show up there. Likewise with the ED view: you have to build print groups into their view. Until we got this feedback and dug deep, we had no idea that was going on. We also realized that despite our attempts to push out information, there are still providers not aware of the secondary interpretation order, which is so easy to place now in our EHR, or of the process by which we do it. That tells us we need to do more outreach. And also the authorization process: people didn't realize that some of their complaints about it arose because they instinctively ordered exams stat, which bypassed our normal workflows. So we're now offering information at the point of ordering, so people know what's going to happen if they choose stat when they didn't really mean stat.

In summary, what was really gratifying from these five years of surveys is that what people appreciate most are the people: their expertise, their approachability. And the areas for improvement are operations and process, which is great, because you can improve operations and process; getting the people right is the challenge, and so far they're telling us we're doing that. So hopefully this inspires you to do a referring provider survey too. It is a lot of work, but you get a lot of information, and it makes you grow every year. Thank you.

So I'm going to talk to you about how we've been working to improve interactions across our division. We've been doing this work for several years, so a lot of time, and a lot of work, has elapsed in these slides. The first set of interactions I'll start with is between our radiologists and frontline staff. Maybe you don't have this as an opportunity in your hospital, but it certainly turned out to be one for us. These are some verbatim comments from frontline staff: feeling blamed for a protocol being completed incorrectly; feeling like radiologists aren't held to the same standards of behavior; most radiologists never say who they are when they answer the phone, which is very frustrating (we have lots of sites of care, so we do a lot of telephone communication); not all radiologists are approachable; when you're a new hire, radiologists don't interact with you, or it's really hard to get to know them. And you can see in that one comment that people are afraid to ask for help, which is pretty concerning.

So we started this process by trying to understand: where are we as a division? We surveyed our frontline staff and our faculty. We didn't want to assume this was just a frontline issue; we wanted to understand whether our radiologists felt the same way. We asked: in your last two work days, how would you rate your interactions? The "last two work days" framing is something we added because if you had a bad interaction six months ago, it's sometimes hard to get over it; we were really trying to understand where we are today. The good news is that 96% of our radiologists said their interactions were very good or excellent, so that was great. But our frontline staff didn't feel the same way, which is maybe not surprising to some of you. Here are some comments from those initial surveys. This is a radiologist comment:
I can't remember a time when a technologist was rude or unhelpful. The most negative thing I can say is that I have sometimes gotten the impression that the technologist was unhappy with what I'd asked, but it was a tone-of-voice impression, and even then, not a strong one. One of the things I like most about working here is the high quality of the frontline staff: energetic, smart, committed to helping patients. If there's anything I'm doing that is offending them in any way, I definitely want to know so I can fix it.

Then from the frontline staff: Radiologist X is by far the worst radiologist to work with, because he/she is so condescending. She/he always has the attitude, when I work with him/her, that she/he is somehow better than me. It makes it painful to work with him/her, and I'm always dreading when she/he is on the schedule. (This is anonymized; we showed these comments at faculty meetings, and we tried to make sure our frontline staff felt that even if they put in a comment like that, there would not be any retribution.) Sometimes I'm not sure if the radiologist will be in a good mood or not, whether because of me calling or because of other distractions, and whether that bad attitude will be taken out on me. For some radiologists, I cross my fingers before calling, praying they're in a good mood. I don't call to be annoying. I call for the patient, or because, heaven forbid, I need help. And then lastly: there's the A team, the B team, and the "we don't want them on our team."

For all of you who are radiologists in the room, I'm sure you're all on the A team and people love working with you. But we had to ask: why do we have this difference? And we had to acknowledge that a power gradient exists in our culture; by that I mean that the person perceived as being disrespectful has a higher title or role within our organization. And you don't need me to tell you how that can impact not only our experience at work, but also patient safety and outcomes.

So we set out to form a team and figure out how to work on this. We invited people with all different kinds of personalities and different ways of working with each other. We had every modality represented and different types of frontline staff roles. We asked this team to come together and have candid, open, safe conversations about what we needed to fix. One of our team members came up with an acronym we adopted and called ourselves: the RESPECT team, which stands for Radiology Employees Striving for Productive and Effective Communication. The first thing we did was talk about what we want to see more of in our interactions, and less of. We had lunch, we really encouraged people to speak openly, and I won't go through the whole long list, but we had a lot of things we wanted more of and less of. What was interesting in this conversation was that we really didn't know what respect meant to our technologists. No one felt they were coming to work and being overtly rude or saying mean things; it was the way in which people were made to feel. So it was tone, body language. Our radiologists wanted fewer interruptions, which is part of the challenge: if you constantly feel interrupted, it's hard to answer the phone and be positive all the time.
So we set a goal: to increase the percentage of very good and excellent interactions from 65% to 90%. We took the first few surveys and tried to bucket the feedback; verbatim feedback is hard to work with, but we tried to get to themes. You can see there were lots of reasons for negative interactions in our department, but it really came down to three areas: we have a lot of in-person interactions, a lot of telephone interactions, and then every July a new set of trainees comes to our department, which brings its own challenges, so we needed to look at how our fellows are onboarded. So we broke into three teams, and we empowered people from those teams to go out and start doing PDSAs. I helped lead a group, along with two of our other faculty members who were co-leading the work with me.

The first intervention we looked at in the fellow group was orientation. We recognized that July 1st comes around and fellows are thrown into our department without a great opportunity to get to know everyone, especially our frontline staff. So we started a PDSA around observations: we took our fellows off the clinical schedule and had them go observe certain technologists, just to get to know them. Over time we adapted this, expanded it to more areas, and made it longer. We gave the fellows some questions to take with them: ask one thing the technologist likes to do outside of work; do they have pets? We weren't expecting the fellows to learn what the tech does or be able to do the technologist's job; we really wanted them to get to know the techs on a personal level. We've adopted this as part of our fellow orientation and have been doing it for the last several years. Here's a fellow comment from one of our initial surveys: we're often just a voice in the void to one another. It's easy to get angry or say dumb things with little to no consequence. But if it's someone you've seen in person and had a real conversation with, you may think twice.

The next intervention was how we answer the phones. And not everyone looks like her when we answer the phone. We talked about it: as a fellow, we really need you to say your first and last name, and that you're a fellow. Sometimes it felt awkward when a new attending came on and you didn't recognize the name, or they would just answer the phone as "radiology." Our techs were left wondering: are you a fellow? Who am I talking to? And that led to frustration, especially at an outpatient site where all of your interactions are over the telephone. So this is something we adopted as part of orientation for our fellows: we talk about why it's important, and we need you to say your first and last name and that you're a fellow.

Knowing that the observations really helped increase in-person interactions, we started to trial a fellow meet-and-greet when they started. This was just an opportunity for all of us to get to know our fellows, again on a personal level, and from the other side: asking our fellows where they came from, are they excited to be in Cincinnati, what questions do they have about coming to Cincinnati? We've adopted this too. Admittedly, with the pandemic we had to pause all these fun in-person things, but we've been able to bring them back.

Moving on to telephone interactions: the first thing we trialed was scripts.
We put a small script on all of our reading room phones. We quickly abandoned this because our faculty did not want to use a script; it felt really awkward: "I don't answer the phone that way." So we said okay, and really started to focus on that feeling of constant interruption. How do we get to a place where it's not a matter of seconds between interruptions, but we actually stretch it to minutes? Historically, when someone called the reading room, it rang on every single phone, and subsequently no one answered, because you never knew if it was for you. So if our reading room assistant was on the other line, the phones would just ring. So we implemented automated call distribution: all phone calls now come into a queue, our reading room assistants field those calls, and if the phone rings in your pod, it's really intended for you. This is something we adopted and continuously adapt, because the phone system is never fully ready to go; there's always something that needs to be improved, but it's definitely better than it once was.

And then lastly, in-person interactions. We got a lot of feedback that when you come into the reading room and say someone's name, this is still what you see: often I'm not acknowledged, no one says "give me a second." And so there's that feeling of: I'm irritating them, I don't know if they actually want to answer my question. So we asked people to turn around, make eye contact, and maybe smile. I think it's hard, but it really goes a long way. Even saying "give me one second, I'm finishing something up" or "can you give me five minutes?" goes a long way; when someone comes in and doesn't feel welcome, doesn't feel like you'll help answer their question, it matters. It definitely made an impact on our technologists, so we adopted this.

There was also confusion about who to call during conference time. You would call someone and they'd say: I'm not supposed to be answering that call right now, you need to call this person. So there was a lot of bouncing back and forth, and we set out assignments for who covers both our morning and noon conferences. We also trialed rating in-person interactions: this group wanted a technologist, after going into our reading room, to rate that interaction in the moment. If it wasn't excellent or very good, you can see some common reasons our techs gave for negative interactions: not being acknowledged, scowling, inappropriate tone. But we had to abandon this, because no one was filling out the paper; no one wanted that paper to be found in the QC. We did incorporate something like it into the online survey, which allowed it to be anonymous feedback but also allowed for some more targeted one-on-one coaching.

Knowing that the fellow meet-and-greet went really well, we started talking about a snack hour: once a month, have an hour where people can come, have a snack, and talk to each other. This is something we adopted; again, with the pandemic we had to pause, but we just brought it back this fiscal year. We also trialed something called the resource radiologist role. Among other responsibilities, this is the go-to person for the shift, the one the technologists will call, so you're really expected to get a lot of questions.
As part of this role for the day, you're expected to field calls; if a clinician doesn't know what to order, we're going to call and ask you for support. And this was really a positive thing. We unfortunately had to abandon it because we didn't have the staffing to sustain the role, but this past August we were able to bring it back, and it's, again, a very positive thing for our frontline staff, and I think also for our faculty. Knowing that today, on resource, I'm going to get a lot of questions means you know what to expect in your day, while the people not in that role are interrupted less. I think it's positive from both sides.

And then lastly, we got a lot of content in these surveys, so we shared a lot of feedback, and we adapted that a lot over time. This is a slide we would share at every faculty meeting. These are our first 16 surveys; we surveyed about every month, every month to six weeks, so a lot of time has elapsed in this project. This is the number of positive reactions by radiologist. Every radiologist had a unique number that only they knew, so they could see how they scored against peers and whether the positives were evenly spread. You can see there are a lot of positive interactions happening in our department. We also shared the negative interactions: not as many lines, but you can see we maybe had an outlier or two, and again, that really helped us do some targeted coaching and interventions.

Typically, I know as an end user of surveys, it's really easy to take a survey when I'm frustrated because I didn't like what happened, I didn't like my experience. But our technologists were not just giving negative feedback, and over time you can see that the bar of positivity really increased. In our first survey, we had 46 negative comments to only 12 positive. By our 16th survey, we had 39 positive comments, and you can barely see that slice: only four negative comments. And then looking at sustainability, these are surveys 21 through 26; at this point it's more than a year's worth of work, almost two years. You can see that our technologists continued to give really positive feedback. One of the ways we adapted was that I started to send the verbatim positive feedback to our faculty, and I was always struck by how almost immediately I'd get an email saying: thank you, this means a lot, I really enjoy working here, I remember this interaction, it really was a good experience for that patient.

And looking at how our comments have changed: a majority of radiologists are happy to answer questions and help us out. Dr. X should write the handbook on how doctors should interact with all coworkers. She/he makes genuine conversation with anyone she/he speaks with and is always very respectful and helpful. She/he never speaks as though she/he is above anyone in status, even though she/he is a physician. I honestly do feel like there has been a conscious shift in some of the radiologists. It is great to know we work in a facility with such high employee culture standards. Thank you, RESPECT.

So, recapping where we've been in terms of interventions: we started with behavior-based interventions, but also system-based interventions. When we ask someone to change how they behave, that's great, but is it sustainable?
So I think we really put in systems to make sure we could sustain higher-quality interactions. Looking over time, we've definitely improved. COVID didn't help, but we have been able to maintain a much higher rate of positive interactions in our department than we once had. As we got into more of a sustain phase of this work, we asked if we could look at intra-departmental, peer-to-peer interactions. We first had to understand whether there were areas of more opportunity, and a couple of areas, specifically between our faculty and ultrasound techs, looked as though they maybe needed some help. That's when we started the Respect 2.0 projects. I won't have enough time to go into all of those interventions, but we really replicated the exact same methodology and flow from the first project, and we saw improvements with all the groups that went through that second round.

And then lastly, I wanted to share how we've worked to improve interactions between patients and families and our technologists. Like we've heard today, we're all tasked with improving patient experience and understanding where we can do better. And this echoes some of our other presentations today: "worries and concerns" was the question most highly correlated with achieving that 9-10 score. So we started to engage our online family community; at this point we had about 200 families we could ask: what would an ideal encounter look like in radiology? Most people are worried when they come to radiology; they wouldn't choose to come and spend the day with us if they didn't have to. I was struck by how similar what our families want is to what we want in our own interactions. They want us to greet them, to smile and connect with them, to explain things in a way they can understand, to help the child be familiar with the equipment. They want us to talk to them during the exam, making sure they're still doing okay, and at the end of the exam, praise the child for a job well done, walk the family back to the waiting room, and talk about results. These were simple things that our families wanted.

So we tried to figure out: how do we get our staff to do this every single time? Our first intervention we called Talking Points: here are all the steps we want you to do before and after an exam. We coached our technologists, we created a how-to video, we laminated sheets in our QC, and we said: do this every time. At first, we really saw improvement, an almost immediate jump after just this small intervention. But unfortunately it wasn't sustainable, because we were asking people to remember and change behavior, and habits are hard to break. So we created a team, much like the Respect work, with every modality and different sites of care represented, and we asked: how do we get this change to stick? We considered using something called a K-card, a tool mostly used in manufacturing, I think, and we set out to see whether we could use this type of tool for patient experience. There's a lot on the card, I know you can't read it, but it's really every step we want to achieve in every patient encounter to get to the desired outcome, which is that 9-10 score, the ideal patient experience.
What's unique about a K-card is that a colleague observes you in your interaction. While you're interacting with a family, another technologist just watches to see whether you hit all these points. Once the encounter is complete, they coach you: "Hey, you did a great job, you hit everything," or "You forgot to talk about results," or "You forgot to give your name and role." It's not really about failures; it's about identifying opportunities and trying to remove those barriers (a sketch of how missed steps can be tallied across observations appears at the end of this passage). Initially, this was awkward. No one really wants to observe a peer, no one really wants to be observed, and new habits are hard to form. We heard that it was going to take way too much time, that it wouldn't actually help our scores, and that "we already do this." But we said, let's try it.

This was the first year of K-cards. You can see with the blue line that the more K-cards we completed, the more closely the orange line, our 9-10 scores, tracked upward with it. So we said, let's keep doing it, and we did a second year of K-cards, and again the two correlate really nicely. This is what our K-card looks like today; it's a little easier to read. Sometimes we'll focus on just pieces of the K-card, knowing that some of our exams are pretty long and we can't always have another technologist observe. But this is something that has most definitely improved our patient experience, and it's now simply the standard for how we treat and take care of every patient who comes into our department.

We also try to recognize people. (We did cut Dr. Coley's head off in this photo; he's too tall.) This is our ultrasound technologist, who received one of our recent quarterly patient experience awards, which are voted on by peers based on patient and family comments. We get a ton of great comments in our department, so each quarter we go through them, pull out the really outstanding ones, and have peers vote. It's really meaningful to our staff.

We celebrate Patient Experience Week for our patients, but just as much for our staff. We have staff nominations, and we give our patients and families trinkets, making sure everyone knows that we all have a role in patient experience. This year, we had every site color a piece of a really giant poster, which people were really excited about, and we brought the pieces back to our main campus to put it together. We try to show our appreciation; this was a video we put together recently. We had our highest 9-10 score from patients and families at 95.8%, and we're surveying hundreds of people, so this was really significant, especially after a very tough couple of years. We give different trinkets to say thank you to staff. We have cookies a lot; cookies seem to go over better than the pizza party, so we do cookies when we want to celebrate a high score. We've done something called gratitude games, where we encourage people throughout the month to show gratitude to colleagues: in week one, for example, send an "above and beyond," which is like an e-card to a colleague saying what you're grateful for. We focus on telling patients and families thank you, we appreciate you, we're grateful for you. A lot of dogs come to our department, and it's always interesting to see who's going to lie on the floor with a dog; it's always people I don't anticipate, but people love dogs. And we try to have a visual representation of what we're grateful for.
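To show how those K-card observations might roll up into coaching targets, here is a minimal sketch that tallies the steps most often missed across observed encounters. The step names and the observation format are assumptions for illustration; the real K-card and whatever aggregation the team used may differ.

```python
from collections import Counter

# Steps on a hypothetical K-card; a real card lists every step of the encounter.
STEPS = [
    "greet the family and smile",
    "give your name and role",
    "explain the exam in plain language",
    "check in with the patient during the exam",
    "praise the child at the end",
    "walk the family back and talk about results",
]

# Each observation records the steps the observer saw the technologist hit.
observations = [
    {"greet the family and smile", "give your name and role",
     "explain the exam in plain language", "praise the child at the end"},
    {"greet the family and smile", "check in with the patient during the exam",
     "walk the family back and talk about results"},
]

# Tally how often each step was missed; the most-missed steps become the
# coaching opportunities rather than individual failures.
missed = Counter()
for hit_steps in observations:
    for step in STEPS:
        if step not in hit_steps:
            missed[step] += 1

for step, count in missed.most_common():
    print(f"missed in {count} of {len(observations)} observations: {step}")
```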
So how has this translated to our overall scores over the last several years? You can see that the line has continuously shifted in the right direction, and we've been able to sustain a 9-10 score of around 92% in the midst of the pandemic, which has been really remarkable.

In closing, I think what I've learned through all this work is that we all want pretty much the same things, and they're simple things: we want people to listen to us, to smile, to have a positive interaction, to ask questions and explain decisions. We often assume interventions will be expensive, that we'll have to ask Epic for a big change, or that the IT group could never do that, but that's not usually what people need or want in order to have a really good experience. I'll close with this: when we started this journey, I was a little concerned about how we would use a QI project to improve culture. But if you use the tools the way you would for any other project, it can definitely shift culture in the right direction. And with that, I thank you for your attention.
Video Summary
The discussion focuses on the significance of patient satisfaction in healthcare, specifically radiology, exploring methods to measure and improve it while addressing potential issues. Patient satisfaction is linked to the quality of care and can influence outcomes and reimbursements, yet emphasizing it can encourage undesired practices, such as opioid overuse. The presentation compares healthcare satisfaction measurement to business metrics, emphasizing tools like HCAHPS for standardized patient care evaluation.

Analyzing patient satisfaction requires understanding local data, demographics, and practice-specific considerations. Factors such as age (younger patients generally report less satisfaction) and modality (mammography is rated higher than MRI) affect satisfaction levels. Communication, empathy, and reduced wait times are crucial to improving satisfaction.

The presenter emphasized that communication is key across interactions in radiology, from responding to patient needs to managing inter-departmental dynamics. Initiatives such as improved telephone interaction protocols, fellow onboarding, and engagement activities have positively influenced radiology work environments. Empathy, communication, and efficient service processes significantly enhance patient and colleague satisfaction, demonstrating that simple interpersonal improvements can effectively elevate the overall patient experience.
Keywords
patient satisfaction
healthcare
radiology
quality of care
HCAHPS
communication
empathy
wait times
service processes