QI: Organizational Learning in Radiology | Domain: ...
MSQI3317-2024
Video Transcription
Welcome to the third and final session of the Quality Improvement Symposium. It's a pleasure to be with my co-presenters, Lane Donnelly and Olga Brook. And I'm going to go ahead and kick us off. Our topic this afternoon is Organizational Learning in Radiology. My name is David Larson, and I'm coming to you from Stanford. Let's get started. So I don't know how many of you have been here at the earlier sessions, which were fantastic this morning. And it's my pleasure now to talk a little about the concept of a learning organization. Much of what I'm going to talk about is a fairly deep topic, so, like I say, we are going to hit on some deep topics relatively superficially as we go through. But I will give you some references to look at, and hopefully it will really build upon what we talked about this morning and set us up for our next speakers as well. We talked a lot this morning about the IOM report on improving diagnosis in health care. And I remember when this report came out, I was very excited to look and see what the authors' concepts were of how we are going to improve diagnosis in health care. And when I looked at it, these are the types of things I found. We need better celebration of success and absence of complacency and recognition of mistakes as opportunities to learn, belief in human potential. It's interesting. I mean, these seem to be fairly vague concepts, amorphous concepts, for a report that's about improving diagnosis in health care. So I thought it would be an interesting journey to look a little deeper. Why did the committee think that these were so important? Why are these the elements that will get us to improve our diagnostic process? So let's talk about this concept of a learning organization. And I'm going to start with an analogy that I've presented before, so you may have seen this before, so please bear with me, but I think it's an appropriate way to frame what we're talking about today. So this is a representation, a crude, simple representation, of how we work. And this basically represents different steps in a process, in any health care process, but in this case, an imaging process. So at one step, we have the technologists who acquire the images and the radiologists who interpret the images and then the referring clinicians who receive the report and then act on it. And the color of these arrows represents how they do their work, and then these arrows represent the feedback that's given in this process. And so this represents a small practice, a small environment, where you have a few people who know each other, they work closely together, and when there's a problem, they have a conversation, they bump into each other all the time, they know each other's kids, you know, there's a relationship. So this is a self-reinforcing, self-improving process. This tends to work pretty well without a whole lot of outside influence. This is now a representation of what happens when the organization starts to grow. Now, if you look at it from this person's perspective, for example, nothing has changed. You know, they're still doing things the same way they've always done things, the same with this person. But the organization has grown up around them, and this is what it's starting to look like. And the recipient now starts to see variation, inconsistency. So from the perspective of this person, they're starting to wonder what's happening, what's going on.
And so they then start to do what they've always done, and they go and give feedback. And so they go to each individual, and you often will see this in the radiology department as a good example. When I was a resident dictating overnight, I would look to see who the attending was going to be that morning, and that's how I would dictate, according to their style, right? So then each would give feedback to each person upstream, and if you happen to be the radiologist in the center of the universe, then everybody accommodates your desires, and all feels right in the world again, right? But the problem is that you haven't improved the system, you've just pushed the problem upstream. And so this system now, it works okay, but it really starts to break down. This is now a medium-sized practice, not even close to what we're really at, but it's as much as my PowerPoint skills would allow me to do. So this now represents, again, these individuals are still doing the same things, the same way they've always been doing things. But this is what has happened. The world has grown up around them. And I think that's a lot of what we're facing now. So if you're now the radiologist, or if you're the referring clinician, and you want to go and give feedback, who are you going to give feedback to? What are you going to tell them? Are you going to go tell each technologist, okay, when you're on with me, do it like this. Okay, when you're on with me, do it like this. Okay, when you're on with me, do it like this. Is every radiologist going to give that feedback to every technologist? So what ends up happening is you give up, right? In fact, there's a term for this: organizationally induced helplessness. Go look it up; it really exists. And it's this concept that there's nothing you can do about it. And if it feels like there's nothing you can do about it, it's because there is nothing you can do about it. Literally, the system cannot be changed, at least in the way it's currently designed. And so it's extremely frustrating to the individuals who are the recipients of the work, and to the individuals who are upstream, who would be happy to oblige if the people downstream, if the radiologists, would all get on the same page and be consistent in what their expectations were. And so they're frustrated, and it feels like a food fight every day when you go to work. And we wonder why we see increased burnout. Well, I would say this is a major contributor to why we see that. And then when you add a PACS to it, and it further separates us and disrupts those relationships that we've had for a while, this is a pretty frustrating environment. So I would say this is the preface for why we need a learning organization. So what is a learning organization? What do we mean by learning organization? Well, this term was coined by Peter Senge, who talked about an organization in which individuals continuously learn, grow, and improve, and the organization as a whole can learn and improve performance. So in other words, when something happens, they can make changes in the organization so that, if it was something negative, it doesn't happen again in the future. They can change and evolve over time. And Professor Senge talks about five disciplines.
In his book, The Fifth Discipline, he talks about personal mastery, mental models, shared vision, team learning, and systems thinking. So we're going to go through those quickly. So personal mastery. Each individual in the organization, he says, or at least the majority of them, needs to have a commitment to the continuous process of learning. So we need to first commit that we are lifelong learners. And this is something that can't be forced on individuals who are not receptive. It's something that people need to intrinsically have. It can be acquired through training, coaching, and mentoring, and mostly through continuous self-improvement. Most real learning at work doesn't occur from formal training. It occurs from routine work. But it also heavily depends on the culture, and it especially emanates from leadership. And so if your leaders have a culture of continuous improvement and intellectual curiosity, then that tends to help drive the culture of the organization. Or at least if your leaders don't have it, then it is really difficult to have that throughout the organization. So that's the first thing, a commitment to personal mastery. The second one is the concept of mental models. And I'm going to spend a few minutes on this concept of models, because I think it's very important, especially in this time of machine learning and artificial intelligence. So what is a model? A model is basically a hopefully accurate representation of something in the real world that helps someone achieve a goal. A mental model is what we think in our mind about how the world works. And a model is intentionally more simplistic than the thing it represents. Otherwise, it would be overwhelming, and it wouldn't really be useful. So it intentionally simplifies what you're trying to represent. George Box said, all models are wrong, but some are useful. So they're not perfect, and they're not designed to be perfect, but hopefully they're helpful in terms of understanding the world around us. So a mental model, then, is a deeply ingrained assumption, generalization, or even pictures and images that influence how we understand the world and how we take action. And mental models are a significant part of skill, a significant component of skill. So I would highly recommend, if you want to learn the modern scientific understanding of skill development, Anders Ericsson put it in a book that I think is really well illustrated for the lay person and that really talks about the science behind skill development. And a big part is mental models. So if you think about models, and especially their usefulness, there are a few levels of models: recognition, prediction, and control. So let's talk about recognition. Recognition is: for a given context, can you identify the salient elements and their relationship to each other? So here's an example. Take a look at this image. All right? What are these people holding? Multiple choice. Are they holding steaks, beer, phones, doves, or hammers? What do you think? I guess we don't have a voting, a clicking system, but I bet most of you said beer. And if you did, you were probably right, because you understand the context, you understand the body language, the position, you understand that they're at a barbecue, and what do you typically drink at a barbecue, or what do you hold? You certainly don't hold your phone in that way. So this is a mental model that we all have. We have millions of these, and we don't even know it.
And it didn't take you 10 minutes to work that through. You just knew it. This is your Aunt Minnie, right? I mean, it's really clear what that is. Okay, so here's another one. What are these people holding? Steaks, beer, phones, doves, or hammers? All right? So the answer is doves. If you said beer, I think you could get partial credit for that. If you said hammers, I think that's probably more a reflection of your cynical view of the institution of marriage. But the context helps you. You have a mental model, and immediately this fits in with your mental model as to what is likely happening. Okay, so here's another one. What types of flowers are these? Now, if you said iris, you would be right, but not completely right. So these are different types of iris. So this is a computer vision model, a machine learning model, that can look at images of different flower types and can describe those flowers and categorize those. So these mental models, now we're seeing these can be automated with machine learning. They don't always get it right. So this is comparing a parrot or guacamole. Sometimes it's hard to differentiate the two. Or a chihuahua or a muffin. Sometimes it's hard. I mean, it's hard, right? I mean, even for a human, right? But this is what we're seeing more and more. These things can be automated. Okay, so the next level of a model is prediction. So that means for a given context, can you understand how salient elements are likely to interact to create certain outcomes? So, for example, take a look at this image. I'll give you a second. Okay, what is about to happen here? Can you predict what's going to happen? All right, so hopefully you could recognize there's a man on a bike not looking where he's going. There's a hazard ahead, and the man probably doesn't know the hazard exists. And I bet the woman there does not appreciate even being considered a hazard. And the prediction is that man will likely strike the hazard, right? So this is an accident waiting to happen. So you can not only recognize what's going on, but you can recognize, you can anticipate how the events are going to unfold based on the elements that you're seeing there. Okay, so the next concept is control. So that is for a given context. First of all, can you recognize the salient elements? Can you predict the most likely outcomes? And then can you identify the most desirable outcome or a goal and predict how different strategies are likely to result in different outcomes and then select and successfully execute the strategy that's most likely to result in achieving that goal? And then can you monitor the process as it unfolds to ensure that the selected strategy is having its effect? In other words, can you recognize, you know, can you see what's going on? Can you see what's going to happen? Do you know what you want to have happen? And can you select a strategy that will help you make it happen? So here's an example. What should you do in this situation? All right, so to know this, you would have to recognize, first of all, what's going on. And this is a pot with boiling water, probably with some starch in it. So that means that the pot is about to boil over. Okay, what's your desired outcome? Well, I would think it's that it not boil over. So what are you going to do? Well, there are a couple of options. You can stir the pot, but you may not have something handy. You can blow air on the foam, or you can put a wooden spoon over the top and it won't boil over. Who knew? 
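Stepping back to the iris example for a moment: the point that recognition-level mental models can now be automated is easy to make concrete. The sketch below is purely illustrative; it uses scikit-learn's classic iris measurements dataset (tabular petal and sepal features, standing in for the flower photographs described above, which would need a computer-vision model) to train a model that "recognizes" which kind of iris it is looking at.

```python
# Illustrative only: a learned "recognition" model in the sense used above.
# The talk describes a computer-vision model over iris photographs; here the
# classic scikit-learn iris dataset (measurements, not images) stands in for it.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=0)

# Fit a simple classifier: given salient features, name the category.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
print("first three predictions:", iris.target_names[model.predict(X_test[:3])])
```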
I didn't know that wooden spoon trick until I looked it up on the internet. But these are strategies that you can apply. You recognize the situation, you apply a strategy, and you make happen what you want to have happen. Okay, here's another example. What is this? Can you look at this and see what it is? Okay, so if you recognize that this is a basketball court, you would get partial credit. That's the first step, yes. Then if you recognize this is a diagram of a defense, you would also be right, but at a higher level. If you understood that this is a box-and-one defense, then you would also be right, at a deeper level. And then if you understood why someone would use this, it's used to contain a dominant player on the opposing team. That's why you use a box-and-one defense. So that's your recognition. So then if you can see a team and recognize they have a player that is really good, then you can anticipate they're going to pick us apart. So we're going to throw together a box-and-one defense in order to increase our likelihood of winning. Maybe, hopefully. Now, what about the difference? How are these two diagrams different? And if you don't have any background in basketball, that's okay. We're illustrating the concept here. So it turns out these are two different defense types. There's a box-and-one versus a diamond-and-one. One is good for a good perimeter player, and one is better for a good all-around player because you've got someone at the low post who can stop them. But the point is that you have more and more sophisticated models as you gain more experience, as you train, or as you learn. And so these models continue to advance, and it's these models, which mostly we're completely unaware of, that really drive how we behave. Okay, so here's another example. What is this? Well, if you said an ultrasound, then congratulations, you passed. Then for extra credit, you may say this is a pyloric ultrasound. And you might say hypertrophic pyloric stenosis is the diagnosis here. And the prediction is this would cause gastric outlet obstruction if it's not treated. And so the strategy, then, is a pyloromyotomy, a surgery. Now, what if I said, well, two minutes later, this image was obtained? And hopefully, if you had a sophisticated understanding, you would realize that I fooled you. So this is actually pylorospasm, and this is normal, and you should simply observe. So this is a more sophisticated model that we develop as we train. We develop these internal mental models. So in a linear, simple system like we described at the beginning, there's little need to articulate even the concept of models, really. We just do them. That's what we naturally do. That's what this whole meeting is primarily about. And then we have these feedback loops that naturally reinforce our desired outcomes. It's a tightly coupled system. But in a complex system, in a loosely coupled system, it's much more difficult to control, meaning that people have different abilities to recognize the salient elements of a situation, they have different goals, and often those goals are very subtle, and they have different strategies, which also are very subtle, to achieve those goals. And so organizationally, we need a different approach. We need to figure this out. So the third point is shared vision. And this is where we go from personal mastery to collective aspiration. So this is when people truly share a vision, they're connected, bound together by a common aspiration.
Personal visions derive their power from an individual's deep caring for the vision. The greatest example I can think of is JFK's moon speech. He said, we choose to go to the moon in this decade and do the other things, not because they're easy, but because they're hard. Because that goal will serve to organize and measure the best of our energies and skills. Because that challenge is the one we are willing to accept, one we're unwilling to postpone, and one which we intend to win. So how rousing is that, right? That and a few billion dollars will get you to the moon. And so here now you go from individual to shared models, and especially shared mental models, where now people have common goals, they have shared strategies, they're willing to challenge, build, and refine each other's mental models. And that's really critical to a learning organization. We have mental models that we're willing to challenge each other's mental models and refine them and build them over time. And often then those models are made explicit in ways like policies and procedures and protocols and increasingly in algorithms. So number four is team learning. And really the textbook on this is Leading Teams by Richard Hackman. And in the preface of this book he talks about, he gives a pop quiz. Here are three items from the citizenship test given to all fourth grade students in Ohio. Number one, which branch of Ohio State government makes laws? Judicial, executive, or legislative? If you said legislative, yep, you're right, pretty easy. Which of the following people is consuming something? Carmen is walking her dog, Jalil is buying a new shirt, Dale is turning through baseball cards. Well in this context, Jalil is buying a new shirt, he's consuming something. Number three, when people work together to finish a job, such as building a house, the job will probably A, get finished faster, B, take longer to finish, C, not get done. So if you said A, you probably do not work in an academic medical center. Right? So how often, we basically take the doubling factor. Every time you add one more person to the project, it takes twice as long to get the project done. So that's the question. Can you work together as a team to be more effective rather than less effective? So a team, first of all, what is a team? A team is a group of individuals with different skills working toward a common goal who hold themselves individually and collectively responsible. So team learning then builds on personal mastery and shared vision, and this is the ability to collectively think insightfully about complex issues and innovate in a coordinated way. This is an image from a huddle, a daily readiness huddle, from Lane Donnelly's group in Texas Children's. So number five is systems thinking, and this really brings them all together. So systems thinking, so first of all, a system is a regularly interacting or interdependent group of items forming a unified whole. So a bunch of things that interact with each other to form a whole. There are two major types of systems. There are systems that are complicated and systems that are complex, and they're different. So a complicated system has many interconnected parts, but the interactions themselves don't change the function of that. And so you can look to see how those parts function, and then you can extrapolate from that and see how the whole system functions. 
And so these are more amenable to being reduced to generalized models, whereas a complex system has many interconnecting parts, and those interactions change the way the whole thing functions, and so it's much more difficult to generalize from knowing the individual subparts. So if you know about rain and about wind, it's really difficult to extrapolate from that what the path of the hurricane is going to be. You need different data. So complicated versus complex. And there's a certain subtype of a complex system that's a complex adaptive system, and that's where you have the complex system where the interactions affect the outcome, but then the adaptive part means that each of those agents can actually think for themselves and can do something, can act autonomously. And so one example is the murmuration of starlings. Has anyone ever seen this? Oh, it's just one of the wonders of nature if you haven't seen it. I saw it in Ohio on winter nights, winter evenings. You'd see hundreds of thousands of starlings that would go across in waves. Look it up on YouTube. It's really amazing. And if you knew exactly how a bird worked and knew everything about an individual starling, there's no way you could possibly predict that. And so that's how you get this emergent behavior from a bunch of individuals. So that's what comes out of a complex adaptive system. So how do you control a complex adaptive system? Well, it's kind of like Paul Plsek talks about: the difference between throwing a rock versus throwing a bird. So if you want a rock to land in a certain place, you can calculate that trajectory and launch it and get pretty close pretty reliably. It does not matter how you throw a bird. It's not going to go where you want. So are you managing your department like you're throwing rocks or throwing birds? And how are you going to change your approach then? So in a complicated system, then, like throwing a rock, you can use data to drill down and diagnose the problem. You can bring in outside experts to fix the problem. You can identify and remove or replace bad parts. And it's amenable to central planning and control. In a complex system, the data are not reliable, so you've got to go and see to understand what's going on, because the problem is often in the interactions, not in the parts themselves. So then the strategy is to develop a few simple rules and promote autonomy as long as people adhere to those rules. And then it's this concept of eyes-on, hands-off leadership, where you have leaders as teachers and facilitators instead of a command and control model. And these critically depend on feedback loops. There needs to be a lot of feedback. So systems thinkers then acknowledge that they're part of a larger system. They can see the big picture and recognize their role and use appropriate strategies in a given context. And they're willing to suboptimize locally to optimize globally. And we need to understand that our patients experience healthcare as a system, not just as our individual part. So if we're only seeing our little part, then that's not really patient-focused. If we're truly patient-focused, then we need to think as systems. Systems thinking integrates the other four disciplines of personal mastery, mental models, shared vision, and team learning. And so then we recognize, again, that complex systems are different. So we should think more like a gardener, nurturing, rather than like an architect who specifies everything that should be done.
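The "few simple rules plus autonomy" strategy can be pictured with a toy simulation. This is a crude, boids-style sketch loosely inspired by the murmuration example: each agent follows a handful of local rules (align, cohere, separate) with no central controller, and coordinated group behavior emerges. Every parameter here is an arbitrary illustration value, not a model of real starlings.

```python
# Toy sketch of "a few simple rules, local autonomy" producing emergent group
# behaviour (a crude boids-style flock). All parameters are illustration values.
import numpy as np

rng = np.random.default_rng(0)
pos = rng.uniform(0, 100, (50, 2))   # 50 agents in a 100x100 space
vel = rng.uniform(-1, 1, (50, 2))

def step(pos, vel, radius=15.0):
    new_vel = vel.copy()
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        neighbours = (d < radius) & (d > 0)
        if neighbours.any():
            # Rule 1: align with neighbours' average heading.
            new_vel[i] += 0.05 * (vel[neighbours].mean(axis=0) - vel[i])
            # Rule 2: move toward the local centre of mass (cohesion).
            new_vel[i] += 0.01 * (pos[neighbours].mean(axis=0) - pos[i])
            # Rule 3: keep a little separation from the closest neighbours.
            too_close = neighbours & (d < 3.0)
            if too_close.any():
                new_vel[i] -= 0.05 * (pos[too_close].mean(axis=0) - pos[i])
    return pos + new_vel, new_vel

for _ in range(100):
    pos, vel = step(pos, vel)
print("flock spread after 100 steps:", pos.std(axis=0))
```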
So instead of specifying everything, it should be direction pointing: have a vision. Interactions should be governed, meaning we agree on how we work together. And enabling, so incentives and resources are aligned. So, okay, so let's come back to models for just a minute. Models in general, and mental models especially, start with skill. Well, even before that, level zero is trial and error. There's really no model. You're just doing whatever; you're flying by the seat of your pants, basically. After a while, you develop skill, which is a mental model. And then after a while, a bunch of people get together and they specify how they should do things. And so they put together a procedure, often written instructions or rules, and then eventually that can get automated. And as that happens, then your models become less tacit and more explicit. And they go from being individual to shared models. So here are some examples. My grandma used to make homemade donuts. They were awesome. And she could just roll them out and she didn't need any help. She'd just do it. She had the skill. Well, if we wanted to replicate her work, we could get a recipe and then others could replicate it as well. And now there are machines where all of this logic is built into the automation, and they can put out very consistent donuts, in this case. Here's another example of going from skill to automation. I recently had a talk with my 17-year-old daughter. It's a talk that we had dreaded for a long time. You probably know what this talk is. I said, honey, it's time for you to understand how to drive a manual transmission. And this is terrifying to her, right? So we spent a lot of time. She's still working on it. So that's a skill. Well, there are solutions to this that can automate all of this so that we can bypass this aspect of skill. So it demands less of her attention so she can get to more important things like texting while driving. So how do we use this in radiology? Well, here's an example. If you want to go and optimize your CT radiation dose, for example, number one, you can go and try to teach your technologists to remember everything, all the parameters you're supposed to use. Or you can write it out on a piece of paper. Or you can embed it into the machine and automate it with automated exposure control. But here's a warning. I have, unfortunately, years of frustrating experience with this. Just because it's automated doesn't mean that it's right. It doesn't mean it's actually in control, right? So be careful, because you can automate bad models. And especially as we go into AI, I think that's something I would put to everyone in this room. We need to be very careful because, yes, you can automate some great things, but you can also automate some not-so-great things. And just because it's automated doesn't mean it's great. Okay, so let's talk about, just in the last few minutes, the elements of a learning healthcare organization. So Richard Bohmer talks about the four habits of a high-value healthcare organization. And I would really say that's a learning healthcare organization. He defines four habits that these organizations have: specification and planning, infrastructure design, measurement and oversight, and self-study. So specification and planning means, in a given context, what strategies should we use? Now can we apply that to the clinical environment and operational decisions?
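Specification of this kind is, in miniature, what the CT dose example above already hints at: agreed-upon parameters written down as data that a person or a machine can apply consistently, rather than living in any one technologist's memory. A minimal sketch follows; the exam names and numbers are invented for illustration and are not clinical guidance.

```python
# A minimal sketch of moving from tacit skill to an explicit, shared model:
# hypothetical CT protocol parameters written down as data. The exam names and
# numbers are invented for illustration and are not clinical guidance.
CT_PROTOCOLS = {
    "abdomen_pelvis_routine": {"kVp": 120, "ref_mAs": 200, "pitch": 1.0},
    "chest_low_dose":         {"kVp": 100, "ref_mAs": 40,  "pitch": 1.2},
}

def select_protocol(exam: str) -> dict:
    """Look up the agreed-upon parameters instead of relying on memory."""
    try:
        return CT_PROTOCOLS[exam]
    except KeyError:
        # The model is intentionally incomplete: unknown exams go back to a human.
        raise ValueError(f"No agreed protocol for '{exam}'; escalate to a radiologist")

print(select_protocol("chest_low_dose"))
```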
So we boil these strategies down to explicit criteria, and these are manifested in organizations in concrete ways, like clinical decision support and care pathways and algorithms and pre-procedure checklists. Number two, infrastructure design. So first of all, deliberately design a care team and then build the infrastructure around it. Harmonize your management systems to align the budgets, incentives, data, goals, clinical processes, everything around it, so it's mutually reinforcing. Number three is measurement and oversight. So these types of organizations don't rely on regulatory bodies to tell them what to measure. They have their vision, they know what they want to accomplish, and they go and measure it themselves. And they collect more data, generally, than what they're required to by the oversight bodies. And then they integrate this into their performance management portfolio. Then finally, self-study. So these organizations constantly examine positive and negative deviants to learn new things. So Brent James will often talk about, if you and I are doing things differently, then either you have something to teach me or I have something to teach you. But we're gonna work together to harmonize our approach. And this then eventually translates these learnings into specification, organizational infrastructure, or skill development. These organizations can develop innovation in-house and then disseminate it throughout the organization. They tend to nurture a culture that supports learning through continuous improvement. And these organizations encourage dissenting views, so they constantly learn and challenge their own mental models and their more formal models. So here's one brief example of the concept of a learning organization in healthcare. Can you, as an organization, get together and agree on standardized structured reports, implement them, and then improve them over time? This is just one of many examples that we're starting to see in terms of organizing ourselves in the delivery of healthcare, including in radiology. So many healthcare organizations use some of these pieces, but learning healthcare organizations are different in that they tend to engage in all four habits systematically. So these activities are baked into their structures, their cultures, and their routines. And they're integrated into a comprehensive system for clinical management. So what are the aspects unique to radiology? How are we different? So here's a very, very general model for the diagnostic process and how it fits in the clinical process. So a patient shows up with a clinical scenario, a diagnosis is made, they're treated, and it leads to certain outcomes, and there's a feedback loop, and they may show up for follow-up or other clinical scenarios. And so caregivers provide this, and there's frequent communication with the patient. Well, this is composed of, it starts with an initial assessment, there's a diagnosis, and then a conversation with the patient based on what we think the diagnosis is and what the likely treatment strategies will result in. We align with the patient's goals, and then we determine treatment. Well, diagnosis is not always straightforward. Sometimes clinicians need help. And so there's this concept of a working diagnosis, where you start with an initial diagnosis, but it evolves over time and it tends to be revised. And so often you have other information gathering activities such as imaging, labs, pathology, consults, and so forth.
So in the context of imaging, then, what that means is there's imaging acquisition and the radiology report. So clearly within our domain is image interpretation. So on the clinical side we've got their activities, and on our side we've got these activities. But there are many activities that are in no man's land. They require a joint effort to really do well. Things like image appropriateness determination, things like modality and protocol selection, things like integration into the working diagnosis. It's interesting, I've heard several times today people lament the lack of a good history in a radiology order. Well, guess what? We're all on the same team. So we need to figure out how to work well together. And so that leads to this concept of how do we do that? So this is a recent AJR article we published on information and communication and teamwork that the editors thought was important enough to put on the cover, actually. And the concept here, the colors of these stick figures represent different roles. So you may have a physician and a technologist, a nurse, an administrator, an IT support person. So the concept of teamwork is how well do these individuals work together? Now these interfaces are different from how teams work with each other, right? Often we work together with our own peeps differently than we work with those other people, from the emergency department for example, who are constantly making demands of us. Well, do we view ourselves as being on the same team with them? And then finally, there's this concept of collegiality. And that is how do we deal with our colleagues? So the other radiologists. When we go on a team, do we say, okay, now that I'm here, we're doing everything differently? Do we throw our colleagues under the bus? That's where I see some of the most distressing interactions, where I see tears, is when we have colleagues who undermine each other. We need to learn how to work better together. So finally, coming back to where we started, the IOM report on improving diagnosis, these concepts of celebration of success. Well, that means do we recognize and truly value excellence? Absence of complacency, do we constantly innovate and search for new ways to improve outcomes? Recognition of mistakes as opportunities to learn. We're about to hear from Lane, understanding that we accept that mistakes will happen. We learn from them without tolerating routinely mediocre performance or reckless disregard of safe practices. Belief in human potential means we foster professional and personal development. Recognition of tacit knowledge, those closest to the work have the most detailed knowledge, and so we systematically empower the front line. Openness means that knowledge is openly shared, both formally and informally. Trust is that people are naturally willing to trust their colleagues. People will naturally work hard to earn and keep that trust, and we're outwardly looking, meaning we view our competitors as a source of insight, and we're focused on deeply understanding our customers' needs. So how are we gonna manage this complexity? It's gonna be through understanding individual skills, systems, and procedures. It's gonna be through well-designed models, and it's going to be through individual learning and improvement and organizational learning and improvement. Thank you very much. Our next presenter is Dr.
Lane Donnelly, also coming from Stanford, and his topic will be Positioning Peer Review to Foster Continuous Learning. Lane. I'm gonna talk about peer learning and the steps to transitioning from peer review to peer learning. And as David said, I am at Stanford University, but I just arrived there three weeks ago, and the program that I'm gonna talk to you guys about was enacted at my former institution, Texas Children's Hospital. And I'd like to start out with a quote from Mark Twain. When a man loves peer learning, I am his friend without further introduction. So we've been talking about this for a long time. Actually, he didn't say that. He was talking about cats, but we all know that his original name was Sam. So if you see Mark Twain pop up throughout the lecture, it's a hint that this might be a part related to Sam. So we're gonna talk about helpful steps in converting from a system of peer review to one of peer feedback, learning, and improvement, and the shorthand for that is peer learning. At Texas Children's Hospital, we refer to this as peer collaborative improvement, so you may hear me throw around that term as well, but they all mean the same thing. So background, and as it pertains to the IOM report, one of the things that was evident in reading that document was there was an emphasis on the need to create robust, non-punitive methods of review of diagnostic errors to facilitate learning and foster improvement. And a growing number of people are questioning whether the historic approach to peer review that has developed in the radiology community over the past 15 years, and the associated relationship to OPPE that many people have, has really met the charge. So at Texas Children's Hospital, prior to our switching to a peer learning system, we had a peer review system that is similar, perhaps, to many other historical ones. We randomly audited cases, and we used a RadPeer grading system to evaluate the potential errors in those cases, and then we calculated the error rates, which were calculated as the number of significant discrepancies per radiologist. And for our ongoing professional practice evaluation, we had two metrics that we reported to the medical staff offices: one was report turnaround time, and the other was error rate. In January of 2016, we transitioned from that system to one of peer learning, and our goals for our peer learning system are to improve our clinical services through studying our learning opportunities, not to identify poor-performing physicians. And I think it's important to note that it is very important to have a process when there's a suspicion of poor physician performance, and we have a separate FPPE process put in place for when that occurs, which has nothing to do with our peer learning system. And here's Mark or Sam. All right, just for more background, the Texas Children's Hospital radiology IT infrastructure: we have Philips PACS, we have Epic Radiant for the RIS, we use PowerScribe 360 for the dictation system, and we use Primordial for our work lists. Many of these systems have peer review systems built into them. We happen to use the Primordial solution to do our peer review. And the way that had historically worked, the radiologist would receive two random assignments of cases to peer review each time that they were on the clinical service.
They were assigned cases based upon their practice profile, so in other words, neuro people would get neuro cases and so on, and we assigned the cases when they were less than 24 hours old, with the theory that if a significant error was identified, you could actually act upon it. The radiologists are then shown the image. After they look at the image, they disclose the report, and then a palette comes up where they would grade that image. So, the steps that we're gonna talk about regarding peer learning and the key changes that we made: there are six of them. The first one is sequestering learning and improvement activities from monitoring for deficient performance. Two, the method of case identification moving from random sampling to active pushing of identified learning opportunities. Three, replacing the numerical scoring system of errors with qualitative descriptions of learning opportunities. Four, sharing those findings at learning conferences. Five, linking the peer review or peer learning process to process improvement systems. And finally, evaluating radiologists for OPPE based on participation and not based upon error rates. So with that, we'll talk about the first one, and I'll spend a little bit more time on this one than we will for the other five. Sequestering learning and improvement activities from monitoring for deficient performance. So, we're all familiar with RadPeer, which has been a very successful system in the sense that it has spread and is used by many groups. This is an article from 2009 that showed that at that time, there were 10,000 radiologists using the RadPeer system, and probably a significant number of others using systems that were very similar. And as most of you are aware, there is a grading of errors in the RadPeer system, historically based on a one through four grading system, with threes and fours being considered errors. And now that has been transformed to a one through three system. One of the advantages of that system, when you calculated error rates, was that, as this particular graph from the ACR website shows, you could show a particular physician's performance as compared to their group, and as compared to the national average. The advocates of RadPeer, in the articles published about it, argued that it accomplishes peer review with minimally added work, and it helps meet multiple regulatory requirements. But there were never any articles written, or attempted to be written, that claimed that there was evidence that it contributed in any way to improving our services. Along the timeline, after that peer review system was established, the Joint Commission in 2007 changed the ongoing professional practice evaluation requirements such that you had to acquire practitioner-specific performance data in each of the six ACGME categories. And so when the medical staff offices went to radiology departments, despite the fact that this was never the intent of peer review, people had these error rates, and many people began to use that data as part of their OPPE systems. There were even descriptions of more penetrating faculty surveillance, where radiologists were counseled, remediated, sanctioned, had privileges restricted, and were even terminated based on these processes.
And of course, the thing to ask yourself is, when you know that this might happen to one of your colleagues based on you entering error rates, or that they're gonna turn around and potentially do that to you, how often are you actually gonna put in a grade of a three or a four? And the answer is not very often. The national error rates, according to these peer review systems, are way lower than the rate at which we all know errors actually occur. So related to that, many of our visions of peer review turned into what's depicted in this picture. So as David has taught us, we have competing frameworks related to potential applications of peer review. One is the study of errors to improve performance through feedback and learning, and the other is the identification of poor performers for the potential purpose of remediation and restriction from practice. Now we know that you can't do both of these at the same time. You can't be a coach and a judge simultaneously, and we also know, even when you just try to be a judge, that these gradings are very subjective and not accurate, based on multiple studies that have been performed. And I can tell you from our experience, in the 15 years that peer review was going on in the old system, a physician was never identified as an outlier or had any remediation related to the peer review system. Now we did have issues that arose through other avenues that needed to be dealt with, but they never came out of the peer review system. I think part of the problem is that the term peer review is thrown around to mean very different things. So we have peer learning, which we're talking about today, where we're trying to identify issues and learn and improve based on those findings. We have ongoing professional practice evaluation, or OPPE, where you have the routine evaluation of the engagement and performance of faculty. And then FPPE, which is, again, very important, which is an intense evaluation if poor performance is suspected for any one of many reasons. And one thing that I think is very important is that these activities need to be sequestered from each other, and have the appearance that they're sequestered from each other, particularly the two at the ends. If these are mixed together, you can't do the peer learning part. So moving on to the other items: a method of case identification moving from random sampling to active pushing of identified learning opportunities. We still have random auditing of cases in the Texas Children's system, but we have emphasized the importance of pushing actively identified learning opportunities into the system, and we all know that through multiple different avenues, consultation with referring physicians, review of previous comparison studies, clinical conferences, incident reports, and information fed in through radiology-pathology discrepancies and other types of things, these are where you really identify the errors or learning opportunities that have occurred, and you're much less apt to identify those through random audit. And in our system, there's just a little button that you can click, and the palette comes up and you can push any error into the system to have it as part of the peer learning process. Next, replacing numerical scoring of errors with qualitative descriptions of learning opportunities. We have stopped using a numerical grading system in any sense. We have moved to agree or learning opportunity.
We also have a button for great calls, because there's a fine line between when someone makes a really good call and someone might miss that same finding, so we think we can learn from those as well. In learning opportunities, we try to categorize as perception, cognition, technical or protocol issues, reporting, communication, radiologist recommendations, and other processes related and very importantly, there's a text box so people can actually explain why they're putting the case in. It saves, obviously, a lot of time. We have found that not only based on the literature that the scoring systems are inaccurate, but they also serve as a distraction. One of the major things that would get discussed at our historic conferences was whether it should be a three or a four or those things, and again, it kind of distracted the discussion away from improvement and to the grading system. Peer learning conferences. First of all, in our system, the individual interpreting radiologist receives individual feedback at the time that the learning opportunity is logged into the system, but in addition to that, all of the learning opportunities are reviewed and then prepared for a monthly conference to facilitate learning. A common cause analysis is performed to identify common, significant, or potentially repeatable issues. That conference occurs once a month and rotates between the different areas of our subspecialty and usually it's either the division chief or a designee is the convener of that conference and again, reviews all the materials that have been entered to pick out the things that we can learn the most from. The cases are shown in a de-identified fashion, usually in a PowerPoint presentation. We actually videotape all those conferences because we do require the faculty to watch those if they have not been present. Linking the peer learning system to process improvement. Many of the issues, as we've heard in lectures earlier today in this series, that are identified through this process are related to system-related issues rather than individual performance-related issues and there needs to be a way to link those things that are discovered through the peer learning process to the performance improvement process so that they can be assigned appropriate ownership and followed through to fruition to make sure they are actually addressed. And finally, the last category, evaluation of radiologists related to OPPE based on participation in this process and not upon error rates. For the OPPE process, we actually have 13 parameters that we track related to faculty performance. Of those 13, only three are related to the peer learning process and they're all related to participation. So again, we still have a random review of cases, so we ask that the faculty perform at least 90% of those reviews that they're assigned. We ask that they attend or attest that they have watched 75% of the peer learning conferences and that each faculty has agreed that their cases are reviewed as part of the process. And again, no error rates are calculated or shared. So we, again, first implemented this in January of 2016 at Texas Children's Hospital and we decided to look back at our first 16 months of experience a while ago and I'd like to share some of that information with you for the rest of our time. So during that 16 months, there were over 12,000 cases reviewed as part of the process. 
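The entries Lane describes, a classification of agree, great call, or learning opportunity, an optional category, a free-text explanation, and a flag for whether the case was actively pushed or randomly audited, map naturally onto a simple record. This is a hypothetical sketch of that shape, not the Primordial implementation.

```python
# Hypothetical sketch of a peer-learning entry of the kind described above --
# not the Primordial implementation, just the shape of the data.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Classification(Enum):
    AGREE = "agree"
    GREAT_CALL = "great call"
    LEARNING_OPPORTUNITY = "learning opportunity"

class Category(Enum):
    PERCEPTION = "perception"
    COGNITION = "cognition"
    TECHNICAL_PROTOCOL = "technical/protocol"
    REPORTING = "reporting"
    COMMUNICATION = "communication"
    RECOMMENDATIONS = "radiologist recommendations"
    OTHER_PROCESS = "other process-related"

@dataclass
class PeerLearningEntry:
    accession: str                     # de-identified before conference use
    classification: Classification
    category: Optional[Category] = None
    explanation: str = ""              # why the case was pushed into the system
    actively_pushed: bool = False      # pushed by a reader vs. random audit

case = PeerLearningEntry(
    accession="XXXX-1234",
    classification=Classification.LEARNING_OPPORTUNITY,
    category=Category.PERCEPTION,
    explanation="Subtle finding on the prior study, visible in retrospect.",
    actively_pushed=True,
)
print(case.classification.value, "-", case.category.value)
```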
Those roughly 12,000 cases work out to approximately 760 per month, and there was about a 9% hit rate on learning opportunities identified, or slightly over 1,000 cases. And if you look at the categories, because we have random review, 90% were agree. The highest category was perception at 5.1%, and the next highest category was reporting issues at 1.9%. This also has Mark or Sam show up. So interestingly, because we had both a random review of cases and actively pushed cases, we could compare between the two. For random review, there were 445 identified cases out of over 11,000, for a 3.88% learning opportunity rate. For the actively pushed cases, obviously, people were taking the time to put these into the system, and there was a much higher learning opportunity identification rate. You might ask, why is this not 100%? And that's because the great calls were mixed in with those, and also things like the cases that we learn were discrepancies from the surgical pathology feedback process at our institution, all of which we enter. And many of those, by far the most common thing, would be ultrasound for appendicitis, where we called appendicitis and then it wasn't appendicitis. And for many of those, even looking back in retrospect, we would have called appendicitis based on the measurements or something along those lines, even though it wasn't. So that's why there are some actively pushed cases that are marked as agree. So of the total learning opportunities, 61% came from the actively pushed cases, even though the labor that went into putting the actively pushed cases in was much less than that from doing all of the randomly reviewed cases. This is a graph of actively pushed cases entered per month, and this is when we started the peer collaborative improvement process. So prior to that, even though people could enter cases and push them into the system, we didn't ask anybody to do that, so nobody did. When we started the program, we had an uptick in that from the baseline of zero to a new level, but not where we wanted it to be. And then we did two interventions, which sent the rate up much higher, and it has been sustained at a higher rate since that point in time. And those two things were, one, we changed our nomenclature. When we first started doing this, we had agree and apparent error, and we said apparent error because we were trying to give the benefit of the doubt that maybe it wasn't actually an error. We changed that vocabulary: we added great call, and we changed apparent error to learning opportunity, to try to emphasize, again, the non-punitive nature of our goals. And then the second thing that we did at that same time, which I think was probably more the reason for the uptick, was we sent out a monthly report with all the faculty named on it and how many actively pushed cases they had entered that month. So everybody could see how many everybody else had put in. And this served, you know, everybody's busy, this was something new, they didn't naturally remember to push the cases in, and this just served as a reminder to do it, as well as being a little bit of a shaming thing to try to get people to do it as well. And I think that was really what pushed it forward. We did also survey the faculty about the new system, on a scale from zero, non-favorable, to 10, favorable, and of all the questions that we asked, you can see that they were all ranked very favorably.
I know you can't read this, but the faculty thought that the new system was an improvement, that it was focused on improving our service rather than placing blame, and that OPPE was better than it used to be. And interestingly, the thing that they agreed upon the least was the value of random auditing of cases; they saw more value from the active pushing of cases. So in conclusion, we have found that conversion from peer review to PCI, or peer learning, has increased our ability to identify and learn from our errors, our identified learning opportunities have increased, and it is viewed as focused on improvement and non-punitive by our faculty. And finally, in comparing the yield of actively pushed cases to randomly audited ones, we have a higher yield of learning opportunities from our actively pushed cases relative to the labor required for that. We have also found subjectively that the more meaningful learning opportunities that we discover come from the actively pushed cases, and the faculty perception is that the actively pushed cases are of greater value. So we are considering the elimination of randomly auditing cases. Here are some references for this work. One thing I would add in addition to that is, since we've started this process, we had a site visit from the Joint Commission and were asked by our medical staff to present this process to them, and they were fine with this process, and we also had during that time period a site visit from the ACR, and they asked if we did peer review and we said no, we do peer learning, and they were fine with that process as well. We obviously explained the process to them. So I do not think there is a regulatory argument not to move in this direction. Thank you very much. Thank you so much, Lane. That's a great example of a practical application of a learning healthcare system. So our next speaker will be Dr. Olga Brook. She's coming to us from Beth Israel Deaconess Medical Center, and she's going to now talk about how we implement it in terms of actual improvement, especially towards patient outcomes, so her talk is entitled Imaging Improvement Aiming at Patient Outcomes, actually it's Improving Patient Outcomes Through Imaging, as I see. And this is obviously an important concept: how do we not only improve our reports to our referring clinicians, but then get that to our patients. So Olga, thank you very much. Okay. I'm not as tall as my colleagues, these giants in quality, so what I'm going to talk about is basically how we improve patient outcomes through imaging, because that's what we do, right? And this is not entirely a new concept, but mostly we have really been focusing on diagnosis, right? But really the whole concept is now changing, in that we really have to look at how what we do affects patient outcomes. Okay. So how to improve patient outcomes? Before we answer this question, we really need to know what patient outcomes actually are. What are we talking about? And the outcomes, you know, we now have measures, and what they do is actually evaluate the well-being of the patient, the ability to perform daily activities, and eventually survival. What they actually do is assess the results of healthcare as experienced by patients, and that's the definition of outcomes. And that's what we need to affect. So in our old thinking, we had a lot of measures, because we had to measure what we do. But it mainly involved process, okay? How fast are we completing and reporting the study? What's turnaround time?
What's radiation exposure? So this is all very important, but it's really about the process and not really tailored towards the end, towards the patient outcome. Now, focusing on outcomes, what we're trying to do is to measure how imaging will result in early diagnosis and then early treatment of this patient. Can we prevent complications and thus shorten the duration of the episode of care? And not specifically the radiology episode of care, but the whole episode of care for this patient, whether it's an inpatient stay or just the diagnosis of a new cancer, for example. And improvement in the patient's overall well-being. Can we do that by imaging? Okay, so there are three pillars, the three main things that I would like to talk to you about that I think are essential in improving outcomes for our patients through imaging. And those are standardization, patient satisfaction, and most important, as you can see, that's the biggest one, information technology. Okay, so standardization. What can we do and why is this important? So this is the old process of QA that I don't even want to show here, but that's how we used to do it. What we were trying to do is this: we have normal variation in our process, and we're trying to find the threshold of acceptability. Okay, below that we don't want to see any cases, and above that we can accept. So that was the process of QA, which worked, but it's really cutting off only the worst performers. The new process in quality improvement is basically trying to identify ideal practices and then trying to move the whole practice towards those ideal practices. And overall, you can see that produces much better results. The whole graph is much better placed, and it's also much narrower, so there is much less variation between people. What's the difficulty in that? First of all, you need to identify those ideal practices and get everybody to agree, and then actually move towards that. So it's not as simple as it looks, but that's the ideal situation. So what are the tools for standardization that will help us? And it's all about decision support. We can call it different names, but essentially it's decision support helping the referring physician decide what they need to do, what they need to order, how they need to order, and also decision support for radiologists. And there are multiple pieces of that, which we will go through. And the goal is basically to direct the patient to the most appropriate care, okay, whether before the study or after the study or throughout. So for example, for referring physicians: a patient presenting with acute back pain without neurological deficit. So what is the appropriate study? Let's try to vote. Lumbar x-ray, anybody for it? Yeah, I see none, but I would say a lot of places would still do a lot of x-rays of the lumbar spine. Spine CT? I have a few hands. Spine MRI? Acute back pain, no neurological deficit. No votes. Oh, come on. What about observation? Okay, so most of you will say observation. And obviously it's not as simple as that. There are nuances to that. But how many of you are actually reading those spine CTs, x-rays, and MRIs for this indication? Plenty, right? Yeah, exactly. So a decision support system is definitely needed there, and the majority of referring physicians would not object to that. They're basically under pressure from the patients to do something, right?
But if they have decision support, something that reminds them what the right guideline is in each and every situation, and that they can also show to the patient, they will be in much better shape. So the goal of decision support for the referring physician is to guide them to the appropriate imaging modality, the one that will provide the answers in the quickest and most cost-effective way. Here's another example: a patient presenting with abdominal pain. We've seen it all, right? Usually the first study will be an abdominal x-ray, then an ultrasound, then CT, then MR, and then sometimes even a nuclear medicine study, right? That's absolutely not the optimal sequence, right? We all know that. And in how many cases do we actually see that happening to our patients? Very frequently. So frequently, starting with the more advanced imaging study will provide the results more efficiently and more quickly for the patient. If we consider this from the patient's perspective: yes, something is happening every day, I'm getting this study, and this, and this, and this. But am I getting results? Am I getting answers? No. Am I getting the appropriate treatment? No. So patients will really be on board, but sometimes you have to say, okay, don't do this x-ray; I think the appropriate study for this patient will be MRI, yes, up front, to start with. And that's something that, again, decision support will help with. So our goals are threefold. Reduce the number of sleepless nights. It's not my term, but I really like it. I think it's very important to acknowledge the patient's side of it: how many nights they're lying awake, not knowing what's going on. They're worried, their family is worried, they cannot work. If we can get to the appropriate end point more quickly, that's better for the whole system. We want to start the appropriate treatment as early as possible. And eventually, yes, it will actually save quite a bit of health care dollars. Now, this was impossible previously, because we would never say no to a study, right? Because we were getting paid for each study. That's actually ending. And yes, we've been talking about accountable care models for quite a while. What are accountable care models? Basically, the goal is to provide coordinated, high-quality care, specifically to Medicare patients, but I'm sure it's coming to other groups of patients as well. The goal is to get the right care at the right time, while avoiding unnecessary duplication of services and preventing medical errors, right? Exactly our quality model, right? And it's actually happening. In Massachusetts, since 2016, there have been six ACOs covering 160,000 members under MassHealth. I think next year, in March, we'll have nearly a million patients covered under this ACO. It's happening. It will happen. And it will involve even more private insurers as well, because it makes sense: it's better for the patients, and it's better money-wise. So what are the barriers to decision support? Number one, we need to persuade referring physicians. It's hard, but I think this whole system and the change to ACOs will help us with that. Importantly, we don't really have evidence for every indication, okay? We do have a lot of consensus guidelines, guidelines by committee, which is important because they're based on the experience of very experienced members of our societies. However, this is not sufficient.
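To make the idea concrete, here is a minimal sketch of the kind of rule lookup an order-entry decision support tool performs. The indications, the rule table, and the recommendation wording below are simplified illustrations and assumptions for this sketch, not the ACR Appropriateness Criteria or any speaker's actual system.

```python
# A minimal sketch of referring-physician decision support at order entry.
# The rule table below is a hypothetical illustration, not real guidelines.
from dataclasses import dataclass


@dataclass
class Recommendation:
    study: str       # e.g. "none", "MRI lumbar spine without contrast"
    rationale: str


# Hypothetical rules keyed by (indication, red_flags_present).
RULES = {
    ("acute low back pain", False): Recommendation(
        study="none (observation / conservative management)",
        rationale="Imaging rarely changes management without neurological deficit."),
    ("acute low back pain", True): Recommendation(
        study="MRI lumbar spine without contrast",
        rationale="Neurological deficit or other red flags warrant MRI."),
}


def advise(indication: str, red_flags: bool) -> Recommendation:
    """Return the suggested first study for an order, if a rule exists."""
    return RULES.get((indication.lower(), red_flags),
                     Recommendation("consult radiology", "No rule for this indication."))


if __name__ == "__main__":
    print(advise("Acute low back pain", red_flags=False))
```

The point of the sketch is the shape of the tool, guidance surfaced at the moment of ordering, with a rationale the physician can show the patient, rather than any specific recommendation.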
We really need evidence, and many times something that you would expect to happen, if you really do a proper study, you may be surprised to find is completely the other way around, and I don't really need to give you examples of that. So whoever is here in the audience, please do some research. Try to find out what the right thing to do is for as many indications as possible, in each subspecialty. IT support is very important. When we get all that evidence, we need to implement it, and implement it in a way that makes it easy to do. People are lazy, so we really have to make it easy, both for referring physicians and for radiologists. Decision support for radiologists, at least at this point, includes three major things: disease-specific structured reporting, incidental finding guidelines, and follow-up recommendations. What is disease-specific structured reporting? We have been using structured reports in many places for quite a while; in the last 10 years, I would say, most of you have probably converted to some sort of structured reporting. However, a disease-specific structured report is different. It is really tailored to the specific disease entity that the patient has, and it needs to be structured so that it can help you and serve you as a checklist. It needs to be sufficiently detailed, not superficial, and it needs to list all the relevant positives and negatives for this disease process, including what is needed for clinical management and surgical planning. There are many examples already published in the literature, and there is a lot of actual evidence behind them showing how they help and affect patient outcomes. Pancreatic cancer staging: there are quite a few papers on that, and overall what they show is superior evaluation of pancreatic cancer, and those structured reports facilitate surgical planning. This is important. Surgeons were more confident in their decisions about tumor resectability when they had structured reporting. So they actually trust us more when they see the structured reports. This is a recent example about shoulder MRI: structured reporting improved readability, facilitated information extraction, and, again, referring physicians preferred the structured reports. Another topic is rectal cancer staging. Again, it facilitates surgical planning, which is extremely important, leads to higher satisfaction, and, again, abdominal surgeons were more confident about report correctness. Okay? They, again, trust us more if we use structured reporting rather than prose. For uterine fibroid MRI, again, very similar results. All of those studies really show very similar findings: structured reports are much more helpful in treatment planning, they are much clearer, it is much easier to extract information from them, and they are more complete. But this is my favorite, maybe because I'm not doing neuro. This is MRI for multiple sclerosis, a study that was published just in September of this year, and it showed that neurologists could understand lesion load significantly more often when reading structured as compared to non-structured reports. And most importantly, neurologists needed to evaluate the images themselves to understand the non-structured reports. So we're really not doing our job well if our referring physicians cannot understand our report without looking at the images.
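To illustrate the checklist idea, here is a minimal sketch of a disease-specific structured report treated as a data structure whose unfilled fields are the checklist. The field set below is a simplified assumption for pancreatic cancer staging, not an actual published template.

```python
# A minimal sketch of a disease-specific structured report as a checklist.
# The fields are illustrative placeholders, not a real society template.
from dataclasses import dataclass, fields
from typing import Optional


@dataclass
class PancreaticCancerStagingReport:
    tumor_location: Optional[str] = None          # head / body / tail
    tumor_size_cm: Optional[float] = None
    sma_contact_degrees: Optional[int] = None     # superior mesenteric artery contact
    celiac_contact_degrees: Optional[int] = None
    portal_vein_involvement: Optional[bool] = None
    liver_metastases: Optional[bool] = None
    suspicious_lymph_nodes: Optional[bool] = None

    def missing_items(self) -> list:
        """Checklist behavior: list the fields not yet addressed in the report."""
        return [f.name for f in fields(self) if getattr(self, f.name) is None]


report = PancreaticCancerStagingReport(tumor_location="head", tumor_size_cm=2.8)
print("Still to report:", report.missing_items())
```

Because every relevant positive and negative has its own slot, the template both prompts the radiologist and gives the surgeon a predictable place to find each answer.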
So when you think about implementing department-wide standardized reporting, there are a couple of things to keep in mind. First of all, it requires that the radiologists in your group accept standardization, agree that it is important and will reduce variation, and are happy to embark on this road. And it's not easy to do. The way to do it is really to make it easier and preferable for radiologists to use your standardized reports compared with creating, maintaining, and using their own. Consensus is essential. You need to make sure that everybody agrees, and that is very, very hard. And it requires multiple edits and audits, later on, of the structured reports that you have created. Overall, in summary, disease-specific structured reports provide better service for referring physicians and our patients. They're easy to understand, they improve physician satisfaction, and they facilitate surgical planning and clinical decision-making. And if you would like to hear more about structured reporting, there is a refresher course tomorrow at 12:30. Incidental findings. Incidental findings are really very frequent; I'm sure all of you have dealt with them. We are basically victims of our own success, of our technology advances: we just see much more now. It doesn't matter where, but mostly on CT. So thankfully, the ACR addressed this issue by creating the Committee on Incidental Findings, and those are really the fathers of radiology; you can see the names here on that first paper, published in JACR in 2010. That paper is actually the most-read article in JACR as of November 2017 and the second most-cited article in JACR as of November 2017, so I would really recommend looking at it. Most incidental findings are likely benign, and if not benign, they very frequently have no clinical significance, or very minimal significance. Despite that, they cause a lot of increased utilization of cross-sectional imaging. Why? Because neither patients nor physicians are willing to accept uncertainty, even the rarest possibility of an important diagnosis, and because, until recently, we didn't have any guidelines for what to do with these things. So what is the solution? Again, the ACR compiled clear sets of guidelines, published in JACR, for many types of incidentalomas, and that's just a number of them. And that's really great; it's very, very useful. What's the problem? Well, here's an example. This is just one guideline, a decision tree for an incidental cystic renal mass and what to do with it as seen on CT, okay, a single modality. That's for a solid renal mass. This is for a liver mass, this is for an adrenal mass, this is for pancreatic cysts, and this is for thyroid nodules. Okay, there is a lot of information there. There is no way we can remember all of those decision trees, okay? And it didn't really end there. There are newer editions of all those guidelines, and they're more detailed and more nuanced. Recently, JACR published a single paper just on pancreatic cysts, and it has one, two, three, four, five flowcharts just about what to do when you see a pancreatic cyst on CT. So this is really impossible. This is great, but it is an impossible amount of information to remember, right? So we simply need an IT solution for radiologist decision support. Follow-up recommendations, okay? That's another huge area where we have a lot of diversity, I would call it, among us.
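As an illustration of what such an IT solution might encode, here is a minimal sketch of one incidental-finding flowchart expressed as code so it could be surfaced at reporting time. The lesion type, size cutoffs, and wording are placeholder assumptions for the sketch, not the actual ACR white paper recommendations.

```python
# A minimal sketch of an incidental-finding decision tree encoded for
# radiologist decision support. Thresholds and text are illustrative only.
def incidental_adrenal_nodule(size_cm: float, benign_features: bool,
                              history_of_cancer: bool) -> str:
    """Return a suggested next step for a hypothetical incidental adrenal nodule."""
    if benign_features:
        return "Benign imaging features: no further workup."
    if size_cm < 1.0:
        return "Likely benign: no further workup in a low-risk patient."
    if size_cm < 4.0:
        suffix = "; correlate with oncologic history." if history_of_cancer else "."
        return "Consider adrenal-protocol CT or chemical-shift MRI" + suffix
    return "Consider biochemical workup and surgical consultation."


print(incidental_adrenal_nodule(size_cm=1.8, benign_features=False,
                                history_of_cancer=True))
```

The value is that the radiologist never has to recall the flowchart; the system walks it for them and drops consistent language into the report.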
We recommend further workup after our study in about 10% of our studies. And this is our data: Dr. Sievert did a study evaluating all the recommendations in our department, and this is specifically for the abdominal section. We frequently saw things like "recommend follow-up CT, ultrasound, or MR," okay? What exactly did we recommend? Why did we recommend it? Which modality? With or without contrast? And the question about contrast is actually very important, because the referring physician needs to order the study, and if they order it the wrong way, the billing will not go through and the insurance company will not pay for it. So we actually have to tell them what modality, what type of study, with or without contrast, needs to be performed. And when to perform the study, okay? "Recommend follow-up." Follow up when? Tomorrow? Three months? A year? Two years? Five years? And some studies do not even provide recommendations, even though they probably should have. And when we do provide recommendations, if you ask multiple radiologists, we don't agree with each other. Here are just a few examples of those recommendations, from real studies. "Recommend imaging of the abdomen with contrast to further evaluate left renal mass." What imaging? CT or MR? Maybe ultrasound would be sufficient. "Contrast-enhanced CT, ultrasound, or MR": so we have a modality, but we really gave them the choice. "Recommend to evaluate for possible neuropathic segmental lesion": again, when? "Short-term follow-up imaging is recommended": what type of imaging? What exactly is short-term? There are no definitions here whatsoever. So here's an example. It's just a regular case, and I'm sure even those of you who don't read abdominal imaging every day know this is an echogenic lesion in the kidney. Most likely, that would be an AML, right? We're all in agreement. What to do with it, okay? What is the next step? We have four options here. Let's try to vote. This is an AML, and there is nothing that needs to be done about it: who would vote for that? Okay, we have a few hands. No, this is an AML, sure, but let's confirm it with MRI: anybody vote for that? No. CT? Yes? Okay, a few people. No, let's follow this up with ultrasound: okay, so quite a few people want to follow it up with ultrasound. And here are the explanations for each of those answers; each of them can be right, okay? It's an AML, no follow-up necessary: really, it's a benign lesion, so why would we want to follow it up if it's small, less than four centimeters? We know that large AMLs may rupture. Now, some people would like to confirm it with CT or MR, yes, because very rarely, extremely rarely, this echogenic lesion could be an RCC. How often does that happen? Extremely rarely. CT versus MR: CT is cheaper and more available, MR is more accurate, so you have to weigh that. And then ultrasound: some people would say, well, it doesn't matter if it's an RCC, I accept that, but it's very small, and before an RCC causes issues for this patient it needs to be at least two or three centimeters in size, right? So I can just follow it with the cheapest test, which is ultrasound, just to make sure it's not growing. And the problem here, again, is that we have no consensus, okay? There are a couple of studies, pretty poorly performed, I have to say, about follow-up of these echogenic lesions. We really don't know what the right decision is here.
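To show how this variability could be engineered out once consensus exists, here is a minimal sketch of a follow-up recommendation object that will not generate report text until modality, contrast, and time frame are all specified. The field names and example values are illustrative assumptions, not an existing reporting-system feature.

```python
# A minimal sketch of forcing follow-up recommendations to be complete:
# modality, contrast, and interval must all be stated before text is produced.
from dataclasses import dataclass


@dataclass
class FollowUpRecommendation:
    finding: str
    modality: str          # e.g. "CT", "MRI", "ultrasound"
    contrast: str          # "with contrast", "without contrast", "not applicable"
    interval_months: int

    def to_report_text(self) -> str:
        if not self.modality or not self.contrast:
            raise ValueError("modality and contrast must both be specified")
        contrast_phrase = "" if self.contrast == "not applicable" else f" {self.contrast}"
        return (f"Recommend {self.modality}{contrast_phrase} in "
                f"{self.interval_months} months to reassess {self.finding}.")


rec = FollowUpRecommendation(finding="the 1.2 cm echogenic left renal lesion",
                             modality="ultrasound", contrast="not applicable",
                             interval_months=12)
print(rec.to_report_text())
```

A template like this also tells the referring physician exactly what to order, including the contrast question that determines whether the study gets paid for.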
And until somebody decides to actually do a proper study, we don't have sufficient evidence to give guidance to radiologists. Again, this is a call for action; there is a lot that can be done for the benefit of our profession. So we need more research. We need evidence-based recommendations that account for multiple things: for the cost of the follow-up, importantly, but also for the anxiety of the patient. This is something to be considered: do we really want a 40-year-old to have follow-up, whatever study it is, every year? Not really. What is the patient's life expectancy? What are the comorbidities, and things like that? And then, eventually, we need IT solutions to implement those guidelines, and again, that ends up as radiology decision support. The next component that I think is very important for how we can affect patient outcomes is patient experience and satisfaction. There are a number of studies, not in radiology but in other parts of medicine, suggesting that if patients are happy, they will also have better outcomes. Here's a study of ophthalmology outpatients, which showed that patients with higher satisfaction are more likely to return to the facility where they were treated, more likely to comply with the advice that you've given them, and more likely to recommend the facility to their family and friends. This is another study, in patients with diabetes, showing pretty similar findings: if patients are satisfied with their care and their provider, they actually follow more of the preventive practices required for diabetes care. And lastly, but very importantly, some very large studies in the surgical literature suggest that patients with higher satisfaction scores may have better surgical outcomes. This is a study of Medicare data on six common surgical procedures in nearly 3,000 hospitals, so it's a humongous data set, and it showed that patients with higher satisfaction scores had lower length of stay, lower readmission rates, and lower mortality. And this is another study showing similar results in older inpatients: patients with higher satisfaction scores had better outcomes with regard to failure to rescue, death, and minor complications. Now, these are all great studies showing that there is probably some correlation between patient satisfaction and surgical outcomes and surgical metrics. However, not all the studies agree. This is a study of urological patients that actually found limited association between patient experience and surgical outcomes. Those authors suggest that we should view patient experience as an independent quality domain rather than as a mechanism with which to improve surgical outcomes. Either way, it's important for us to know what our patients think. So, in summary, patient satisfaction is important. Patients are more likely to follow instructions for preventive care. They are more likely to follow your recommendations for follow-up, right? All those follow-up CTs and MRs. And sometimes we think that maybe surgical outcomes, or in our case procedural outcomes, will be better. How does that apply in radiology? If patients follow the prep instructions for your study, the quality of the study or procedure you do will be better. They will follow up on your recommendations for follow-up.
And importantly, they will come back to your facility in the future and will recommend it to their family and friends. This is a concept I would recommend all of you look at: a paper in JACR on the patient-centered radiology process model. What are the components that actually affect the radiology process if you think about it as centered on the patient? There are multiple components, from the staff and doctor perspective, to procedure-specific morbidity, to process-related issues. If you do implement patient-centered principles in your practice, it has been shown to increase overall patient satisfaction, decrease length of stay, and reduce medical errors. So how do you build patient satisfaction surveys? There is plenty of literature about it. But to make radiology patient-centered, we really need to know what our patients want. I myself made a few assumptions and discovered that what we think our patients want is actually not exactly what they want. Just a recent example: we opened up a beautiful new outpatient facility with free parking. Okay? We are in Boston; free parking is important. And we thought we would have plenty of patients who would want to go there. No. Patients barely wanted to go there. Why? Because they actually want to have their CT or MR done at exactly the same time that they see their oncologist, vascular surgeon, et cetera, on the same day in the same facility. And that specific group of patients doesn't really care about parking; most of them use public transport. Again, if we don't ask, we don't know. So when you build your patient satisfaction surveys, try not to cover the complete patient experience, because it's basically going to be too long, patients will not complete it, and you will not get the information that you need. It doesn't have to be similar to the hospital surveys. You really need to focus on your practice and the things that you can actually affect. Okay? If you cannot change parking at your institution, don't ask about parking; just ask about things that you can change. And leave a lot of space for comments. The most valuable results, in our case, actually came from the comment section. Okay. And the last component that, in my opinion, will affect patient outcomes is IT infrastructure. And this is a whole paradigm change in radiology: from process metrics, again, we are changing to value-based metrics. Here are just a few examples. This is an example in breast cancer. Currently we are using BI-RADS 3 rates, callback rates, et cetera, and in the future we are looking at how many patients are diagnosed at a resectable size, how many patients have an early diagnosis of recurrence, et cetera. And the same for multiple other conditions. That's for abdominal pain, again, going from turnaround time to actual time to diagnosis, something that is actually important for outcomes. Stroke, similarly, and cholangiocarcinoma in interventional radiology. So IT infrastructure is really required to improve patient outcomes. And honestly, for every component of your department that you think about, IT infrastructure will improve it. So if you do have money to invest these days, consider investing in IT, because this will actually give you the best opportunity to measure, and thus improve, patient outcomes.
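As a small illustration of that shift, here is a minimal sketch contrasting a process metric (report turnaround time) with an outcome-oriented metric (time from order to an established diagnosis). The timestamps and field names are invented for illustration; real episodes of care would be pulled from the EHR.

```python
# A minimal sketch: process metric vs. outcome-oriented metric for one episode.
# All values below are made up for illustration.
from datetime import datetime

episode = {
    "order_placed":   datetime(2024, 3, 1, 8, 15),
    "report_signed":  datetime(2024, 3, 1, 11, 40),
    "diagnosis_made": datetime(2024, 3, 4, 9, 0),   # e.g. biopsy-confirmed diagnosis
}

turnaround_hours = (episode["report_signed"] - episode["order_placed"]).total_seconds() / 3600
time_to_diagnosis_days = (episode["diagnosis_made"] - episode["order_placed"]).days

print(f"Process metric - report turnaround: {turnaround_hours:.1f} h")
print(f"Outcome metric - time to diagnosis: {time_to_diagnosis_days} days")
```

Measuring the second number is exactly what requires IT infrastructure that reaches beyond the radiology information system into the rest of the patient's record.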
So in summary, the three pillars that help us in radiology improve patient outcomes are standardization, focus on patient experience, and IT infrastructure. And thank you.
Video Summary
The Quality Improvement Symposium concentrated on organizational learning in radiology, emphasizing how to enhance diagnosis and patient outcomes. Key discussions included the concept of a learning organization, how growth in healthcare structures can complicate processes, and the necessity of a unified, adaptive approach to healthcare improvements. David Larson introduced the importance of creating a learning organization as described by Peter Senge, which involves continuous learning, shared vision, and systems thinking, among others.

Lane Donnelly discussed the transition from peer review to peer learning, highlighting the importance of a non-punitive approach to learning and feedback in radiology. He emphasized sequestering learning activities from performance monitoring, pointing out the inefficiencies in traditional peer review systems and promoting active identification and discussion of learning opportunities.

Olga Brook focused on improving patient outcomes through imaging, emphasizing three pillars: standardization, patient satisfaction, and IT infrastructure. She discussed the role of standardized structured reporting, incidental finding guidelines, and decision support systems in enhancing patient care. Patient-centered care models and feedback systems were also highlighted as crucial in enhancing patient satisfaction and compliance, potentially leading to better outcomes.

Overall, the session underscored the importance of collaborative, informed strategies and IT solutions in achieving consistent healthcare improvements, ultimately striving for better patient outcomes across radiology departments.
Keywords
organizational learning
radiology
diagnosis enhancement
patient outcomes
learning organization
healthcare improvements
peer learning
non-punitive feedback
standardization
patient satisfaction
IT infrastructure
collaborative strategies