QI: Understanding Error and Improvement in Diagnosis
MSQI3217-2024
Video Transcription
Welcome to part two of the RSNA's Quality Improvement Symposium, Understanding Error and Improvement in Diagnosis, based on the 2015 Institute of Medicine report on the diagnostic pathway. My name is Ella Kazerooni from the University of Michigan, and I'll be your moderator for this morning's session. It's my pleasure to have some outstanding colleagues here today presenting in the area of Understanding Error and Improvement in Diagnosis. And interestingly, they're not all radiologists. We have two radiologists and a pathologist. And if you think about the diagnostic pathway, our pathology colleagues are dealing with many of the exact same issues we are along the diagnostic pathway: when to test, how to test, how to communicate, and how to be an integrated part of the diagnostic pathway. So I have colleagues here today from Penn State, Timothy Mosher; Danny Kim from NYU; and Jeff Myers, a pathologist from the University of Michigan. Dr. Mosher is going to be speaking to us about Failure Points in the Diagnostic Process: Understanding Error and Improvement in Diagnosis. Dr. Mosher. Thank you, Ella. So we're going to follow up on the first session, starting now to look in more detail at the diagnostic process and to dissect it, looking at where the failure points are and what some of the failure modes are that we need to be thinking about. Although medical errors have been with us since the start of medicine, the modern era of patient safety and its focus on medical errors was really triggered by the landmark publication of To Err Is Human in 1999, and the subsequent Crossing the Quality Chasm in 2001. One of the reasons it got so much attention was the numbers it quoted: an estimated 44,000 to 98,000 patients die in hospitals each year as a result of medical error that could have been prevented. That was an extrapolation of the Harvard Medical Practice Study done in 1991, and those numbers really drew a lot of attention to the impact that medical errors were having. Just to put it in perspective, if we look at causes of death that are currently drawing a lot of attention, opioid overdose deaths are about 30,000 a year, and drunk driving deaths are about 10,000 a year. So these are huge numbers. One of the things that didn't get a lot of attention in that report was that 17% of these medical errors were diagnostic errors, and if you look at what was recommended in that initial report, there was really only one recommendation related to diagnostic error, and that was the promotion of computerized order entry. One of the authors of that report went on and did an additional estimate: if you look at the rates of misdiagnosis in autopsy studies and extrapolate, you'd estimate that between 40,000 and 80,000 hospital deaths were due just to diagnostic errors. So again, a huge number built on an extrapolation. The problem is we don't have a lot of good data on what the actual diagnostic error rate is and what its impact is on patient deaths, but we have estimates that it's quite large. So with those large numbers coming out, the attention in the early 2000s really focused on the systems errors around the medical errors we were seeing, primarily medication errors, wrong-site surgery, and infection. Not a lot of attention was given to diagnostic errors.
I think some of the reasons were really well illustrated in a perspective piece by Bob Wachter in 2010 in Health Affairs. He looked at some of the challenges of diagnostic error. One was the IOM report itself. If you're saying that 98,000 people are dying each year from medical error, you'd better have a way of beginning to reduce that. So a lot of resources were put into addressing the system errors around medication, wrong-site surgery, infection, those sorts of things. And that drew away some of the resources that could have been applied to diagnostic errors. Part of the problem we have with diagnostic errors is that they're difficult to define and measure. We can be pretty clear when a wrong-site surgery happens; I think we have a good definition of that, or of a medication error. But trying to define precisely what a diagnostic error is is a little grayer, a little more challenging. Part of the problem also is where these errors occur. Much of the diagnostic process occurs in a very fragmented ambulatory care setting, where you have health systems that aren't well integrated and medical records that don't talk to each other, so it's a highly siloed, fragmented area. It's very difficult to produce change in that system. They're multifactorial; we're gonna spend some time in this talk on system and cognitive errors. There's a long interval between when a diagnostic error occurs and when it's detected and we see its result. It's not like a medication error, where you get fairly quick turnaround and are able to identify the factors that drive it. And finally, one of the other reasons is that there's not a good business model for diagnostic errors. They frequently occur in the primary care setting, in ambulatory, mom-and-pop physician groups. They don't have the resources to put in some of the system fixes that you might have in a large hospital setting, nor do they have regulatory agencies like the Joint Commission focusing on and trying to coordinate improvement that might drive down diagnostic errors. So I think this editorial really reflected a lot of the areas where we're gonna see problems in the diagnostic process. This lack of awareness around diagnostic errors was one of the things that led to the 2015 publication of Improving Diagnosis in Health Care. And although it doesn't put a large number out there, the conclusion of that committee was that diagnosis, and in particular the occurrence of diagnostic errors, has been largely underappreciated in efforts to improve the quality and safety of healthcare. The result of this inattention is significant. The committee concluded that most people will experience at least one diagnostic error in their lifetime, sometimes with devastating consequences. So it wasn't the large number that we had before, but I think that partly reflects the fact that we don't have good, hard data to come up with a quantifiable number of the people harmed by diagnostic error. The other area the committee spent a large amount of time developing was an operational definition of diagnostic error. And they really focused on the perspective of the patient. So diagnostic error is the failure to do two things that are really important to the patient: one, establish an accurate and timely explanation for the patient's health problems; and two, communicate that explanation to the patient.
So both steps are considered part of the diagnostic process and potential failure points for diagnostic error. The committee also spent a lot of time developing a conceptual model of the diagnostic process. And this conceptual model is centered around the patient as well. They're the one person who's involved in every step of this highly fragmented system. It also includes both the diagnosis, the creation of an explanation for the patient's problems, and the communication phase, getting that information back to the patient. It also includes following up on the accuracy of the diagnosis: looking at the treatment and going back and saying, was that diagnosis right? What was the ultimate outcome? So developing a learning organization is one of the areas it really focuses on. Now, if you take this complex process, you can start to divide it into three different phases that are similar in the types of failure modes that occur in them. The first is a pre-analytic phase. This occurs before the patient really engages with the health system. First, the patient may not recognize that a health problem has occurred. Think about the stroke that's not recognized, or the patient who has chest pain, doesn't recognize that it's a myocardial infarction, and doesn't seek out healthcare. There are also problems engaging the health system. These could be delays in accessing the health system: not having adequate resources, not having insurance, communication and cultural issues that might be barriers, or a regional lack of diagnostic expertise available to patients. There is also a post-analytic phase, where there's a failure to communicate the diagnosis to other providers or back to the patient. This could be systems issues around follow-up of incidental findings, which correspond to failure points in this post-analytic phase. And then finally, we have the analytic phase, where we traditionally think of the diagnostic process, and we're gonna focus most of this session's talk there. But I think the first two are similar in that the pre-analytic phase and the post-analytic phase rely heavily on systems-based solutions. The pre-analytic phase in particular calls for public health approaches to reduce those failure modes: educating patients about symptoms that might be related to diseases, and looking at the allocation of resources to make sure those resources are available to patients. The post-analytic phase relies heavily on IT infrastructure to ensure accurate and timely communication, to verify that communication has occurred, and on tracking software to see when these failure points break down and to make sure we have redundancy in the system to correct them. The analytic phase is really quite different, partly because of the different failure modes that can occur there. Gordy Schiff published a paper back in 2009 using the DEER taxonomy, the Diagnosis Error Evaluation and Research taxonomy, a way of classifying where diagnostic errors occur. And it maps very nicely onto the NAM report model here. What he looked at was a review of 583 cases in which diagnostic error occurred, trying to determine what step along the path the error occurred in. In his analysis, 10% of these errors occurred during history taking, and about 10% occurred during the physical exam. Almost half of them occurred during the diagnostic testing phase.
The diagnostic testing phase really encompasses everything from, was the appropriate test ordered, to, was it interpreted properly, and was the communication of those results passed back to the physician? If we break that down and look at some of the subcategories, the two biggest causes of error were failure or delay in ordering the proper test, which we saw in the first session as a common theme of where this breaks down, and the erroneous interpretation of the lab or radiology reading of the test. So more falling on our plate. Those two subcategories alone accounted for 11% of all of the errors. Referral and consultation errors were relatively small, about 3%. And about a third of the errors occurred because of failure to properly integrate all of that information and come up with a proper assessment and diagnosis. Mark Graber has published a lot looking at different failure modes within the diagnostic process, and he really categorizes them into three areas. One is the no-fault error. These are patients where you just didn't have an opportunity to make the diagnosis: somebody who drops dead of coronary artery disease without symptoms beforehand, or very atypical symptoms that are easily masked or confused for another condition. Unfortunately, we're never gonna be able to eradicate these types of diagnostic errors. The others are more system-type errors. Some of those we talked about before: access to healthcare, availability of records, the interoperability of different health systems. And we can use typical systems-type approaches to address these types of errors. They can be reduced, but they're gonna require consistent vigilance and monitoring to make sure they don't recur. And finally, one of the things that's unique to diagnostic errors is the large component of cognitive errors. These can be due to knowledge deficits or perception errors. They tend to be very difficult to reduce because they seem to be somewhat idiosyncratic and very contextual; they occur in specific situations. So we can't apply a lot of the approaches we've used to reduce system errors and expect a direct impact on cognitive errors. And if we look overall at where diagnostic errors occur, and this is another paper by Mark Graber, a retrospective review of 100 cases, cognitive errors play a significant role in the overall failure of the diagnostic process. In this series, almost half of the errors were a combination of system and cognitive factors. Another quarter were cognitive factors alone. So almost three quarters of all errors had a cognitive factor involved in the diagnostic process. So if we're gonna look at where these failure points occur, we're gonna have to really begin to look at diagnostic thinking and what is driving those errors in the diagnostic thinking process. James Reason's Swiss cheese model is frequently used to look at how you address errors within a system. In systems thinking, we think of the slices of cheese as defenses and the holes in the cheese as opportunities for errors to occur, latent errors that are present and give active errors an opportunity to pass through. Now, when we're thinking about a very well-defined system process that we can engineer, we can make the system redundant, put other pieces of cheese in place, and reduce the overall risk of an error occurring in that process. That tends not to work as well for cognitive errors. These tend not to have that level of redundancy.
Oftentimes it's a single person doing the work, and these errors come and go depending on the situation. So it's very difficult to apply the same engineering approach to cognitive errors. There are some other factors that drive this. We tend to think of system errors as faults of the system. We want to be clear and transparent about them; we want to communicate them to others so that we can address and fix these problems. Unfortunately, we often think of cognitive errors as failures of the person. And there is certainly a cultural aspect, particularly as physicians, where we tend to think of that as being the art of medicine, and we're not willing to share these cognitive errors and learn from them. The other failure we make is to think that these two occur in isolation. In fact, they're very much integrated. One of the pieces that came out of the IOM report is the fact that you have to look at the work environment and how it impacts the diagnostic process. All of the diagnostic team members work within this environment, and changes in that environment can drive the performance of the diagnostic team members. If we're going to reduce the risk of cognitive errors, it's going to be very difficult to implement that at the level of the individual person or diagnostic team. We really have to look at what we can do to influence the surrounding environment that drives the risk of developing those diagnostic errors. So when we think about cognitive errors, we're really thinking about the way we think. It's an inherently human process. You're probably familiar with the work of Daniel Kahneman, Thinking, Fast and Slow, which describes the dual process theory of cognition. That's a very useful model for understanding how these diagnostic errors can occur. How do we get from a patient presenting in the office to a diagnosis? Well, early on, the patient presents, we take a history and physical, and we start to do some mental matching with an illness script. The clinician thinks, this kind of sounds like something I know, and narrows down the list of possibilities. If we don't recognize that illness script and say, I don't know right away what this is, we require a much more focused analytic process. We have to think the way we would when solving a math problem. This is called type two processing, or system two thinking. Over time, as we become more and more familiar with that pattern, we develop a more robust illness script. That repetition leads to pattern recognition, and now when patients come in, it becomes more instantaneous. In radiology, we often talk about Aunt Minnie. How do I know it's Aunt Minnie? Because I know Aunt Minnie. I've seen Aunt Minnie so many times, I don't have to think about who Aunt Minnie is. That's a type one process, in which we recognize the pattern and quickly jump to a diagnosis. It's a very efficient process that requires a lot less cognitive energy. And it's our natural preference to use type one processes rather than type two processes. Now, to illustrate this, I'm gonna give you a quick cognitive test. A bat and a ball cost $1.10 in total. The bat costs $1 more than the ball. How much does the ball cost? I'll give you a few seconds to think about that. So if you said the bat costs $1 and the ball costs 10 cents, you would be using type one processing. So there's this triggering effect.
When we say that the bat costs $1 more than the ball, you tend to split the $1.10 into $1 and 10 cents. Unfortunately, as you start to do the math, you realize this is wrong, because then the bat only costs 90 cents more than the ball. If you came up with the answer that the bat is $1.05 and the ball is five cents, you'd be correct, but that's much more of an analytic process. That's type two processing. And our natural tendency is to want to use type one processes whenever we can. So that's a general overview of the conceptual model for the diagnostic process and for thinking about cognitive errors as a major source of error in the diagnostic process. We're now gonna focus a little more on radiology itself and look at where the failure points occur in the radiology diagnostic process. A good way to begin to look at these errors is to look at malpractice claims. Any source of diagnostic error data we have is somewhat biased, and that's certainly the case with malpractice claims, but they're very useful because they tend to tell us which errors cause significant harm to patients. This is some data recently published by Dana Siegal at CRICO. CRICO has been looking at this systematically and developing a robust taxonomy for diagnostic errors for the last 40 years, so it's a very rich learning base for seeing where these processes break down and lead to patient harm. This study looked at about 30,000 cases put into the database between 2010 and 2014. About 1,300 of them had radiology as the primary service named in the allegation. And if you look at where the errors occur, about 60% of these were diagnostic errors. If we break that out, about 80% of those had an allegation of misdiagnosis or wrong diagnosis. I tend to think of those as more the cognitive error component of the diagnostic radiology process. There were also fairly large numbers of errors related to communication, both communication to the provider and communication to the patient. Those we addressed earlier; those are going to be more system-type fixes that we can look at to address those two failure points. But I want to talk a little more about the cognitive errors that occur within radiology. If we're going to reduce radiology diagnostic errors, we're going to have to look at how we address this cognitive component. If you look at misdiagnosis, that is, the finding was present on the film, wasn't appreciated at initial interpretation, but retrospectively could be identified, we think of those as perception errors. And this by far tends to be the most common diagnostic error we see within radiology practices. Most of the time, the numbers quoted are somewhere between 60% and 80% of all radiology errors being related to these perception errors. And it shouldn't be that surprising if we think about it: we're relying on the human mind to interpret these images, and the human mind was really never set up to look at static images. We evolved in a dynamic, fluid, three-dimensional world. Our eyes are very sensitive to change; they're not very good at picking up static processes, and optical illusions are a good example of that. If you're looking at this image here on the left, some of you might be seeing an object that's moving, and some of you might be seeing an object that's static. If you look at the image on the right, some of you might be seeing a duck, and some of you might be seeing a rabbit. So our mind takes that image, that stimulus, and converts it into a different perception.
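As an aside, the bat-and-ball arithmetic from a moment ago can be written out as a one-line equation. Letting x be the price of the ball in dollars, the two stated facts give:

```latex
x + (x + 1.00) = 1.10 \;\Rightarrow\; 2x = 0.10 \;\Rightarrow\; x = 0.05
```

So the ball costs five cents and the bat $1.05, while the intuitive type one answer fails the check: $1.00 is only 90 cents more than 10 cents.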
And that process is very prone to certain types of errors. We can look more deeply at where perception errors occur by breaking perception down into three steps. The first thing we have to do is scan the image. Only a small part of our eye has the high acuity needed to pick up imaging findings, so rather than have the image move, we have to scan across it to pick up that information. As we scan across the image, we decode it into spatial frequencies and begin to look for specific patterns. We're doing a constant mental matching game: what are you looking for, and are you picking up something that resembles it? That occurs very quickly, and it tends to occur subconsciously. But if we find something that seems to match, we hold it in short-term memory for a brief period of time. And if enough of those accumulate, it clicks, and we say, okay, I recognize this now as an abnormality, and we can then go on and make the interpretation. To give you a little example of this, I'm going to do another test. I want you to take a look at this image and tell me if you can see an animal within this group of lines. Raise your hand if you see one. Okay. I'm going to give you a little more history now: I want you to find a blue horse. So, a little challenging. I'll show you. Here's where the blue horse was. And if we think about those steps of perception, we can see the same approach we use when we interpret a radiograph at work in the interpretation of this image. The first thing is we had to search. In the first setting, there was very little information to work with, so mentally you had to hold the blue, yellow, and orange lines all in your head and try to come up with a match that could be any type of animal. Once you had the additional information, it was a little simpler: you only had to focus on the blue lines, and you could look for a specific pattern in the places where you might be seeing those abnormalities. This is similar to what we're doing when we're reading a film. We have a mental matching game. If I'm looking at a chest radiograph, I have a mental model of a pulmonary nodule, so I'm constantly matching back and forth, or I'm looking for an infiltrate or atelectasis. So it's a data-driven search process that we're going through. We can use eye-tracking data to begin to understand where these errors occur, and they can occur in all three phases of perception. Search errors, in an older study, accounted for about 30% of the errors, and they're probably going to increase as a lot of our data sets have more and more images and we're pressed for shorter and shorter times to look at them. So there's certainly a greater risk now that you simply aren't even looking at some of the images being presented within these large data sets. Synthesis errors are ones in which we scan over the image, something catches our eye, we might have a little longer dwell time on it, but we pass by it and continue on; we don't extract that information. It's interesting: if you look at synthesis errors and go back and say, you were dwelling on this part of the image but didn't report a finding there, and you re-highlight those areas, you can actually improve accuracy by about 15% by re-highlighting the areas the reader spent longer focusing on.
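To make that dwell-time idea concrete, here is a minimal sketch of how one might flag regions that drew prolonged gaze but never made it into the report, so they can be re-highlighted for a second look. This is purely illustrative: the data layout, the region names, and the one-second dwell threshold are all assumptions, not a description of any actual eye-tracking system.

```python
# Hypothetical sketch: flag image regions a reader dwelt on but did not report.
# Data layout and the 1.0-second dwell threshold are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Fixation:
    region: str          # image region the gaze landed on, e.g. "right upper lobe"
    dwell_seconds: float # how long the gaze stayed there

def regions_to_rehighlight(fixations: list[Fixation],
                           reported_regions: set[str],
                           dwell_threshold: float = 1.0) -> set[str]:
    """Return regions with prolonged gaze but no corresponding report finding.

    Long dwell without a reported finding is the pattern associated with
    synthesis and recognition errors, so these regions are candidates for a
    deliberate second look before the report is finalized.
    """
    # Accumulate total dwell time per region across all fixations.
    dwell_by_region: dict[str, float] = {}
    for f in fixations:
        dwell_by_region[f.region] = dwell_by_region.get(f.region, 0.0) + f.dwell_seconds
    # Keep regions over the threshold that never appeared in the report.
    return {
        region
        for region, total_dwell in dwell_by_region.items()
        if total_dwell >= dwell_threshold and region not in reported_regions
    }

# Example: the reader lingered over the left lung base but reported nothing there.
fixations = [
    Fixation("right upper lobe", 0.4),
    Fixation("left lung base", 0.8),
    Fixation("left lung base", 0.6),
]
print(regions_to_rehighlight(fixations, reported_regions={"right upper lobe"}))
# -> {'left lung base'}
```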
And finally, recognition errors, where you extracted the information, you said there's a finding here, but you talked yourself out of it: it's overlying lung markings, or some abnormality you explained away. Those are almost half of the errors. So those are perception errors; we think of those as the missed diagnoses. The other kind is where we make the wrong diagnosis, and that is a slightly different process, one in which we over-rely on heuristics, the mental shortcuts we use in the diagnostic process. We can understand how heuristics develop. Early on, as trainees, we're very unfamiliar with the interpretation of films, so we have to rely heavily on type two processing. It has a high cognitive load, it's very cognitively demanding, and we're not very good at it. So it's a high cognitive cost with a low amount of reliability. Over time, as we look at more and more studies, we start to develop these mental shortcuts, this pattern recognition that develops subconsciously. This is the development of the heuristics that we rely on heavily as expert thinkers. And as we become more experienced radiologists, we can use them to become highly accurate with a relatively low cognitive load. That cognitive load is important because it draws on our cognitive reserve; once that's exhausted, our ability to be accurate drops substantially. This expert use of heuristics relies on the fact that we're constantly testing them. We should be thinking of a heuristic as a hypothesis. If it's working well, we have an initial diagnosis. We transfer that care to somebody else. We might find that there's a change in diagnosis when additional information comes back. We get some feedback: hey, remember that case you had? That was wrong. That should lead to negative reinforcement, causing us to recalibrate our heuristics. Unfortunately, in this highly fragmented diagnostic process, it's rare to get feedback. And the lack of feedback, unfortunately, often drives positive reinforcement. So we have an erroneous heuristic that's constantly being reinforced: I haven't heard that I'm making mistakes, so I must not be making mistakes. And so overconfidence is something that tends to drive these wrong diagnoses. What we really ought to be doing, if we're using heuristics, is monitoring, reflecting on, and calibrating those heuristics. One of the approaches I use now is to recognize the fact that I'm using heuristics. As soon as an image comes up, I go with my gut and dictate a report. But then I go back and ask, okay, why was that report wrong? Where did I make a mistake, and can I really reflect on it? Likewise, you need to be following up on the accuracy of your heuristics by getting feedback on your diagnoses. Long term, I think the impact is going to come from human factors engineering. What happens in the environment drives what happens in the brain, and what we really want to ask is, what can we modulate in the environment to improve our overall accuracy? What drives us between system one and system two? There are some things we know impact it: cognitive fatigue, priming, environmental factors. But I think a better understanding of those factors is going to improve our overall ability to reduce cognitive errors. So I wanted to leave you with four take-home points. The first is that the failure points in the diagnostic process consist of both system and cognitive errors.
We shouldn't be thinking of them as independent factors; they're tightly integrated. We ought to recognize that the traditional approaches that reduce system errors are not very effective at reducing cognitive errors. In radiology, the most common failure mode we have is perception errors. Finally, I think future improvement is going to rely a lot more on human factors engineering: looking at how we can modulate the environment to improve our overall performance in the diagnostic process. Thank you. Our next speaker this morning is Dr. Danny Kim from New York University. And he's going to speak to us about a very important topic: what are the impediments and barriers to a smoothly functioning diagnostic process? Technology, liability, reimbursement, and culture. Dr. Kim. Thank you, Ella. It's great to be here with you. So in this section, we'll talk about some of the other factors, in addition to the cognitive factors, that can impair the diagnostic process. First, looking at technology, we're going to focus on health information technology for the purposes of this talk. Health information technology plays an important role in the diagnostic process. It captures the information that informs our decision making, it can shape the clinician's workflow and decision making, and it facilitates information exchange among different healthcare providers. So it's a central repository for a lot of the information. There's a broad range of such technologies used in healthcare: the electronic health record, computerized order entry, clinical decision support tools, laboratory systems and PACS, health information exchanges, and certain medical devices. When we look at how technology can serve as a barrier to improving diagnostic performance, we look first at poor usability. Usability is described as how easy something is to use, how effective it is, how efficient it is. Then there's difficult navigation. When you're at the workstation, how easy is it for you to get the pertinent clinical history, pathology results, laboratory results, even to find the referring clinician if you have a critical result? How easy is it for you to navigate through the system to find that information? Cluttered displays of information. We spend a lot of time trying to go through different sections of the medical record to find the information we need to make a diagnosis. Lack of accurate, timely, and reliable data. As we've mentioned before, when we're trying to read a case, do we have the appropriate clinical history? Where do I find that? Is it available? Especially from the emergency department, when you're trying to read a case, do you have the clinical notes available? Are the labs ready? A lot of that information needs to be readily available for you to interpret the images and make the diagnosis. Lack of evidence to aid decision making. This goes to decision support tools: what is the evidence from which those recommendations are made? That can vary from topic to topic. Poor transmission of information to other healthcare providers. In our system, where people are on different computer systems, in the hospital and outside of the hospital, how is that information transferred to all of the people who participate in the patient's care? And then finally, there's just technology downtime, whether it's maintenance or unintentional downtime, which can also impact diagnostic performance. One of the more controversial areas is clinical decision support.
So while it has great potential to support the diagnostic process, a lot of questions about validity and utility remain. There's a lot of investigation into how valuable these clinical decision support tools actually are. At this meeting, we see a lot of presentations on artificial intelligence and natural language processing, and they offer promising solutions going forward. Already in place are computer-aided detection tools for imaging; we see them in breast imaging, lung cancer detection, colon polyp detection. They're slowly being incorporated, but what is the data to support their use, and do you find them effective and easy to use? I think wider use of these clinical decision support tools in clinical practice has been hindered by the lack of usability and acceptability studies in the actual clinical environment, not in a test setting, but in actual workplaces. And I think that needs to be done. So, some of the barriers to improving diagnostic performance. Technology, while it may help in a lot of cases, can also distract clinicians from providing patient-centered care. There are a lot of anecdotes about the computer being in between the physician and the patient: the physician spends their time looking at the computer screen and not at the patient, and they can miss a lot of information from indirect communication and from observing the patient. So it can be a hindrance. Technology can also divert a lot of time and cognitive effort away from the diagnostic process into managing the technology itself. How much time do you spend actually looking at a case, thinking about the case, thinking about the differential, rather than trying to manage the dictation software and the EMR, trying to get all that information? How is that time divided? Technology can take away a lot of that time and hinder an accurate diagnosis being made. When we look at recommendations for designing health IT systems, it's recommended to use a human-centered design approach. In this approach, you balance the requirements of the technical system, the computers and software, which should be able to do what they need to do, against the socio-technical system. That's just a term used to describe the interactions between the technology, the people, the workflow, and the environment. So the technology has to fit into how people do their jobs. Does it fit the workflow? Do people find it easy to use? Is it appropriate for the environment? We have examples of order entry in our emergency department where it takes so many clicks to order a CT scan of the abdomen. It's amazing how many clicks they have to go through. So that's something that can impact the diagnostic process: they're spending more time trying to get the right order instead of thinking about their patient and the diagnosis. Next, we'll talk about medical liability. There are two main functions of the medical liability system. The first is to compensate negligently injured patients, and the second is to promote quality by encouraging clinicians and healthcare organizations to avoid medical errors. When we look at medical malpractice claims, this is a chart published in the New England Journal of Medicine. Diagnostic radiology is sort of average, in the mid-range. In any given year, about 7% to 8% of radiologists will have a medical claim against them, and about 2% or 3% will have an actual payment on a claim. So you can see we're about average.
Diagnostic errors are a leading cause of malpractice claims, and for radiology, as you've seen in previous examples, the proportion is much higher. This was some data published in Radiology by Whang, and you can see at the top that failure to diagnose accounted for the majority of the malpractice claims. So, barriers to improving diagnostic performance: I think concerns regarding medical liability inhibit disclosure of errors to patients and their families. And this is not for malicious reasons; it may just be unintentional. Some of us have limited knowledge and guidance on how to disclose errors in an effective manner. It's an emotionally charged situation: how do you disclose these errors to the patient? Sometimes we receive mixed messages from senior leadership and risk management about things we should and shouldn't say, and it can be very confusing at that time. There's also personal embarrassment, inexperience, and lack of confidence: I don't want to admit I made a mistake. So that can be a challenging factor as well. When we look at our tort-based judicial system, it really hinders improvements in quality, safety, and continuous learning. In that system, the patient and the healthcare provider or organization are positioned in adversarial roles. They're against each other rather than being collaborative. So it sets up an interesting dynamic: on the one hand, trying to collaborate to get the best care possible, but on the other hand, if there are errors, trying to defend their respective positions and point out errors on the other side. This leads to defensive medicine. Defensive medicine is the practice of ordering tests, procedures, or visits, or avoiding high-risk patients or procedures, primarily to reduce exposure to malpractice liability. We all know this occurs, and it's a barrier to providing high-quality care. It leads to overly aggressive and unnecessary care for patients. Diagnostic over-testing leads to potential patient harm. In radiology, unnecessary tests may lead to extra radiation exposure and exposure to contrast material; you can have adverse reactions to that, allergic reactions, MRI safety events. So there's a lot of potential harm from these unnecessary tests. And all these extra tests can hinder the diagnostic process by creating a lot of superfluous data. You do all these tests, you get some results. There are incidental findings, false positives, false negatives. And now, when you're making the diagnosis, you have to incorporate all that information together. A lot of that data may not be necessary and may actually hinder an accurate diagnosis being made. It contributes to over-utilization of imaging services. And I think this is accentuated by decreasing consultations with radiologists. With the advent of PACS, the referring clinician and the radiologist were separated, so there are no rounds anymore; they don't speak to each other anymore. And I think those consultations are valuable, not only in choosing which tests should be done, but in the interpretation of those tests. We mentioned earlier about lacking clinical history. Oftentimes you read a case and it has a generic history: abdominal pain, fever, rule out abscess. But when you have more consultations and can have more specific discussions, I think that helps to improve the diagnosis. And it's not just the referring clinicians; radiologists also contribute by recommending unnecessary additional imaging examinations.
They may lack some confidence, or they don't want to be the last one to take a stand, and they'll recommend additional imaging be performed. And that leads to additional problems: again, incidental findings and exposure to all the safety concerns of additional testing. When we look at the patient perspective, there's an IOM report looking at the patient side of medical liability. They found that many instances of negligence do not result in litigation. They also found that judgments are inconsistent with the evidence base. A lot of these, again, are emotionally charged situations, and sometimes the judgments are not concordant with the data or a rational evaluation of the case. There's highly variable compensation for similar medical injuries, which doesn't seem fair. And only a fraction of negligently injured patients receive compensation. So patients aren't benefiting, and we're not improving the system. Medical malpractice reform is necessary and is actually underway; I think over 30 of the US states have some sort of tort reform. But the traditional approaches being implemented for medical malpractice reform have not been that successful, either in compensating patients or in improving diagnostic performance. The two main approaches we see are limiting the amount of compensation you can receive and imposing barriers to bringing lawsuits. So while they may be reducing the overall cost of care and reducing insurance premiums, are they really benefiting the patient or the system? Medical malpractice reform could potentially make healthcare safer by allowing us to be more transparent about the medical errors being committed so that we can learn from them: use them as valuable learning opportunities, without the stigma and concern about medical liability. We should be open about them, explore them, study them, and hopefully improve future performance. Patients would also benefit: they could be promptly and fairly compensated for injuries that were avoidable. That would be a more patient-centric view. There are a lot of new approaches to medical liability. In the IOM report, they talk about communication and resolution programs; they talk about safe harbors for using evidence-based guidelines in treating your patients; and there's also discussion of administrative health courts. So we'll see where this process takes us in the future. But those are the two things to keep in mind: are patients being fairly compensated, and are we learning from our errors? Next, we'll look at reimbursement. In the US, the fee-for-service payment model is still the dominant model being employed. However, when you look at this model, it doesn't incentivize high-quality and efficient care. If you look at the Medicare fee schedules, there tends to be an imbalance: there's higher reimbursement for procedures and diagnostic testing than there is for evaluation and management services. So it diminishes the importance of the cognitive effort involved in diagnosis and gives more payment to the tangible, discrete procedures and tests we perform. It also incentivizes more diagnostic tests, since they result in more payment. Again, more tests create more results, false positives, and false negatives; that can hinder the diagnostic process. It incentivizes treatment over non-treatment. So if you see a patient, in order to get paid, you've got to make a diagnosis and you have to treat it.
Again, this creates more inaccurate diagnoses in the patient's chart. Some of them may be valid, some of them may not be, but this is what the payment model reinforces. There's also no consequence for ordering unnecessary diagnostic testing as of yet. I know there's legislation coming that's going to require some decision support and appropriateness criteria for testing, but that's not in place right now. There's no incentive for making an accurate diagnosis; there's no rebate or discount if you make a wrong diagnosis. There's no incentive for coordinating care with other healthcare providers. Improving diagnosis requires a lot of collaboration, but there's no additional payment even for consultations with radiologists about selecting the appropriate test or reviewing the results of a test. There's simply no time for that, no money for that, so it tends to be disregarded. There's no incentive for proactive outreach to patients to manage their care. When we look at incidental findings and their follow-up management, a lot of these patients require follow-up exams sometime in the future: six months, a year, two years. When you look at the systems for how patients are reminded of their follow-up appointment, to come back in a year or two years, those systems don't exist. There's no incentive to make sure those systems are in place to get those patients back. So a lot of them fall through the cracks and don't get the follow-up study they need. Another factor in reimbursement is the documentation guidelines. I think documentation was initially created to facilitate clinical reasoning, so that everyone could share their thought processes and other clinicians could see them, but it's being usurped by the billing department to satisfy billing requirements. And so we often see a burdensome level of detail that is often irrelevant to patient care. When I go into the patient chart, even the H&Ps now are pages and pages of stuff that really doesn't impact me, that I don't need to know or see, but it's all in there for regulatory and billing purposes, and it can impact care. I spend more time trying to find the information I need to interpret a study than I do looking at the films and thinking about them. Repetitive and irrelevant data in the medical record: just to review a case, it's amazing the number of charts you have to go through, the number of lab reports and pathology reports, to get the information you need. You spend a lot of time going through all this repetitive and irrelevant information. There are new payment models and care delivery models being evaluated and implemented, and it remains to be seen how effective they will be. These include shared savings programs, capitation or global payments, bundled care, accountable care organizations, patient-centered medical homes, and Medicare's value-based purchasing, which is a pay-for-performance program. A lot of these programs are being implemented and tested, and they have great potential to reduce diagnostic errors and improve performance. However, there is some concern about whether the incentives might lead clinicians and healthcare organizations to reduce the use of appropriate testing and clinician services, which could inadvertently lead to greater diagnostic errors. So we'll see which of these models works in the future. I think continued investigation and evaluation is needed to see what the outcomes are down the line.
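Coming back to the incidental-findings problem above: here is a minimal sketch of what such a follow-up tracking system might look like. Everything in it, the data model, the field names, and the print-based reminder, is a hypothetical illustration, not a description of any existing product.

```python
# Hypothetical sketch of an incidental-findings follow-up tracker: record the
# recommended follow-up at report time, then surface patients who are overdue.
# The data model and the print-based "reminder" are illustrative assumptions.

from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class FollowUp:
    patient_id: str
    finding: str          # e.g. "6 mm pulmonary nodule"
    recommended: str      # e.g. "chest CT"
    due: date             # when the follow-up exam should happen
    completed: bool = False

def overdue(followups: list[FollowUp], today: date) -> list[FollowUp]:
    """Return follow-ups that are past due and not yet completed."""
    return [f for f in followups if not f.completed and f.due <= today]

# Example: one follow-up long past due, one not due for six months.
tracker = [
    FollowUp("MRN001", "6 mm pulmonary nodule", "chest CT",
             due=date(2018, 11, 1)),
    FollowUp("MRN002", "renal cyst", "ultrasound",
             due=date.today() + timedelta(days=180)),
]
for f in overdue(tracker, today=date.today()):
    print(f"Remind {f.patient_id}: {f.recommended} for {f.finding} is overdue")
```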
Next, we'll talk about culture. Organizational culture is the norms of behavior and the shared basic assumptions and values that sustain those norms. In lay terms, it's how we do things here, or how it's done here; that's a simple way of defining it. And this can be a barrier to improving diagnostic performance. Having a punitive culture can be very harmful to improving diagnosis. And this doesn't just involve formal discipline; it can be admonishments from your colleagues, placing blame on individuals instead of looking at the system and trying to improve. That can be very damaging when you're trying to improve diagnosis. Hierarchical attitudes: these are already in place in our structures. In the academic environment, we have assistant professors, associate professors, full professors, section chiefs, vice chairs, chairs. How comfortable do you feel disclosing an error that maybe your colleague made, and discussing that error? A lot of these people are responsible for your promotion, the advancement of your career, mentoring you, publishing papers, getting on committees. It can be a challenge when it's your superior that you're trying to have a discussion with about medical errors. And this is the same in private practice as well: you have partners, associates, part-time people. How easy is it to have those conversations about medical errors with people who can determine your future? Then there's the presence of subcultures. Within a large organization, you can have pockets of people or groups that don't share the same values or have different values. These subcultures can exist in departments, or at a smaller scale in divisions or floors, or even in individual people who express a different set of views or opinions than the organization, and this can impact the overall performance of the organization. As was stated previously, there's the lack of open communication and peer feedback. Is that possible? Is that what happens in your organization? Without that open communication and feedback, it's very challenging to improve your performance; those are requirements. Discounting the role of teamwork and collaboration: diagnosis is not a one-person endeavor. It really takes collaboration among all the team members involved in the patient's care, and we shouldn't discount that. Something more insidious can be the belief in, or acceptance of, the inevitability of errors. We say, oh, we're in healthcare, it's a complex field, errors are gonna happen. Having that belief can be very damaging when you're trying to improve performance. Compare it to other high-risk industries, like the airline industry or the nuclear energy industry: they really look at errors and try to minimize them as much as possible. We have to have that same approach, that same culture. We should look at any error and try to avoid it in the future, and not accept a certain amount of errors as being okay. If you're the patient who received that error, it wasn't okay for you. I like this quote by Peter Drucker, the management guru, who stated that culture eats strategy for breakfast. You can have the best strategy, plans, and technology, but if the culture doesn't support them, you're probably gonna be unsuccessful. So you shouldn't underestimate the importance of culture, because it can make up for less than optimal strategies.
But if you don't have that culture, a lot of your improvement efforts are gonna be doomed to failure. So, barriers to changing a culture; there are a lot of them. System inertia: there are people in departments or organizations who have been there 10, 20, 30, 40 years. They say, this is how we've always done things here, and this is the way it's going to be done in the future. And new members, as they come into the organization, are sort of indoctrinated into that: this is how it's done here. That's a lot to overcome when you're trying to change the culture. Loss of the benefits of the current culture: there may be good aspects of the current culture that you don't want to minimize or remove, so there are concerns that you may lose the positive aspects of the culture that exists. There's uncertainty regarding which approach to improving culture is best: you can't decide between different strategies, and as a result you don't do anything and you don't change the culture. Change fatigue: I think this is responsible for a lot of the physician burnout that we see. When things are constantly changing, it's hard to get into a rhythm or a pattern, and it can cause a lot of fatigue for clinicians. That can be an impediment to changing culture. Failure to convey the need for change. Poor communication of small successes, which can serve as motivation for the organization to continue on the road to improved performance and changed culture. Inadequate identification, preparation, and removal of barriers to change. And insufficient involvement of leadership and management; that's really critical when you're trying to change a culture. One model to strive toward is a just culture. A just culture balances learning from the medical error with personal accountability. In this culture, the response to a medical error is based on the circumstances of the clinician's actions: what was the environment, what was the scenario that led to the error? It distinguishes between human error, at-risk behavior, and reckless behavior, and the response is contingent on that. For human error, like making a typographical error in a report, you should console the clinician. For at-risk behavior, maybe copying and pasting into a report, you should coach the clinician, because that behavior is not desirable. For reckless behavior, where you're just ignoring safety policies, like giving gadolinium to a pregnant patient, the correct response would be to discipline the clinician. So the circumstances of the clinician's actions inform the response. The IOM recommends that healthcare organizations adopt policies and practices that promote a non-punitive culture that values open discussion and feedback on diagnostic performance as a way to improve it going forward. So, in summary: in technology, deficiencies in the health IT system can divert time and cognitive effort away from the diagnostic process. In medical liability, our current system does not promote fair compensation for avoidable patient injuries, and it hinders clinicians' efforts to disclose their errors and learn from them. Our current dominant fee-for-service payment model does not incentivize high-quality and efficient patient care, and reform is necessary. And organizational culture can have a critical role in efforts to improve diagnostic performance.
And these are some of the references that were used in the presentation. Thank you. Our next presentation is by Dr. Jeffrey Myers, who's a pathology colleague of mine at the University of Michigan. It's great that he's accepted to present to us today on the roles of teamwork, technology, and culture in improving diagnosis, particularly in the area of patient and family-centered care. And as I mentioned, diagnostic radiology and pathology are very similar disciplines in this field of improving diagnosis; we have a lot to do and learn together. Jeff. Thank you, Ella, and thank you for the opportunity to be here. I remember when I was a resident, which was a long time ago, at Washington University, there were only two regular conferences at which our attendance was mandatory, and one was a conference that we had with radiology. In those days, you'd actually bring these things called films, and we'd put them up on light boxes. And it was clear that we share many values and that our roles in the value chain for healthcare are similar. So I'm delighted to be here. It's been a great session this morning; I've enjoyed it. I'm in awe of your conference, by the way. My taxi driver said that there are 55,000 people here. The largest meeting of pathologists in the world gets up to about 5,000. I had no idea there were so many of you. So thank you for letting me in. I'd like to focus on patient and family-centered care, and I suspect many people in medicine would think that a radiology meeting, and a talk delivered by a pathologist, would be the least likely place to be talking about patient and family-centered care. And yet I would argue that it's been the consistent theme through everything I've heard this morning, and that you and we are in an ideal position to transform the patient experience from a platform of diagnostic medicine. I'd like to think about that in three parts. First, I'll talk about finding your path, really our path, to patient and family-centered care: how did that happen? Then I'll talk about the work that we've begun and the importance of focusing on why before you get to how and what. And I want to finish by talking about restoration of purpose in general. In his 2015 IHI keynote address, Dr. Berwick talked about medicine as unfolding in a couple of eras, with a third upon us, although it's not quite clear what it is. Era one was the longest stretch of history, from Hippocrates to the 70s, maybe the 80s. That was really an era of independence. It was an era of trust and prerogative, an era of inquiry and learning and mentorship and research. And it was really the ascendancy of the profession. Eliot Freidson, a medical sociologist working right here at the University of Chicago, said, here's the thing about a profession: a profession is a work group that reserves to itself the right to judge its own quality. And I would argue that we've done that for a long time. For a long time, when people wondered about the quality of medicine, the response was, trust us, we know. People say that with the quality chasm trilogy that has been featured in virtually every talk, the expectations of the public have changed. I would argue they never changed; we just got outed. They didn't know. And that was era one. Era two is the era in which we stand today. It's an era of accountability and scrutiny, I would say unremitting scrutiny. It's an age of measurement. The number of metrics that apply to your quality program has never diminished; in era two, they've only grown.
It's an age of incentives, and an age of skepticism and doubt, which do not reflect changes in expectation, but rather the sharing of information. And we've seen throughout the talks this morning pictures of the trilogy of reports that changed the rules of engagement. I remember when the first was released in November 1999. When I would speak to pathologists and ask how many had read it, nobody raised their hands. When I talked to hospital administrators, everybody raised their hands. And this report was brilliant. The authors said the goal of this report is to break the cycle of inaction, our inaction. And they anticipated all of the excuses, which was heartbreaking, really. They said, despite the cost pressures, the liability constraints, your resistance to change, and other seemingly insurmountable barriers, it's simply not acceptable for patients to be harmed by the health system they look to for comfort. And while we were still reeling from this information, and by the way, it penetrated all fields, I began to see stories about pathology in Better Homes and Gardens. In August 2008, my father called me and asked about the latest issue of Reader's Digest. I asked, who reads Reader's Digest? He said, 80-year-old men living in South Dakota. And he wanted to know about something called floaters, when the wrong tissue gets on a slide and you attribute that diagnosis to the wrong patient. And I asked, how in the world did you learn about floaters? We didn't want anybody to know about that. He said, it's right there in Reader's Digest. Now, that may make you feel safe, because the number of 80-year-old men in South Dakota who now put us at risk is small, but I was horrified to learn that there are more readers of Reader's Digest than of the Wall Street Journal, Fortune, and Businessweek combined in households that earn over $100,000. This started here. And while still reeling, in 2001 we got the rest of the story. The Institute of Medicine wanted to make it clear that a focus on safety could not come at the expense of effectiveness, which we've heard a lot about this morning, of being patient-centered, timely, efficient, and equitable. At the time, I was working at the Mayo Clinic in Minnesota, and everybody breathed a sigh of relief as they went through the checklist and thought, well, at least we're doing most of these. But we didn't understand what patient-centered meant. And now this morning, we've talked about the third in the series, in which another brilliant committee made it clear that this would stand no more. Don Berwick would say that this thinking has, in era two, driven our obsession with the tools of improvement, which is a great thing; I'm not disparaging it. Every one of your institutions has some version of the Juran trilogy, which starts with quality control, right? That's replacing the tire if the tire goes flat. Quality improvement is understanding why it went flat and responding to that by building a better tire or a better car. And innovation, what Juran called quality planning, is about building something else altogether, like an airplane, so you don't have to worry about the tires at all. Don Berwick says this machinery really is about the manipulation of contingencies, with the idea that we can somehow elicit the care that we dream of. And as one of the chief authors of era two, Don Berwick says, we got it wrong. What we got wrong is the balance of the Juran trilogy.
We have become lovers of the tools of scrutiny, making massive investments in control at the cost of improvement and innovation. And in most organizations, it feels like this, right? I mean, when was the last time your chair or your boss, whatever their title, said, listen, we'd like fewer metrics; we'd like to know less about you so that you can spend more time doing what we pay you to do? Anybody had that happen? Don Berwick says one of the ways to move from era two to era three is to have a national agreement for a 50% reduction in the metrics. Sounds awesome. Because what we've learned in era two is that we cannot possibly inspect our way to excellence. But what then is the path? Because the stakes keep feeling higher. We're here today to talk about diagnostic error, but that's really about more effectively managing health at a population level. It's really about transforming the experience of care in a good way, and doing all of that while reducing the cost. It makes you breathless, doesn't it? I mean, it feels hard enough already. How do we do that if the machinery with which we've become so comfortable in era two doesn't apply? And how does that apply to those disciplines that have been comfortably ensconced behind the curtain, removed from the patients we touch? Radiology is different. I mean, patients come to you, right? You put them in your machines, but mostly you don't know them. Am I right? I mean, they don't even come to see us, so it's easy not to know them. We began thinking about this in the fall of 2013, when we started a strategic planning and leadership development initiative that we eventually called Michigan Innovative Personalized Patient-Centered Pathology, or MIP3. And we did that mostly because everything at the University of Michigan requires a catchy acronym. So we started with the acronym, and then we developed the words. We developed what felt like a compelling vision of pathology that looked different from the road we were on. And in October 2014, we shared that message with all of our staff and faculty, and we invited them to some small group meetings, because we wanted to hear from them how far off we were and how they were thinking about our future. 211 of 217 faculty, staff, and trainees showed up, so they wanted to talk. When we finished our small group sessions, we met again, and we told them what we thought we had heard, six themes. The first theme, the one we heard the loudest, was surprising to me. What we heard from our faculty, staff, and trainees was: part of our problem is we've forgotten why we're here. Patients and families are at the center of everything we do. And what we heard was, our job is to provide them access to understandable information on their diagnoses, in a way that's timely, as part of a multidisciplinary team. And by the way, we're not ready to let go of era two; none of this should come at the cost of the continuous improvement at which we've become pretty good, if we're going to preserve current-state operations. And finally, we need to quit focusing on everything that we can do when it comes to modern technology, and have a lot more conversations about what we should do in putting that technology to work for patients. And we took these themes, and we began to do some projects to understand what it might look like for pathology to operate in this fashion.
We put a pathologist in a weekly multidisciplinary lymphoma clinic so that they could be available to patients to help them understand the diagnosis that was driving their care. It was wildly popular, but harder to sustain in a fee-for-service environment. We put a social work team in the Wayne County Medical Examiner's Office, in Detroit, which is part of what we do, because we realized that we weren't responding to the needs of all of the families who come to that place for devastating news and to identify loved ones who have often perished in horrific ways. We began meeting with families who had lost young children, whose children had come to us for an autopsy, because we wanted to understand how they wanted to get the news. And their response has been, we want to get the news from you, because it feels like you know what we want to learn. As we were on this journey, we met with Linda Larin, the Chief Administrative Officer for our Cardiovascular Center. We wanted Linda to meet with our MIP3 group because she's edited this beautiful book of stories from patients and providers, and we wanted to learn about that. And she connected us to something called patient and family-centered care; she said, I think the work you're doing is this work. If you go to their website, the Institute for Patient- and Family-Centered Care says that patient and family-centered care is an approach to the planning, delivery, and evaluation of health care that is grounded in mutually beneficial partnerships among health care providers, patients, and families. It's really redefining the relationship to be a collaborative one. Now, when you talk to providers about patient and family-centered care, the most common response I hear is, oh yeah, I've done that my whole career. And with all due respect, you haven't. You've done great things to and for patients, but the difference is that patient and family-centered care is about doing it with them, and that looks different. And it's based on the recognition that patients and families are essential as allies when it comes to quality and safety, not only in their direct care, but in our quality improvement conversations, in conversations about safety, certainly in any conversations about diagnostic accuracy, and also in our research, in designing our facilities, and in developing our policies. Articulated in its simplest form, patient and family-centered care is about working with patients and families, rather than just doing to and for them. And I have to tell you, for a pathologist, this is frightening. We're not even sure what it looks like. As we were having these conversations, one of our very bright, well-intended faculty came to me, visibly agitated, and said, I got a phone message. I didn't know where this was going. He said, I got a message from a patient. I said, that's great. He said, he wants me to call him back. I said, that's great. He said, what should I do? I thought, wow, we have a long way to go. I said, I don't know; why don't you call him back? He said, I'm not his doctor. I asked, did we send him a bill? He said, yes. I said, I think you're his doctor. See what happens. He called him back, and then he came back to tell me the story. And it made him feel like a doctor to have the conversation. But patient and family-centered care is not some touchy-feely thing to make us feel better. It's a hard-nosed strategy to do hard things in a way that makes sense. Don Berwick says it's actually the most direct route to achieving the Triple Aim.
One of the common tactics in developing a patient and family-centered care program is building a patient and family advisory council, which in some institutions is an enterprise-level activity. In our institution, it tends to reside at the level of departments or centers. We had 30 of them at the University of Michigan, and we now have one in pathology. We tried to learn how to build a patient and family advisory council before we launched. One of the things we were told was to keep it small, probably no bigger than 15, tops. We started at 42. Our thinking was, if you throw the doors open, how many people are going to walk in to advise pathologists anyway? We got 42, and we didn't want to turn anybody away. Fundamental to why these councils are effective are the patient and family advisors, here illustrated by arrows. They're really the folks who connect us to working with, rather than doing to and for, patients and families. Juliette Schlucter, one of the earliest patient and family advisors in the country, said, when advisors and healthcare professionals are guided by mutual respect and a thirst to understand, good things happen. One of the early lessons we learned was that, to do this right, you need to lean on your patient and family advisors. We also learned that to do this right, at least in our place, you need to partner with institutional resources, which for us means Molly White, on your left, and Kate Balzer, in the middle, who are from our enterprise-level patient experience office and run the adult-side patient and family-centered care program. They taught us about the principles that should really be the guideposts for the work that we do: dignity and respect; sharing information that is useful and affirming; participating with patients and families; and collaborating with patients and families. It's really a switch for a diagnostic discipline like pathology from focusing only on what is the matter with you, which remains an important question, to pausing long enough to also ask, what matters to you? Because not everything that matters to us matters to them. One of the things that we've struggled with as we begin this journey is that pathologists are wired as problem solvers, and so are you, and it's very hard to really work with patient and family advisors as opposed to bulldozing them with our own solutions, because we think we know what the answers are, but we don't. And information is key; we've heard about it through all of the talks today. Patient and family-centered care is about learning what matters to families and patients, not only the information itself, but how they receive it, when it's available to them, and who should be a communicator and a participant in the conversation. They should be participants in their care and decision-making at any level they choose. And it's about collaborating not just in their daily care, but at all levels of your healthcare system, whether it be facility design, education, or research. About a year ago, as we were trying to wrap our heads around all of this and thinking about the guideposts that we would use for identifying the work that was important, we came to these. We wanted, as an advisory council, to learn to share information generated in our laboratories with patients and families in ways that are affirming and useful. We wanted to help patients and families understand that we were part of their journey, and to understand how we could do it better.
And we wanted to learn what it looked like to build those mutually beneficial partnerships so we could roll up our sleeves and get the job done. About a year ago, we took a large group, 20 people including three of our patient and family advisors, to San Antonio for a two-and-a-half-day seminar to understand the work. This wasn't the actual seminar; this was at dinner after the seminar, lest you all start taking out your phones and signing up immediately. And we learned from that experience that taking advantage of networking opportunities in your organization makes you stronger and better when it comes to this new territory. So that's how we got there. That's how pathology at the University of Michigan started thinking about our place in the universe of patient and family-centered care. I think when we came home from San Antonio, what we understood was that it was important to just do something, because this work can be intimidating. And the first thing we wanted to do was develop a consistent message that we could use in communicating to others why we thought this mattered. What we landed on was that we're doing this work because we want to build a patient and family-centered culture of diagnostic medicine and personalized pathology. We think what we do matters. This guy named Simon Sinek, perhaps you've heard of him or read some of his books, he's a business consultant, speaker, and author, has this notion that knowing why you do what you do is more important than knowing how you do it and what you do. Apple in the 80s and 90s was not very different from other computer companies in terms of how they did what they did and what they did; the difference was in why they did it. They believed they could change the world. We believe we can change the patient experience, and we're going to do that by building mutually beneficial partnerships. And, by the way, we think that will change culture. In terms of the what, we want to work with patients and families to understand our role. We want to propagate knowledge in a language they can understand. We want to use a variety of multimedia and communication pathways to disseminate information to them. We want to increase their access to educational and professional resources. And we want to implement multidisciplinary initiatives to provide coordinated patient-centered care. We wanted to add patient stories to everything we did, although the problem is, when you go to the cupboard to pull out the pathology patient stories, there ain't much there. So we sent an email to StoryCorps and asked, can you help us? They said they could; they have a program called Legacy Recording Partnerships. It's taken eight months, but we're about to sign a contract so that we can do that at our institution. We've added patient advisors to our design teams. We've added them to our quality committees at both a local and a national level, and to our diversity, equity, and inclusion work. And we've invited them to help us design our new space. We're working with a patient so that we can tell her story in a way that connects other patients to the role of the laboratory in their care. And we're bringing patients to all of our quality meetings so that they can tell us what it means to them. In feedback from our staff, they said things like, I greatly appreciated her candor, her, in this case, being Michelle Mitchell, a breast cancer survivor.
I felt for her, because it must be difficult to get up in front of all of us and talk about feelings and experiences of such a personal nature. And another said, wow, this is really valuable. So as we celebrate our one-year anniversary, we've learned a lot. We've learned about changing culture, and the National Quality Forum points out that culture is a fundamental domain when it comes to improving diagnostic quality and safety, because patients, their families, and their providers are key members of our diagnostic teams. And I think this is really what era three looks like. Era three is the restoration of purpose, which I would argue, for you, is a patient and family-centered culture of diagnostic medicine and personalized radiology. It's learning to get it right not by just doing to and for patients, but by doing it with them, and coming out from behind the curtain and learning what matters to them. Maya Angelou said it this way: they'll forget what you said, they'll forget what you reported, they'll forget what you did, but they'll always remember how you made them feel. Thank you for your attention.
Video Summary
The RSNA's Quality Improvement Symposium emphasizes the need for understanding and improving diagnostic processes, drawing from the influential 2015 Institute of Medicine report. Speakers at the symposium outline the challenges faced in diagnostics, including cognitive and system errors. Ella Kazerooni highlights the importance of engaging both radiologists and pathologists in this endeavor, since both fields encounter similar diagnostic pathway issues.

Dr. Timothy Mosher discusses the historical context of medical errors, using the landmark "To Err Is Human" report, which highlighted alarming rates of medical errors, including diagnostic errors. Mosher examines failure points within the diagnostic process, noting that cognitive errors are particularly challenging due to their idiosyncratic and contextual nature.

Dr. Danny Kim addresses non-cognitive barriers to improving diagnostic performance, such as technology, medical liability, reimbursement models, and organizational culture. He highlights the drawbacks of fee-for-service models, which do not incentivize high-quality care, and the repercussions of medical liability, which often inhibit transparency and learning from errors.

Dr. Jeffrey Myers from the University of Michigan concludes by advocating for a patient and family-centered approach to diagnostics. He underscores the importance of incorporating patient advisors in healthcare processes to create mutually beneficial partnerships and foster a culture that values patient involvement in their care.

Collectively, the symposium emphasizes a holistic approach to improving diagnostics through addressing errors, leveraging technology, and embracing patient-centered care models to enhance safety and quality in healthcare.
Keywords
Quality Improvement Symposium
diagnostic processes
Institute of Medicine report
cognitive errors
system errors
radiologists
pathologists
medical errors
To Err is Human
non-cognitive barriers
patient-centered care
diagnostic performance