Making the Most of Radiologist Peer-Learning Tools
R1-CNPM20-2021
Video Transcription
So let's jump right in. In the next several minutes we'll be discussing the following: the role of the conference in the peer learning model, how to achieve buy-in, how to set expectations, preparing the conference, moderating the conference, post-conference documentation, and then I'll give you an example of how we do it at Lahey.

Peer learning is a model of continuous feedback, learning, and improvement that is well recognized at this point as a method to reduce error in radiology. At its core, peer learning depends on a work culture where errors can be openly identified, discussed, and addressed. In the past several years, adopters of peer learning programs have successfully launched programs at their respective institutions and shared their experiences. And although variation exists, common factors include establishing a system for broad and varied case identification, defining a process of case review and constructive peer feedback, presenting learning opportunities periodically at peer learning conferences, and then translating those learning opportunities into process improvement and regulatory compliance. As the most public-facing cornerstone of any peer learning program, the conference bears the responsibility of initiating and perpetuating the just culture that allows the program to thrive.

So as we said, peer learning really depends on broad participation, both to identify sources of error and to prevent those same errors from happening again. In order for radiologists to commit their time and trust to the process, they're going to need context. Having a clear understanding of the background and evolution of peer learning helps achieve those initial steps toward acceptance and participation, and this is especially important for those currently or previously involved in peer review.

So just to go back a little bit, the focus on quality improvement in radiology is now over two decades old. It was really launched in 1999, when the Institute of Medicine released To Err Is Human, a groundbreaking account that highlighted the societal impact of medical error and how this was perpetuated by a culture of silence. The ACR recognized this immediately and created the RadPeer system, which you are all familiar with, and made it available to ACR members in 2002; it's shown here on the right in its original form. The idea of RadPeer was great. It sought to identify diagnostic errors by having radiologists score a random sample of studies with comparison imaging and then rate the quality of the prior interpretation via a numeric scoring system. Over time, and despite revisions, however, it became clear that RadPeer did not achieve the primary goals of identifying, addressing, and preventing those errors. The random case sampling did not help to identify cases with the most learning potential or opportunities for system improvement. Any radiologist will tell you that these instead arise from clinical feedback, interdisciplinary conferences, and M&Ms. Reports also cast doubt on the accuracy of peer review, citing high inter-rater variability. Radiologists' perception of the system was also poor, with many feeling that they were just doing it to meet hospital and regulatory requirements, i.e., just checking the box. This combination of factors was summarized in a landmark article by Larson et al in 2016, and to many it seemed that peer review had evolved into a vicious cycle of distrust and resentment, undermining collaborative learning efforts.
So in response, the ACR and several radiology quality leaders advocated moving away from a merely score-based peer review system toward a collaborative, education- and improvement-oriented peer learning program. In 2017, a few early adopters successfully launched programs, including ours at Lahey. And since then, the movement has been growing. Most recently, just this past September, the ACR approved an alternate pathway for accreditation, clearing the last remaining hurdle for peer learning in many centers. Nelly's going to be talking about this in a little bit.

So at this point, you've established the background. Colleagues will understand the decision to move in the direction of peer learning. You've established the why. Now let's look at the other logistics: the how, the where, the when, and the who.

How? There are several systems that are commercially available and can be integrated into PACS. These allow radiologists to enter a case by adding a quick note about why the case is being submitted, clipping a representative image demonstrating the teaching point, and then identifying the case as either a good call, a discrepancy, or a point being raised for group discussion. These case entries are then accessed by a moderator, who can select the cases to be shared, add relevant teaching points, and provide feedback to the interpreting radiologist.

Where are these conferences held? Well, anyone who has moderated a peer learning conference can tell you that it's really ideal to have in-person communication. It helps to foster the collaborative environment, and it's easier for the moderator to read the room and see when a conversation regarding a particular topic or case is needed. That being said, remote options have been necessary this year and will likely continue, to maximize participation with people on different schedules and at different locations.

When? It's really helpful to have regularly scheduled conferences so that moderators and attendees can plan accordingly. Depending on your institution, these may be subspecialized. Some institutions advocate monthly conferences; at Lahey, we've settled on quarterly specialty and departmental conferences. Attendance at each conference should be recorded, with a minimum annual attendance required per attending physician. Ours is two per quarter, for a total of eight per year. CME credit can also be offered to encourage attendance and to underscore the educational mission.

Who should attend? Should it be limited to attendings or staff radiologists? Should residents and fellows also be invited? Because peer learning is focused on valuable teaching points, there's no reason it should be limited to staff radiologists. At academic institutions, these learning points will be beneficial to residents and fellows as well. Survey responses from trainees have demonstrated that a majority believe peer learning should be mandatory and incorporated into the curriculum, and that their participation as residents and fellows will increase their odds of later participation as staff radiologists. Many programs have started setting up separate resident peer learning programs, and we've done the same.

So, let's talk about the role of the moderator in preparing the conference. The moderator oversees submissions, organizes the cases, and prepares them for conference.
He or she will reach out to individual radiologists as needed to provide feedback. This can be an automatic message sent through the peer learning software, although phone calls are helpful in higher-profile cases to allow the radiologist to be prepared for the discussion and potentially provide some context. Many cases are not simple perceptual misses but instead reveal a variation in practice patterns, and for these cases, literature review is needed to establish best-practice standards for the group. Conferences can be theme-based, and those are particularly popular. We recently reviewed the appearance of dural arteriovenous fistulas on non-invasive imaging and discussed our strategies for when to recommend diagnostic angiography as the next best step. Instead of scoring cases, they can be categorized, and our software allows us to categorize the errors as perceptual misses, interpretive errors, errors of communication, or systems issues. All of this does take a little bit of time, so that needs to be considered when allocating the resources of the group.

A few tips for the moderator. The moderator sets the tone to maximize congeniality and learning potential. It's really helpful to present a mix of misses or learning opportunities, good calls, and points for discussion, both for variety and for group morale. In essence, many times these cases overlap: someone's good call can be a previous reader's miss. At these sessions, pitfalls and mimics are also emphasized, and strategies to prevent such errors can be shared. In a mixed-specialty practice, subspecialty experts can be called on to present their strategies for arriving at the correct diagnosis. Documentation of discussion points can be included as the teaching points. And as participation is a key metric in regulatory requirements, attendance should be recorded. Conference notes and teaching points can be submitted as meeting minutes. At times, a question is raised during the conference that requires further literature review; this should be performed as quickly as possible to close the conversational loop.

Let's just go through an example of how we do this. At Lahey, we follow this cycle: a radiologist submits a case; the case is reviewed by the section chief, who decides if it's a good case for conference; feedback is provided to the interpreting radiologist; and the moderator prepares and runs the conference. We use a Primordial software tool that I can display here. Essentially, if I'm reading a CTA and I see that there is an ACom aneurysm here, I look at why the CTA was ordered and see that this was identified on the preceding head CT. So I can say, okay, this is a pretty good call. I'd like to let the reader of the head CT know that this was a positive finding and that I'm submitting it as a good call. So I pull up this software box, and it essentially allows me to submit the case into the peer learning software. Once the software is launched, it's really a two-step process to submit the case. First, I select the case in which the finding was made, identify it as a subspecialty neuroradiology case, and indicate that I want to give feedback to the attending radiologist; I could also select a resident here. The next step is very simple: I enter a note on why the case is being submitted and clip a representative image showing the teaching point. I can then categorize it as a learning opportunity, a good call, or a case for discussion.
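To make the submission workflow above concrete, here is a minimal Python sketch of what such a case record might contain. The field names, categories, and example values are illustrative assumptions drawn from the description above; they are not the actual schema or API of the Primordial tool.

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum
from typing import List, Optional

class SubmissionType(Enum):
    LEARNING_OPPORTUNITY = "learning opportunity"
    GOOD_CALL = "good call"
    DISCUSSION = "point for group discussion"

class ErrorCategory(Enum):
    PERCEPTUAL = "perceptual miss"
    INTERPRETIVE = "interpretive error"
    COMMUNICATION = "error of communication"
    SYSTEMS = "systems issue"

@dataclass
class PeerLearningCase:
    accession_number: str                 # study in which the finding was made
    subspecialty: str                     # e.g., "neuroradiology"
    interpreting_radiologist: str         # who receives the feedback
    submission_type: SubmissionType
    note: str                             # why the case is being submitted
    image_refs: List[str] = field(default_factory=list)   # clipped key images
    error_category: Optional[ErrorCategory] = None         # added later by the moderator
    selected_for_conference: bool = False
    teaching_points: str = ""
    submitted_at: datetime = field(default_factory=datetime.now)

# Hypothetical submission mirroring the aneurysm good-call example above.
case = PeerLearningCase(
    accession_number="CTA-12345",
    subspecialty="neuroradiology",
    interpreting_radiologist="attending_reader",
    submission_type=SubmissionType.GOOD_CALL,
    note="Aneurysm prospectively identified on the preceding head CT; confirmed on CTA.",
    image_refs=["key_image_1.png"],
)
print(case.submission_type.value)
```

The two-step submission described in the talk maps onto this record: selecting the study and recipient fills the identifying fields, and the note, image clips, and category complete the entry for the moderator.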
I'm going to switch hats now and enter my moderator phase. When I enter the software as a moderator, I have access to all the information that's been entered, and I can categorize the case according to the type of error that I perceive this to be. The next step allows me to see the report and the finding that was made; I can identify who signed that report as well. And lastly, it allows me to enter some clinical information and the clinical history: the head CT was obtained on a 70-year-old male with altered mental status, and he is being evaluated by neurosurgery to undergo either clipping or coiling. The teaching point is a reminder to evaluate the vessels on your non-contrast head CTs. Using this, I can now prepare the conference. I can also identify this as a good case for conference, and this generates an automatic reply to the interpreting radiologist that will show up in his or her PACS.

So let's switch hats once again. Now I'm in the conference, preparing and showing the case. In this setting, I'm able to see the head CT and show the CTA. I can also display the clinical history, the reason for submission, the clinical follow-up, and any teaching points. And I can enter some conference notes about the group discussion; I'll just type in "great discussion." Essentially, all of that is documentation that's going to be submitted as the meeting minutes. So now the loop is completed.

So we've discussed the role of the conference in the peer learning model, how to achieve buy-in, setting expectations, preparing the conference, moderating the conference, and post-conference documentation. And I gave an example of how we do it at Lahey.

Hello, I am Dorothy Sippo, and I will be presenting Approaches to Peer Learning and Self-Review. We will be discussing formats for peer learning, case types, informatics tools to support peer learning, connecting it to quality improvement, and self-review of automated outcomes feedback as an up-and-coming aspect of peer learning.

There are different formats for peer learning: individual review, a group conference, or a combination of the two. Peer learning begins when a radiologist is interpreting cases and sees a case that one of their colleagues read. They recognize that case as a learning opportunity and go on to share it. They can share it directly with the individual interpreting radiologist as individual review, or through a group conference. In some cases, a physician peer learning manager will first review the peer learning cases, determine their merit, and then decide whether to share them with the original interpreting radiologist and/or through a case conference. Regardless of the format used, the person submitting the case and the details of cases presented at conference should be anonymized.

There are a variety of case types. One type is a learning opportunity with clinical follow-up. This includes potential errors, such as those in perception, interpretation, communication, or systems issues. There are also cases where additional clinical information, such as imaging, pathology, or operative findings, subsequently becomes available. Other case types include great calls, which provide positive feedback for identifying challenging findings or diagnoses; zebras, which are rare or unique cases and can be particularly helpful to share in conferences so that more people can see them and become aware of the condition; and artifacts, which help others recognize them when they are encountered again.
It's important to consider connecting peer learning to quality improvement. This can be done by tracking case characteristics, including imaging modality and case types, to identify patterns of common errors. Once the most common errors are identified, search patterns, report templates, or clinical workflows can be changed based on lessons learned from peer learning.

Informatics tools can be used to facilitate the collection of peer learning cases within routine clinical workflow and can be integrated with PACS. This is an example of the software that we use at Yale. It is part of our workflow orchestration and communication tool provided by Nuance Primordial. Here's the tool, and it has a peer learning button. When I see a case in PACS that involves peer learning, I click on this button, and I am then given a list of cases for that patient. I can select the case relevant to peer learning, and it automatically identifies that this is for the breast section, since I am a breast imager. I then have the opportunity to provide a brief description of the case and why there is a peer learning opportunity. I can also provide additional clinical history or follow-up information, and I have the option of attaching some relevant images to the case. I can label the case as a learning opportunity or a good call, or flag it for discussion in a group conference. At our institution, a peer learning manager, the educational lead for our division, reviews these cases and decides whether to forward them on to the original interpreting radiologist or incorporate them into a section group conference.

We can see that at the heart of peer learning, one radiologist is providing feedback to another. Sometimes that feedback is subsequent imaging results or pathology, and it's possible to automate the process of giving this type of feedback. We'll see an example with screening mammography, where the use of the BI-RADS assessment category helps us anticipate what kinds of outcomes could be helpful to give to the original interpreting radiologist. With screening mammography, an exam is given a BI-RADS 0, and we expect the patient to return for subsequent imaging. When they undergo that diagnostic workup, another BI-RADS category is assigned. Let's think of an example where the patient is recalled for an asymmetry, and at the subsequent diagnostic workup they're assigned a BI-RADS 1 or 2 because the workup is negative or benign and the asymmetry resolves. In this case, that false-positive screening exam can be helpful feedback to the radiologist. In other cases, a biopsy may be recommended. Here, we want the system to search for pathology results. When they become available, they can be fed back to the radiologist who interpreted the original screening exam, as well as the radiologist who recommended the biopsy at diagnostic workup. This can allow them to see a cancer they diagnosed or a false-positive exam.

While at Massachusetts General Hospital, I partnered with our mammography information system vendor, MagView, to customize their peer learning system to enable automated outcomes feedback. When the radiologist logged into the reporting system, a box would pop up informing them if there were peer learning cases for review.
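The outcome-routing logic just described for a BI-RADS 0 screening recall can be summarized in a brief sketch. This is a simplified illustration under the assumptions noted in the comments; it is not MagView's actual implementation, and the function and message names are hypothetical.

```python
from typing import List, Optional

def route_screening_feedback(screening_birads: str,
                             diagnostic_birads: str,
                             pathology_result: Optional[str] = None) -> List[str]:
    """Decide what outcome feedback to generate after a BI-RADS 0 screening recall.

    Returns a list of feedback messages; in a real system these would be routed
    to the worklists of the screening and diagnostic radiologists.
    """
    feedback = []
    if screening_birads != "0":
        # Only recalled screening exams (BI-RADS 0) generate this follow-up loop.
        return feedback

    if diagnostic_birads in ("1", "2"):
        # Diagnostic workup negative or benign: the recall was a false positive.
        feedback.append("Screening reader: recall resolved as negative/benign (false positive).")
    elif diagnostic_birads in ("4", "5") and pathology_result is not None:
        # Biopsy was recommended; once pathology returns, both readers see the result.
        feedback.append("Screening reader: biopsy pathology is '%s'." % pathology_result)
        feedback.append("Diagnostic reader: biopsy pathology is '%s'." % pathology_result)
    return feedback

# Example: recall for an asymmetry that proves benign at diagnostic workup.
print(route_screening_feedback(screening_birads="0", diagnostic_birads="2"))
```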
The radiologist could click on a button that would show cases with review pending, as well as cases that they had saved to a teaching file during review and could subsequently share in a case conference. By double-clicking on one of the cases for review, this review window would come up. It has several parts. This gives a quick summary of the case. This portion allows the radiologist to see the patient's reports: the screening mammogram, the diagnostic exam, and any procedure or pathology reports. The system also automatically pulls up relevant breast imaging. And at the bottom, some information can be collected about review of the case. Here we see the summary, where the radiologist can review how they recalled the patient for a focal asymmetry from screening, additional diagnostic workup showed a suspicious mass, and the biopsy revealed a pathology result of cancer. The bottom of the page allows the radiologist to answer a question: will the case influence future clinical decisions? We ask this question so that the radiologist can pause and reflect on the case, and also so that we can learn about what types of cases are most impactful. The radiologist can enter some comments about the case if they want to come back to it later, and copy it to another worklist, namely the teaching file.

We studied the impact of this feedback system on the screening mammography performance of eight academic breast imagers for 21 months without the feedback tool and nine months with the feedback tool. This is a statistical control chart plotting abnormal interpretation rate over that time. We can see that the average abnormal interpretation rate before the tool was in use was 7.5%. There was a statistically significant drop in abnormal interpretation rate to 6.7% post-intervention. We also studied the positive predictive value of biopsies performed pre- and post-intervention. It was 41% pre-intervention and rose to 51% post-intervention; this also was a statistically significant improvement. We did not find evidence of a change in the cancer detection rate. When we think about connecting peer learning to quality improvement, self-review of automated feedback may be a powerful tool.

Self-review with automated outcomes feedback can extend beyond breast imaging. There are a growing number of American College of Radiology reporting and data systems, like the BI-RADS that we just saw. These systems standardize reporting of findings and expected outcomes. They can be used as the foundation to create automated outcomes feedback systems for a wide variety of organ systems. Here we see the ACR BI-RADS Atlas, which was the original RADS system, but a number of other RADS systems exist for liver, lung, ovary, prostate, and thyroid, as a few examples.

As takeaways: peer learning cases can be shared individually and/or through group conferences. Types of cases include learning opportunities with clinical follow-up, such as potential errors and additional clinical information, including subsequent radiology or pathology results. Other case types are great calls, zebras, and artifacts. Tracking cases can help identify common errors and inform changes in search patterns, report templates, or clinical workflows for quality improvement. Informatics tools can support peer learning and enable automated outcomes feedback for self-review. Thank you.

Hi everyone, my name is Andy Moriarty. I'm a practicing body radiologist at Advanced Radiology Services in Grand Rapids, Michigan, and the chair of our quality committee.
Today we're going to talk about transitioning from peer review to peer learning as part of the Making the Most of Radiologist Peer-Learning Tools session. Our objectives are to review the historical and practical differences between peer review and peer learning, to summarize the potential risks and benefits of each program, to discuss recent trends in transitioning from peer review to peer learning along with examples from practices, and to describe strategies that practice leaders can employ when making these transitions and anticipating challenges.

First, I wanted to do a very quick review of peer learning and peer review to build off what Dr. Acebo presented earlier. Traditional random, score-based peer review was predominantly perceived as punitive, and radiology departments, recognizing this, are increasingly transitioning from that retrospective, score-based peer review to a more active, non-punitive peer learning program. The ACR put on a summit in January of 2020 that discussed and explored many of these topics, which form the basis of today's presentation. They concluded that radiologists and practice leaders are encouraged to develop programs tailored to their local practice environment and to foster a positive organizational culture.

One of the key differences is voluntary case submission, which encourages continuous improvement by identifying cases that otherwise might not be recorded or shared with practicing radiologists. Many of these cases are encountered in daily practice, and there are a number of recent publications discussing how they can be identified. One such paper, from Itri in 2018, found that many of the teaching cases and quality improvement measures developed through this case-identification process would not have been found in a random, score-based peer review process. The overarching point, though, is that every practice is going to face unique challenges based on its local environment. The good thing is that there is an increasing number of publications available that can help you in your process. There are two examples here: one from my own facility, describing private practice radiologist experiences, published in 2020, covers our transition from 2016 to around 2020; and if you reach back a little bit further, you can see there is well-documented experience in the academic literature, including a survey from a large medical group.

So we're going to spend a little bit of time on the challenges and then the recommendations. Soon after the implementation of RadPeer and score-based peer review, a number of authors found that there were logistical problems, fundamental flaws in the program, and administrative burdens. Some of these things could be fixed by program adjustments, but others were fundamentally difficult to improve and really indicated that we might need to rethink the program. Some of the main concerns were the high inter-rater variability and the tendency of radiologists simply to agree with their colleagues, which does not result in a valid measurement of radiologist performance. Another was the general dislike of the program by individual radiologists, who didn't want to disclose errors under most circumstances and didn't think that the program had a positive impact on workflow.
And then there was just not a lot of acceptance of the value of peer review, because of its random nature and the inability to demonstrate quality improvement. Talking about some of these logistical challenges, the obvious first one is the difficulty of removing bias from the ratings, followed closely by the use of expert consensus as the gold standard. Radiologists, as we said earlier, were reluctant to rate their peers and colleagues and to give a bad review when it might impact someone in their group whom they were close friends with or knew on a personal basis, and the ratings didn't take much account of long-term patient outcomes or follow-up. And there was a lot of concern about sampling error, because you're not actively catching these cases in a prospective manner as they come up during the day. The final two points go together: administrative burden, much of which grew out of the lack of integration into the clinical workflow. Some of that has been solved by tweaks to the program, but again, these are all relatively minor issues.

The bigger concerns were fundamental problems. Radiologists really saw score-based peer review as punitive; there was not good agreement between raters; and a random, retrospective sample didn't really identify all of the important cases. The failure to find these cases meant the process didn't contribute to learning, and radiologists didn't feel it was a good use of their time. When surveyed, around 45% said they were dissatisfied with the program and that it didn't do a good job of portraying actual radiologist performance, 80% said that it really didn't change anything about their practice patterns, and 87% said they only did it because they were required to do so.

So what are the proposals for some of the new strategies? Early on, in 2006, Fitzgerald proposed a transition to peer knowledge sharing, instruction, and teamwork. David Larson followed that up with analogies to commercial aviation and anesthesia as exemplars that have made such a transition in the past. This was closely followed by the Royal College of Radiologists model, which recommended abandoning peer review scores and transitioning to a system based on, quote, peer feedback and learning conferences. That was adapted a little bit by our colleagues in the United States, who said that we should call this new form of thinking peer learning, and that we could propose it as a mechanism of continuous feedback and improvement. And then Dr. Donnelly, in 2018, offered some suggestions for how practices might make that transition.

In the subsequent years, a number of practices have adopted this, and they have found that the transition has actually been very well received. They see an almost immediate increase in the number of learning cases that are submitted, with minimal or no negative impact from removing the scores. They find that when you implement a comprehensive program, people are very receptive to it. They want to participate. They want to learn from this process. And they really just have an overall much more positive experience compared to score-based peer review. So how do we go about identifying what you do to make that transition? The key elements of peer learning are a group of eight topics identified at the 2020 summit.
And that is really replacing random peer review with peer learning. Radiologists favor this method, and except for the additional time and engagement required, there are not a lot of undue side effects.

Taking them one by one: broad participation really means that all radiologists are actively involved. If radiologists want to move away from score-based peer review, they have to commit to a process where they are all regularly submitting cases, participating in the peer learning conferences, and taking an active role in program maintenance. The active identification of learning opportunities is fundamental to the peer learning concept: you need to look at the cases that you find every single day at the workstation and, in a de-identified way, forward them for review and discussion. Individual feedback is still an important part of peer learning, and you need to give it in a way that is timely, confidential, and constructive. You need to have those one-on-one conversations with radiologists so they know this is not a punitive system; this is about learning. But you need to follow up those conversations with peer learning conferences. These need to be held regularly and have wide participation across the organization. You can have them department-wide or broken down by subspecialty section, depending on the size of your practice, but the emphasis needs to be on learning and improvement. Then you need to take the cases that you learn from, or that you identify, and link them with the process and system improvement activities of your organization. You need to find the issues that underlie the root cause and translate those into system and process improvements, rather than punishing or blaming individuals who may have committed the error. The summit panel recommends preserving your organizational culture, and that really means building on what's called a just culture of learning and improvement to minimize blame and other destructive behaviors. Of course, a key element of peer learning is making sure that the cases and discussions stay sequestered. That is important to build trust in the program and to foster all that data sharing, and it means making sure these cases are not part of any ongoing professional practice evaluation (OPPE) or focused professional practice evaluation (FPPE) processes that judge radiologists' competence. The final element they recommended was effective program management, meaning an infrastructure in place to support the ongoing program, including policies, well-designed roles and responsibilities, documentation of activities, and, most importantly, time for the people who are going to be administering it. The summit concluded that this new model successfully promotes feedback, learning, and improvement in pretty much all practice settings, again depending on your local environment.

So how do we approach the transition to peer learning and the important steps that you, as the radiologist, can take in your local environment? Well, you need leadership support. You need to have someone who has dedicated time and some staff to make sure that these programs are going to be administered in a reliable way.
You need to build on that just culture, but you also need to get buy-in from the other radiologists in the department, and then you need a well-defined governance and management policy, including IT tools and workflows, and active participation of your radiologists once you get the program going. You need to meet regularly, and you need to continually improve by making iterative changes to the program as needed. Peer learning implementations vary widely based on the local practice setting, and so these decisions will be made at the individual practice level. If you're a practice leader, you can adapt to your local practice environment while still meeting the basic requirements that are now outlined by the ACR and other accrediting organizations. You want to make sure you focus on just culture and avoid using the term peer learning for anything that would really be more applicable to peer evaluation, scoring, or performance evaluation; these programs are purely about learning. You want to retire score-based peer review when possible, and if you do decide to run the two in parallel, you want to keep them completely separate so that you're not mixing and matching. And then we want to hear from you. We want to hear your experience and what you find is different, because we know that it's going to vary from practice to practice.

When you're talking to hospital administrators and accrediting organizations, we need the same things from our hospital administrators as we need from our radiology leaders: a well-formed policy, time to do it, and the personnel and IT systems that can help make sure everything keeps going. But then, from the hospital perspective, you do want to hold radiologists accountable. If they say they're going to regularly submit and monitor cases and hold these conferences, then those should be the metrics we're tracking: not the numbers of ones, twos, threes, and fours, but the number of learning opportunities submitted, the number of people who regularly attend conference, and any process or system changes made as a result of these programs, which is really much more valuable. The last two bullets apply to our accrediting organizations, about formally recognizing peer learning and specifying the minimum criteria. You'll be happy to know that this has really been taken to heart by the ACR as part of their accreditation program, and there are a number of guidelines published in 2021 and coming in 2022 that will help further solidify this as a valid approach to meeting regulatory requirements. From IT system vendors, more of the same, really: we want them to actively collaborate with us, to understand the philosophy of peer learning and how the just culture really differs from the score-based peer review they may have supported in the past, and to make sure they're not mixing and matching peer review and peer learning when developing these tools to help us.

So, in conclusion, we feel that peer learning has emerged as a new model that successfully promotes feedback, learning, and improvement in the practice setting. We know that you can do it. We know it won't be easy, but there's an entire community and a growing body of literature out there to support you, and we're happy to be a resource where we can. If you have any questions, my contact information is here, and I'll be available for questions at the end of the session. Thank you.
So, I'm going to share with you the impact that peer learning has had on our practice. A lot of the introduction and background has already been covered by the prior speakers. As you all know, what we have right now, and hopefully will be transitioning out of, really just tries to identify the outliers on the expected normal curve. That doesn't help everyone, and it certainly doesn't shift the curve. That, I think, is the beauty of peer learning: you're not worried about the outliers. You're focused on the larger body of individuals and the team, and on trying to get everyone to a different level of performance. As Dr. David Larson, a leader of the peer learning movement, has said, we sacrifice understanding how often errors occur and really turn toward learning from each other, so that we're moving the curve forward. We do that through social learning, because when we come together and share and learn from each other, we grow together, and cultivating that is really key to moving us forward, because we work in a team environment. Reviewing targeted cases is key, because that's how we learn best from our mistakes, right? You never forget when you've missed something, and that's when the greatest growth occurs. Collectively, that's where we see performance improvement that then improves patient care and, as Dr. Concho and Dr. Dorothy Sippo showed, improves the practice.

As has already been mentioned, the key is psychological safety, and peer review completely undermines psychological safety. That's why it's really important to ensure that this content is sequestered, so that we build trust and camaraderie and then move toward a reporting culture, because we know nobody wants to make mistakes. There are so many things that impact how we perform. We know, for example, that overnight and in the evenings we make more mistakes. That's well known. It's not a reflection of us; it's a reflection of those conditions. When your list is out of control, or when you're rushed, these things affect us. So these are honest mistakes, and we have to treat them as such rather than stigmatizing errors. Recognizing that, and endorsing a reporting culture, lets us learn from these things, because again, that's when the greatest growth occurs, and then we collectively move toward improvement, because we can all learn from each other and we all have different perspectives. And we know it works. Many, many studies have shown the value in increasing addendum rates, getting more radiologists to participate and attend these meetings, learning from each other, and then hopefully moving on to systematic changes that move the practice forward.

So, we started the Mayo Enterprise Abdominal Peer Learning program, which was born during the COVID era. We have three sites: Mayo Arizona, where I'm from; Rochester, our flagship site; and Florida. The beautiful thing about peer learning is that you adapt it to what makes sense for you. For us, what made sense was to have two individuals from each site, so we have six people on our team, and we alternate, so each of us ends up moderating two conferences. The conference is monthly, for a total of 12 per year.
It's done via Zoom, and we use Visage to share our images; the images can be anonymized. We have subspecialty peer learning, so we only review abdominal cases. We have about 40 to 60 radiologists who participate regularly, and CME credit is given. Dr. Bowman from Mayo Clinic Florida and I did a survey of the participants before and after we started, and we had about 40 people respond. We found that with what we were doing before, peer review, which is what most of us are probably still doing, only 20% felt we were able to effectively identify errors, and similarly a very low percentage of respondents felt that we were able to share errors, reduce error, learn new things, or change our behavior, and only a small minority felt it made them better. After we started peer learning, the results were significantly different, really focusing on what we need to do, which is to improve ourselves so that we can improve patient care. So there was a significant change once we adopted peer learning, and 80% wanted to continue it; they were willing to keep peer review as sort of the checkbox item, but wanted to continue peer learning, because we thought it was very beneficial for us as a group.

So, the ACR peer learning pathway: this is the website; you scroll down to accreditation, and it's the first link. It tells you what the minimum requirements are to use this as the QA pathway in place of peer review. These are the individual items. There's a checklist that the ACR accrediting body will go through when they come to accredit the facility, checking these peer learning items to see whether you're meeting them. And again, the beautiful thing is that there's lots of flexibility to build a program that makes sense for your institution and to report what makes sense. You need a written policy that defines the culture, the goals of improvement, and what constitutes a peer learning opportunity, and that describes how the program is structured and organized. You define what your targets are: minimum participation, number of cases, and what projects have arisen from these efforts. And then metrics: the percentage of radiologists who have participated, the number of cases submitted, and the number of quality improvement projects that have arisen. And that's it; that's all you need. It's flexible, it's meant to be simple, and it's really just meant to encourage us to move toward this more effective approach.

So this is a summary of what we talked about: the impact of peer learning on our practice at Mayo, on the knowledge, attitudes, and behavior of our practicing radiologists, and then an introduction to the peer learning pathway. Thank you.
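As a rough illustration of the metrics portion of that checklist, here is a small Python sketch of how a program might tally participation, case volume, and quality improvement projects. The function and the example values are hypothetical placeholders, not any institution's real data and not the ACR's required reporting format.

```python
def peer_learning_metrics(all_radiologists, participating_radiologists,
                          submitted_cases, qi_projects):
    """Tally the kinds of metrics an accreditation checklist asks for:
    participation rate, number of cases submitted, and QI projects generated."""
    participation_pct = 100.0 * len(participating_radiologists) / len(all_radiologists)
    return {
        "percent_radiologists_participating": round(participation_pct, 1),
        "cases_submitted": len(submitted_cases),
        "qi_projects_generated": len(qi_projects),
    }

# Hypothetical example with placeholder values.
print(peer_learning_metrics(
    all_radiologists={"A", "B", "C", "D", "E"},
    participating_radiologists={"A", "B", "C", "D"},
    submitted_cases=[{"id": 1}, {"id": 2}, {"id": 3}],
    qi_projects=["revise CT angiography reporting template"],
))
```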
Video Summary
The video elaborates on transitioning from traditional peer review to peer learning in radiology, highlighting its role in fostering a culture of continuous feedback and improvement. Peer learning encourages error identification, discussion, and sharing for collective improvement, contrasting with the punitive perception of score-based reviews. The American College of Radiology and institutions like Lahey and Mayo have adopted this model, focusing on non-punitive, collaborative learning environments that enhance quality and compliance. Key elements include broad participation, case identification, constructive feedback, and regular conferences to translate learning into process improvements. The program’s success hinges on confidentiality, cultural respect, and leadership support, fostering a culture where radiologists can learn collectively from shared errors and insights. This approach has shown positive outcomes, such as reduced error rates and improved diagnostic performance. The discussion includes practical tips for integrating peer learning into daily workflows, such as using informatics tools, scheduling regular conferences, and involving all levels of staff, including trainees. Overall, peer learning shifts focus from individual errors to systemic learning, enhancing patient care and fostering a resilient learning environment.
Keywords
peer learning
radiology
continuous feedback
error identification
collaborative environment
American College of Radiology
diagnostic performance
informatics tools