Practice Standards: High-Value Structured Reporting
W6-CNPM17-2021
Video Transcription
My name is Dr. Stein and this talk is about how to convince stakeholders that standardized structured reporting is high value. I have no disclosures. In 2008, Don Berwick and the Institute for Healthcare Improvement offered the triple aim to serve as a framework for the delivery of high-value care, which included enhancing the experience and outcomes for the patient, improving the health of the population, and reducing the per capita cost of care. This aligned with the emerging healthcare landscape, with its increased reliance on care outcomes, the patient experience, and reimbursement structures, and spurred a paradigm shift in healthcare and in radiology. One mechanism through which radiology is achieving this paradigm shift is improvement in its major work product, the radiology report. The major goal of the radiology report is to deliver timely, accurate, actionable information to the referring provider, the care team, and the patient. As an alternative to free-text prose narrative reports, which employ nonstandard terminology and introduce unnecessary variability and potential ambiguities, structured reporting has increasingly become the preferred alternative. Structured reporting is often thought of as a tiered system: the simplest, tier one, involves a consistent format with headers and sections; tier two adds logical organization with a list of required elements and anatomy; and tier three adds a standard lexicon with defined, consistent terminology. Decreasing unnecessary variation in radiology reporting and producing reports that are concordant with national guidelines is fundamental for patient care, central to radiology's financial success, especially in value-based payment models, and supportive of innovations in radiology reporting, including multimedia reporting and artificial intelligence-assisted reporting, which ushers imaging reporting into the modern era.
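[Editor's note: the three tiers just described can be sketched as data, purely as an illustration. None of the section or field names below come from a published template; they are hypothetical examples chosen to echo the chest CT discussion later in the talk.]

```python
# Illustrative sketch of the three tiers of structured reporting as data.
# All section and field names here are hypothetical, not from any published template.

# Tier 1: consistent format -- fixed section headers.
TIER1_SECTIONS = ["Exam", "Indication", "Technique", "Findings", "Impression"]

# Tier 2: logical organization -- required elements per anatomic region.
TIER2_FINDINGS = {
    "Lungs": None,                    # None = not yet dictated
    "Pleura": None,
    "Heart/Pericardium": None,
    "Coronary calcification": None,   # checklist-style field, like the chest CT example
}

# Tier 3: standard lexicon -- constrain free text to defined terms.
TIER3_LEXICON = {"Coronary calcification": {"none", "mild", "moderate", "severe"}}

def validate(findings: dict) -> list[str]:
    """Return a list of problems: missing required fields or non-lexicon terms."""
    problems = [f"missing: {k}" for k, v in findings.items() if v is None]
    for field, allowed in TIER3_LEXICON.items():
        value = findings.get(field)
        if value is not None and value not in allowed:
            problems.append(f"non-standard term in {field}: {value!r}")
    return problems
```

The point of the sketch is that tier one fixes the sections, tier two makes omissions detectable, and tier three makes the wording itself machine-checkable.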
When you have your stakeholder meetings on the value of structured reporting, much of the discussion will focus on improvements in report quality, which translate directly to patient care improvements. In this talk, we will start by unpacking the benefits of structured reporting for report quality and clinical impact, move into the benefits of structured reporting for revenue capture, and finally cover its role in supporting radiology reporting innovations. Throughout the talk, we will consider the stakeholders to include radiologists themselves, referring providers and the care team, patients and their families, and also those concerned with overall population health. Starting with report quality. This is a useful table the AUR Radiology Research Alliance Structured Reporting Task Force put together in a 2018 review of the advantages and disadvantages of structured reporting. This table, and in fact the article itself, can serve as an anchor for your stakeholder conversations. Let's first touch on the benefits of the checklist-style approach inherent in structured reporting. All stakeholders in healthcare are interested in reducing diagnostic errors. One major source of error in diagnostic radiology is cognitive bias due to satisfaction of search, a premature stop in searching for diagnoses after making the initial diagnosis. Consistently using a checklist can help avoid this form of diagnostic error, and there is evidence to support this in the literature. A large research study of over 3,000 lumbar spine MRI exams with incidental extraspinal findings demonstrated that structured reporting helped identify significant findings in over 28% of patients that were not included in the unstructured reports, the difference being the presence of a checklist serving to reduce the cognitive error of incomplete searching. Another study looked at structured reporting for chest CT reports; this was a study out of Montefiore that I conducted with colleagues in 2015.
Our main finding was that following the implementation of structured reporting and the introduction of a coronary calcification field in non-coronary chest CT templates, the accuracy of reporting of coronary calcifications increased as compared to the control time period using prose reports. Next, a study from St. Luke's and Beth Israel in New York City examined almost 3,000 reports without and with the use of a structured reporting template for emergency room cervical spine CTs. Use of a checklist-style structured reporting template significantly improved the rate of diagnosis of non-fracture-related emergent findings in resident reports. In this JACR article out of UVA, Dr. Arun Krishnaraj and colleagues explored the impact of structured reporting on error rates, reviewing 400 studies submitted to UVA for overreads and analyzing differences in the language and organization of reports. They found that 12.4% of the overread reports had clinically significant errors; of these, over 22% were emergent. One strategy that helped mitigate these errors was the use of structured reporting, with an example provided here. CTA trauma templates in their environment specify vascular versus non-vascular components and include a field within the vascular section for the supra-aortic vessels, a common blind spot that this virtual checkbox addresses. In this example, it helped identify and report a left vertebral artery dissection at its origin. Again, the impact of the checklist approach on outcomes is made clear. Moving past the benefits of a checklist approach in generic exam templates: contextual or disease-specific reporting templates have been shown to enhance clinical impact in tumor staging and surgical planning. I won't dwell here too much in my talk because Dr. Victoria Chernyak, my colleague, will be discussing directly the value added by ACR RADS reporting later in this segment, so I will cover this section only briefly with some data from the literature on the high value of disease-specific reporting templates. This report from Dr. Olga Brook and colleagues at Beth Israel Deaconess on pancreatic cancer reporting demonstrated significantly higher reporting of key features with the use of structured reporting, significantly higher scores from surveyed surgeons for accessibility of information for surgical planning, and higher completeness scores for surgical planning. Another team implemented a 12-item structured reporting template for brain MRI examinations in patients with known or suspected multiple sclerosis based on published guidelines. Reports created in the year before implementing the template served as a baseline, and a random sample of template and non-template reports was read by five neurologists for comprehensiveness and follow-up. They found that the neurologists assigned higher mean report ratings to template reports in three of the four parameters assessed. The likelihood of a report receiving the best possible rating on any given Likert scale was significantly higher for template reports, and, conversely, reports not using the template had a significantly higher likelihood of receiving a negative rating. In another article, here again focusing on the question of report clarity and completeness for our referring providers, this one out of Italy in AJR in 2018, radiologists who implemented a template for multiple sclerosis reporting of brain and spinal cord MRI had their reports evaluated by three neurologists to assess their understanding of the lesion load and the clarity of the reporting for clinical decision making. They found that all neurologists could understand lesion load significantly more often when reading structured versus non-structured reports.
For two of the three neurologists, structured reports contained adequate information for clinical decision making more often than did non-structured reports, and when reading non-structured reports, two of three neurologists needed to evaluate the images themselves significantly more often. The IHI triple aim includes improving the patient experience. Keeping in mind that patients and their families are stakeholders in this equation, we know that the intersection of structured reporting and direct patient reporting brought on by the 21st Century Cures Act introduced some challenges but also has the potential to improve the patient experience and provide patients with greater agency in their care. The information blocking provision mandating immediate access and portability of electronic health information has led to major questions regarding ethics and professionalism for radiologists, including who the intended audience of radiology reports is and how content should be presented or worded, which gets to the heart of radiology reporting, structured or not. There is emerging data that focused patient-centered efforts will be required of radiology to help patients understand their imaging reports. In a study of over 100,000 consecutive reports, researchers found that radiology reports often contain complex concepts and polysyllabic terms unfamiliar to lay readers. Only 4% of all radiology reports in their sample were readable at the 8th grade level, which is the reading level of the average US adult. This means radiologists may have to explore using simpler, more structured language to address the goals of patient-centered care. The implications of structured reporting for patient perceptions and satisfaction in an era of online web portal access remain an open question. Will structured reports and standardized lexicons increase or decrease confusion among patients? Do patients prefer, require, or even demand lay-language summaries of radiology reports?
And will radiologists spend additional time creating these summaries, or be aided by natural language processing or AI? Some efforts, such as the University of Pennsylvania's PORTER platform, aim to provide an automated lay-language translation of radiology reports. In addition to automated lay-language summaries, the use of embedded definitions of terms within radiology reports may improve patient comprehension of the radiology report. There is further data now emerging in the literature that to be patient-centered in our reporting, we may have to add additional language to our reports and supplement the communication. In this 2021 JACR article, funded by the ACR Innovations Fund Grant Program and led by Dr. Kadom of Emory, the team developed two standard-language messages they call InfoRADS that inform patients in very general terms about their radiologic results and offer recommendations for next steps. InfoRADS messages can be added to inform patients that there are no concerning findings (message one) or that there are actionable but not critical findings that require follow-up (message two). They then conducted a survey of surrogate patients to assess whether InfoRADS would have the intended effects of, one, decreasing anxiety with message one and, two, increasing calls to providers with message two, and finally to elucidate whether patients desired these messages to be included in radiology reports or not. They did find that the messages lowered anxiety and increased urgency accordingly, and over 70% of patients were interested in having InfoRADS included in their reports. The published evidence suggests that radiologists may have to stop thinking about the radiology report as their final work product and instead start thinking about the report as a springboard for becoming more active partners in healthcare, a change that will likely exert upward pressure on report quality and utility for all stakeholders.
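[Editor's note: the "8th grade reading level" figure cited earlier comes from standard readability formulas. A minimal, illustrative sketch of the Flesch-Kincaid grade level follows; the syllable counter is a crude vowel-group heuristic rather than a dictionary lookup, so results are approximate.]

```python
import re

# Rough Flesch-Kincaid grade-level calculation, the kind of formula behind
# readability claims like "readable at the 8th grade level".

def count_syllables(word: str) -> int:
    # Approximate syllables as runs of vowels; always at least one.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # FK grade = 0.39*(words/sentence) + 11.8*(syllables/word) - 15.59
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59
```

Run on typical report language, the polysyllabic radiology phrasing scores far above the short-sentence lay phrasing, which is exactly the gap the study quantified.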
In the last few minutes, I have tried to show you, first, how structured reporting can improve diagnostic accuracy, clarity, and comprehensiveness, as well as reduce error with the use of checklists; second, how disease-specific reporting yields high value for providers and patients and alignment with national guidelines; and finally, I explored the question of how best to reach patients as a direct audience of our reports, perhaps using structured language. Let's move on now to revenue capture. In this report out of NYU, Danny Kim and colleagues sought to determine whether structured reporting could improve complete reporting of abdominal ultrasound exams, leading to appropriate coding and accurate reimbursement. When they compared the number of complete abdominal ultrasound exams reimbursed as limited exams before and after their template was implemented, they found significantly fewer downgraded reimbursements after their structured reporting intervention. As such, structured reporting was a tool used to maximize appropriate coding and reimbursement. Similarly, in a study by Dr. Rich Duszak and colleagues, they found that 20% of abdominal ultrasound reports had incomplete documentation, resulting in up to 5.5% losses in reimbursement. Importantly, the deficiency was usually due to preventable causes, such as not mentioning the spleen. By reminding the radiologist to comment on specific organs, structured reports can lead to improved documentation and reimbursement. This report from my colleagues and me at Montefiore looked at the frequency of addenda after the introduction of an auto-populating field for contrast dose in structured reporting templates for CT and MRI. The need for addenda interrupts revenue capture for imaging exams. We found that the addenda rate during the post-intervention nine-month period, highlighted in green, was significantly lower than in the pre-intervention reports, highlighted in pink. And in this JACR article out of Emory last year, Dr. Marta Heilbrun and colleagues described their implementation of a large-scale, enterprise-wide standardized structured template for chest radiographs, targeting the examinations that comprised the largest share of departmental volume, approximately 19%. Their goal was to maximize the initiative's impact on structured reporting template usage, maximize radiologists' time savings, and also capture economic benefits. What did they find? Based on projected time savings from auto-launching of templates, they estimated eight and a half hours of radiologist time saved per month. And when they studied the economic impact by looking at correct-to-bill (CTB) rates for chest x-rays versus all other reports, their analysis showed an overall trend of linear increase throughout the sampling period for chest x-rays, shown in blue, and also for all modalities, shown in red. But the radiography CTB rate was higher by an average of 20%, suggesting that factors specific to radiography were influencing the greater CTB rates. Note that the rise in all modalities was felt to be due to the concomitant rollout of structured reporting for other exams. Moving on to innovations in reporting: a number of innovations have been introduced in radiology reporting which build on the backbone of structured reporting templates and augment the simpler benefits of structured reporting. We'll spend the last few minutes on this topic. We have seen software solutions incorporated into practice that automate the transfer of metrics from DICOM data into fields in the structured reports, which has the potential to both improve radiology efficiency and workflows and reduce typographical errors. This work was described by a group at Mayo Rochester and also, similarly, by our group at Montefiore Einstein in 2019. Here's an example of an auto-filling report for a twin gestation ultrasound exam. Blue text indicates the text of the structured reporting template. Red text indicates the software's auto-filling functionality.
The green text is populated from the RIS, and the black text was entered by the radiologist. It's easy to see in an example like this how the structured reporting template serves as a backbone for the auto-filled metrics fields, which are mapped directly to the modality and which introduce radiologist efficiency and reduce the risk of typographical error. Going a step further beyond auto-populating metrics, multimedia reports can include hyperlinked text, tables, graphs, images, and potentially videos. Multimedia reports do not have to be, but are optimally, built upon the backbone of structured reports. Here is an example of a 3D CT image with current and prior exams as comparison that can be included with a report, which facilitates an understanding of disease progression at a glance for both the referring provider and potentially the patient and their family. Interactive multimedia radiology reporting is not yet in widespread use despite its reported potential to add value, improve communication, and increase efficiency. This may be related to the limited market penetration of the available technology, but an article from UVA provides a glimpse of what is possible. It shows us that radiologists with some training did adopt voluntary use of multimedia reporting, mostly for PET-CT and CT, shown in green and red here, implying that they found enough value in the resulting report to make that effort. Before I close, a few words about the present and future of radiology reporting and artificial intelligence. The development of artificial intelligence-enabled tools is transforming clinical radiology. The future may hold clinical workflows in which AI produces multiple versions of the radiology report designed for oncologists, surgeons, patients, and radiologists themselves, which would improve communication with radiology's various stakeholders. AI will support structured reporting, but the reverse may also be true.
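[Editor's note: the core mechanism of the auto-populated twin gestation report described above is mapping machine-exported measurements into named template slots, leaving only interpretation to the radiologist. The sketch below uses a plain dict to stand in for values that a real system would pull from DICOM SR or the RIS (e.g. via pydicom); all field names are hypothetical.]

```python
# Sketch of template auto-population: measurement fields flow from the
# modality/RIS into named slots; the impression comes from the radiologist.
# Field names (bpd_mm, fl_mm, efw_g) are illustrative, not a real standard.

TEMPLATE = (
    "FETAL BIOMETRY, TWIN A:\n"
    "  BPD: {bpd_mm} mm\n"
    "  Femur length: {fl_mm} mm\n"
    "  Estimated fetal weight: {efw_g} g\n"
    "IMPRESSION: {impression}\n"
)

def build_report(measurements: dict, impression: str) -> str:
    # Auto-filled fields come from the machine; only the impression is dictated.
    return TEMPLATE.format(**measurements, impression=impression)
```

Because the slot names are fixed by the template backbone, the same mapping works for every exam of that type, which is what removes the transcription step and its typographical risk.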
Structured reporting allows for more easily extractable data, which can be used for QA projects, population health statistics, and ground truth in AI algorithms. These developments will promote research, data sharing with biobanks, and compatibility with national registries that elevate the possibility of personalized medicine and further refinement of patient care, the patient experience, and overall improvements in population health. I hope that in the last 15 minutes or so I have been able to provide evidence of the benefits of structured reporting in radiology that may be useful in stakeholder discussions. Structured reporting provides improvements in report clarity, comprehensiveness, actionability, and clinical impact; can be harnessed to improve patient-centered care; can drive revenue capture improvements; and can serve as a backbone for innovations in reporting, ushering radiology into the modern era and into alignment with the Institute for Healthcare Improvement's triple aim of high-value care delivery. Thank you for your attention. What I'm going to talk about is kind of the next step in structured reporting, because I think if you're here, you're already persuaded. What I'd like to show you is where else we can take this. This is actually a screen capture from a Facebook support group on pelvic congestion syndrome. This is a patient who received this report, and that's what she's asking: what does "not appreciated" mean? Her report is basically saying no pathology is appreciated. This seems like English, but it's not really English to her. What does she think it means? Is it present? Not present? Did we simply have no idea? She literally does not understand what it means. And I would say it's not just patients; many referring physicians would not understand that kind of Aesopian language. We really should be using some sort of common dictionary.
This is kind of a pre-step before we go into structured reporting. We should be using the same language, language that is understood not just by us, the radiologists, but also by referring physicians, and ideally also by patients, as we saw in what Shlomit was showing us. Really, what is a radiology report? We've been struggling with this question for the last 20 or 30 years, and I think we've made huge changes. We stopped thinking of it as our self-expression, poetry, et cetera. It's really just a product. It needs to be very clear and useful and satisfy our referring physicians, and also patients. There are a lot of clients here that we need to satisfy. This is the original version of structured reporting published in 2011 in Radiology. Very simple: just yes, no, no details, et cetera, which was actually met with a lot of resistance by many radiologists because it just looks extremely simple, too simple. There are obviously problems with structured reporting. I listed a few here, and I'm sure you can list yours as well. Still, those initial studies showed that despite all the problems with structured reporting, structured reports are definitely rated as much clearer, both by radiologists and by referring physicians. There is a preference toward structured reporting, though referring physicians definitely have a much stronger preference than radiologists; radiologists usually don't mind as much. Why is that? Because basically it serves as a checklist. It's organized, it's consistent, it's clear. It's all good. However, there are problems. The main problems are that it is time-consuming if your dictation software is not set up appropriately, and it does oversimplify a lot of things. We know that the human body is not that simple, and sometimes you have to go into much more detail than structured reporting allows to dictate an appropriate report. It can be overly constraining, and some people do say that it decreases accuracy and completeness because you are basically boxed into those fields.
Most importantly, though, it's really not catering to the specific patient's problem if you're just using your standard, bland abdominal CT structured reporting. Really, we need a special solution for specific cases, and that's what we're proposing: disease-specific structured reporting. Not for the usual abdominal pain, but rather for specific patients: oncology patients, specific cardiovascular patients, et cetera. If there is a specific disease that you're following or want to rule out, you should be using these disease-specific structured reports. They are very detailed and very specific, and they go to the level of detail required for clinical management, for surgical planning, whatever is necessary for that specific disease process. Now, it's very tedious to design these. You can, if you have a disease process and you can't find the template online or in our book. How do you do that? You have to collaborate with the right referring physicians, and usually it's multiple specialties, not just one. It's usually oncological and medical, and then also surgical, but also work with your radiologists. Don't do it just by yourself; you have to have a group of people who design it. It has to be a relatively frequent study: if you design a template that you use once a year, it's really not going to work for anyone. Gather information on what is available in the literature and create a draft. Seek feedback, very important, both from radiologists and referring physicians, because you're definitely going to miss some things or misrepresent them; you definitely need a lot of feedback. Publish it, and then, very important, monitor its use, audit it, and change and improve it. Now, the benefits of structured reporting have definitely been shown in multiple papers, but specifically for disease-specific structured reporting, there are quite a few studies as well. So, for example, here, structured reporting for shoulder MRI improved readability.
Structured reports facilitate information extraction, and referring physicians, not surprisingly, again prefer them. Next is the case of rectal cancer MRI staging. Here, structured reporting facilitated surgical planning, with higher satisfaction among referring surgeons, and, interestingly, surgeons were more confident about report correctness and further clinical decision making. So it's not just the information; the overall structure helps them be confident that you didn't miss something. We think that when we don't report something, it means it's not there, but they're not confident that you actually looked at it and ruled it out. That's why in structured reporting, if a finding is negative, it's a pertinent negative, and you have to mention it. So, for example, the case of pancreatic cancer staging CTA. It's a very important study because it basically determines eligibility for tumor resection, and that, most importantly, will determine patient outcomes. And why do we do CTA? Because it allows us to delineate vascular anatomy, involvement, local tumor extension, et cetera. So it provides all the details. However, the management is determined by multiple things, right? There are tumor-specific things, size, location, proximity, encasement of the vessels, et cetera, and then the vascular evaluation. And in that area, as you can imagine, there are a lot of vessels, and all of them are actually important. So, Al-Hawary and colleagues produced a template in 2014, working also with surgeons, and it was extremely comprehensive. This is just the description of arterial involvement for pancreatic cancer. This is just the venous, okay? And this is distant disease. So it's very long, right? It's really hard to use, believe me. And so the question is, do you need to use it? So, in our study, we compared the completeness of reports; we actually implemented this.
And we actually implemented the version that came before that, which we developed locally. But overall, we compared non-structured reports before the implementation with structured reports after implementation. There was definitely significant improvement. Again, if you have structured reporting, you're kind of forced to fill it in. As you can see here, some of the things still did not appear even with structured reporting, because people are people and they would just delete a portion of the template, right? People would not delete things that they feel are important, but other things that they felt were not so important, they would delete. So it's important to explain all the components of the template when you're implementing it. We also asked the question, do we need to provide such a level of detail? Because obviously it's extremely tedious, right? You all know those pancreatic cancer CTA studies stay on the worklist, because if you have to use that template, it's extremely long. So we asked surgeons, can you determine tumor resectability based on the report? We basically had the same studies with structured and non-structured reports. An experienced surgeon was able to determine tumor resectability based on non-structured reports alone in 42% of the cases. With structured reports, it went up to 60%. And that's an experienced surgeon. So that kind of goes to show you how much actual impact we have here. And then, how often does the report provide sufficient information to decide on further management? Again, we had a significant increase with structured reporting, not surprisingly. So overall, disease-specific structured reports increase the number of reported descriptors, make it easy to extract needed information, contain sufficient information for surgical planning, and facilitate the ability of the surgeon to decide on tumor resectability, which is the ultimate reason why we're obtaining this CT.
So yes, the answer is we do need that level of detail. Now, I always try to ask the question: the surgeon wants it, but does this level of detail really impact patient management? Here's an example: a replaced right hepatic artery. This is a pretty frequent finding, actually; it's present in about 20% of the general population. However, if it's present, it makes Whipple surgery technically quite challenging. It prolongs OR time by 60 minutes, and if you don't know ahead of time that it's present, it can have catastrophic consequences. So you absolutely have to know ahead of time. And still, not mentioning this (just, oh, it's an anatomical variant, who mentions those, right?) could have catastrophic consequences. Now, you can manage the replaced right hepatic artery; there are a variety of things you can do, and you can still have a successful surgery, but for that, again, you have to plan ahead of time. Here's another variant, kind of a rare one: a replaced right hepatic artery off the GDA, as you can see here. The right hepatic artery in this case was coursing through the tumor, while the left hepatic artery was free of tumor. So in this case, what was done was that this patient underwent embolization of this aberrant right hepatic artery that went through the tumor, which would later allow resection of the tumor en bloc. If you do it ahead of time, collaterals form from the left, and so, hopefully, there will be no ischemia to the remaining liver. And here is what was done: this aberrant right hepatic artery was embolized, and this is actually the CT post-surgery, two months after embolization. There was no significant hepatic ischemia, so that was a good call. But if you didn't know that this variant was there, that could not have happened for this patient. Now, additional vascular findings, again, right?
So we're talking about pancreatic cancer, but it's very important to give all these vascular details. For example, celiac stenosis or median arcuate ligament syndrome: because you're changing so much of the vascular anatomy with Whipple surgery, you have to mention those, because you need to modify your surgery, basically, with either a bypass or opening it up ahead of time. And then SMA stenosis: again, everything is basically fed through the celiac, and so if you don't address this ahead of time, this could all be very catastrophic; there could be a leak, et cetera. Okay, and these are a couple of papers from the surgical literature arguing that patients undergoing pancreatic surgery should basically undergo angiography for vascular planning, right? And why is that? Because, they claim, CT is not good enough. But CTA, I want to say, is great. We all know it's great, and it can be as accurate as angiography for vascular depiction. It's just a matter of reporting, and with disease-specific structured reporting you actually have to go, as a checklist, through all the vessels and make sure that there is no abnormality there, any aberrant anatomy or involvement or stenosis or whatever it is. You basically have to go through each vessel. Another case in point is uterine fibroid MRI, a very frequent disease involving a lot of women. We frequently do MRI for surgical planning, right? It is a common benign disease, it affects a lot of women, with a lot of symptoms; I don't have to tell you about all of this. A lot of treatments are offered: some of them are done by gynecologists, some of them medical and some surgical, and IR also treats a lot of uterine fibroids. So you basically have two clients here: you have the GYN surgeon and you have IR, both of whom can treat these patients. And they're looking for slightly different things. Some of them are common, but some are very different.
And when I did IR, I knew exactly what I was looking for on those pelvic MRIs. However, when I talked to my colleagues in GYN, I was very surprised to learn what they are looking for when they look at that pelvic MR. So when you create that structured report for fibroids, you need to satisfy both clients, and that's what we did. We basically combined the lists of details that are needed for both IR and for gynecologists, and we created that template. We implemented it, and we saw that it was much more helpful in procedural planning, both for gynecologists and for IR. And we provided a lot of details, right? The overall uterine size; the largest size of the fibroid; the thickness of the myometrium external to the fibroids, which was very important to the gynecologists, not really important to me, but very important to them; the vascularity of the fibroids, as you can see here, which was important to IR; the intracavitary component of the fibroids, which was important to both; and the presence of adenomyosis, which was very important to IR because the treatment is different and the size of the particles is different. So for all of those details, you have to talk to your different providers to figure out what exactly they are looking for in those cases. And so here is the list: we combined it all together, and overall we saw that we were able to facilitate surgical planning and clinical decision making with the structured reporting. So, if you want to implement disease-specific structured reporting, you're very welcome to look it up in the RadReport template library by RSNA, or you can see it in the textbook that we have now published, which has a variety of disease-specific structured reports covering pretty much every area, excluding MSK: oncological imaging, abdominal, thoracic, neuro, and cardiovascular. All right, that's it, and thank you very much.
I'll talk about the RADS and what their value is. If you go to the ACR website for RADS, which stands for Reporting and Data Systems, you will notice that there are currently 10 ACR-supported RADS, and if you go beyond that and count all of the non-ACR-supported RADS, that number roughly doubles, and more and more RADS keep coming out. You may be wondering why. If we look at RADS as a system, every RADS has defined categories, for most RADS ranging from 1 to 5, although some deviate from that, and the general approach is that with each increasing category, the probability of the disease of interest increases. Each RADS provides criteria for each of the specified categories, defines standardized terminology, and provides management recommendations for each of the defined categories. This is the landing page for the ACR RADS, and it specifies purposes that apply to all RADS. They provide many great things, but I want to focus on the fact that RADS provide an assessment structure and classification for various disease processes. Let's see why that matters. If I showed you this interesting fruit and asked what you think it is, and you had never seen it before, you might think it is a huge lemon. You might say it looks like an orange, but somewhat weirdly shaped. Perhaps a grapefruit, but not quite the right color; maybe it is just discolored. Or you might say it is a pomelo, which is exactly what it is. But to look at it and say, "I'm looking at a pomelo," really requires knowledge and experience: you have to know that pomelos exist and what they look like. I've never seen one in real life, to be honest. And the knowledge and experience of individual radiologists varies quite a bit.
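The common shape shared by the RADS systems, ordered categories, criteria, and a management recommendation per category, can be sketched as a simple lookup. This is a hypothetical skeleton only; the category meanings and recommendations below are placeholders, not content from any actual ACR RADS.

```python
# Hypothetical skeleton of a generic RADS: category number maps to a
# meaning (probability of disease increases with the category) and a
# management recommendation. All strings are illustrative placeholders.
RADS_SKELETON = {
    1: {"meaning": "negative",              "management": "routine care"},
    2: {"meaning": "benign",                "management": "routine care"},
    3: {"meaning": "probably benign",       "management": "short-interval follow-up"},
    4: {"meaning": "suspicious",            "management": "further workup or sampling"},
    5: {"meaning": "highly suggestive",     "management": "definitive treatment pathway"},
}

def recommend(category: int) -> str:
    """Return the management recommendation tied to an assigned category."""
    return RADS_SKELETON[category]["management"]
```

The value of the structure is exactly that the referring clinician can perform this lookup: a category in the report resolves deterministically to a next step, which free-text phrasing cannot do.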
How we assess and interpret images really depends on our training, our subspecialization, our practice type, how often you see pomelos in your practice (you may have looked at that image and known exactly what it was), and on risk aversion: how willing you are to say, "I think this is what it is," and accept the risk of being wrong. This is where second-opinion readings come in. You know the saying that if you have two radiologists, you have three opinions, but second opinions work because they generally come from radiologists who are very specialized in the area in question. If we look at the data, the rate of discrepancy between the original reading and the second-opinion reading can be as high as 32%, and second-opinion readings tend to be very accurate when checked against pathology and outcomes. They can change staging in various diseases in up to 18% of patients and lead to a change in management in more than a third; they can avoid surgery in a substantial proportion of patients or change the surgical approach; and they can decrease the frequency of follow-up imaging recommendations from 22% to 6%, which is quite a lot. The idea behind, and impetus for, using RADS is that the RADS systems provide assessment structures and classifications such that, if you follow the guidelines for a particular RADS, even if you are not an expert in that disease process, you should be able to correctly assess the images in a way that closely mimics that of an expert. Now, you may be saying, please show me the data to support this statement. I don't have a lot of data, but there is some data to support that this is true. This was an assessment of radiologists evaluating liver lesions with contrast-enhanced ultrasound.
The investigators asked junior radiologists just out of fellowship to assess liver lesions as benign versus malignant using conventional criteria, whatever they had learned in fellowship. They then trained the junior attendings on CEUS LI-RADS and repeated the exercise, and compared the junior radiologists' performance with that of senior radiologists who assessed the same liver lesions as benign or malignant using conventional criteria. The sensitivity of the senior radiologists was fairly high, and, as expected, with conventional criteria the junior radiologists did not perform as well. However, when they applied the CEUS LI-RADS criteria, all of a sudden their sensitivity became quite comparable. Similarly for specificity with conventional criteria: the senior radiologists had much more experience and performed very well, and, as expected, the junior radiologists did not; but when they applied the CEUS LI-RADS criteria, they performed very similarly, if not a little better than, the senior radiologists. Finally, for positive predictive value, again benign versus malignant, the senior radiologists did well with conventional criteria, the junior radiologists did not do nearly as well with conventional criteria, but they did similarly well, if not better, with CEUS LI-RADS. So at least in this one study, there is evidence that teaching junior or less experienced radiologists these standardized approaches to diagnosis really can improve the diagnostic performance of the interpretation. Now, we all know that appropriate and accurate interpretation is only part of what we do. As Dr. Brook and Dr. Goldberg-Stein elegantly explained, we also need to communicate our findings very accurately, in a way that our referrers can understand. My mom is an internal medicine physician, and she will often call me and say, can you tell me what this report means? I don't understand.
"Ground glass" is one of the things she asked me about. One of the goals of RADS is to reduce variability in terminology so that communication between radiologists and referring physicians becomes easier and clearer. Here is an example, a real case from about 2015, before structured reporting and LI-RADS were implemented at my former institution. You can see a small lesion, and this is a copy-and-paste of the report issued for these images. Notice that the impression was a small lesion "suspicious for a possible small HCC." This is real text from a real clinical report: non-standardized terminology and free-text reporting. If this patient had come in after we implemented LI-RADS and structured reporting, the lesion would have been reported as a 1.4-centimeter LR-5 observation in segment 8, with a standardized template inserted into the body of the report. That gives us standardized terminology and a template. I'm not going to talk about the template, because Dr. Goldberg-Stein and Dr. Brook went into wonderful detail, but I'm going to focus on terminology, standardized versus non-standardized, and again, "suspicious for possible small HCC" versus "definite HCC." Why does it matter? Imagine that as a radiologist I see a finding, a fruit; I look at it, I assess it, and I am pretty sure I'm looking at an orange. As a matter of fact, I am 100% confident that this is indeed an orange, and I am going to say "fruit, diagnostic of an orange," because this is how I describe things when I am 100% confident of what I am looking at. Well, one clinician may read my report and think: "fruit, diagnostic of an orange," okay, Victoria is about 90% certain she is looking at an orange, and allows about 10% that this is an apple.
Another clinician may read the same language and conclude that my confidence that this is indeed an orange is about 75%. What if my colleague looks at the same finding and is also 100% confident that it reflects an orange, but has a slightly different approach to language? For my colleague, "fruit, consistent with an orange" is the phrase used in the report to denote 100% confidence. Well, the first clinician reading that may infer a 60% confidence level that this is an orange, and another clinician may infer about 80%. So notice: we have two radiologists who are 100% confident and agree on the diagnosis, but depending on the choice of language and how that phraseology is perceived by the referring physician, the perceived level of confidence is going to be quite different. You may be thinking this is all theoretical, but there was a wonderful study that put together all of the modifying phrases we use in our reports and asked radiologists to rate what each phrase means to them in terms of how confident they are in the diagnosis. Notice that there is not a single one of these phrases that we as radiologists all agree means 100% confidence in the diagnosis. Things are even worse when the same question is asked of clinicians. "Diagnostic for" is one of the phrases I used in my example: a little over 50% of clinicians would take it as 100% certainty in the diagnosis, and one respondent would take it as meaning the diagnosis is very unlikely to be certain. It is even worse for "consistent with": only 7% would accept it as certainty in the diagnosis, and a substantial proportion would take it as quite low confidence in the diagnosis.
Another example: reports with non-standardized descriptions like the one I showed you, "possibly," "may represent," "suspicious for" HCC. Hepatologists were asked to rate these reports, just the reports, in terms of the likelihood of HCC, while the images were reinterpreted using LI-RADS. Of the observations that were LR-4 or less on imaging, meaning less than 100% certainty in the diagnosis of HCC, more than a third were perceived by the hepatologists as definite HCC based on the non-standardized descriptions. Vice versa, of the lesions that were definite HCC or had definite tumor in vein based on imaging features, with non-standardized descriptions almost half were interpreted as being less certain than what the imaging actually showed. So we can do better. If we adopt standardized terminology, we will have consistent communication: if we both see an orange and both agree 100% that this is an orange, we will both say "fruit, definitely an orange," and no matter who picks up the report, they will know that the radiologist is 100% confident in the diagnosis. Why does it matter? I think we, and residents, particularly junior residents, sometimes forget that in most cases clinicians do not look at our images; they look at the report, and they treat the patient based on our report. So if a clinician gets a report that says "suspicious for possible small HCC," what should the clinician do with it? Biopsy, treat, follow up? What is the management of a "possible small HCC"? But if the clinician receives "1.4-centimeter LR-5 in segment 8," the clinician can go back to the LI-RADS management recommendations, and these recommendations, again, are RADS-specific, and say: okay, this is LR-5, definite HCC, no need for biopsy, the patient has to be referred for multidisciplinary discussion of treatment.
And again, there is some data showing that this works in real life. One study looked at nonspecific terminology: when nonspecific terminology was used, about 6% of patients were referred for the appropriate management, and that number increased to 50% once standardized terminology was used in the report. I'll briefly mention another benefit of RADS: one of their goals is to provide infrastructure for reporting and data collection. When we have inconsistent terminology, there are various washout definitions across various studies, and if we look at a particular case and ask whether this patient has washout, the answer depends on which study's definition you choose. When we then try to synthesize the data, everything is very heterogeneous, and it is very difficult to get a strong understanding of what things are. However, all RADS systems, including LI-RADS, provide a lexicon, and all of them are available for free on the ACR website. The terminology is standardized, with standardized definitions, so if everybody uses the same language and we try to synthesize the data, we are looking at very homogeneous data points and can get a really good understanding of the ground truth. Why does it matter? If we use standardized terminology and standardized templates, we enable the collection of large data sets, not only in academic institutions but also in private practices. Then we can validate the RADS, again, not only in tertiary-care centers but in private practices; use the data for quality assurance and improving outcomes; and generate these wonderful meta-analytic data points on what each of the categories means.
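Standardized templates make large-scale data collection mechanical: because the category appears as a predictable token such as "LR-5" rather than free-text phrasing like "suspicious for possible small HCC," it can be extracted and tallied across many reports. A toy sketch of that idea, with hypothetical report snippets:

```python
import re
from collections import Counter

# A LI-RADS-style category token ("LR5", "LR-5", "LR 4") is predictable
# enough for simple pattern matching; free-text hedging phrases are not.
LR_PATTERN = re.compile(r"\bLR[- ]?(\d)\b")

def tally_categories(reports):
    """Count category tokens across a collection of report texts."""
    counts = Counter()
    for text in reports:
        for match in LR_PATTERN.finditer(text):
            counts[f"LR-{match.group(1)}"] += 1
    return counts
```

In practice this is the simplest possible stand-in for template-driven data capture; real registries would pull discrete fields directly from the structured template rather than regex-mining the prose, but the enabling factor, a standardized lexicon, is the same.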
Then, when clinicians receive our report and see "fruit, definitely an orange," the category will no longer reflect only my level of confidence; it will also reflect the data, and we will have data-driven management. To summarize: ACR RADS provide a standardized approach to the diagnosis and reporting of various disease processes. Each RADS provides criteria for diagnosis, a lexicon, and guidelines for reporting and management. The goals are to decrease variability in diagnosis, improve the clarity and comprehensiveness of communication, and enable the collection of large data sets, which we can then use for validation of the RADS, as quality-assurance tools, and to improve patient outcomes. With that, thank you again for your attention and for coming so late.
Video Summary
Dr. Stein's presentation focuses on advocating for standardized structured reporting in radiology, emphasizing its value in improving healthcare delivery against the backdrop of the Institute for Healthcare Improvement's triple aim: enhancing patient experiences, improving population health, and reducing care costs. He argues that structured reporting enhances the clarity, accuracy, and actionability of radiology reports, mitigating errors associated with nonstandard reporting methods. Evidence from studies shows structured reporting enhances diagnostic accuracy, particularly in disease-specific situations such as tumor staging and surgical planning, leading to better clinical management and outcomes.

Structured reporting also supports financial success in radiology by enabling accurate coding and billing, reducing documentation errors that can lead to reimbursement losses. The approach also lays a foundation for innovations, including AI-assisted reporting and multimedia reports, which further enhance efficiency and clarity.

Dr. Victoria Cherniak emphasizes disease-specific structured reporting's role in clinical management, providing more precise surgical planning and patient care through detailed and systematic data presentation. Moreover, ACR Reporting and Data Systems (RADS) harmonize diagnostic processes across various diseases, increasing diagnostic accuracy and facilitating data collection for further healthcare improvements. Overall, structured reporting enhances communication among radiologists, referring physicians, and patients, while aligning radiology with modern healthcare frameworks.
Keywords
structured reporting
radiology
healthcare improvement
diagnostic accuracy
AI-assisted reporting
ACR Reporting and Data Systems
standardized reporting
clinical management
financial success
Copyright © 2025 Radiological Society of North America