Neuroradiology Reporting Do's and Don'ts (2021)
W5-CNR08-2021
Video Transcription
In this session of do's and don'ts, I'm gonna take on this topic of top five tips to transform your neuroradiology reports. And I'd say that the words in your reports are powerful. You are in control. You really have a chance to transform reports. And particularly in the areas that I care the most about in healthcare and radiology, that is maximizing efficiency, not only yours, but also the provider's, reducing healthcare costs that are unnecessary, and then empowering the patient. Your radiology reports have the ability to do all of these three things. And so in the next 20 minutes, I'm gonna talk about why your report matters briefly, and then mostly focus on my five tips for better reports. Now, I'm gonna take you down my memory lane. This is St. Vincent's Hospital in Melbourne, Australia. This is where I did my medical school training, my internship and residency, and my radiology residency. This big building, the fancy new one, is the inpatient building. And this brown building, the older one, is our outpatient clinics. And radiology was front, in the middle, on the ground floor. And at the time I started my training, there was no PACS, there was no PowerScribe, there was a typist that typed what I dictated. And so in order to get the results for my oncology patients I would have to walk down 10 flights of stairs and go to radiology and get the answer there. And that was how radiology was practiced. Today it's different, and particularly during COVID, the report may be the only product your patients and providers see, and Doug's mentioned that. So this is really important, what you put in your reports. And the problem with bad reports is there's time wasted clarifying the meaning of the report. There's potential mismanagement, which leads to harm to the patient, and additional costs, additional resource utilization. It can cause patient anxiety without you knowing it. And then I've heard this at many parties that I've attended with non-radiologists, but I get teased about my correlate clinically and cannot exclude, so it's really detrimental to our reputation. So are we ready to make a change? Well, I often lecture to our residents about this at the workstation and show them examples. And I really thought, how can I put all my thoughts together in an organized fashion? How do I do that? So I turned to this pyramid, this hierarchy of needs. Does anyone know whose hierarchy this is? Yeah, Maslow, yeah, Maslow. Abraham Maslow was an American psychologist, and he proposed that humans are motivated by these five categories of needs. There's the basic needs, food, water, shelter, and safety needs. And then after that, there's a sense of belonging, there's esteem, and all of these are deficiency needs: you actually have to meet each one before getting to the next. And then finally, you get to the point where humans are purposeful, able to contribute, creative, and living to your full potential. So I thought, ah, that could be the basis of Hoang's hierarchy of needs for radiology reports. And so here we have it. Everyone wants to add value. That's why we did radiology. We want to contribute to patient care. We want to get to the answer and help management. But in order to get to that, we have to, first of all, get paid. Your department needs you to feed them. You got to get paid. You need to generate a report that someone will actually read. Then for the sections that they do need to read, they need to understand it. And then fourth, don't cause annoyance or anxiety. 
And then finally, you can get to your potential and add value. So let's go through each of these. So first of all, will you get paid? As radiologists, we have the advantage of having coders and software for coding to help us. But still, it's our responsibility to make sure that our radiology reports have enough to assign two codes. And I've been learning a lot about this in the last year. One is a CPT code, which is what you and your technologist did. The next one is an ICD-10 code, which tells them, the payers, that it was medically necessary. So for the CPT code, you need to put in the modality, the body region, whether there's contrast. And for those CTAs of the head, you need to put in a statement that says 3D reformatted images were generated to evaluate the vascular anatomy. Otherwise, it's going to be coded as a CT brain with contrast. And then the ICD-10 code, it's really easy when you've got an abnormality, that you can say that there's a hemorrhage in the brain. But if you don't have an abnormality, you really have to put in the symptoms and signs that the patient had, rather than hesitancy language. So what do I mean by that? Hesitancy language is a diagnosis like rule out stroke, concern for stroke. That's not enough to get paid. You have to actually say right-sided weakness. Okay, so next on the hierarchy is, will they want to read it? And this pertains to the findings section, which they don't necessarily want to read. Has anyone heard someone say, oh, they only read my impression. They don't read the findings anyway. Yeah, yeah, every day. So why don't they read it? Well, blame yourself. It's too long. It's not organized to be read. It's not relevant. And so here I have an acronym to help our residents with this. It's SOAR, keep SOARing with your reports. Be short, be organized and be relevant. This steering wheel, does anyone know this car? It's a Maserati, one of my favorite cars. So I see too often residents making Maserati reports for providers that only want to drive a Kia. And those Kia drivers are people in the emergency department. They just want to know if this 80 year old woman who fell out of bed has got a fracture. So if you have a report that looks like this, describing how osteopenia and degenerative changes reduce the sensitivity for fractures, and then you're talking about the occipital condyles, you're making a Maserati. Don't make a Maserati. Just tell them, keep to SOAR and just give a brief description of where the worst canal stenosis is, where the worst foraminal stenosis is. If there's osteopenia, then say it. Okay, because I guarantee you, they're mostly going to just read the impression. They're busy. They want to be efficient. Okay, and on that topic, the controversial topic of structured reporting templates, I'm going to admit, I am a fan of them. And this study was a survey study that surveyed physicians on their preference of radiology report styles. One was unstructured and two were structured. And you can see from the results that radiologists preferred unstructured and non-radiologists preferred structured. And when you look at the results for the attendings, you can see 44% of attending radiologists preferred structured, but overwhelmingly 81% of non-radiologist attendings preferred the structured. And why is it? Well, it wasn't because of the length. That was about the same. It wasn't because of the comprehensiveness. That was about the same. It was because of the readability. 
You see, having a structured report is short and organized and it makes it more readable. And that lends itself to a more efficient workday. Now, I talked about relevance. So how are reports supposed to be relevant in the impression? So this is an example of my PowerScribe. We have a structured reporting template for MRI brain and the header populates and the indication populates. And so there we've got the ICD-10 covered and we've got the CPT code covered. And then there's a line that the residents will see: choose from a pick list. And so what they'll get to pick is, here we've got seven templates and they're for different indications. Okay, so there's a standard brain template that fits most things, but then here's an example of our brain tumor template, where they tab through and tab where the craniotomy is, where the resection cavity is, and then comment on the four factors that are really important for determining how the brain tumor is doing on follow-up. Here's an example of the headache template. You can see there are options to choose a non-contrast study as well. And then there's a standard MRI verbiage, but then there's also specific mention of relevant negatives for someone with a headache. So that's how you can be more relevant. Okay, onto our third need, can they understand it? Now, which part of the report do they actually really need to read and where can you have the highest impact? It's the impression, I've heard you whisper it, right, yeah. So look at this tweet. This person, Northwoods1980, says, what the hell does number two mean? And you wonder why people hate radiologists? What does number two read? It says, benign appearing 0.6 by 0.8 centimeter STIR and T2 hyperintense lesion in the L1 vertebral body. Recommend clinical correlation and follow-up. Okay, so my husband is in healthcare, he's an ID physician. He doesn't know what FLAIR is, okay? And we did med school together and I'm appalled, but they don't know what sequences mean. They don't know what perfusion metrics mean. So translate this for them, tell them what you mean, tell them what you think. And if you're gonna ask them to correlate, what do you correlate for? And when do you do the follow-up and what protocol do you do? So the ideal radiology impression to me is: answer the clinical question. It shows that you've read it and you know specifically what they're looking for. Give clear imaging recommendations, give the protocol that you want them to do and when to do it. So two to three months, don't say short-term follow-up, that can mean a lot of things. And then for findings that are just repeatedly normal, I tell them to stop follow-up imaging based on imaging alone, okay? So the impression, it's also really important to apply SOAR, and I tell our trainees to imagine someone's got your report in their hands, a medical student, and they're going to read it to the crankiest surgeon in the hospital. How is that surgeon gonna feel after that medical student reads your report? So there's a big difference between a report impression like this, when the medical student reads it, where this patient is post-op for decompression and we're mentioning findings that are just routine and then we're cluttering the impression. And then we're putting in signs in our impression rather than actually interpreting the signs, which is done later on. And then we're putting in things that are irrelevant and don't answer the clinical question and were present on the previous study. 
So calm the cranky attending, this was what the impression ended up being after I revised it. We just talked about improvement in the stenosis and in alignment and that the edema had improved but the myelomalacia remained. So clear language, say it simply; qualifiers like essentially normal are not helpful. Providers appreciate you being definitive and use fewer words, save everyone time. Now on to annoyance and anxiety, what can be annoying? Surely your reports can't be annoying. Well, cannot exclude can be pretty annoying because what it means is you're not really taking responsibility. You don't think it's the case but you don't wanna be taking the responsibility if it is the case. So rather than saying cannot exclude, say what they can do to exclude it, what other imaging study they need to do; do they need a follow-up? Do they need to do another modality? Another annoying term is correlate. I avoid this term entirely. I might say correlate for symptoms of headache if there's sinus disease or correlate for any symptoms of acute sinusitis but I don't just throw it out there. And then anxiety, one thing that definitely causes anxiety is describing a finding, whether you describe it in the impression like that other tweet or whether you put it in the findings, and not saying what you think it is. Particularly for this example of degenerative disease, we see disc bulges and disc desiccation all the time. And we know that that's pretty common in elderly patients but they don't know that's the case. So I put in this macro about how common these findings are in normal, asymptomatic patients, which I borrowed from Jerry Jarvik. He actually did a study on this and found that if he included it in his MRI lumbar spine reports, it actually reduced opioid prescription. And at a more local level, I've received feedback from our physiatrists that this is really helpful because they spend a lot of time calming the patient down and telling the patient that this is actually normal. But having this in our report really helps them give perspective to the patient. Okay, so we've finally reached the stage where we can add value. So how do we do that? I like to show this menu of a local Chinese restaurant where there are about 15 seafood dishes. And so if I went to this takeout place and asked what's the best seafood dish and they just listed each of them, how annoyed would I feel? Because I know that they know what's good in the kitchen, what was fresh, what other customers like. So that's the way we should think about our list of differentials as well. I definitely advocate saying what is most likely and then saying what's less likely. And if you don't know, tell them it's not typical of the diagnosis in question and what else they can do to work it up. And then another way to add value is definitely using evidence-based guidelines. The ACR has these guidelines for the RADs as well as for incidental findings. And you've got to understand that a lot of people spent time on these guidelines, a whole committee of experts, a huge literature search. So why reinvent the wheel? I would say just follow the guidelines and we're lucky to have them. And so incorporate them in your practice. And one example in neuroradiology is certainly NI-RADS, which is for follow-up of head and neck cancer that's been treated. And so here we're adding value by being very definitive in telling what level of suspicion we think the CT shows after treatment. And then what should be done afterwards, so definitive recommendations. 
And then this is not just based on opinion, it's actually based on a lexicon of signs and flow charts. So I use this, our providers appreciate it because it avoids that situation where you're saying residual enhancement could be residual tumor, could be post-treatment change, could be inflammation, could be infection. That's just not a helpful report. So I conclude by sharing with you that we all want to add value, most certainly, but in order to add value, we also need to think about all the other needs that should be met before we reach the value stage. And always, as you're signing off your report, think about how that report would be read from the perspective of our customers, our providers, and these days, increasingly our patients as well. Thank you.

Jenny did a great job, I think, teeing up my talk because I want to kind of extend the whole concept of reporting to how we can really kind of add value going forward with our reports beyond what we currently do today, and the reasons for doing it, and things that'll put radiology, I think, in a better position going forward. So we'll talk a little bit about our current state, and I'm glad that people are more interested in reporting now than they were before, because it used to be when you had talks like this, it usually infuriated a lot of people in the audience and there would be a lot of anger afterwards at the Q&A, and now I think there are more people that are open to the idea of what's going on. But we're going to talk a little bit about, I'm going to extend the topic about structured and unstructured information and why structured information, and particularly CDEs, is something really relevant to what we're doing. But for people that think that we've got this down and we're pretty much perfect in what we do in terms of reporting, there is a reason why people keep writing articles about how to create a great radiology report, because we're really not quite there yet. I mean, our current state is, our principal products are the images that we create and our reports, and for people that just do diagnostic work, this is really the stuff that we take pride in. But in many cases, the images have nothing to do with us. They're taken care of, but our reports really should be our source of pride, and that's a standard workflow. We perform a study, we identify the abnormality, we stick a microphone in front of our mouth and we just start talking and create one of these things. And hopefully there are some facts inside those reports. What people do with these reports in many cases, yes, they might just read the impression, but I'm willing to bet in this case, for somebody who has a primary brain tumor or whatever else, they're just taking the images. It's a predictable pathway. The neurosurgeon knows what they're doing and they're going to follow that pathway and get the patient into surgery and start them on chemo and so forth. So what I call this is kind of a single-use utilization of a traditional prose report. You say your stuff, it goes in the chart and then it goes into the shredder. It doesn't live on in any meaningful fashion going forward. So yes, in many cases we establish a diagnosis. It's certainly documentation for billing and so forth. It precisely defines the location of the lesion, perhaps, although the images themselves guide therapy, not necessarily the report itself. And whether or not the report has any prognostic information remains to be seen. 
So how can we demonstrate some better value in what we do? And we know we can always improve on our imaging by doing advanced physiologic techniques and stuff like that, machine learning, automation, automated segmentation, and so forth. But the way we report, the technology has changed drastically in the last 150 years, but how we generate our reports is very similar to what we've done in the past. And there are some really great potential uses for reporting if we could get to the data in a more accurate way. I mean, using the information that's in the report, you could augment regional and national data registries. You could provide value just by knowing the prevalence of diseases based on radiology reports regionally and nationally. You could make inferences about health resources allocation, epidemiology, disease states, quality improvement processes, and so forth. You could actually do clinical and comparative effectiveness research about the impact of imaging in a much better way than we can do today. There are merit-based incentive programs that potentially could reach radiology, in which having some better data in our reports would actually benefit us going forward. And one of the key things, because everything at all of our annual meetings is all about AI, is that reports really provide labels and annotations, potentially, for the images that we create. So this whole notion that we can extract concepts out of a report is this notion of natural language processing; if you can extract these things accurately, you can actually run these down some logic layer and you can actually drive a whole bunch of downstream processes to kind of improve what we do every day, but do them in a very automated fashion. And one of the key ones is going to be AI classifiers. So you say, hey man, I hear about this NLP stuff, there's a bunch of reports on this and stuff like that. There's a bunch of lectures about this here at RSNA and at other meetings, but you can't just repurpose reports by stuffing them into an NLP engine. And one of the problems is that there's still complete non-uniformity in how we report diagnostically. Heterogeneous styles; the reports may be reflective of local clinical cultures. Some of the vocabularies we use are non-standard to begin with. There's a lot of hedge terms to express uncertainty, some of the things that Jenny brought up in her talk, and it's difficult to extract specific concepts in many cases, because we use words like these, that are comfortable for us but don't necessarily carry a lot of meaning: mild to moderate, approaching severe, focal bulging, lateralizing. It's infuriating to the clinicians. And the literature is replete with this. I mean, this was an article that just looked to see if there was value in doing CT perfusion or MR perfusion for stroke. And in doing the systematic review, they said there were no studies that supported whether perfusion imaging improved any outcomes at all. And primarily, one of the things they pointed out is that there were 18 different definitions in the literature for tissue at risk. So it was so heterogeneous that probably there was no way to do a good meta-analysis one way or the other. This one, I love. 
You know, Dick Herzog, this was a brilliant idea of his, and I appreciate the fact that he let me tag along, but this was the best secret shopper study I've ever seen done, where he basically took an employee from his shop who had degenerative lumbar disease, scanned her twice at his shop, and then sent her around to 10 other centers within three weeks to be scanned, and then got the reports and dissected the reports for the imaging findings. And what he found was absolutely astounding. He pulled all of the imaging findings out: 49 distinct reported findings across the 10 reports. He found zero findings common to all 10 of the reports. About a third of the findings appeared only once in the 10 reports. And the agreement, the kappa values, were like abysmal between these reports. There was no way you would ever see agreement at all in this. And the miss rates were astounding. So things like nerve root involvement, which we think is important in disc herniation, these are really horrible numbers, central canal stenosis and so forth. So it led him to the conclusion that where your lumbar spine MRI is performed and who interprets it has implications for diagnosis, therapy, and outcome. And considering that this is our work product, this is how we're being viewed by the clinicians, this had downstream implications because this made it into their literature, leading them to say, and these are coming from the surgeons themselves, the findings demonstrate why spine surgeons should never trust a reading from a radiologist without viewing the MRI itself. When you see stuff like this, you wonder, well, you know, how do I justify even getting paid for the work I do? And we all know if you do clinical research, you know, either prospectively or particularly retrospectively, there's no value in the reports themselves. There's usually a specific finding or a series, you know, a sequence of findings that you're looking for specifically that are never mentioned in the reports, because everybody does everything differently in, you know, the way that they want to do it. So exams all need to be reinterpreted a second time. Wouldn't it be great if you could just take the reports and do a study instantly on the exams after the primary clinical reading to begin with? So they really don't have any innate value. You usually have to repeat the measurements because they're never done the same way, just like when we do comparisons. So everything really needs to be reread and remeasured. The data demands for medical AI, as I'm sure you've already heard, are huge. There are huge data silos in every one of our organizations. You know, it's tough enough to get the data, but the data needs to be labeled. If you're going to train a stupid AI algorithm and create a valuable model, it needs to know what you're looking for and what you're trying to classify very specifically. So medical imaging AI research is really data starved right now. And there's a huge demand for expert-labeled examples of diseases, but all of these require the same process. Let's pull out all the data. Let's get experts to label all the data again for the second time. And how do you do this with reports when the labels are really kind of imperfect? So there's this whole notion of we are awash with data but we don't have a whole lot of knowledge. There's not a whole lot of information because the information is crudely represented in our reports. 
So what would it take to leverage our reports for AI? So this is how we all work as radiologists. I mean, this is kind of an oversimplification. If you're trying to compare cats and dogs and apples and oranges, every reader has their own criteria in their own mind for either broad or narrow classifications for each one of these classifiers, these classes. In some cases, the boundaries between these might be very indistinct for one reader, but a robot, an automaton, a medical imaging algorithm might have very discrete boundaries between the classifiers, very well defined. And guess what? This robot can do it the same way every time. Show it the same case, and every time you get the same answer, whether you have it installed in your shop or someone else's shop potentially. So the thought is, well, let's see if we can build that kind of thing. So this is called DeepSpine. This is a convolutional neural network to look for degenerative disease in the lumbar spine. And sure enough, that's what this thing is designed to do. If you watch it, it scans through the lumbar spine and you see these, the gradings of spinal canal stenosis and foraminal stenosis are changing dynamically. Whether it's getting it right or wrong, it produces the same result every time, which kind of says, well, having consistent results might be better than always having the best result. So maybe we need to meet halfway, speak the same language. What can we do to kind of change what we do? Use the same nomenclature, as one answer; use similar concepts, similar format or structure in order to do this. And as Jenny already pointed out, referring MDs prefer consistent formats in the reports. And we all know what everybody's attitudes, radiologists' attitudes, are towards standardization of reports, but I'm happy to say I think there's been more of a drift towards more standardization in practices, at least in my opinion. And I think the secret sauce in doing this, because everybody has a fear of this, because they think what we're saying is we're taking away my ability to generate prose, and nobody is suggesting that at all. The secret sauce to this is this entity known as a common data element, which is not a report, and it's not a terminology. It's a very succinct observation, finding, or feature expressed in a controlled manner, meaning it's almost like a multiple choice question. You embed it in a report. You can still have prose in there, but you use it the same way whether you're in Fargo or you're in New York City. It's used the same way by radiologists everywhere else. So it's really like a subcomponent of a report, and we all know what these things are. These are things that, if you're in a training environment, you tell a resident, you can't dictate a stroke head CT unless there's an ASPECTS score in there. An ASPECTS score, a number from 0 to 10, is a CDE. It's a single-element CDE, but yet compliance for even using the ASPECTS score across the country is pretty poor, but yet if you ask some people, not everybody, they'd say having that in a report adds value to the report. I never understood why not just include it, you know? What's the problem with doing it? But it's really all the observations, you know, in order to do this, and it's almost anything we can think about: enhancement characteristics, foraminal stenosis. It's all the RADs that Jenny was just talking about. 
Those are all CDE modules that have been time tested, shown to be validated, verified to be reproducible amongst multiple observers. So you could generate a report with free text. You could have CDE concepts as separate subheadings, or you could bury it inside of the report, but having it in there in a very reproducible fashion makes it easy for natural language processing to discover it, extract it, and get highly accurate results. So you'd say, well, you know what? There have been a lot of advances. Why isn't this problem solved already? But you have to think about this. The NLP engine can only find concepts that are in there. If you don't use it, it's not going to be able to infer it. So it has to be in the report, and here are some nightmares from my own shop, the stuff that drives me crazy. You know, one of the trainees just says there's an infarct in the left frontal parietal temporal region, and they're done, you know? They move on, period, and, you know, sign the report. It doesn't infer the size. There's no way to infer ASPECTS. Do I even know if there's hemorrhage here at all? I don't even know the location because there's no such word as this. How about there's mild to moderate disc herniation at C4-5, period, next level? Well, there's no inference about compression. I don't know about stenosis. You know, nothing. NLP can't solve that for us, so that's the moral. If the concept is not unambiguously stated in the report, NLP can't find it. So it's a garbage in, garbage out phenomenon. But there are amazing results with NLP engines, and this was a study with, like, close to 180,000 reports, where they labeled these reports using multiple experts, multiple radiologists, and then trained an algorithm to see how well it found a lot of the most common findings in degenerative disease, like nerve root compression, fractures, disc herniation, and so forth, using a machine learning approach instead of a rules-based approach where you actually prescriptively write out a rule. The machine learning approach has incredible performance. You know, specificity, area under the curve is pretty high on all these cases, but the finding had to be in the report in order to capture it. That's the moral of the story. So what have we been doing about these common data elements? Well, it turns out the ACR and the RSNA have been working on this for about five years. First, we recognized this as a gap and an opportunity with reporting to begin with, where we wanted to develop a common schema for creating these common data elements and have a way to represent them, I don't want to say electronically, but a way to represent the knowledge model, the information model, in such a way that they can be consumed by people, us, and people who read our reports, but also by machines, and how to incorporate them inside of our vendor applications. So it's really for storage, discovery, and retrieval of these concepts. And this has been up for a while, and a lot of it has been populated with many of the CDEs that we know and love today, but there are still a lot of knowledge gaps. Fast forward, there was a collaboration on this project with the American Society of Neuroradiology, where we made an initial attempt to create a whole bunch of CDEs for some of the most common concepts that we use every day as practicing neuroradiologists. 
There are at least 242 elements, probably more, in 20 sets and modules for use, and they are posted on this thing called the RadElement website. And this has an API to it, so it's actually been used by vendors. Actually, here at RSNA, the IAIP, the Imaging AI in Practice demonstration that's down in the South Building right now, uses our common data elements to actually drive a bunch of AI applications as they talk to each other. So is there good content out there? You've already heard about it. The ACR's reporting and data systems: the granddaddy of them all is BI-RADS, which has been time-tested, but as Jenny pointed out, there are many others that are highly valuable and have a whole bunch of additional value, not only in describing findings, but also in the probability of disease. It helps with report organization and really kind of improves communication to providers, and there are many others out there. This is something developed by the Emory group called BT-RADS, just for assessing post-treatment glioma, that has also been tested and validated with multiple observers, and it works reasonably well, yet it is not really adopted universally at all. If you go to the National Institute of Neurological Disorders and Stroke, they have an entire website devoted to common data elements that could be used for research and for reporting on a whole bunch of neurologic diseases, a whole bunch of instruments that have already been developed through consensus panels and through the literature. And then if you go to our own literature, a lot of good work has been done in the past that has not been adopted in general reporting for things like central canal stenosis, lumbar foraminal stenosis, with really good agreement between multiple readers, root compression, all the things that are important to us and important to the surgeons, but yet we don't use these in everyday life. It's not like we have to invent this stuff, it's already out there. And then we've got a whole bunch more out there. The more you look, the more you realize a lot of good work has been done in the past by many of our colleagues that we probably could incorporate without a whole lot of work. And I know Wendy's going to talk a little bit more about some of these. The ACR has a thing called the Data Science Institute, and they have a thing called the Define-AI initiative. And this is really to codify specific clinical scenarios or use cases where AI can improve medical imaging. The concept is, if you're one vendor that's looking at, whatever, spinal stenosis, and you define it one way and vendor B defines it another way, there is no way to compare vendors with your data. You bring both vendors into your shop, they're going to get different results. Why not tell them exactly what we're looking for as radiologists, how we define severe central stenosis, and then you can see how well they actually compare. So the whole notion is you just substitute the robots for the humans in here, but unless we all work off of the same criteria for any type of imaging study, there'll be no way to do any benchmarking. And the Define-AI group has actually done this fairly successfully. There's a whole bunch of these in here on their list. I welcome you to go look at it. And are there any success stories with structured data? Well, our friends in pathology, the College of American Pathologists, have this notion of synoptic reporting, which they have adopted, by the way. 
And they tout it as improving completeness, accuracy, ease of creating reports, and accuracy of data extraction; it facilitates data extraction for research, cancer registrars, and so forth. And this is what a synoptic report looks like. It's terrifying, isn't it? It's a list of features, findings, observations, and an answer, a structured answer. This is an example of what they call not acceptable synoptic style reporting. Looks a lot like our reports. And the reality is that this is a marriage of exactly what we do: images, with labels applied to the images. But the amazing thing is that the clinical product, right out of the gate, has immediate secondary value for a whole lot of other things. You could use it for research. You can combine it with the results from another institution without doing any massaging of the data or rereading any of the pathology images. So it's amenable to disease registries and so forth. And so the whole notion is that we want to take all this rich metadata in our reports and use it to drive downstream processes. And there are vendors out there right now. This company, Smart Reporting, is in the North Building, and they've already completely rethought the way we do reporting, where we still generate prose text, but there's a way to inject CDEs directly into this report in a highly well-defined way to generate value. And if you think there are no success stories out there, this one was published a few years ago by Marc Mamlouk out in California, showing what he called contextual radiology reporting for neuroradiology templates, where he basically developed an accepted reporting system with buy-in from all his clinical stakeholders, and he used this to kind of deploy the reports in his health system. It elevated the value of the reports. The radiologists liked it better. Most importantly, the clinician stakeholders liked it better, and they used it as a model for disseminating reports across the organization. So I'd say don't fear change. So in summary, I'd say there's a lot of useful information in radiology reports created by domain experts. Turning this into information is still challenging. It could be transformed into knowledge if the radiology community would at least make a compromise in terms of agreeing to use some consistent terminology for the features that are most important in the reports. Free text and liberal reporting styles just have to go out the window. They don't provide any additional value. CDEs are a great way to do this, and they add value because then we can participate in data registries, do instant research and compliance work, and they really form the basis for machine learning classifiers. So thank you very much for your time. If you're interested in participating in our little initiative between the ACR, RSNA, and ASNR, just give me a call or contact me. And if you're interested in reading a really nice short book about the radiology report, there's one by my friend, Curt Langlotz. I get no money for this at all, but it's a really great book, and it's a little bit humorous. Thank you.

So about four or five years ago, I gave a talk at ASNR about spine oncology, and afterwards, Dr. Flanders came up to me, and he said, you know, the way you did this talk, it'd be amenable to a common data element macro. He was just starting that project then, and he invited me to join, and that, for me, has actually become kind of my passion. 
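Editor's note: to make the "garbage in, garbage out" point and the CDE idea from the preceding talk concrete, here is a minimal Python sketch. It is not the RadElement schema, PowerScribe, or any vendor's NLP engine; the element names, the controlled phrases, and the regular expressions are invented for illustration. The only assumption is the one made in the talk itself: an observation dictated in an agreed, repeatable form can be recovered from prose as structured data, while loose free-text variants cannot.

import re

# Hypothetical controlled phrases for two CDE-style observations embedded in prose.
# The patterns only work because the wording is agreed upon and repeatable;
# free-text variants ("some narrowing", "mild to moderate approaching severe")
# would not be captured -- the garbage-in, garbage-out point from the talk.
CDE_PATTERNS = {
    "aspects_score": re.compile(r"ASPECTS score:\s*(\d{1,2})", re.IGNORECASE),
    "left_L4_L5_foraminal_stenosis": re.compile(
        r"left L4-L5 foraminal stenosis:\s*(none|mild|moderate|severe)", re.IGNORECASE
    ),
}

def extract_cdes(report_text):
    """Pull embedded CDE-style observations out of a prose report; None = not stated."""
    found = {}
    for name, pattern in CDE_PATTERNS.items():
        match = pattern.search(report_text)
        found[name] = match.group(1) if match else None
    return found

report = (
    "FINDINGS: Acute infarct in the left MCA territory. ASPECTS score: 7. "
    "Degenerative changes, with left L4-L5 foraminal stenosis: moderate. No hemorrhage."
)
print(extract_cdes(report))
# {'aspects_score': '7', 'left_L4_L5_foraminal_stenosis': 'moderate'}

A brittle rules-based sketch like this is far weaker than the machine learning NLP engines mentioned in the talk, but it makes the dependency explicit: the concept has to be stated, in a recoverable form, before any engine can find it.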
I think this is, everything you've heard here today is some of the most important stuff that we need to work on in radiology. We have a lot of different issues that are important, but I think this is kind of the future, and we need to figure this out soon. So what can I do? I don't have anything to add over what they already said, because they said everything you need to know, but what I wanna do is give you a clinical example, maybe a success story, we'll see. So I am gonna give you a case. So this will be a little bit of neuro, too. You get a little bit of regular neurospine in addition to your reporting. So this was a 56-year-old woman who came into a busy county ER, middle of the night, underserved community, didn't speak English, had back pain. I was reading with my fellow. My fellow said, you know, all we got is back pain. Should I call down and see what else is wrong? I said, no, there's only one thing this can be, and that's osseous metastatic disease. Bear with me, because I'm gonna tell you how this fits in with the reporting. So these are our typical findings for osseous metastatic disease: sagittal T1, T1 hypointense. You see our STIR with the edema. You see the enhancement on our fat-saturated post-contrast image. So my fellow wrote a beautiful two-page report, and I guess it was a Maserati. Is that right, Jenny? Yes, it was a Maserati report. Beautiful, but I deleted it. So his impression was suspected metastasis, pathologic fracture L1, retropulsion producing moderate compression of the conus, 75% height loss, additional lesions. That's pretty good. I mean, that tells them what they need to know. This was my impression. So tell me which one you think is better. Unstable spine, SINS score of 13, and this is what I'm gonna tell you about in a minute. Recommend prompt surgical consultation. So when this report gets signed, right away, it doesn't matter whether it's the ER doc, the intern on his first day, or even the custodial staff, they know the urgency of this situation, and they know exactly what they need to do. Okay, so number two, lytic metastasis L1, pathologic fracture with retropulsion, greater than 50% height loss, involvement of bilateral posterior elements. Each of those phrases is from the SINS score that I'm gonna show you in a minute. Number three, high-grade conus compression, epidural spinal cord compression grade two. That's another scale I'm gonna show you. So now again, when the spine surgeons, we just heard that, both of you just said, oftentimes people don't wanna read our reports. The spine surgeons think they need to read it themselves. They need to look at the images because our reports are worthless. But from these phrases, and this is from their literature, these two scales, they can look at those two points. They know exactly what they're gonna get and what they need to do. So we're actually saving them some time. They don't even need to see the images. So my goal is to help these patients get optimized treatment quickly with this clear communication that you've been hearing about. So oncologic instability was the topic of my spine talk that Dr. Flanders said was amenable. I'm just giving you a little background here. I'm not gonna go into too much detail because it's not a spine talk, but it's important that we identify this early, because the consequences of spinal failure in oncologic instability are pain, paralysis, death. More important than their cancer, you wanna identify an unstable spine. 
So in neuro, or maybe all of life, all that matters is cord compression. Two ways this can happen: pathologic fracture with retropulsion, compressing the cord, or epidural extension of disease. These are addressed by the SINS score and the Epidural Spinal Cord Compression Scale from the surgical literature. And again, we're talking about how we can provide value to our clinicians. This is what they wanna know and this is what they use. And I'm hoping I can make this easy for you to use. So again, I wanna facilitate rapid triage of these patients to surgical consultation. Their cancer doesn't matter as much as their unstable spine in this case. The Spinal Instability Neoplastic Score. So again, I'm giving you background so you can see how we create these macros, the CDE macros that Dr. Flanders was just talking about. Five radiologic components, one clinical component, which is mechanical back pain. The higher your score, the more unstable your spine, and the more urgent your surgical consultation. And this was important because it was published in 2010. Before that time, there was no consensus on the definition, assessment, or reporting of oncologic instability. Sounds familiar, what we're talking about. Even in the surgical community, there was no consensus, no standardization. But they came together and made some decisions, and in their literature, they have defined these things. Now it's our turn to adopt this so we can optimize patient care. So these are widely used now by the whole treatment team in spine oncology, surgeons, oncologists, radiation oncologists, validated, excellent inter-reader reliability across multiple specialties, and incorporated into multiple treatment guidelines. Now, that last page just had what it looks like in the real paper. I made this little chart because it kind of goes along with what Dr. Flanders was talking about, the common data elements and the macros. So in the SINS score, you're asking six questions about the worst level of the spine. You might have multiple mets. All that matters is the worst one that's gonna fail. Your six questions: location of the metastasis, quality of the metastasis, lytic or blastic, alignment of the spine, how much collapse do you already have, involvement of the posterior elements, and pain if we know it. So those are your questions. You have a set, defined number of answers, just like Dr. Flanders was talking about. You don't get a prose answer, you have to choose, multiple choice. And depending on your answer, it's assigned a certain number of points. So if it's at the junctional spine, it gets three. I'm gonna show you this in a second too. Semi-rigid thoracic spine, you get one. For collapse of more than 50%, you get three, and so on. I'm gonna show you examples, but you add all these up at the end. 13 to 18 points, unstable spine, urgent surgical consultation. Now, this is kind of a different important point. From the surgical literature, we get to make this decision. We say there needs to be a surgical consultation. We're telling the ER that. People say, oh, I don't wanna tell my surgeons what to do, or whatever. But from their literature, five of these components are from our images and our reports. So we get to say this. I think that's, hopefully I'm conveying that correctly, because it is an important point. We get to make more decisions, and contribute more to patient care, than we think sometimes. So I'll just go through these examples just really quickly, because again, not a spine talk, but maybe a little bonus. 
So spine location: different areas of the spine, when the spine fails in these areas, when you have cancer, have different impacts. So if it's at your cervicothoracic, thoracolumbar, or lumbosacral junction, or at the occiput in the cervical spine, it's much worse than if it's somewhere like the rigid sacrum. And these are examples. This axial and sagittal CT: you can see a renal cell metastasis completely destroying and replacing the posterior elements. And when the spine fails here, you can see on the sagittal, nothing's holding the head on the spine. That person's in trouble. Contrast that with the melanoma met over here. It's very large, T1 hypointense, and on the CT. But when that fails, that's not gonna cause pain, paralysis, or death like this first case. So these get a different number of points. Now, when we're going through the SINS score, the next one's lytic versus blastic mets, and each of these points is justified in the literature. So they have their reasons why each one gets the number of points it gets. For us, this is a great teaching lecture for me to give to people. The point here, though, is this isn't something we're making up. I'm not saying, please report it this way. This is from their literature, and this is what they use. So we're not only addressing the different features of each met, we're actually getting to score it the way they would score it. So in the ER, or even in tumor board later, when you're in there with the neurosurgeons and the radiation oncologists, same thing. This is what we're using. So lytic mets get more than blastic mets. Here's a nice case of a blastic metastasis: coronal hyperdense met, sagittal T1, sagittal T2, hypointense. You often need a bone scan. Here's one, and I often get asked this question after I give this lecture: what if you don't know if it's lytic or blastic? I said, how could you possibly not know? And then I got this case like a week later. So sagittal CT, I defy anyone to see any problem here. It was called normal, and it looks normal to me. But then they got their MRI, and indeed they have a met. So what would you call this? And this is kind of an aside, but that would be mixed. We know every met really has a mixture of osteoblasts and osteoclasts at work. And this looks pretty equal. So that gets mixed. So alignment: if there's already subluxation or translation, they're already unstable. That gets the highest number of points in the whole scale. Is it normal? You get zero. Posterior element involvement, we all know that's important. Very important to the surgeons. In the lumbar spine, it's the pedicles. Thoracic spine, it's that costovertebral joint. Again, because in the thoracic spine, the ribs and the sternum convey extra stability. So bilateral posterior elements in the lumbar and thoracic spine get three. Unilateral gets one. Vertebral body involvement is the next point. Is it already greater than 50% collapse? You're in trouble. You get the highest number of points there, which is three. How about no collapse, but greater than 50% of the body involved? You know that guy's going next, so that gets one. And then pain is something we may or may not know. It's mechanical back pain, just like if you had degenerative disease. It's not the cancer pain that you get at night because of that cancer microenvironment. This is actually mechanical pain with compression that's better when you lie down at night. So if you talk to the physician or you know from the chart, you might know the pain score, but if you don't, you can still report this even without it. 
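Editor's note: to make the arithmetic the speaker just walked through explicit, here is a minimal Python sketch of the SINS tally. The component labels and point values follow the descriptions above and the published 2010 SINS system; the dictionary keys, the function name, and the category cut-offs written into the code (0 to 6 stable, 7 to 12 indeterminate, 13 to 18 unstable) are an illustrative summary, not a clinical tool.

# SINS point values as described in the talk (and the 2010 publication).
SINS_POINTS = {
    "location": {"junctional": 3, "mobile": 2, "semirigid": 1, "rigid": 0},
    "pain": {"mechanical": 3, "occasional_nonmechanical": 1, "pain_free": 0},
    "lesion_quality": {"lytic": 2, "mixed": 1, "blastic": 0},
    "alignment": {"subluxation_translation": 4, "de_novo_deformity": 2, "normal": 0},
    "vertebral_body": {
        "collapse_over_50pct": 3,
        "collapse_under_50pct": 2,
        "no_collapse_over_50pct_involved": 1,
        "none": 0,
    },
    "posterolateral_involvement": {"bilateral": 3, "unilateral": 1, "none": 0},
}

def sins_score(findings):
    """Sum the six SINS components and bucket the total as described in the talk."""
    total = sum(SINS_POINTS[component][choice] for component, choice in findings.items())
    if total >= 13:
        category = "unstable"       # 13-18: urgent surgical consultation
    elif total >= 7:
        category = "indeterminate"  # 7-12
    else:
        category = "stable"         # 0-6
    return total, category

# The kind of case worked through next in the talk: L5 (junctional), mixed lesion,
# normal alignment, >50% body involvement without collapse, unilateral posterior
# element involvement, mechanical back pain.
example = {
    "location": "junctional",
    "lesion_quality": "mixed",
    "alignment": "normal",
    "vertebral_body": "no_collapse_over_50pct_involved",
    "posterolateral_involvement": "unilateral",
    "pain": "mechanical",
}
print(sins_score(example))  # (9, 'indeterminate')

This is exactly the pick-list logic that a PowerScribe macro or CDE module encodes: a fixed set of questions, a fixed set of answers, and a total that means the same thing in every report.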
So do I have time for a quick example, Doug? I don't know, because it says I have five seconds. Okay. So this came in not too long ago. This was a 40-year-old, no medical history, back pain, radiculopathy. Got their x-rays. Their x-rays were read as normal, and they look pretty normal to me. Six weeks later, still had radiculopathy. It was getting a little bit worse. Got this MRI. So what do you think? Osseous metastatic disease, it has to be, right? So transitional anatomy. So this is L5. So T1 hypointense, STIR hyperintense. And you can see on this axial why they have the radiculopathy. Huge mass in the neural foramen compressing the nerve root, and in the paraspinal soft tissues. So we read these really quickly and I was able to actually call the guy that ordered it and say, you might want to catch this person. They might still be in the parking lot. Get them in, if you don't know they have cancer, which it didn't look like from the chart. Get them back in for a CT. And they actually managed to catch them and get them in. So what is the most important thing that this person has to worry about? Is it their cancer? Or is it the possibility of spinal failure, pain, paralysis, death? I would argue the latter. So this is how you would apply the SINS score to this patient. So L5, junctional spine. I've shown you the chart up here again. And this is actually something I put into PowerScribe. So this also gets to how easy this is for you to do. L5, junctional spine, gets three points. Lesion quality, I guess mixed. Couldn't really see it too well on those x-rays that I showed you first. Can't see it too well on the CT. So that gets one. Alignment is okay, so zero. There's no collapse yet, but there's greater than 50% involvement, so that gets one. Unilateral posterior element involvement, this pedicle, so that gets one. And we knew from the clinician, well, this was the reason for the study, they had severe mechanical back pain, so that's three. So their SINS score was nine. So that falls in the indeterminate category, but right now in the surgical literature, more and more of these indeterminate patients are going right to surgical consultation and they're operating on more and more of them. But this is something I could put in my report. They went to see the spine surgeon. Then they got worked up by the oncologist, actually kind of all at the same time, but we wanna make sure that they're not gonna have spinal failure first and foremost. So very quickly, the other aspect: I told you cord compression's all that matters in life. We don't want that, pain, paralysis, death. Told you about the fractures with the SINS score. The other thing is epidural extension of disease causing cord compression. So what do you think about this case? This is renal cell carcinoma, axial post-contrast fat sat. You have a little bit of epidural extension of disease here. You have your sagittal. I don't know what it's doing there, but think about what you would call this. What would you say? Is it mild compression of the cord, or would you say epidural disease, or what would you say? I think everybody here would probably say something different. And if you think about the people in your group, you may all say something different than each other. And as you just heard, that's a problem when we're all seeing different things. Not only for the clinicians, when they see our reports, we're all seeing different things. 
They have to go read it themselves because we're all seeing different things, but also for the robots that Dr. Flanders is going to send to help us. So here's another case, metastatic angiosarcoma of the breast. How about this one? So epidural disease is all around the cord, circumferential. I don't know, mild, moderate, severe? I don't know. What do you think? And would your partner say the same? And would Dr. Flanders say the same thing I do? I don't know. I would say the same thing Flanders does. But again, you need some kind of standardization. And in the surgical literature, they created this. So again, in 2010, the Epidural Spinal Cord Compression Scale was published. A six-point scale for planning surgery or radiation. They have standardized this. They're using it widely in all their treatment algorithms. We can use it too. And it makes our lives easier, actually. We don't have to guess. I don't have to call Dr. Flanders and say, what would you call this? Would you say this is mild or moderate? Because I have a scale that's gonna tell me. So this is just a little background. It matters because the treatment depends on how much epidural extension of disease there is. And I won't go into detail about that because I'm almost out of time. But if the person has, like this renal cell carcinoma, a big met coming out, compressing the cord, high-grade cord compression, that person actually needs surgery to take out that tumor before they get radiation therapy. This is an example of that: separation surgery. That's from the surgical literature. You'll have to come to a different talk to hear about this. If they do not, if they have low-grade disease, zero or one (zero is confined to the bone, one is in the epidural space but not compressing the cord), they get to go right to stereotactic radiosurgery, radiation therapy. So huge difference. These people need surgery first. These people can go right to radiotherapy. So what we say makes a difference. So do you want to say it in your report? Or do you just want to write out your Maserati report and let them decide and do all the work again, right? Redoing all the work. So this is that scale. So like I just said, here are some examples. Grade zero is in the bone only. Grade one, you have some epidural extension with thecal sac deformation. Actually, I think it should just be zero and one. It was, but then the radiation oncologists added the A, B, and C because they think they can tell the difference. Hopefully there are no radiation oncologists in here, but I think it's hard to tell. Anyway, low-grade versus high-grade. And this is the high-grade example. So compression of the cord: if you have a little bit of CSF left, that's grade two. No CSF, that's grade three. These people need a separation surgery. Take out that tumor before they can get their radiation therapy. So these are those two scales I just talked about. It's important for us to report both on any case of osseous metastatic disease to optimize treatment of these patients. This is what we talk about in tumor board. Anybody who goes to spine oncology tumor board, this is the discussion. But even if you don't, if you're reporting this stuff, you're helping the patient and you're helping the people in those treatment situations. So this is the algorithm that is widely used to treat these patients. There are four categories that they think about for any individual patient to make their decision, whether it's radiation, whether it's surgery, whether it's something else. 
Neurologic status: how much cord compression? Oncologic: what is the histology? Is it radiosensitive? Mechanical: is the spine stable? And the systemic status of the patient. So two of the four categories are addressed by our imaging and our reports, by the SINS score and the Epidural Spinal Cord Compression Scale. 50% of the data that goes into treating these patients comes from us and our reports. And I threw these in at the last minute because nobody believes me. I've been talking about this for years and people are like, oh, that's nice. I've gotten to work with a lot of spine surgeons lately, and these are pictures from Twitter, because they talk about this stuff all the time. And Paul Park from Michigan, a very famous spine surgeon, was just on one of our conferences last week saying, I wish the radiologists would use the SINS score and the Epidural Spinal Cord Compression Scale, and a bunch of other ones too. And John Shin, my hero from MGH, says the same thing. So that's just kind of trying to give some justification to what I'm telling you. But you've heard it from these two, so you probably believe me now. Anyway, so I think this is a success story. So I started talking about this four years ago. Nobody had heard of these things. Now more people, I think, are using it. It takes time, but you can see the value. It makes it easier for us. I actually didn't even show you how I created a macro in PowerScribe. It's very easy. You don't have to memorize that. You don't have to download the papers. This is an example of what Dr. Flanders was just showing: a text report, a regular prose report, with a macro that's inserted in. I use PowerScribe, so all I say is, PowerScribe SINS, or PowerScribe Epidural Spinal Cord Compression Scale, and it puts it in there. It's a pick list. All you have to do is click on it. You don't have to memorize anything because everything's written down. I click, this is the level, this is the alignment, this is the pain, whatever. Click on it. All I have to do is add it and I can have the rest of my prose report. So it's not taking away any of my freedom, but it is adding the information that the treatment team wants. So that's all I'm going to say right now. I hope I didn't go too far over time. So use the standardized language. Use what we are learning from our surgeons. They want to know. 50% of this data comes from our reports. And remember that we make important patient decisions about stability assessment and management. That's it. Thank you very much.
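Editor's note: to close the loop on the two scales above, here is a minimal Python sketch of the triage logic described in the talk: low-grade epidural disease (ESCC grade 0 or 1, including the 1a/1b/1c subdivisions) can go straight to stereotactic radiosurgery, while high-grade compression (grade 2 or 3) is considered for separation surgery before radiation. The grade labels and the function are illustrative only; as the speaker notes, the real decision also weighs the other categories listed above (neurologic, oncologic, mechanical, systemic).

# ESCC grades as described in the talk:
#   "0"              disease confined to bone
#   "1a", "1b", "1c" epidural extension deforming the thecal sac, no cord compression
#   "2"              cord compression with some CSF still visible
#   "3"              cord compression with no CSF visible
LOW_GRADE = {"0", "1a", "1b", "1c"}
HIGH_GRADE = {"2", "3"}

def escc_pathway(grade):
    """Map an ESCC grade to the pathway described in the talk (illustrative only)."""
    if grade in LOW_GRADE:
        return "stereotactic radiosurgery / radiation therapy"
    if grade in HIGH_GRADE:
        return "separation surgery, then radiation therapy"
    raise ValueError("unknown ESCC grade: " + repr(grade))

for g in ("1b", "2"):
    print(g, "->", escc_pathway(g))
# 1b -> stereotactic radiosurgery / radiation therapy
# 2 -> separation surgery, then radiation therapy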
Video Summary
The video discusses the importance of improving neuroradiology reports to enhance healthcare efficiency, reduce unnecessary costs, and empower patients. The speakers emphasize that words in reports are powerful and can transform healthcare outcomes. They outline a hierarchy of needs based on Maslow's concept, adapting it for radiology reports to include getting paid, ensuring readability and understanding, avoiding anxiety and annoyance, and ultimately adding value. The speakers suggest using structured reporting templates and common data elements (CDEs) to achieve these goals. CDEs provide standardized observations that streamline report creation and contribute to data analytics and AI. Examples include the SINS and Epidural Spinal Cord Compression scales for assessing osseous metastatic disease, providing clear communication for surgical consultations. These approaches promise valuable insights beyond traditional reporting, allowing for better patient care and data utilization in research and AI applications. The overarching message is to adopt consistent terminology and structured data elements in reports to improve diagnostic accuracy, clinician communication, and overall healthcare delivery efficiency.
Keywords
neuroradiology
healthcare efficiency
structured reporting
common data elements
Maslow's hierarchy
diagnostic accuracy
patient empowerment
data analytics
AI applications