QI: Value in Imaging 1: Value in Radiology | Domai ...
MSQI3118-2022
Video Transcription
Today, we have three talks during this session. I'm going to be talking about my concept of what value is in radiology. Things are pretty good for radiologists right now. All of you, I'm sure, are enjoying a very good life as a radiologist. Growing job opportunities. You just need to look at any of the markets right now. This, I've noticed in many, many years. Increased demand for our services, where in fact we're burning out trying to keep up with the volume of cases and patients. Emerging disciplines within radiology, which is terrific. Far more than any other medical field can claim. AI, data science, machine learning. Really, really exciting new innovations in our field. Expanding educational programs. A new IR residency. The medical students are back. We're getting more and more applications now than we've had in many, many years. And the quality of candidates are absolutely outstanding. We all know that the IR residency actually is the most popular of all medical residencies and the most competitive. We're seeing terrific technical and technological advances. Robotic interventions is becoming more and more prevalent in interventions. And the professional salaries are growing. This is a pretty good picture that we all should be very, very happy about. However, this is what you think the real world of radiology is. And you're living in la-la land. You have to be very careful, because if you start looking through all of this at the horizon, there are warning signs. There really are challenges arising over here. And it doesn't take any expert to pick up any of our journals to see some of the challenges that we are experiencing. Radiologists burning out faster than predictive. Correct. Estimates are that 40% of imaging studies are still unnecessary. So why are we doing these? Outsourcing of radiologist work has no boundaries. You can read each of these for yourself. Beware of rays. We've heard some federal talk about radiation not being as dangerous as we know it to be. Radiologists' salaries soon to plummet. This is great for us in academia. We might be able to recruit again. Radiologists are still not getting the message. They're reluctant to embrace change. Why is it that for five years in a row, RSNA has had talks and sessions on value? What is it that we're not getting about value? Why do we keep on needing to talk this? Why are we not walking that talk? So let's try and understand what it is. And let's go to an expert in lists of causes. And what are the top 10 reasons that we think radiologists have not quite embraced this value paradigm? We're not sure what future reimbursement models might be. This will never happen in my practice. There is no evidence that value drives improved outcomes. They've been talking value for years and they will continue to do so. We will continue to do so. Uncertainty over about accountable care. Fee for service will never go away. Well, all you need to do is to visit Boston, Massachusetts if you want to see to what extent it's gone away. The path of least resistance is the easiest one. If it ain't broke, why do we even need to think about fixing it? And the phrase value proposition just doesn't make sense. It's not part of our lingo. It's not something that we understand. What is the main number one cause of why we think radiologists have not embraced value? Well, we don't really know what value is. We don't know how to measure value. We don't know how to provide it to our customers. 
In fact, there are some excellent talks this week on who our customers are. If I were to ask each of you by the show of hands, which I won't, who are our actual customers, there's a lot of confusion in this. Is it the patient? Is it the referring physician? Is it our trainees? Is it the family members of our patients? So the first step is we have to have a shared vision. We need to know what value is if we want to start moving into this domain effectively. So your team needs to know what value is and need to agree on it and practice it together. Creating value is not just a marketing phrase. Best value is not an accreditation plaque on your wall. Anybody can get a plaque of excellent service on their wall. There's very, very low value, very low actual boundaries for getting that. It doesn't mean you're providing value. Adding value is not a checkbox on your annual operating plan. We're all striving for this, but that's not what it is. Providing value should not be a future goal that you're gonna strive for in the future. To be of value should not just be a mission statement. You need to actually get there. And it's very, very, very confusing as to what value actually is. How do we define it? I know that there are people in this audience who've got brilliant definitions of what value... Excellent, David. So Dave Larson, brilliant on this, right? So value is consistently delivering excellent service. Think about it. That's a very simple statement. All the time, you deliver to your customer excellent top-level service. But then you can argue about each of these and what these mean. Imagine next year, instead of everybody wearing badges, you can all go to the ribbon wall and download or grab whatever ribbons you want so you can go all the way down to your knees in future. What would we want an RS&A member to claim about what they're most proud of? We provide timely service. We know who our customers are. We fix our mistakes effectively, which means we actually know what our mistakes are and we know how to fix them. We go the extra mile. We add value. Wouldn't this be a nice little set of ribbons to walk around with in future, something for us to strive for? The next step is you've really gotta know what your customers think of value. It's not good enough for us to describe what we think is value. We have to ask our customers. We have to know how to do it. So what is our customers' perspective on value? Well, here's a definition I use. Value is the efficient and cost-effective acquisition and flow of relevant imaging information to improve patient outcomes. This really gets into the whole world of informatics. And each of these, to some extent, can actually be measured and managed differently. Efficiency can be measured. Cost-effectiveness can be measured. Acquisition of data can be managed. Flow of data, this is where informatics is becoming such a useful support tool. Relevant information. What is relevant? What is not relevant? Think of all the data we think is not relevant that we don't put into our reports, which might be relevant in 10 years' time. That abdominal CT scan to characterize a liver lesion where there's coronary artery calcifications, you're not likely to put that into your report. But when that patient comes back in 10 years' time with chest pain, those referring docs want to know about that. How can we start capturing this seemingly irrelevant information at the time of the current studies that can be useful in the future? Improving outcomes. 
How do we know we're improving outcomes if we're not actually measuring it? So there are lots and lots of different ways of measuring the different components of value. But the trouble that we're all falling into is that we're using a slide rule. We're all measuring it very, very differently. And nobody in this room, I can guarantee, would agree on how to actually go about simply measuring value. So what is the actual accepted formula for it? Very simple. The value equation: appropriateness, times quality, times outcomes, times experience, over cost (written out below). What this means is if you do an inappropriate study or if your recommendations are not appropriate, there's zero value. If you do a CT scan and you're then gonna recommend an MRI that's unnecessary, that is no value. If you do a CT scan that's not indicated, no value. Quality, we need to be able to define. Outcomes, we've been talking outcomes for years, but in fact, in terms of actually measuring effective value-added outcomes, very, very difficult. Is a short turnaround time a good outcome? Is a satisfactory thyroid biopsy an outcome? You need to think about how to define it. In our shop, the way to define effective outcomes, we typically use the example of the patient who's got a cough and a fever, who gets a chest radiograph that shows pneumonia on a Sunday afternoon. How are you gonna actually communicate to the referring physician that that patient has pneumonia, needs to get antibiotics, and needs to get them from a pharmacy that's probably closed so they can start treatment in a timely manner so it doesn't progress? That's how you start to measure outcomes. What about experience? How are you all measuring and managing your patient experience? Are you simply managing the complaint letters? Or are you using Press Ganey and other metrics without actually looking at the data and acting on it?

So this is one helpful way of looking at it. I like to share my definition of quality only because it gives a gazillion different ways of measuring and improving quality. I define it as the extent to which the right study is performed the right way at the right time. You can read it for yourself: at the right location, correct patient, correct accurate diagnosis, and an actionable report is accurately communicated to the right provider in a timely manner and acted upon appropriately to improve outcomes and experience for the patient. That's a complicated definition, but think about it from running a practice. You can pick off two or three of those and just work to improve them. That's how you add value. That's how you improve the experience for the patient, for the referring physician. That's how you improve outcomes.

Just take experience. How do you know what your patients value? How do you know what your patients want? Do you know what matters to your patients? It doesn't help to tell your patients what you think matters to them. Do you know what it is that they want and what matters to them? We have these interesting kiosks in our department. We get very, very helpful information from the patients. Interestingly, those younger patients with children are never gonna bother with this. They want to get out. They want to save money on parking. The patients who have a bad experience are typically willing to make the time to give you that feedback. What have we heard from our patients? They wanna know how long they're gonna wait. They shouldn't be waiting. The concept of waiting rooms is old-fashioned. In the lean world, there should be no waiting rooms.
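Written out, the value equation the speaker describes is:

```latex
\text{Value} = \frac{\text{Appropriateness} \times \text{Quality} \times \text{Outcomes} \times \text{Experience}}{\text{Cost}}
```

Because the numerator is a product, any factor that drops to zero, an inappropriate study, for instance, drives the value of the whole exam to zero, which is the point made above about CT scans that are not indicated.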
Patients also tell us about the environment of care. They don't like dirty johnnies. They don't like tissues or things on the floor. They don't like paying money for parking. They don't like parking. The first experience that most patients have with a hospital is parking their car, or the cab driver, or the Uber driver. Believe it or not, that's their first experience. Childcare options. Timeline for results. Tell a patient when they're gonna get the result. There was a great discussion this morning at the quality breakfast with a show of hands showing an equal number of people would like to get results right there and then, and an equal number would like to wait three days. Tell the patient how the results are gonna come. If you go to your gastroenterologist, they give you a result before you leave the office with a report and with photographs. The reception staff, how are they trained to interact with your patients? We get great feedback about our breast and IR teams because those radiologists are interacting with the patients. I get almost no feedback from patients about other radiologists who don't have direct contact with the patients, but I certainly get feedback from our referring physicians about the reports and the attitudes and the availability. They ask where the radiologist is. They wanna know what a radiologist is. A lot of patients are unable to distinguish a rad tech from a radiologist. They also ask us, what do we do with all this feedback? Patients are getting tired of all these questions. Survey after survey, what are we doing with it?

And I'm just gonna share with you some interesting letters I've had from our patients, which I find the most helpful feedback I can possibly get. I'll use Curt Langlotz because he's just an easy target right in front of me, or Lane Donnelly. Okay: dear Dr. Donnelly, I was most surprised at being able to valet park my car during my recent visit to your department at Stanford, right? And free of charge. Better still, a childcare technologist to take care of baby Samantha while I had my IV line inserted, and it was painless this time. Wow, chatting to the radiologist about my study was the highlight for me, and now I truly know what a radiologist should do, and just how skilled all of you are. This is a great letter to get. It's a real sort of feel-good letter, you know? Well, just so you know, absolutely none of this happened. But as I reflect on my experience, which was dismal, I'd like to provide you with feedback. So to a large extent, these types of letters are really helpful. They give you interesting feedback. Pick up the phone and talk to the patient. Get them to come in, go and visit them, see what it is. And what this letter really illustrates is issues relating to costs, quality, experience, and outcomes, which is really what we're talking about today.

Listen and hear your patients. Don't just listen to them. Act on the information you get. We use mystery shoppers in our institution, which give us pretty helpful feedback. Ask us what matters to us, not what matters to you. Why must we wait for our results? Why can't we schedule our studies like the airlines do? Since I'm paying, a little thank you goes a long way. Boy, you get umpteen thank yous when you walk out of a store. Do we thank our patients for coming in? I keep on filling out surveys. Why, what are you doing with these? Surveys are past tense. Social media are more relevant. You've gotta be very, very careful about social media. It's now the means of sharing. Angry patient.
And this is a legit case. If you want a large, painful pneumothorax after a pleural tap, go to Stanford University Hospital and ask for Dr. Donnelly. So this is a case that happened in Boston at one of the large teaching hospitals. A very unhappy patient. The complications that can occur after a pleural tap weren't explained, it was very painful, and the patient was short of breath and wasn't satisfied with the experience. And this patient goes on and on and on. If you are gonna play in the social media world, manage it effectively. It's very simple for patients to share their experiences now. This is another real one. Hi, sorry for the issues you're having. Do you want support to reach out to help or you're just looking to publicly shame us? This is not a good response to a patient, all right? You want support to reach out? That's not exactly a personal approach. So be very careful. If you're gonna use social media, manage them effectively.

Dissatisfied radiology customer. I had an ultrasound which showed something above my kidney. Likely an adenoma, consider CT. I had the CT, which was expensive. The technologist said consistent with adrenal adenoma, suggest MRI to characterize. So I had the MRI, along with a $500 copay. Now the tech says, but notice tech, not radiologist. Now the tech says benign adrenal adenoma as was shown on prior ultrasound and CT, suggest six month follow-up. Nothing you wanna be proud of, right? Do these technologists know their business? They don't know what they're doing. Seems like unnecessary imaging expenses. How can I go about getting my copay returned? Spread the word, don't have your imaging done here. This is what patients are putting out there. You're not gonna avoid this, but respond to them appropriately. See how you can do it. If you break each of these down, it all boils down to appropriateness, customer experience, outcomes, quality and cost. In other words, value.

We really can't continue making excuses. Dear, I'll use my own name, Johnny, I've been trying to contact you all day to get further information on the report you sent me. I've tried beeping, calling, emailing and even texting you. I stopped by the reading room, which was locked. Your admin assistant's phone goes to an answering machine. Very frustrating and time consuming for me and I'm concerned about my patient. Is there anybody else I can review the study report with? You don't wanna get emails like this. You don't wanna get texts like this. This should not be happening. You have to have systems in place to have people there. We have to think about the patient. We have someone in our department now, Jim Rawson, who's so centered on patient-centered care. And Jim has said some great things on this topic. Oh, here it is. Thank you, Jim. It's very hard to put the patient in the center of healthcare if we are standing there ourselves. You have to see everything from the patient's perspective, not your perspective.

It's not only complaint letters we get. Sometimes we get very glowing appreciation letters. And when we get these, it's also very helpful not just to do a root cause analysis when things go wrong. Try doing a root cause analysis when things work out really, really well. Dear Johnny, the ICU staff asked me to convey their appreciation to one of your techs, Patty, who came up to the ICU this week to help us insert a very challenging feeding tube. She's always responsive, excellent at what she does, a pleasure to work with. We wish we could hire her.
Our hospital is blessed to have experts like Patty care for our patients. I called them up. The truth was they couldn't get the resident to go and put this tube in on the floor. They couldn't get the fellow to put the tube in on the floor. They couldn't get the attendings. They were all arguing. So eventually one of the techs just went up and helped. Important to get that. We've all been to med school. We need to be the physicians that we chose to be originally. How do you engage with your referring physicians? When was the last time you had a face-to-face with one of your patients? What efforts have you made to chat with one of your patients? When last did you survey your referring providers? When last did you round on your patients? When last did you actually round? It was Gary Glazer in 2011 who brought this up, the invisible radiologist. We've been talking about the invisible radiologist now for many, many years and not much has changed. We've got to get out there. We've got to change the approach.

And here are the standard three doors that I like to teach about. So a lot of our referring physicians see the radiology reading room as this. Sorry, we're closed. Out to lunch. Dark room in use. Innovative science put on there so no one will come in. Some very clever strategies. Big cactus at the door. Go away. And it sort of saddens me to see studies on how to minimize disturbances in the reading room. How to stop telephone calls. That's what we're there for. People are calling or coming because they want information for their patients. They want something like come in, we're open. Please disturb us. We're here to help. Welcome, come inside. But maybe the door needs to be open. The radiologist needs to be there so they can be there to help you. Learn from all of us. Learn from your competitors.

Dear doctor, I had an ultrasound in your department last week to look for gallstones. Half a day off from work, $26 to park. I met the tech. I don't know why you're billing me, what your role was. I'm trying to understand your report which makes no sense. But some words concern me. How do I go about doing clinical correlation? What is a prominent bile duct? You leave no contact info on the reports. I search these on Google. Could I have hepatitis or even cirrhosis? One final question, do I have gallstones? That's what I came in for. And if you think about it, it's about costs. It's about quality. It's about customer experience. It's about outcomes. These four metrics really are value. This is a patient telling us we're not providing value, that we're failing on every one of those metrics.

Communicate properly in your reports. Very, very important. Work to reduce variation in your reports. Stop using recommend, suggest, consider. Standardize it. Stop the vague terms: interval, short term, non-urgent, routine, follow up. People don't know what it means. Be specific about what you're recommending. And of course, suggesting clinical correlation, that honestly to a large extent is the diagnosis of the intellectually destitute. Because to a large extent, they're getting this study because of that. Satisfactory biopsy is not a process. Adequate material is an impression, not an outcome. Findings as described above, why are you bothering with that? Put the findings in the body and put the impression in the impression. Why not? I think this is a terrific PQI opportunity. I wanna share a real statement that came out of a real report that was sent to me. This is what was put on the bottom of the report.
Voice recognition software was used to create portions of this document. An attempted proofreading has been made to minimize errors. Occasional interpretation errors may inadvertently occur due to limitations in the software. If, however, the reader notes inconsistent information and/or needs to ask questions concerning any part of this document, please contact our office. This is what's being put on reports out there in the new era of voice recognition. Are we proud of something like that? Are we proud to even think that we could put something like that on our report? Our product is our report and honestly, we've gotta get to the point of not making any more excuses. We have to own and learn from our errors.

So we've put a lot of effort into improving our reports. I think from a radiologist's perspective, what we see often from a report is our turnaround time. We look at the data we're collecting, we try to reduce our turnaround time. What the referring doc looks at is something like this. A poorly edited, vague report that they've gotta work through, that makes very little sense. I have a dashboard in my office that shows me all the recommendations coming out of our faculty every single day and it's an amazing improvement tool for me to look at. And there are things in there that we can provide feedback to our radiologists about. MRI can be obtained. Should it or shouldn't it? What does that mean? Ultrasound could be considered. CT, abdomen, pelvis for further evaluation of obstruction. When? Today, immediately, in a week? Be specific. Another one, ultrasound could be performed. MRI in, with the interval left out, could be considered. Ultrasound or MRI. So great opportunities to improve. And we're playing with the tool now; a small sketch of this kind of automated check appears below. Here was one I was involved in. The recommendation was cross-sectional imaging with multi-phase CT or MRI. Not really helpful. We're playing with this tool, a way to give radiologists feedback. I need to be more specific about whether to follow up, what study to get. I'd make sure there are no typos. In other words, one of my faculty's been quite polite. It's probably my quality director, Bettina, being quite polite about asking me to be a bit more fastidious about what I put in my reports. Were you aware of prior studies? How often do you say, if prior studies are available, I'd be happy to look at them, hoping that they never will be made available for you to look at? Be careful. I'm playing with something that gives me feedback like this. This is Jonathan Kruskal. This is a six-month thing. Standardizing all my feedback using FOX. Use standard algorithms for adrenals. Reduce voice recognition mistakes. That's a very polite way of saying, check your reports. Review prior studies, if they're available, and improve specificity. Great tools that we can play with.

Specificity really is the way that we can reduce the equivalent of the radiology readmit. So stop inappropriate recommendations, avoid repeats, non-diagnostics, procedure complications, poor customer experience, poor follow-up, adverse events, delayed reads, and vague reports. This is the contemporary version and definition of the patient readmit in radiology terms. It's adding expenses, and what are each of you doing to manage each of these? Each of these is a practice improvement opportunity. So what are you doing to improve radiologist performance? And I'm gonna end up with a little bit of my pet peeves over here, peer learning. So peer review, honestly, is a process metric, that's it.
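As an aside on the recommendations dashboard just described, here is a minimal sketch of the kind of automated check such a tool might run. The phrase list and function name are hypothetical, not the actual tool from the talk.

```python
import re

# Hypothetical list of vague recommendation phrases worth flagging in impressions.
VAGUE_PATTERNS = [
    r"\bcould be (considered|performed|obtained)\b",
    r"\bmay be (considered|obtained|helpful)\b",
    r"\bclinical correlation\b",
    r"\b(short[- ]term|interval|non[- ]urgent|routine) follow[- ]?up\b",
    r"\bif (clinically indicated|prior studies are available)\b",
]

def flag_vague_recommendations(impression: str) -> list[str]:
    """Return any vague phrases found in a report impression."""
    hits = []
    for pattern in VAGUE_PATTERNS:
        hits.extend(m.group(0) for m in re.finditer(pattern, impression, re.IGNORECASE))
    return hits

print(flag_vague_recommendations(
    "Indeterminate adrenal nodule. MRI could be considered. Clinical correlation."
))
# ['could be considered', 'Clinical correlation']
```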
Peer learning, on the other hand, is gonna take a while for us all to start practicing, and I hope that as many of you in this room as possible are starting to practice peer learning. You can look at errors, you can safely admit to them, and you can share them from a learning perspective. You require a just culture, obviously. Learning leads to improvement, and consider the personal, practice, and patient impacts of baring your errors, and improving your performance certainly adds value through service excellence. So really, in summary, I've provided you with a couple of different definitions of value. I've given you the actual equations of it. I've described different ways that you can measure value very simply, and many of you are probably doing it, but not seeing it through that value frame of reference. And I've shared with you some strategies for improving the way that you can deliver to it. I'll remind you of Dave Larson's definition, consistently delivering excellent service.

So I'll end up on my last slide over here. Eight strategies for thriving in the value era. Appropriate and efficient care. Anything you do that's not appropriate, there's no value. Focus on improving efficiency. Deliver excellent service to your patients. Know what the patients value, know what matters to them. Help your physicians to drive discharges and prevent readmits. Be data-driven and data-supportive. The two following talks now will focus on informatics and how we can do this. You need to be visible and voluble members of the patient care team. It's not about a radiologist team, it's patient-centered, patient-focused care. Be available, be accessible, and be affable. Else no one's gonna reach out and wanna work with you. Continuously learn and improve if you really want to provide value. Provide unambiguous reports and communicate effectively and commonly. And remember to engage your customers effectively. Thank you very much.

So as Dr. Kruskal mentioned, if you start to look for the definition of value, you may find a number of different answers, not all of which are concordant. Some of the simplest definitions tell us that value is quality divided by cost. Others tell us that we should include service as part of that equation. And still others recommend that access to care is another component of this. But where do we actually factor in our patients' understanding of value and specifically the value that radiology brings to their care? So when I was asked to prepare this presentation, I felt like a little bit of an imposter because being a radiologist, I understand where radiology delivers value in patient care for myself as a patient, for my family members, and in my profession as well. And so I really didn't think that my opinion was the one that mattered here. So I went and found some experts. And I found them in the best place that I knew to go looking for experts: on social media. And so I asked some of these questions. I'm going to share some of the answers that I got with you. But I really liked this cross section of people that responded. Some of them are actually radiologists. Some of them are non-radiologist physicians from other specialties. Many are patients and patient advocates. And I would say solidly about 50% of this group are people that I actually don't know in real life. And they're from all around the world as well. And so I asked them, the first question was, what value does radiology bring to your care? And there was a lot of consensus actually on the answer to this question.
People immediately cited the value of minimally invasive testing and diagnostic screening, not only for evaluation of acute illness, but for screening for potential illnesses. And one other thing that they really appreciated was the role that radiology plays in putting an acute illness or a new problem in the context of the larger picture of their health care, of their care journey, of other medical problems that they might have. The one quote that I will directly mention came from our very own Rasu Shrestha, who said, radiology is like a GPS as to what's going on inside me. And I thought that was a very nice way of describing it. But there was many similar themes from others as well as to the fact that radiology really helps our patients in terms of guiding the next steps in their care, whether it may be along the care journey for chronic illness or some sort of evaluation for an acute process that might be going on unexpectedly. And there was agreement on the fact that radiology really defines care and that few management decisions now are made without guidance from imaging. So at least on this first question, there was a lot of agreement, a lot of consensus among the folks that responded to me that there is value that radiology provides in medicine and in patient care. And I don't think any of us would disagree with that. This is where it got interesting. I asked Twitter what some of their IT-related challenges were as they tried to navigate care in radiology. And overwhelmingly, I summarized them into these five, but overwhelmingly, they had to do with getting access to images and to reports. And while many health systems now offer personal health portals through which patients can get their reports, I had anecdotes of patients saying, well, they could only get some of them, but not others. They could never get images. And that tied directly into the second point, which was that when it came time to transfer images between care facilities or between physicians, still in 2018, that required going to the imaging center, physically obtaining a disc, and taking it to the next doctor's visit, specialist visit, tertiary care center, whatever it might be. And in many instances, patients would discover that the disc didn't have the entirety of the study that was needed. It didn't have the study at all in some instances. It had compatibility issues so that the physician that they were taking it to couldn't actually look at the imaging. And all of this starts to cast a negative light on the patient experience with respect to radiology. We know, and we've talked about at this meeting and at many other meetings, that we have historically always written our reports to be consumed by other physicians. But our patients are so much more engaged and involved in their care now than they were previously, and they get access to their reports, and they want to know what they mean, and naturally cannot understand a report that is written in technical medical jargon intended for another physician. However, this tends to put up another barrier for our patients between radiology and them, in that they don't really understand what we're contributing and what it actually means for the next steps in their care. They understand that we have some influence, but they don't really understand what that is. 
I've heard from both referring physicians and from patients and patient advocates that a lot of times there's confusion as to the rationale for choosing a particular type of imaging, or choosing parameters of imaging, for example, whether or not contrast is needed, and that can cause a little bit of discontent as well. And it's certainly an opportunity for us to provide more education to our patients, to our referring physicians, except for this last point. For those of us that are non-procedural radiologists, we very rarely get to talk to our patients directly. And our patients actually really want to have that conversation with us. I've had patient advocates, I've had colleagues trying to navigate care for their aging parents, for their children, for their spouses all tell me, you know, it would make it so much easier if I could just talk to the radiologist. A lot of times, and the radiologists in the room have probably had this experience as well, you as a radiologist can help to consult for someone who's had an imaging study that's been interpreted by some other radiologist, because the patient can't actually go and speak to that radiologist. And so, with all of these challenges in mind, we actually set out to try to address two of them. And so I'm gonna take you through a couple of informatics initiatives that try to address some of these issues that our patients encounter.

And the first centers around helping patients to better understand their reports. So we developed Porter, the patient-oriented radiology reporter, and this was actually the brainchild of Dr. Chuck Kahn, who's the vice chair at Penn Radiology, and also the editor-in-chief of the new Radiology: Artificial Intelligence journal. And what he wanted to do was actually inspired by a conversation that we once had with a patient who said, you know, every time I get another imaging study, I take it and I look up all the terms, and I compare notes with a friend of mine, and we kind of laugh over the things that the radiologist has said about us this time. Both she and the friend have conditions that require fairly regular imaging. And she was telling us about a particular instance when she got a report that said that her stomach was decompressed. And she got genuinely concerned that there was underlying pathology that nobody had talked to her about that was sort of hidden in this report, and that could actually impact her care, or her health going forward. And so we started to think about, you know, could we try to annotate our reports at least? This is not intended to be a replacement for a conversation, but at least a way to try to deliver more information beyond the technically written radiology report. So the third participant in this project originally was Dr. Sung Oh. He was one of my informatics fellows at the time, and a musculoskeletal radiology fellow. So we started with his clinical domain of interest in knee MRI, took a corpus of reports, extracted all of the terms, created or looked for lay language definitions of these terms, and also looked for associated public domain images from Wikipedia, as well as actual Wikipedia pages that we could integrate into all of this. And to spread the word to our patients that this was a resource that was available, we embedded a short statement in the radiology report so that if the patient looked up the report on the portal, they would get a little note that said that they could go to a website and access this information.
And all of this was stored securely, even though it was outside the institutional firewall, without any protected health information being compromised. And so when the patient actually went to that URL, they would find the actual text of their radiology report, but now with some annotations. So you can see here multiple terms hyperlinked, so that if the patient moused over a particular term, they would get a lay language definition. In some cases, a link to a Wikipedia page for that term. And in other instances, if there was an image or an illustration to accompany the term, they would then get the image along with the definition. And again, to the point that Dr. Kruskal made that oftentimes our patients don't know that we are actually physicians, in order to remind them that this was actually an interpretation that was generated by a physician, we would put the headshot of the interpreting radiologist here on the page as well.

So we did a pilot survey with a small number of patients who accessed Porter when they got this in their portals. And overall, the feedback was really positive. One of the patients said that it was a great service for those who didn't have anatomical knowledge, that they found it very interesting and helpful in understanding their diagnosis. But I always share this second comment as well, because I think it really drives home the point of what we were and weren't trying to achieve with this, that the plain English translation didn't really prepare another patient for a more in-depth conversation with his orthopedic surgeon. Again, remembering that this was knee MRI that we started with. So I think it's important to remember that this is one attempt for us to try to disseminate a little more information to make a radiology report less obtuse to a patient, but it's not intended to replace that face-to-face conversation with any physician, radiologist or otherwise, but hopefully transmits a little bit more information than we have been previously.

So this has been expanded beyond knee MRI. Dr. Jennifer Levy, who's one of our residents at Penn, is doing a pilot study on using Porter for screening mammography. So a similar sort of thing where if the patient mouses over a term, they'll get a definition. But one of the interesting things that Dr. Levy found while she was working on this is that there are certain terms in breast imaging that have a very particular definition that may be slightly different if you're talking about the same term in a non-breast imaging examination. And so she actually had to first go and update the glossary of terms and modify the system to know whether it was looking at a mammogram or a non-mammographic examination. So that was an interesting change that we didn't anticipate. There are over 13,000 terms now in the glossary covering all body parts and all study types. We are actively working to get it integrated into the patient portal so that you wouldn't have to send a patient out to a separate website. And there are a couple of pilot studies pending. One, as I mentioned, is Dr. Levy's work in breast imaging, and Dr. Brian Park, who is a future interventional radiologist and currently one of my informatics fellows, is looking at piloting this with patients who are undergoing ultrasound-guided biopsies. And it's worth noting that this isn't really truly report translation, right? We're not taking it from one language to another.
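A toy sketch of the core annotation idea, wrapping glossary terms in the report text so that mousing over them shows a lay definition; the glossary entries and markup below are hypothetical and much simpler than PORTER's actual implementation.

```python
import html
import re

# Hypothetical lay-language glossary; the real system draws on thousands of curated terms.
GLOSSARY = {
    "effusion": "extra fluid collecting in or around a joint or body cavity",
    "meniscus": "a C-shaped pad of cartilage that cushions the knee joint",
    "decompressed": "empty or collapsed, for example a stomach containing little air or fluid",
}

def annotate_report(report_text: str) -> str:
    """Wrap known terms in <span> tags whose title attribute shows a lay definition on mouse-over."""
    def replace(match: re.Match) -> str:
        term = match.group(0)
        definition = GLOSSARY[term.lower()]
        return f'<span class="glossary" title="{html.escape(definition)}">{term}</span>'

    pattern = r"\b(" + "|".join(re.escape(term) for term in GLOSSARY) + r")\b"
    return re.sub(pattern, replace, report_text, flags=re.IGNORECASE)

print(annotate_report("Small joint effusion. The stomach is decompressed."))
```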
There are a lot of nuances around the types of phrases and conventions that radiologists use that we don't have standardized that need to be taken into account. You know, I could envision a day where the radiologist generates a report, there's a version that's automatically generated for a specialist that might have particular interests and particular questions that need to be answered. There's another version that's generated with detailed measurements and quantification for the next radiologist that reads the follow-up study. There's a third version that's perhaps generated for the primary care physician, maybe even a fourth version that's generated for the patient. We're not there yet, but I could see us being there someday, but we're not there yet. And so in the meantime, we have to account for the fact that the reports that we generate are being consumed by our patients and consider what we need to do to make them a little bit more accessible. So the other initiative that we piloted was to attempt to address the fact that we know that our patients want to talk to us. And could we break down some of the barriers associated with that? There have been a few different pilot studies showing that patients have had greater satisfaction with their overall radiology experience when they've had the opportunity to talk to a radiologist, either immediately before or immediately after the examination, even in cases where they didn't necessarily sit down and review the images together. There's literature that shows that even recently, I think, I can't remember where from, but that patients want to get their information about their imaging results from their physicians because nine times out of 10, they don't actually know that there is a physician behind that interpretation. And so with all of these things in mind, we thought, could we actually connect patients with radiologists in the busy workflow of a large tertiary medical center at an academic institution without adding increased effort to the radiologist? And so again, to spread the word to our patients, we embedded a note in the radiology report. So we actually wanted to see if it made a difference if we explicitly mentioned the name of the interpreting physician, the interpreting radiologist, in the note. We didn't end up getting enough of a data sample size to be able to test that, but that was the reason why there are two different versions of the message here. But basically, the message was placed at the bottom of every report that was used in this pilot study and said, here's a phone number. If you have questions, you wanna talk to one of the interpreting radiologists, please pick up the phone and call. And what we did was we mapped Google voice numbers to the radiologist's personal cell phone numbers. And this was very useful because what it allowed us to do was to also set up a customized voicemail so that if the radiologist wasn't actually able to answer the phone when the call came through, patients would be directed to a voicemail that would promise a return call within 24 hours or for whatever reason that radiologist was away for an extended period of time, the patients would be instructed to call the radiology department directly. And so we had, it also gave us the ability to track the phone calls and the length and that kind of thing, which was very useful. So we had four fellows participate in this effort. These were actually my informatics fellows and this was their group project. 
And so over the course of about eight months, we logged every call that was received and interpreted by this group. We logged what the call was about, who it was that was calling, because it wasn't always patients that called, how long the radiologist spent on the phone and how much additional time they spent to actually resolve the issue and what sorts of activities they required, that they needed to do to actually resolve the issue. So we interpreted, or this group rather interpreted just under 4,000 exams over that period of time, predominantly outpatient studies, predominantly cross-sectional imaging, and related to these studies got 27 phone calls, five of which actually were from physicians. The calls were about seven minutes long and on average, the radiologist spent another five minutes to actually answer the question effectively and resolve the issue. So looking just at the calls from patients, they were about just under nine minutes and patients wanted to know what the report meant, sometimes had questions about the terms that were included, a question about something that perhaps was not explicitly mentioned in the report, questions about the impact or next steps based on the results of the interpretation, and occasionally called to mention errors in the report that they had discovered. In looking at the radiologist's time, most often they were either reviewing the report, going back to actually review the images, or looking at priors or looking at the patient's chart. This added usually anywhere from five to six minutes to the process, but only affected a small portion of the cases. So when you looked at the additional time spent over the total number of cases identified, it really was only a matter of seconds added to each individual case. So we thought this was a successful pilot and we're hoping to get buy-in from leadership to actually expand the scope of this initiative further. There's a lot of work in this space related to making reports more readable. There have been a number of readability analyses in terms of the reading grade level at which resources targeted at patients are currently written. The recommendation in the U.S. is that they be tailored to a fifth grade reading level, but you can see that even things like MedlinePlus are aimed significantly higher than that for the majority of the terms that are used. We intentionally, with Porter, tried to make sure that at least 50% of the terms were at or close to that fifth grade reading level. And this is work by Dr. Martín Carreras, another one of our residents, who recently published a paper online about integrating Wikipedia resources into an information resource for patients as well. I'm actively doing some work with Josh Cho, another one of our residents, using Amazon Mechanical Turk to crowdsource patient sentiments towards different styles of radiology reports and exploring responses to different terms that we conventionally use in our reports with the goal of developing a set of best practices for radiology reporting to make reports more accessible and understandable to patients. There was a nice paper last year about online crowdsourcing to assess a web-based interactive mammography report. So that's in the JCR, if that's something of interest to you. And there's more and more work being done looking at using multimedia reporting. This is work from Arun Krishnaraj and his colleagues at UVA showing their example of a multimedia report for thyroid ultrasound. 
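As an aside on the reading-level targets mentioned above, the Flesch-Kincaid grade is one common way to estimate the grade level of a piece of text; the cited analyses may have used other readability measures, and the syllable counting here is only a rough heuristic.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count runs of vowels, with a minimum of one per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    # Grade level = 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    syllables = sum(count_syllables(word) for word in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

print(round(flesch_kincaid_grade("There is a small pocket of fluid in the knee joint."), 1))
```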
And I believe Arun and his colleagues also have one for lung cancer that you will probably see later this week as part of Dr. Krishnaraj's FAST-5 presentation. So the FAST-5 session is this Thursday at 1:30 in the Arie Crown Theater, and there are two presentations related to patient and family-centered care, one being Arun's patient feedback for radiology reports and the other from Andrea Borondy Kitts on patient-friendly imaging appropriateness criteria summaries. And there's also ongoing work from Dr. Kadom at Emory looking at InfoRats, again, a different crowdsourcing effort trying to get information from patients about how we can better customize radiology reports to make them more understandable. So to conclude, we've talked about how our patients do appreciate that radiology delivers value and has an important role in their care, but they want better access to a number of things in radiology: to their images, to their reports, and to us, their radiologists. And to an extent, informatics can facilitate and drive a more patient- and family-centered radiology practice forward. But if you still need more motivation, I want to share with you a tweet from Andrea, actually, that I gathered as part of this process. And I should note, she, yesterday, became, I think, the first patient advocate to speak at the RSNA in a session related to informatics and innovations for improving patients' access to radiology. But it's particularly this very last statement. As a patient, I don't feel your presence or contribution. And it certainly motivates me, and I hope that it motivates all of you, to take some of what you've learned in this session and try to apply it so that we really do make an impact and make a direct connection with our patients so that they do feel our contribution to their care.

This is a timely topic, the intersection of machine learning, artificial intelligence, and quality and value. So I want to take you through a brief, five-minute introduction to AI and machine learning, and then give you some examples from the AI research we've been doing that illustrate some of the ways in which AI and machine learning are likely to help with quality and value in the future. So I think this is familiar to many of you, the high rate of diagnostic errors, and in particular, the rate of errors in radiology interpretation, clinically significant errors, 3 to 6%. So that's something that certainly motivates many of us, even many of our computer science students who are interested in developing AI algorithms, the notion that they can improve patient outcomes and value. So just starting with some definitions, the broadest term that we think about is artificial intelligence, and if you go to Wikipedia, it says something like, AI is when computers do things that would seem intelligent if a human did them. And that's a very broad definition, obviously, and it has some drawbacks. The lay public tends to attribute some additional expertise to programs that do very narrow things, because of the human tendency to anthropomorphize artificial intelligence. The other is that it's changed over time. So back when I studied AI in the 80s, it was all about playing chess or planning routes on a road. Now that's routine. We talk about the AIs driving the car or interpreting a mammogram. Machine learning is a specific type of artificial intelligence which requires a lot of data.
So we feed positive and negative examples into the computer and it learns how to distinguish positive from negative from those training examples, as opposed to having a human hand code the artificial intelligence. There are many forms of machine learning. The simplest you may have learned in Statistics 101, it's called regression. So you feed examples of middle school kids, their height and weight, and out the other side, you get this mathematical equation that can predict height from weight. That's a form of machine learning, very simple. Neural networks are just a very sophisticated, complicated form of machine learning. And then we talk about deep learning. Deep refers to the number of layers that these neural networks have. If you're working with images, all your networks are deep. So really in radiology, deep learning is just kind of a nice rebranding of neural networks. So when we talk about deep learning, we're talking about neural networks. Why now? Why is this such an important issue? It has to do with work that was done with something called ImageNet. So ImageNet is a database of about 14 million images, photographs from the web. Each contains a dominant object in the center. So on the left is the garden spider, and you can see the various orientations, different backgrounds. So it's a challenging visual recognition task. This database was put together by Fei-Fei Li, who's a professor of computer science at Stanford, and lots of labels. So 800 types of birds, 900 types of trees, 157 musical instruments. So this is a really sophisticated detection classification task. Each year, they hold a competition among the computer scientists to see who can accurately classify these objects, and this is the error rate over time. So in 2011, you see everyone was using what I would call old-fashioned non-neural network techniques, and the error rate was about 25%. In 2012, a lab from University of Toronto first used these neural network techniques, and the error rate dropped substantially. The next year, everyone was using neural networks, dropped again, and continued to drop over time. Now the companies got into it. Google won it the next year. Microsoft won it the next year. In the following two years, teams, I believe, from China have won it, and the error rate's down now about two to 3%, and people believe this task has really been played out, that all the information you can possibly extract from these images has been extracted. And then you see the red line there that one of Fei-Fei's postdocs said, all right, I'm gonna train myself to learn 300 types of dogs, and really did a lot of research, and then figured what his error rate was. Turned out to be at 5%. So it was human-level performance. They then went on to look at more complex tasks, so things like automatically captioning images with the objects in those images. So here, this is, again, out of Fei-Fei's lab. These are all automatically generated captions. Bottle of water, cup of coffee, plate of fruit. So when radiologists see that, we say, hmm, that's what I do, right? I look at images, I describe what's there, and that's when people got very excited about applying these methods in clinical imaging. Just to remind you, these are neural networks, and I have here these coefficients. So this is all just math. 
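The regression example takes only a few lines; here is a sketch with made-up height and weight numbers, just to show that this simplest form of machine learning is nothing more than fitting coefficients to training examples.

```python
import numpy as np

# Made-up training examples: weight in kg and height in cm for middle-school kids.
weights = np.array([30, 35, 40, 45, 50, 55, 60], dtype=float)
heights = np.array([135, 140, 146, 151, 155, 160, 163], dtype=float)

# Least-squares fit of height = a * weight + b: "learning" the coefficients from examples.
a, b = np.polyfit(weights, heights, deg=1)
print(f"height ~ {a:.2f} * weight + {b:.1f}")
print("predicted height at 48 kg:", round(a * 48 + b, 1), "cm")
```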
The neural network is just a big, giant mathematical equation of one form, and the thing that's changed over the last few years is the computer scientists have figured how to modify those coefficients a little bit each time based on whether the computer gets the answer right or wrong. So it just incrementally tweaks the coefficients a little bit through each iteration, and over billions of iterations, you get excellent performance of the neural network on the other end. This is what these look like. This is one that happens to be from Google. There are a number of these open source. You get off the shelf. Each box here is tens of thousands of parameters, so all together, this is millions of parameters. Most of these models are very large like this, and if you look at how these work, the early layers of the model at the bottom here are doing things like edge detection. So that's why they're sometimes called convolutional neural networks because they're doing convolution. Then you'll go partway through the network. You're finding substructures and then superstructures and then ultimately the full object detection. So that's a brief introduction to AI machine learning, and now I wanna talk about the different ways in which these AI machine learning methods could improve quality and value. So improving the quality of the image. Something called precision radiology. You may be familiar with precision medicine. I'll describe that. Measuring the quality of images and improving those. Augmented human classification, so making us better at what we do in terms of finding things and classifying them. Prioritizing images on our work list to make sure the right, most urgent work gets done first. Global health, so areas that are underserved by radiologists and then improving our reporting methods. So I'll give you a brief example of each. So this is a work of Greg Zaharchuk's lab at Stanford. So here we have a pre-contrast MR of the brain and in many of the vascular perfusion studies now they do a test dose of contrast. So this is 10% dose of contrast. And what he does is use one of these neural networks to predict, to synthesize what the full 100% contrast dose would look like. So this is a synthetic image and here you can compare it to the actual full contrast dose. So there you're seeing the way in which, and he's doing similar work with MR sequences where you can predict a, from a noisy short MR sequence you can predict a higher resolution MR sequence. So reducing contrast dose, reducing imaging time, reducing radiation dose. This is our bone age model, first model that came out of our research center. This is the work of David Larson. And as you may know, this is an X-ray of the patient's hand looking at the bone development, trying to determine the physiologic age of the child and comparing to chronologic age in children who may have developmental delay. The state of the art here is a book by Gruelich and Pyle which incidentally happens to be the largest revenue producing book in the history of Stanford University Press. Because every radiologist has to buy it, I guess. And, but the reference data in that book is taken from 300 Caucasian children who grew up in Cleveland in the 1950s. So we thought this was ripe for application of AI so we built a neural network model to predict the bone age from the images directly. Here you see a heat map showing the areas where the model was looking to give you some comfort that it's looking in the right areas. And then here in the lower left you see a plot. 
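Returning to the "modify the coefficients a little bit each time" idea from the start of this passage: that is, in essence, gradient descent. Here is a toy sketch on the same kind of height-and-weight regression as above, with made-up numbers, rather than an image model with millions of parameters.

```python
import numpy as np

weights_kg = np.array([30, 35, 40, 45, 50, 55, 60], dtype=float)
heights_cm = np.array([135, 140, 146, 151, 155, 160, 163], dtype=float)

a, b = 0.0, 100.0        # starting coefficients, deliberately wrong
learning_rate = 3e-4

for _ in range(200_000):  # many small updates, as described above
    predictions = a * weights_kg + b
    errors = predictions - heights_cm
    # Gradients of the mean squared error with respect to a and b.
    grad_a = 2 * np.mean(errors * weights_kg)
    grad_b = 2 * np.mean(errors)
    # Tweak each coefficient a little bit in the direction that reduces the error.
    a -= learning_rate * grad_a
    b -= learning_rate * grad_b

print(f"learned fit: height ~ {a:.2f} * weight + {b:.1f}")  # close to the least-squares answer
```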
The plot in the lower left shows mean absolute differences between the model, which is the light red, and the human radiologists, which are the dark red. So lower bar is better here, and you can see the model is equivalent to two radiologists and a little better than two others. So near human level performance. Now the nice thing about the fact that this is an electronic model is that we can scale this. So we can produce many of these. You can imagine producing one of these for each demographic group now, and we know that bone development varies across demographic groups, and you can get the right answer for each patient. So that's the precision radiology aspect of this.

This is some very preliminary work, also the work of David Larson and others. Really work in progress, I would say. But what we are trying to do is to look at the various criteria for what constitutes a good mammographic image. And we have a sort of a coaching system and a peer review system now which looks at a small fraction of images. But again, if you have an automated AI method that could look at an image and describe whether or not it meets each of the criteria for a good mammographic image, you can imagine doing that for every image and giving feedback on every image in real time. What we found so far is that the AI systems are better at detecting some of these things than others. Some of that we think relates to the amount of data that we have, so we're continuing to work on this, and I think more training data, as our human training program produces it, will give us the opportunity to build a model that's accurate for all of these different aspects of the mammogram.

This is the work of Bhavik Patel, and this is how AI can help humans classify abnormalities better. So this is knee MR, looking at ligaments. This is an ROC curve of a randomized trial. Upper left is high accuracy. This box in the center is the magnification of the upper left corner. And this was radiologists and orthopedic surgeons looking at these images either with or without artificial intelligence, with a time for forgetting in between, so each radiologist, each orthopedist looked at it in both conditions. So the red dots here are without the machine learning and the green is with the assistance. And you can see that each individual tends to be moving up and to the left, which is good, that's better accuracy, or along the ROC curve towards better specificity, which is also a positive factor. So these systems can certainly, I think this gets the most visibility, but these systems can really help us with decision support.

This is the work of Matt Lungren, so chest radiograph interpretation. We built a neural network in collaboration with our computer scientist, Andrew Ng, who's one of the luminaries in this area, to detect 14 common abnormalities on chest radiographs. So everything from atelectasis to pneumothorax. And we found that the machine learning system was equivalent to radiologists in 10. It was better in one and not as good in three others. And this was just published in PLOS Medicine in the last week, with a larger group of radiologists. So comparing to, I believe it was a dozen radiologists, residents and attendings, and showing similar performance. What are the implications of that? Well, you can use those kinds of methods to prioritize your work list. So I'm a chest radiologist, I get 100 ICU films every morning.
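Here is a sketch of what sorting that worklist by a model's output might look like; the study identifiers, probabilities, and field names below are hypothetical and illustrative, not any particular PACS or worklist API.

```python
from dataclasses import dataclass

@dataclass
class Study:
    accession: str
    hours_waiting: float
    abnormality_probability: float  # e.g., output of a chest radiograph model

worklist = [
    Study("CXR-001", 0.5, 0.04),
    Study("CXR-002", 3.0, 0.91),  # likely pneumothorax or malpositioned tube
    Study("CXR-003", 1.5, 0.42),
]

# Read the studies most likely to contain critical findings first,
# rather than sorting purely by acquisition time or wait time.
for study in sorted(worklist, key=lambda s: s.abnormality_probability, reverse=True):
    print(study.accession, f"p(abnormal) = {study.abnormality_probability:.2f}")
```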
What are the implications of that? Well, you can use those kinds of methods to prioritize your worklist. I'm a chest radiologist; I get 100 ICU films every morning. Some of those contain bad things like tubes out of place, new pneumonia, or pneumothorax, and I sure would like to read those first. Rather than pondering whether to sort reverse chronologically so I see the most recent studies first, or to start with the ones that have been waiting the longest, I can now read the ones that are most likely to have abnormalities first. We're also in discussions with an integrated delivery system about a pneumonia pathway: a patient comes in with possible pneumonia, cough, fever, and gets a battery of tests, and the one that takes the longest to come back is the chest X-ray. Could we run the algorithm on the chest X-ray, get the patient started on antibiotics if it's positive, and have the human interpretation come later?

And then global health. Through Stanford's Global Health Program we are piloting this in places like Tanzania and Indonesia, where they don't have enough radiologists and you have mid-level providers or general practitioners trying to decide whether to send home a patient who may or may not have tuberculosis. Can we help them by allowing them to take, literally, a photograph of the film, upload it to the cloud, have the AI run, and give them a bar graph, as you see there, of the likely possibilities?

Radiology reports. This is really our first foray into this area, but I think it's really powerful. The analysis of text is having the same moment now that the analysis of images had back in 2011 and 2012, as I showed you: people are realizing how these neural networks can be applied to text analysis in an effective way. It involves vector representations of words, believe it or not, which turn out to be a very powerful approach, along with neural network methods that are tailored for sequences of things instead of pixel matrices of things. So we took a look at plain radiographs of various body parts, here on the left: chest, abdomen, pelvis, spine, knee, ankle, shoulder, and so on. We found 87,000 of these in our database, divided them into training and test sets, and applied one of these neural network methods, called an LSTM, that is designed to process sequences of words. We have lots of reports that have a body and an impression, and we asked this neural network model to learn how to generate the impression from the body of the report, to see if we could generate that summary automatically. Then we asked radiologists to compare: here are the findings; here is one summary and here is another. One was generated by a human and one by the computer; the rater didn't know which and had to say which one was better, or whether they were about the same. First we looked at a purely mechanical analysis of the overlap in terms of text, using something called a ROUGE score, which showed that our model did pretty well relative to some basic lexical techniques. Then, in the survey, about half were rated the same, so no preference; in about a third, the human summary was preferred; and in about a sixth, roughly 16%, the machine was preferred. So in about two thirds of these, the machine is now as good as or better than a human at producing the summary.
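For a sense of what a ROUGE-style overlap score measures, here is a minimal ROUGE-1 (unigram overlap) calculation written from scratch in Python. The two example impressions are invented for illustration, and real ROUGE implementations add further variants (ROUGE-2, ROUGE-L) and preprocessing.

```python
from collections import Counter

def rouge1(candidate: str, reference: str):
    """Simplified ROUGE-1: unigram-overlap precision, recall, and F1."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # counts of words shared by both texts
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f1 = 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Invented example: machine-generated vs. human-written impression.
machine = "no acute cardiopulmonary abnormality"
human = "no evidence of acute cardiopulmonary abnormality"
print(rouge1(machine, human))  # (1.0, 0.667, 0.8)
```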
So this is something you could imagine giving us more consistency in the way we generate our impressions, and some rapidity in how we do it, and I think if we use some templating to structure the text, we'll see an improvement in the number of machine-preferred summaries.

Okay, just in closing: whenever we talk about AI, we always ask, this sounds really powerful, is it coming for our jobs? If you look at the Gartner Hype Cycle, at the very top, right at the peak, is deep learning; then you see the trough of disillusionment, which is where we're headed; and then we'll come out of it to the plateau of productivity, which here is estimated at maybe two to five years away. I think that's actually a pretty accurate assessment, but I wanted to give you a sense of where we may be headed with radiology.

Today we're seeing a lot of non-medical network architectures: people take models off the shelf that were designed to look at ImageNet, which is 256 by 256 pixel color photographs. Increasingly, we're seeing laboratories develop network architectures designed for our images: multimodal, multichannel, 3D, 4D, high-resolution images. There's been a real focus on paying radiologists, or in our case recruiting volunteers, to label these studies. In the future I think you'll see a focus on using some of those text analysis techniques to extract information from the report, so we can very rapidly, although noisily, label many, many images, and that will give us very large training sets. There are good data showing that noise in the labels of the training set doesn't matter very much in terms of the performance of the model. We've had static training data sets created at great cost, again because we pay radiologists to create them. In the future I think we'll start to see some of the newer reporting techniques, like structured reporting, where radiologists can specify, say in a recommendation, as we saw earlier in this talk, what test is recommended and how long the interval should be until that test is done; that can then serve as machine learning training data, so we could build an AI model that might predict or suggest the appropriate recommendation in a particular situation. We have a lot of private data sets from single institutions, due to our privacy rules and the difficulty of making these data sets public, but I think we'll increasingly see public challenges using multi-institutional data. We've tried to do that as part of this RSNA meeting: we had a pneumonia challenge, announced the results yesterday, and had 1,400 teams working on the pneumonia detection task over the last six months or so. We've talked about single apps built for the average patient, like this bone age example, but I think that will change. Because these techniques are scalable, we can build models that are tailored to individual groups or even individual institutions: the prevalence of disease that you have, your scanner types. So rather than one-size-fits-all algorithms, we'll start to see healthcare organizations develop their own algorithms, perhaps with help from other experts and vendors: label your own data and produce a model that's tailored to your prevalence of disease, your patient population, your scanner.
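As a sketch of what "label your own data and produce a model tailored to your institution" can look like in practice, here is an illustrative PyTorch/torchvision fine-tuning skeleton that starts from an off-the-shelf ImageNet-pretrained network. The dataset path, class count, and training settings are assumptions for illustration, not a description of any particular group's pipeline.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Start from an off-the-shelf ImageNet-pretrained network (illustrative choice).
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Replace the final layer so the model predicts your institution's own labels
# (two classes here; the number is an assumption for this sketch).
num_classes = 2
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Locally labeled images in a hypothetical folder layout: one subfolder per class.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("local_labeled_images/train", transform=preprocess)
loader = DataLoader(train_set, batch_size=16, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # a few passes over the local data
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

The key design point is that only the final layer and the training data change; the pretrained backbone is reused, which is why the same technique can be re-run per institution, per scanner type, or per patient population.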
And lastly, I think we're past the point of fearing that these systems are going to replace us. They are very hard to build, as you've seen in some of the examples where we've really had to work through challenges, and I think we should start to embrace these systems because, in the end, they will make us all better on behalf of the patient. So I like to say: will AI replace radiologists? The answer is no, but radiologists who use AI will replace radiologists who don't. So thanks very much.
Video Summary
In this session, the importance of value in radiology was highlighted, with a focus on emerging disciplines such as AI, data science, and machine learning. Despite the current prosperity in the field, challenges were also noted, such as radiologist burnout, unnecessary imaging, and the reluctance to embrace change. The concept of value was dissected, emphasizing the necessity to understand and measure it accurately to improve services.

A key topic discussed was the need for radiologists to identify their real customers – be it patients, referring physicians, or others – and deliver consistent, excellent service to them. It was highlighted that value is the efficient acquisition and flow of relevant imaging information to improve patient outcomes. The talks emphasized transforming vague concepts into measurable and actionable strategies, ensuring that radiologists walk the talk regarding value addition.

The session also explored the burgeoning role of AI and machine learning in radiology, providing insights into their applications, such as improving image quality, enhancing diagnosis, and prioritizing urgent cases. Importantly, AI was portrayed as a tool to augment radiologists' capabilities, fostering improved patient care rather than replacing human jobs.

Overall, the session highlighted the necessity for continuous learning, improvement, and embracing new technological advancements to provide high-value radiological care. This approach not only benefits patient outcomes but also strengthens the radiology profession in the face of rapidly evolving healthcare landscapes.
Keywords
radiology
value
AI
data science
machine learning
radiologist burnout
unnecessary imaging
patient outcomes
continuous learning
technological advancements