Artificial Intelligence in Radiology: Managing Pro ...
R5-RCP20-2021
Video Transcription
Welcome, my name is Dr. Hanneman from the University of Toronto, and I'll get things started today. We'll be talking about AI in a nutshell, and specifically the ethics of AI. The intent of this session is not to discuss how to do AI, but very briefly, AI is most simply defined as intelligence demonstrated by machines, or, a nice quote I like, machines that can mimic cognitive functions that humans associate with the human mind, including learning and problem solving. And I know we've all seen this figure so many times it might hurt, but usually when we're thinking about AI in the context of radiology, we're often thinking about machine learning, a subset of AI techniques, and if we're applying AI to the images themselves, often deep learning. We all know that this has really proliferated, both within research and in the media and our own practices and awareness, in the past few years. If you do a PubMed search by year for machine learning or deep learning, you can see an incredible increase in the number of studies over the past couple of years, and this is current only to 2019. I do believe that there is potential for AI to transform medicine, and much has been written about this. There is great potential, I think, to reduce medical errors, and I think this is very important in diagnostic specialties like ours, where we can hopefully reduce both false negatives and false positives. There have been many papers on this. One that came out to great fanfare, published in Nature, described an AI system for breast cancer screening, and there was a lot of interest in it and responses to it. The authors state, "Here we present an AI system capable of surpassing human experts in breast cancer prediction." What about patient-facing AI-based applications? At my center in Toronto, there's a lot of interest in wearable AI technology, particularly at the Heart Center, where I practice as a cardiac radiologist. In fact, Forbes identified wearable AI as one of five trends in digital health transformation, particularly patient-facing AI: "What is helping to push consumer AI further into healthcare is its use as a popular application for wearables," is one quote from that article. The other main potential for AI to transform medicine, in my opinion, is decreasing the time we spend on rote and repetitive tasks. One example might be automated protocoling or image post-processing, things that would normally need a human, that a machine could probably do faster and, in some cases, without the human component. There is a nice article that discusses artificial intelligence and the future of radiology, and it talks about some of these potential applications, with a nice summary of the big categories of impact that AI might have on us. So how does this relate to us in radiology? Well, it may result in improved image quality if AI helps us tailor our protocols in real time. It may allow us to acquire images faster, and it may help us select the appropriate protocol in less time, but also perhaps the best and most appropriate imaging test for a particular clinical indication, integrating clinical and blood biomarker factors, for example. Hopefully, it will allow us to be more accurate in our diagnoses and improve our ability for risk stratification.
But with all of this promise, very little AI has been integrated into clinical practice, and even less has been shown to have an impact on our clinical care yet. There was an article, a bit provocative, saying AI has a long way to go before doctors can trust it with your life. I'm a cardiothoracic radiologist, and I was wondering, what on earth is he pointing at there at the first rib? I'm not sure. Perhaps something was missed that the AI picked up. And so we come to the ethics of AI in medicine; these are the specific issues we're going to take a deeper dive into in the further lectures today. First, I think there's a really important need to think about informed consent for training data. Where are we getting the data from? We know the data is very valuable; we especially know that big data sets that are appropriately labeled are incredibly valuable and in high demand. So how do we address the fact that the patients who had those imaging tests done may not have been aware of, and may not have explicitly consented to, the use of their data for training AI systems? A separate but related issue is informed consent to use AI. Let's say we have a great AI algorithm. We want to deploy it in our systems, in our hospital; we want to use it to help us choose the best protocol or to detect breast cancer. Do we have an obligation to tell patients that we're using AI when we're interpreting their imaging? What about safety and transparency? If we don't understand and can't explain how the AI model is working, how do we know that it's working appropriately, and how can we be sure that that performance is maintained over time? I think this issue of transparency, and the tension between it and privacy, is a huge issue that many people are grappling with. Many people in the open-text comments at the beginning talked about bias, and I think this is really important when we think about how we can develop and use these tools equitably. We know already that many of the large data sets that early AI tools were trained on did not reflect diverse patient populations, and we know that those tools may work well in a patient who looks like the population the data was trained on, but may not work well at all, and in fact may give misleading and erroneous results, if we try to apply them in a more diverse patient population. So I think it's really important to be thinking about health equity, and to be very forward-thinking about equity, fairness, and justice even before we develop our AI models. And finally, we did touch on this a bit already, but data privacy: how do we deal with the tension between transparency and ensuring adequate privacy? We're going to do a bit of a deeper dive on all of these issues in the further lectures. So thank you so much for coming today, and if you'd like to connect with me, this is my contact info. Thank you. Hi, everyone. I'm Tessa Cook from Penn Medicine in Philadelphia, and I am delighted to join you all to speak about managing patient privacy in radiology AI, informed consent, and potential safeguards. Data commoditization has really increased in recent years as the recognition of the need for good data to train these AI models has become more and more evident.
Freely available data certainly benefits patients and the greater good, but at the same time, you can't generate good training data for free, and what do we even mean by good training data? Robust training data that truly represents the question or the problem that's trying to be solved, expert-annotated data; but labeling and curating and expert annotation are expensive, and who's going to manage and monitor and curate this data? And for that matter, what data are we even talking about? It's not just images anymore. There are radiology reports, there are the annotations, some of which are pre-existing and are actually generated in the clinical workflow, and others that need to be added after the fact, and then there's all the metadata about the patient. I'm not going to get too much into data ownership, but that certainly is a question, and it is a question with very different answers in different parts of the world. Even within the U.S., there might be slightly different answers to that question, but I want you to keep that in the back of your mind, because that's an additional factor as well. And when you think of these data, are they equally valuable and important in isolation, or is it really the sum of the parts that's actually worth more? When you think about sources of data, data can come from an imaging facility, it can come from the patient, and it could go to researchers, it could go to registries that are maintained by third parties, it could even go to vendors. So it is not just imaging facilities that are sharing data, but patients that are sharing data as well, and there are a lot of considerations around regulating that data sharing that we should think about. For us in research institutions or clinical practices, we have a number of agreements that we have to follow to make sure that if and when we do share data, we do it in a way that is safe for patients and that is approved by our organizations. So we have an alphabet soup of IRBs and BAAs and DUAs that cover that. But then, you know, that's really just assuming a broader consent on the part of the patient. I don't think most patients realize that there is often a waiver of consent at healthcare organizations for research on de-identified data. Who's checking how de-identified that data actually is? We don't know. Should we instead be consenting patients for each use of their data, or for specific uses of their data? And what exclusivity does the patient retain? These are all questions that are raised in the multi-society ethics statement that was published a couple of years ago. There's also no consensus about directly approaching patients for data or notifying them about how their data is being used, which is perhaps another thing that we should be thinking about. And considering that the whole is greater than the sum of its parts, there is so much other non-imaging data that could pose a significant privacy breach and risk to patients if these sources are successfully combined. So let's say that you're a researcher and you have data. What is your responsibility? Well, it's always important to know that the data you have represents what you think it does, and that you have the right data to answer the question that you're asking. But then you also have a responsibility to clean and organize the data, normalize it, and de-identify it, and that last one is really where it starts to get a little tricky.
So, de-identification of medical data: HIPAA offers two options in its Privacy Rule. There is expert determination, which I think very few of us actually use, and I will be honest, I have certainly never used it, and there is the Safe Harbor method, which I think is much more common, where you recognize and remove the 18 identifiers described by HIPAA and have no reason to believe the residual information in the data could identify the patient. But when you think about de-identification of images versus progress notes in the medical record versus radiology reports, each of these kinds of data, and this is just three kinds of data, remember, there are many other kinds of data that could be used, how do you effectively de-identify it? I think this is something that we need to be more mindful of as researchers in this space. Also consider unexpected sources of patient identity in the data: clothing, jewelry, faces on head CTs. De-identification is always more complex than we might initially think. And then we have to consider what we do with large datasets, because we often put so much effort into organizing the data for the research that we may not think about how we should be protecting the data, and that should be a priority. It's important to know who has access to the data and who needs access to the data, to make sure that data is transmitted in an appropriate way only to people that actually need it. And if there is data with some limited identifiers versus no identifiers at all, that's another thing to be mindful of. The data might be dynamic. It might be coming into a system in real time or pseudo-real time, and so someone has to make sure that data integrity is preserved, that if there are more data points coming in for a patient who already exists in the dataset, they're appropriately linked to that patient. So there's periodic quality assurance that has to be done, and this has to be somebody's job, and sometimes it's something that's overlooked. Anytime you think about sharing data, it's always important to think about who is receiving it, what they're going to do with it, and what they're going to do when they're finished with it. Some of these things we're required to describe very explicitly in all of our documents and agreements around data sharing. But putting it in a document is one thing, and being mindful of it when you're actually doing the research is another thing entirely, and I would encourage everybody to focus more on the latter. I happen to be the vice chair of the American College of Radiology's Patient and Family Centered Care Commission, and in my work with the PFCC over the years, even before this vice chair role, I was very fortunate to be able to talk to patient advocates about this exact question. A few years ago, I asked them what their thoughts were about their data being shared for AI research, and these are some direct quotes: "I'd be happy for you to share my data if you ask, but it feels creepy to think you're using it without asking me." "Is this going to cost me more money as a patient?" "We hear about stolen data all the time. Is my most private health information going to be stolen?" "If anyone knew about my condition, they could use it to discriminate against me." How often when we're doing research do we think about the patient voice and the patient perspective about the data that we're using?
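To make the Safe Harbor idea concrete at the image-header level, here is a minimal sketch using pydicom. It is a sketch only: the tag list is an illustrative subset, not the full set of 18 HIPAA identifiers, and header scrubbing alone does not address burned-in annotations, clothing or jewelry in the pixels, or faces reconstructable from head CTs. A production pipeline would follow the DICOM de-identification profiles and institutional policy; the file paths below are illustrative.

```python
# Minimal illustration of Safe Harbor-style scrubbing of DICOM headers.
# Covers only a handful of identifiers and does NOT handle pixel-level
# identity (burned-in text, faces on head CTs) or every private tag nuance.
import pydicom

# Illustrative subset of identifying attributes to blank (not exhaustive).
TAGS_TO_BLANK = [
    "PatientName", "PatientID", "PatientBirthDate", "PatientAddress",
    "ReferringPhysicianName", "InstitutionName", "AccessionNumber",
    "OtherPatientIDs", "PatientTelephoneNumbers",
]

def deidentify(in_path: str, out_path: str) -> None:
    ds = pydicom.dcmread(in_path)
    for tag in TAGS_TO_BLANK:
        if tag in ds:
            ds.data_element(tag).value = ""   # blank the identifying value
    ds.remove_private_tags()                  # private vendor tags often carry identifiers
    ds.PatientIdentityRemoved = "YES"         # flag the dataset as de-identified
    ds.save_as(out_path)

# Example usage (paths are illustrative):
deidentify("study/original.dcm", "study/deidentified.dcm")
```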
And so I share these quotes, again, to stimulate some thought and some consideration of the fact that what we look at as data comes from actual people with medical problems and lives and concerns about their privacy and their protection, and I think we have a responsibility to address that. I was fortunate enough to co-author a paper with Dr. Amy Katsanas and others about rethinking patient consent. Obviously, we can't do a show of hands because I'm not there with you, but I would love to know how many of your organizations have something more than a blanket consent for medical research, teaching, and patient care. And I wonder if in the future we might move towards a more customized approach where patients are actually kept in the loop about what we're doing with their data and have the option to consent or not consent to particular use cases. So in conclusion, I think data commoditization has really changed the way that we do things, and it's important to remember that there are patients behind our data and that we have a responsibility to them to protect their privacy. We do what we do for the patient, with the help of the patient. Thank you again for the opportunity to speak in this session. My topic today is mitigation of bias in radiology artificial intelligence. Artificial intelligence-based systems have been adopted worldwide for computer-aided diagnosis and imaging-based screening, so people often ask, are AI-based systems generating fair and unbiased results? Unfortunately, the answer is that this is not always the case. AI-based systems can be subject to systematic errors in classifying patients, estimating risks, or making predictions. These errors are commonly referred to as bias, and bias can be introduced at various stages of AI development. There are several steps we have to go through in order to develop an AI model, and I'll go over them to explain how bias can be introduced at each step. First, we have to collect clinical data and prepare the data for training. Then we need to select and train our algorithms to develop the AI model. Once the model is developed, we need to test its performance in independent cohorts. Finally, we can deploy the model in clinical practice. I'd like to discuss a couple of examples to show how bias can be introduced in these steps. The first is regarding the data. Research has shown that an AI prediction algorithm assigned black patients the same level of risk as white patients when they were actually sicker. Why is that? It is because the algorithm used health costs as a proxy for health needs, and less money is spent on black patients with the same level of need. Next, bias can also be introduced when we develop the AI algorithm itself. A typical AI program will try to maximize the overall prediction accuracy on the training data, so if a specific group appears more frequently than the others, the program will optimize for the majority of individuals, because this boosts overall model performance. And when we evaluate the algorithm on a test data set, which is often just a random sample of the original training data set and therefore contains the same bias, we won't be able to detect this problem. From these examples, we can see that when an AI model identifies statistical patterns in human-generated training data, it is not surprising that human biases, such as gender and racial bias, can be reflected in the algorithms.
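One practical way to surface the kind of hidden underperformance described above is to report performance stratified by subgroup rather than only overall. Below is a minimal sketch, assuming you already have a table of model predictions with ground-truth labels and demographic attributes; the file name and column names (y_true, y_pred, race_ethnicity) are hypothetical.

```python
# Minimal per-subgroup performance audit: overall accuracy can look fine while
# a specific subgroup is badly underserved, so report metrics by group.
import pandas as pd
from sklearn.metrics import recall_score, precision_score

# Assumed: one row per exam with columns y_true, y_pred, and demographic attributes.
df = pd.read_csv("results.csv")

def audit(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    rows = []
    for group, sub in df.groupby(group_col):
        sens = recall_score(sub.y_true, sub.y_pred)
        rows.append({
            group_col: group,
            "n": len(sub),
            "sensitivity": sens,
            "precision": precision_score(sub.y_true, sub.y_pred),
            "false_negative_rate": 1 - sens,
        })
    return pd.DataFrame(rows)

# Flag subgroups whose sensitivity falls well below the overall figure.
overall_sensitivity = recall_score(df.y_true, df.y_pred)
report = audit(df, "race_ethnicity")
print(report[report.sensitivity < overall_sensitivity - 0.05])
```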
If those biased models are implemented in our clinical practice, they can reproduce those social stereotypes and underperform in minority groups, which can be potentially dangerous in healthcare. Also, in radiology, AI is often developed by large academic or private healthcare entities or large companies. Smaller or resource-poor hospitals may lack the technology, skills, and resources to manage AI systems, and this may further exacerbate disparities in access to radiology AI. The FDA has recognized these challenges and released an action plan in January 2021 emphasizing the importance of identifying and mitigating bias in AI-based systems for medicine. So how do we mitigate bias in AI? There are several things we can do at each stage of model development, and I will go through them one by one in the following slides. The first is regarding data collection and preparation. We have to be transparent about our training data selection: what are the patient characteristics, such as age, gender, race, and ethnicity? We also need to compile a large and diverse dataset to better represent all patient groups. Finally, we need to monitor the error rates of the AI program, not just the overall performance but the performance in different patient subgroups, and identify the low-performing subgroups. To help do that, some reporting checklists have been developed to assess the risk of bias. These can help guide the FDA in the authorization process to understand the risk of potential bias, and they can also allow end users, such as physicians ourselves, to understand whether an AI-based system is actually suitable for a subgroup of our patients. Regarding model development, there are certain mathematical approaches, such as adversarial debiasing or oversampling, that force the AI model to account for underrepresented groups. Systematic errors might also be reduced through continual learning, where the model continuously updates itself with new data. Regarding model evaluation, we have to evaluate model performance across patient subgroups to see where the model predicts incorrect outcomes. It is also very important to understand how our AI model reaches its outcomes, so we can ensure that the model includes all the known risk factors and revise it accordingly if certain factors are not included. When we deploy the model in clinical practice, bias can also occur when our clinical patient cohort differs from the training data, which is known as domain shift. It is crucial to monitor AI-based systems, including our clinical patient characteristics, to see whether they match the training dataset, and to see what our patients' risk factors are and whether they are at the same risk level as the training data. Finally, we also need to monitor the model's overall performance, as well as its performance in subgroups of our patients, to identify bias. In addition, when AI outcomes influence clinical practice, new bias can be created from those feedback loops, so we have to monitor those feedback loops and address their impact in clinical practice. In summary, bias can hide in our data, algorithms, and clinical applications. These biases often reflect deep and hidden imbalances in our institutional infrastructures and social power relations, so technical care and social awareness must be brought to bear to identify, mitigate, and eliminate bias in AI systems. Thank you so much for your attention. Hello, my name is Bradley.
I'm the Chair of Radiology at the Dynastyle Healthcare Network. I hope you've been enjoying the session so far on managing professional challenges in artificial intelligence in radiology. In this session, we're going to specifically consider transparency issues in AI, particularly as they relate to patients and providers, and in particular, radiologists. As AI becomes more and more a part of the radiology landscape, it's important to be cognizant of the ethical issues around the deployment of AI, and we're going to go into a little bit of detail regarding the transparency issues specifically. As a spoiler, I want to warn you that I will be raising more questions than we have answers to, as we are only at the beginning of deploying AI in the day-to-day radiology workflow. The issues, and in particular the ethical issues, surrounding AI are only now starting to be asked and discussed, let alone the answers that might ensue. We will find that the longer we deploy AI in radiology, the more these questions will evolve, and perhaps some of the answers that we thought we had will evolve as well. So, just again as a warning, we're going to pose more questions than we have answers to. To understand the full extent of how artificial intelligence touches radiology, let's consider the radiologist's workflow, which in previous days was simply a report of what we looked at. But in the modern view of the radiologist, the ACR 3.0 view, we do so much more: everything from making sure the ordering process is correct, to having clinical decision support, to making sure that the correct protocol for a specific study is ordered. Then how we actually acquire the images themselves is very important, and a big part of what we do is making sure that the acquisition is the correct acquisition. Of course, we then have our traditional interpretive role as radiologists, and finally reporting. In recent times there is this whole concept of communication being extremely important, because if our report doesn't get to the referring doctor, and in many cases the patient as well, the report is really of no value; nobody knows about it. And finally, after we've created the report and communicated it, what things can we do retrospectively to help us learn from any mistakes we've made? So we can see there's a large workflow that is part of what the radiologist does, and looking at all these different steps, we can see that AI can, and in fact has, started to touch each one of these areas of workflow. Because of that, we need to consider the ethical implications of these applications at each step of the radiology workflow. Before we get to transparency itself, I want to talk a little bit about this concept of fairness and equality, which we talk a lot about in AI. It's important to remember that these are not AI concepts. Fairness, equality, beneficence, justice: these are human concepts. This point is made very eloquently in the multi-society joint statement out of multiple societies in the US, Canada, and Europe, so I would urge you all to take a look at that white paper, which was put out in 2019. It's a great paper outlining all the ethical issues, not just transparency. So let's start our discussion on transparency, and we're going to talk about transparency not only from the point of view of patients, but also of the radiologists deploying these algorithms.
When we talk about patients and transparency, one of the fundamental things we should first talk about is how well we are communicating our use of AI to the patients we serve. I would bet that most of us here could probably do a better job of ensuring, at the very least in our reports, that when we are using these AI tools as part of our daily routine, we note that we are in fact using them. This is something that is really paramount when we talk about transparency: that patients understand that we're using these sophisticated technologies. How about from the radiologist's point of view? Radiologists should also, to the best extent possible, understand the nature of these deep learning algorithms. However, when we talk about deep learning algorithms, there's a fundamental problem. A lot of what happens in the layers of nodes in between, where much of the logic resides, is really a black box, because we don't understand, and even the programmers in many cases don't really understand, what happens between the point where they first program the algorithm and what it actually learns down the road before it gets to the output. So a really hot area of research right now is trying to dissect and understand the different layers that go into how these algorithms figure out which parameters are relevant to get to the output. This really gets at explainability. What do I mean by explainability? It's pretty much what it sounds like: the ability to explain, either to a patient or to a radiologist, why a model does what it does. How did the model make the decision? And in the case that the model doesn't work correctly, why? Can we figure out why? This can be very difficult; we don't understand what went on inside that black box as to how it came to a conclusion, and it can be very difficult to answer that question of why. A lot of times, perhaps the most practical approach is to look at the output, see if we can figure out different trends, and try to work backwards (a small sketch of this idea follows this paragraph). How do we evaluate clinical effectiveness? Do we just take the vendor's word for it? No, we should really do our own internal validation, because in the end we don't necessarily know all of the training data that went into the algorithm, which unfortunately may be different from the population in our specific practice. How do we monitor for ethical behavior? How do we know the algorithm isn't biased against certain populations or certain social or economic demographics? How do we get at all those questions? And ultimately, how do we monitor performance over time? An algorithm may work well at the beginning, but its performance could degrade over time. Perhaps there's data drift and the population changes over time; does it still work as effectively? We have to actively check clinical effectiveness over time, not just at the beginning or at a specific point in time. And algorithms can degrade not just because of data drift, but because of planned attacks. It's important that patients and the radiologists using these algorithms are cognizant of the fact that algorithms can potentially be hacked, possibly for monetary gain by unethical third parties. These are things we have to do to make sure we are transparent with the radiologists using these algorithms. There is a very nicely written paper by Finlayson et al. that describes many ways in which algorithms can be hacked.
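As one illustration of "working backwards from the output," here is a minimal, model-agnostic occlusion-sensitivity sketch: slide a blank patch across the image and record how much the model's score changes. The `predict` callable is a hypothetical stand-in for whatever model is being probed; it is assumed to return a probability for a finding on a single 2D image, and any framework could sit behind it. This is only one simple explainability technique among many.

```python
# Occlusion sensitivity: regions whose removal drops the score the most are
# the regions the model is relying on. Assumes a 2D grayscale image.
import numpy as np

def occlusion_map(image: np.ndarray, predict, patch: int = 16) -> np.ndarray:
    """Return a heat map of how much occluding each region changes the score."""
    baseline = predict(image)
    heat = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = image.mean()  # blank the patch
            # A large drop in score means this region drove the prediction.
            heat[y:y + patch, x:x + patch] = baseline - predict(occluded)
    return heat
```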
And although we think of data transparency as something a vendor must be cognizant of, it's something that we as users of the algorithm also need to be cognizant of. Have we obtained informed consent from our patients when we use these algorithms? Have we taken appropriate measures for data protection and privacy? And ultimately, who actually owns the data? This is a subset of a much larger issue of healthcare data and who owns it. These are all issues that we need to be cognizant of, but we also need to make sure that we are protecting our patients adequately when we use these AI algorithms. And ultimately, how do we document and notify patients about their data, whether we're using it in training models or using it specifically in their case for clinical care, and are we adequately reporting this in each case? When we talk about automation bias, we also need to be cognizant of transparency. Just as a recap, automation bias, as you may have heard earlier, is the tendency for us as radiologists to favor an AI-generated result simply because it comes from an algorithm. This can lead to different classes of errors: if the AI algorithm fails to report something, we also fail to report that same thing, an error of omission; or if it reports something false, then we also report something false, an error of commission. We have to be cognizant of this as radiologists, and this is one of those cases where knowing is half the battle. Radiologists have to understand that we cannot take AI results at face value and that we still need to vet them. Ultimately, this really comes down to educating the radiologist about these issues. Are there any downsides to transparency? I've talked a lot about how transparency is really necessary in many ways, but there are some downsides. The more transparent our algorithm is, the greater the risk we potentially place it at for adversarial attacks, because more transparent algorithms are more easily accessible. At the same time, too much transparency could potentially raise privacy concerns: whereas data should be de-identified when using the algorithm, if the algorithm is too transparent, it perhaps increases our risk of privacy loss. And when an algorithm has a lot of transparency, this may lead to intellectual property being hacked or perhaps stolen because of the transparency of the algorithm itself. So we have these competing interests: having as much transparency as possible, balanced against some of the downsides of transparency. And ultimately, is AI infallible? Of course the answer is no. But we have to remember, neither are we. And that's why, ultimately, it's really about understanding that these are tools for us to use. While we can't just use them blindly without vetting them, we also want to embrace what we can and use them intelligently. That's why the concept of augmented intelligence, to me, is a much better term than artificial intelligence: the combination of us as radiologists using the tools of artificial intelligence is much more powerful than either the radiologist or the algorithm by itself. So thank you, I hope that this has caused you to think a little bit more about how we deploy algorithms.
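On the earlier point about monitoring clinical effectiveness over time rather than only at deployment, here is a minimal drift-check sketch. It assumes a monitoring log of model scores and subsequently confirmed outcomes accumulated during clinical use; the file name, column names, and date windows are hypothetical, and the thresholds are illustrative rather than recommended values.

```python
# Minimal ongoing monitoring: compare recent model behavior against a reference
# window. A shift in the score distribution (data drift) or a drop in rolling
# AUC is a signal to re-validate the algorithm locally.
import pandas as pd
from scipy.stats import ks_2samp
from sklearn.metrics import roc_auc_score

# Assumed columns: exam_date, model_score, outcome (confirmed ground truth).
log = pd.read_csv("ai_monitoring_log.csv", parse_dates=["exam_date"])

reference = log[log.exam_date < "2021-01-01"]   # e.g., local validation period
recent = log[log.exam_date >= "2021-04-01"]     # e.g., most recent quarter

# 1. Data drift: has the distribution of model output scores shifted?
stat, p_value = ks_2samp(reference.model_score, recent.model_score)
if p_value < 0.01:
    print(f"Score distribution has shifted (KS statistic {stat:.2f}); investigate drift.")

# 2. Performance over time: is discrimination holding up against confirmed outcomes?
auc_ref = roc_auc_score(reference.outcome, reference.model_score)
auc_recent = roc_auc_score(recent.outcome, recent.model_score)
print(f"AUC reference {auc_ref:.3f} vs recent {auc_recent:.3f}")
```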
Video Summary
In a comprehensive session on AI and its ethical considerations, experts discussed various facets of AI in radiology, highlighting the transformative potential and ethical challenges. Dr. Hanneman emphasized the capability of AI to reduce medical errors and optimize workflows, but noted its limited integration into clinical practice, raising trust issues. The discussion broadened to patient privacy, where Dr. Tessa Cook addressed the complexities of data commoditization, informed consent, and the need for robust data protection measures, urging transparency in the use of patient data for AI. Bias within AI was another critical topic, as errors in data and algorithms could perpetuate existing healthcare disparities. Solutions to mitigate bias include ensuring diverse training datasets and continuous monitoring of AI performance across diverse patient groups. Dr. Bradley focused on transparency, stressing the importance of both patients and radiologists understanding AI applications. He pointed out the ethical dilemmas, such as potential biases, data privacy issues, and the necessity for informed consent when deploying AI systems. Overall, the session underscored that while AI holds promising potential for enhancing radiology practice, it must be approached with careful consideration of ethical and transparency issues to earn broader trust and achieve efficacy.
Keywords
AI in radiology
ethical challenges
medical errors
patient privacy
data bias
informed consent
transparency