The Future is Here: Artificial Intelligence in Car ...
T8-CCA06-2023
Video Transcription
So, there are going to be three talks. The first talk will be on cardiac MRI, the second on cardiac CT, and then the third will be Dr. Van Essen on future directions, including radiomics. I think cardiovascular imaging is a really interesting field in a lot of ways, because it really is an opportunity to champion and pioneer the next generation of what's to come for every other subspecialty in radiology. It pushes the cutting edge both in imaging hardware and in imaging software and machine learning, because there are very complex workflows and lots of quantitative data that we have to extract in order to guide patient management. So I'll talk about this particular topic, looking at how AI is already shaping the way that we perform cardiac MRI and how we interpret it, from scan protocol to characterizing disease. So I think you've seen this slide. Many people have seen articles like this over the last 10 years, claiming that AI is going to take over all of radiology and that we were all out of a job as of five years ago. In fact, we have exactly the opposite: a labor shortage. So, you know, the future does not look so grim after all, despite the doom and gloom. What I think was kind of remarkable is that around the time people started to say all this, there were also many articles about how pigeons could do our job, or the job of pathologists, and no one seemed to believe that. So what changed in the field, such that people felt there was going to be a rapid change in radiology? I think it was because machine learning technologies used to be very, very complex and very difficult to do. It would take years to develop algorithms. And then quickly, in the span of a few years, it became possible to train algorithms very quickly, very efficiently, without as much computational expertise. So really, what you were replacing was not radiologists.
You were replacing a deep body of expertise that was previously required for machine learning, and allowing the machine to learn itself. And that actually allows high school students to train algorithms. Now, we've trained a lot of algorithms along the way, from algorithms for x-ray to algorithms for measuring cardiac strain. So we'll get there, and I'll show you exactly how that pathway has emerged. So the question that we should ask ourselves is, how do we move toward this world of sci-fi, starting with the images that we get every day? This is not the best image, right? This is kind of an average image. We've got, like, an arrhythmia artifact, and it doesn't look so great. Maybe there's some breathing motion. How do we move from current-day practice, where we have function images that look like this, delayed enhancement images that look like that on the right, and be able to add to our imaging using AI? How do we accomplish that? This is an example of a deep learning algorithm that we've developed in our lab that allows us to measure myocardial strain from routine SSFP images. This is purely using AI, without any human intervention. And it seems to work quite well. You can see the regions of the myocardium that are contracting turn red, and the regions of the myocardium that don't contract stay blue. And that's the ideal thing. You want a technology that helps augment your ability to interpret the images, one that you can visually confirm and say, hey, you know what? I actually agree with that. That portion of the myocardium isn't contracting. And then you see on delayed enhancement, hey, guess what? It was right, you know? So you have it in your day-to-day practice, and you can confirm it's a reality. And then in the patients that you don't have the ability to give contrast to, you can really have the confidence that it works. So that's kind of the idea. So what are the opportunities for AI in diagnostic imaging?
It's really across, of course, the entire imaging value chain. From the imaging modality itself, you can plug in AI on the scanner. You can plug in AI in the data center where the images live. And you can plug in AI at your workstation to make it much more efficient, to allow you to do your work quicker and more accurately. The reality of it is, it's not that easy. There's lots of technology that has to be developed and has to pass through the FDA. And so it's gonna take a couple of years for all the technologies to make their way into your hands. But if you're developing them, of course, you get the opportunity to use them earlier. So what have we done in the space of cardiac MRI? What are the advantages of cardiac MRI? I think a lot of you are familiar with this. We're really great at measuring flow and function. And we can tissue characterize, look at myocardial scar. Those are things that we can do with MRI that we can't do with other modalities, not CT, not really with echo either. But the biggest problem with MRI is it's very slow. It takes a lot of supervision, and that limits its accessibility. A lot of places can't do cardiac MRI at all. And part of the reason is that it's very complex, with many multi-planar images that you have to acquire. It takes some expertise to recognize how to acquire those images. And there aren't that many techs that are already trained that you can just plug into your practice. So unfortunately, that's a big part. Another big part that limits cardiac MRI is that you have to do a lot of post-processing, segmentation. In the old days, you'd have to do everything by hand. You'd measure phase contrast by hand; it was torture. It's almost enough to lead you not to want to do any cardiac MRI in your practice, because it would take hours of your time. You could be making more money doing something else, treat more patients, whatever the case may be, go home, any of that.
So what we've done over the last ten years is really change our paradigm for cardiac imaging so that it's more efficient, more accessible, and doesn't require me to be at the scanner. I think that's a major issue. It allows me to read other studies, do something else, go home. So 3D cine is one of those techniques. It's a volume imaging technique. It's a single breath hold, inject contrast, and then you get what looks like a cardiac CT. That's basically the idea of that technique. Then we followed that with a ten-minute 4D flow. You can still get the conventional imaging planes. Now we leverage AI to auto-prescribe those imaging planes using a series of multiple neural networks. And that's useful for things like conventional imaging, SSFP, or delayed enhancement, or even T1, T2 mapping. So you can have a one-click cardiac MRI now that the techs can perform without that much expertise. And then the post-processing, we no longer do that by hand. As of 2015 or so, when the first FDA-cleared products came into our hands, you no longer have to do that segmentation by hand. You just make a few edits at the basal cut maybe, and that's pretty much it. So that's essentially the idea. How did we get there? That's the main question. And I think if you're looking at other areas of imaging beyond cardiovascular MRI, this is kind of the paradigm. We want to make cardiac MRI simpler, get rid of all this extra stuff that we have to do to bring our technologists up to speed, remove that limiting factor. And so what's a major limiting factor? The multi-planar nature of cardiac MRI. So how do we go about doing that? We trained a series of multiple neural networks that recognize specific anatomy in the heart, the apex, the mitral valve, and then use that to plan short axis stack images, and then proved to ourselves that we can reliably detect those structures to then plan cardiac MRI.
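Once the networks have localized the landmarks, prescribing the short axis stack is essentially geometry. A minimal sketch of that step, assuming the apex and mitral valve center have already been detected (the function name and the two-landmark simplification are mine, not the actual on-scanner pipeline):

```python
import numpy as np

def prescribe_short_axis_stack(apex, mitral_center, n_slices=12):
    """Given 3D landmark coordinates (mm) for the LV apex and the mitral
    valve center, return slice-center positions and the shared slice
    normal for a short-axis stack covering the LV long axis."""
    apex = np.asarray(apex, dtype=float)
    mitral_center = np.asarray(mitral_center, dtype=float)
    long_axis = apex - mitral_center
    # Short-axis slices are perpendicular to the LV long axis
    normal = long_axis / np.linalg.norm(long_axis)
    # Evenly spaced slice centers from base (mitral plane) to apex
    fractions = np.linspace(0.0, 1.0, n_slices)
    centers = mitral_center + np.outer(fractions, long_axis)
    return centers, normal

centers, normal = prescribe_short_axis_stack([10, 20, 80], [10, 20, 0], n_slices=5)
```

In practice the real system uses several networks and more landmarks to get the two-, three-, and four-chamber planes as well; this only illustrates why reliable landmark detection is the hard part and the plane math is not.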
And these are images from a couple of example cases of those imaging planes when using that system of multiple neural networks. That's not enough. If you can prove that you can get those imaging planes ex vivo on historical image data, that's fine, but you really wanna see it in your practice so that it's actually usable. And so we have a partnership with GE that allows us to plug in those algorithms directly on the scanner and then run them. This is an early prototype from, I think, five years ago, with one of our first patients, where we just clicked a button and allowed it to prescribe one particular short axis stack. But that's no longer necessary. You can run the whole cardiac MRI using the whole system. Another big challenge for us is selecting the inversion time. It takes a while to figure out where the optimal inversion time is, but we can train a neural network to do basically the same thing that we train our techs to do. Identify that inversion time; use a structure, an architecture, and a problem definition that allow the neural network to solve that problem and select the best inversion time. And in this paper, we devised an architecture that was an ensemble of two other architectures in order to get the optimal performance. What about flow? Well, most people aren't doing a lot with flow, but we do, because we have a pretty robust congenital heart practice. So we absolutely rely on that. And that 4D flow technology has a number of workflow issues that require a lot of expertise to handle. So that's another opportunity for deep learning to improve and make easier, and so we've done that. For those of you who don't know much about 4D flow, it's a technology that lets us measure blood flow and visualize it and diagnose everything from valvular regurgitation to shunts, and quantify them for surgical management. So I'll show a few examples of that.
This is a technology that honestly has been around since the late 80s, early 90s, but didn't really have a chance to mature and become a usable clinical modality. It's now become a clinical modality because we've developed a lot of software that allows us to reformat imaging planes. Also because we've now been able to use very advanced image reconstruction techniques, acceleration techniques, that provide beautiful images in a relatively short amount of scan time, ten minutes, usually around five to 15, depending on how you set up your protocols. And that's a really nice technology. Hopefully you'll have a chance to see more of that in the future. We had an earlier session today where I went through it. A powerful component of that is being able to reliably measure blood flow more accurately, more precisely, with less error than conventional phase contrast MRI. Some work from long ago, about ten years ago. All right, so I'm gonna skip past this. The key to this modality is being able to quantify blood flow reliably. And in AI, as much as in 4D flow, you need some sort of reference standard to compare against. In 4D flow, what people were used to as a reference standard was invasive catheterization. So you can measure blood flow with invasive catheterization and show that this modality is producing the same answer as that modality. And that's really formed the basis of how we approach our validation of AI algorithms: we kind of look back and say, what is the gold standard that people believe in? And if we can compare against that, then people will believe that this algorithm actually works. The same was true when we did this work with 4D flow. And I'll keep moving on, because this is more 4D flow specific. I'm not sure how these slides stayed in this talk. And I'm wondering if the wrong doc was loaded. Okay, here we go.
So 4D flow is a technology that also benefits from AI. And one of the main components that really limits our ability to run 4D flow in parts of the body outside of the chest is phase error correction. For us to be able to measure blood flow reliably, we have to do this software correction that is actually difficult for me to teach somebody else, difficult for me to train somebody else to do. So instead of training people to do it, we can train AI to do it and make it much more accessible. And ultimately, it has to be very robust and reliable for us to trust a software algorithm to do that. But it basically involves training a neural network to look at the images, recognize the velocity field corrections that need to be applied, do a regression to correct it, to smooth it out, and then apply that correction. And for us to really trust it, we wanna see that the algorithm performs well across a wide number of patients. It's not good enough for something to be just on average better. I think the sort of pharmaceutical mindset is, you have population A, they get treated with a drug, population B, they don't get treated with the drug. On average, population A does better. But I don't think that applies to us in imaging. If, for example, we have a technology that in three patients improves your diagnosis, and in seven patients degrades your diagnosis, but on average it's better, that's not good enough for us, right? We want it to be better in all circumstances. So we really want robust technologies. So we have to show that it's a highly reliable system. And that's the purpose of this particular slide, showing that this algorithm actually works across a wide variety of patients reliably, consistently, not just on average. So anyway, I'll move on from here. These are just a couple of examples of 4D flow with the correction that we applied on top of it, to diagnose things that we couldn't easily diagnose before without the 4D flow.
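The regression-and-smooth step described here is commonly implemented as a low-order polynomial fit to the velocities measured in static tissue, which should be zero; the fitted surface is then subtracted everywhere. A minimal 2D sketch of that classical idea, as a simplification of what the speaker's neural network automates (the deep learning part, identifying which pixels are static, is the hard step and is omitted here):

```python
import numpy as np

def correct_background_phase(velocity, static_mask, order=2):
    """Fit a low-order 2D polynomial to velocities in static tissue
    (where true velocity is zero) and subtract the fitted background
    surface from the whole field.
    velocity: 2D array of measured velocities.
    static_mask: boolean array marking static tissue (e.g. chest wall)."""
    ny, nx = velocity.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    x, y = xx / nx, yy / ny  # normalized coordinates for conditioning
    # Polynomial basis up to the given total order
    terms = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.stack([(x ** i) * (y ** j) for i, j in terms], axis=-1)
    # Least-squares fit using only static pixels
    coef, *_ = np.linalg.lstsq(A[static_mask], velocity[static_mask], rcond=None)
    background = (A.reshape(-1, len(terms)) @ coef).reshape(ny, nx)
    return velocity - background
```

Flowing blood is excluded from the fit but still gets corrected, which is why residual phase offsets stop biasing flow measurements after the subtraction.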
I think that's a great concept for what we should aim to do with AI as well. AI should be able to give us the ability to diagnose things that we could not diagnose before. In some circumstances, not all, but in some. And this is a case of pelvic congestion, May-Thurner syndrome. Here's another technology. Now, you may have heard of things like super resolution or deep learning reconstruction. This was our effort in doing that, taking relatively low resolution images and applying a technique we called super resolution to them. You can see that the edges get enhanced and the morphology is preserved. It's not creating any kind of hallucinations. That's a technique that actually mirrors some of the deep learning reconstructions that you're seeing from some of the vendors. All right, so with respect to computational analysis, I do think that, again, it's on that theme of things that AI should be able to do that extend our abilities. We're traditionally expected to measure ejection fraction, left and right ventricular ejection fraction, in all of our patients who come in with congenital heart disease, and usually at least the left in most patients. It's great that algorithms can do this, and it requires only a little bit of editing. I'm not gonna go through the details of this, but we now know this technology exists. But that shouldn't be the end. Just reproducing human behavior, reproducing it and making it simpler, is only just the beginning. We should be doing something beyond what humans can do, or beyond what humans have time to do. So there's been this thought that we should be able to measure strain, and different technologies have been proposed for measuring global longitudinal strain or global circumferential strain. But I think these are not quite the right end point. We don't want measures of just global function. Ideally, we would want to measure regional function. What parts of the myocardium are normal? This goes back to that first slide. What parts are normal?
What parts are abnormal? Because why? That gives us segmental granularity so that we can then say, that's an LAD territory lesion, that's an RCA territory lesion. We think there's ischemia or infarct in that area, even from the wall motion of the myocardium. So what is strain? It's the method by which physicists and engineers describe shortening or lengthening of muscle. And it's just a calculation. Traditionally, there are MRI methods that you would use to measure that, but they take a lot of effort and a special sequence, which adds time, so nobody uses them. Well, except in a research setting. So nobody really uses them in clinical practice. And if that's the case, even if you have a technology, if it doesn't work its way into your practice for these reasons, it doesn't become useful. So what we thought is that there should be a way that we can apply deep learning to measure myocardial strain. There are these feature tracking methods. We didn't think they worked very well. We wanted to improve upon them. And this is just some data showing that they don't work very well. Moving on. Okay, so deep learning strain is a different approach. What we did was we said, hey, we have a lot of 4D flow data. This is this concept of ground truth. We have a lot of velocity data, which we could maybe leverage to study myocardial strain. It's high velocity data, but if we can look at that, cut a cross section of it, and co-register it on short axis SSFP images, we can use that to train a deep learning neural network to then predict the myocardial velocities. So this is work that we published about a year or so ago. We take these SSFP images and apply some physical constraints in a neural network that can then predict the myocardial velocities using training data that we got from a different source, merging information from two different pulse sequences.
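The calculation the speaker refers to really is simple: Lagrangian strain is the change in length of a muscle segment relative to its resting length, with negative values indicating shortening (contraction). A minimal example:

```python
def lagrangian_strain(l0, l):
    """Lagrangian strain: fractional length change relative to the
    resting (end-diastolic) length. Negative means shortening."""
    return (l - l0) / l0

# A myocardial segment shortening from 10 mm to 8 mm in systole:
strain = lagrangian_strain(10.0, 8.0)  # -0.2, i.e. -20% strain
```

The hard part is not this formula but reliably measuring the segment lengths (or the underlying velocity field) frame by frame, which is exactly what the deep learning approach targets.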
And then we apply calculations on top of it, so that we can measure strain and strain rate. And that then allows us to decompose the motion. So I'm not gonna go into too much more detail, cuz I'm out of time here, but you can see the nature of that velocity field inference. And it really smooths out the noise that's in the underlying training data. So I'll give you the opportunity to hopefully look into that paper in Radiology: Cardiothoracic Imaging. But what we found is essentially that we can reproduce what expert radiologists see in the wall motion of the myocardium. That's one important aspect. And more importantly, perhaps, we can see things that radiologists could not see, localizing ventricular ectopy to certain parts of the myocardium. And we're hoping to continue to study that further, including things like bundle branch block and other diseases. All right, thank you. Thank you, good afternoon everyone. I'm honored to present our work in the field on this topic of AI and cardiac CT: are we there yet? A very challenging and thought-provoking title that I was given. I think I will start with the scope of AI. This is from the NHLBI AI workshop, which was run last year in June 2022. And the proceedings have been published, so this is kind of a consensus statement of many people working in this field. So this is a multimodality slide; you'll see CT is here. So basically, AI is in all aspects of cardiac CT: in image acquisition, image reconstruction, image quality, segmentation and quantification. And then finally, in the more challenging tasks of classification and reporting of disease, and prognostication and management. Whether it be prognostic risk prediction, evaluation of surgical intervention, or evaluation of medical therapy. So AI is in all phases of image acquisition, image reconstruction, image quality improvement. So you may not even know that you're using AI; it is built into the scanners.
So I will start with quantitative plaque imaging, which has been a very challenging field and is made significantly easier by AI. This is before there was AI, and this is our work with SCOT-HEART, which is a randomized controlled trial of cardiac CT. And this is the CT arm of the SCOT-HEART trial, where 1,769 patients were evaluated for quantitative plaque, and you can see one of the examples here. Here the red outline is non-calcified plaque. The orange is low attenuation plaque, which is a surrogate of the lipid core, and the blue is the lumen. And the finding in this study was that patients with a large low attenuation plaque burden, the surrogate of the lipid core, were nearly five times more likely to have a heart attack, and that outperformed all the currently considered cardiovascular risk factors: luminal stenosis severity, the CT calcium score, and the total plaque burden. But this semi-automated quantification took many months by a team of expert fellows. So this is something that cannot be done in clinical practice. And AI has helped with this significantly, because the task that needs expert readers and this highly trained expertise can be done quicker. So this is AI-enabled coronary plaque analysis. This was an international multi-center study, which was run by our lab. The algorithm was trained on 921 cases, and then there was validation against expert readers. Deep learning convolutional neural network algorithms were found to show excellent correlation with expert readers, and also, in a subset, with invasive IVUS quantification of total plaque volume. This is an on-site analysis, and the mean per patient computation time, which was at least 30 minutes per case by experts before, was cut to only about six seconds by deep learning.
And then further, this deep learning algorithm was applied to the SCOT-HEART trial, and here are the results for total plaque volume quantified by deep learning for future heart attack prediction. So this is total plaque volume using a cut point of 238.5 mm³, which comes from the ROC analysis of total plaque volume in this cohort. And you can see that there's a clear separation of the risk of MI in this population, and this persisted when adjusted for stenosis or clinical risk scores. So quantitative plaque from coronary CTA is done by many others. It's actually a very active field, with very interesting data coming up every day. So here are some of the packages which are already out there: QAngio from Medis, Cleerly, vascuCAP from Elucid. Autoplaque, which is from Cedars-Sinai, is here. SUREPlaque from Vital Images, and syngo.via from Siemens. And there are still more packages. This has been nicely summarized by Dr. Michelle Williams here. And so this is AI-enabled coronary plaque analysis over the entire coronary tree. This is one patient from Cedars-Sinai Medical Center who actually had no symptoms but a fairly high coronary calcium score. And you can see, with this image quality of coronary CTA, that plaque can be measured non-invasively over the entire coronary artery. There are some other cases which have been summarized by Dr. Van Essen in a European Heart Journal review. This is a study by Cleerly showing baseline and 18-month follow-up scans, showing reduction of non-calcified plaque. And another example from Elucid. And a recent study with HeartFlow plaque analysis. So you see one example on the left here, with the vessel wall in red and the lumen outlined in orange. And here, from over 10,000 patients, nomograms were established for men and women undergoing coronary CT angiography for standard indications.
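A cut point like the 238.5 mm³ threshold above typically falls out of the ROC analysis, often by maximizing the Youden J statistic (sensitivity + specificity − 1). A small self-contained sketch of that selection, using made-up scores rather than the trial data:

```python
def youden_cutpoint(scores, labels):
    """Pick the score threshold maximizing the Youden J statistic
    (sensitivity + specificity - 1), as in a standard ROC analysis.
    scores: predicted values (e.g. plaque volumes); labels: 1 = event."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        sens = sum(s >= t for s in pos) / len(pos)   # true positive rate
        spec = sum(s < t for s in neg) / len(neg)    # true negative rate
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

# Toy example: events cluster at high plaque volumes
cutpoint, j = youden_cutpoint([1, 2, 3, 10, 11, 12], [0, 0, 0, 1, 1, 1])
```

Real analyses would also report confidence intervals for the chosen threshold and validate it in a separate cohort, as was done here by carrying the cut point into SCOT-HEART.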
So this study shows that women undergoing coronary CTA have lower plaque volumes of all subtypes, so for a 60-year-old man and a 60-year-old woman, the highest percentile, the 90th percentile of total plaque volume, will be quite different. And that needs to be taken into account in clinical assessment of coronary plaque. Can AI-enabled coronary plaque analysis be used in randomized controlled trials? There are several examples of that. Most recently, one was published in JACC: Advances by Dr. Hasific, which looked at a randomized effect of vitamin K2 and D supplementation on coronary artery disease in men. In this study, 304 men were randomized to placebo or vitamin supplement, and a CT was performed at baseline, one year, and two years. And the primary outcome of this clinical trial was the change in coronary calcium score over two years. You can see that the coronary calcium score decreased slightly, but it did not reach significance. Full plaque analysis was also done, and while there was a trend toward lower total plaque volume and non-calcified plaque volume, the trend was not significant. However, in patients with a coronary calcium score greater than 400 Agatston units, the primary endpoint was met, and coronary calcium progression was lower with this intervention. And so this hypothesis-generating data has led to more clinical trials, which are ongoing. And interestingly, the number of events was smaller in the intervention arm, although the overall number of events was very small. So now we are into kind of the final frontier for AI, which is prognostication and management, and that is prognostic risk prediction and evaluation of surgical intervention or medical therapy. And I want to just mention CT-FFR, which is in practice today and very much AI-based. So there is HeartFlow, and this is an example of on-site analysis from Siemens CT-FFR. But has there been a randomized trial of AI-enabled CT-FFR?
There's actually one which has just recently been published. This is called the TARGET trial, which enrolled 1,216 patients with stable CAD and intermediate stenosis only, randomized to on-site CT-FFR with AI or to standard care. This was done in six medical centers in China and has just been published in Circulation this year. The primary outcome was the proportion of patients in each group undergoing invasive coronary angiography without obstructive CAD, or invasive coronary angiography with obstructive CAD who did not undergo revascularization. This primary outcome was significantly different between the standard of care group and the CT-FFR group, and it was driven by invasive coronary angiography without obstructive coronary artery disease, which was significantly reduced by the use of CT-FFR. The secondary outcome was MACE on prognostic follow-up in these two groups, and here there was a trend in the CT-FFR care group toward a lower hazard ratio of 0.88. However, the p-value did not reach significance in this trial. So with patient-centered risk prediction, there is a real need to actually integrate all clinical data, imaging data, and perhaps biomarkers for a holistic risk prediction for the patient. I'll give two examples to talk a little bit about this, and this is also where AI can play a significant role. This study was done by Andrew Lin and published in Circulation: Cardiovascular Imaging from our lab, where plaque and stenosis features were combined objectively using machine learning to predict FFR-defined ischemia, and this is an external validation study in the PACIFIC trial. And you can see the highest predictor in this cohort was quantitative stenosis, followed by low attenuation plaque volume, luminal stenosis features, as well as contrast density difference and luminal area.
Contrast density difference is the difference in the contrast across a lesion and has been shown to be important for prediction of FFR-defined ischemia. And so the machine learning prediction is here in red, with an AUC of 0.92, and was similar to that of CT-FFR in this population and significantly higher than stenosis. But importantly, what we can do is per patient prediction. So these are two cases. In this case, the ischemia was significant. Yeah, I'll just move it over here. When we look at the machine learning prediction, we can analyze why the patient had this high ischemia risk. In this case, it was because of the maximal diameter stenosis and the low attenuation plaque volume. And area stenosis. And another case where the probability of ischemia, the ischemia risk, was low. The blue arrow shows what can detract from that risk. So there was much less non-calcified plaque and much less low attenuation non-calcified plaque. And that is what the patient-specific risk prediction shows. I will show another example of this from non-contrast CT. We also have many applications of AI in coronary calcium scoring from gated and ungated CT. And importantly, from non-contrast CT, we can measure many other biomarkers. And one of them is epicardial adipose tissue. So what we can do here is combine all the imaging biomarkers, such as coronary calcium and epicardial adipose tissue, to try to predict risk. This is in the EISNER trial, which was run at Cedars-Sinai Medical Center by Dr. Daniel Berman. And here you can see that when we combine all the imaging biomarkers, clinical data, as well as a wide range of circulating biomarkers, this can all be combined objectively using AI to predict risk for the patient. And so machine learning had the highest AUC at 0.81, significantly greater than coronary calcium or the clinical risk score.
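The per-patient explanations described here decompose a risk prediction into the contributions of individual features, so you can see which variables pushed one patient's risk up and another's down. A toy sketch of the idea with a single logistic model (the feature names and weights below are invented for illustration; the actual studies use boosted ensembles and more sophisticated attribution):

```python
import math

def risk_with_contributions(features, weights, bias=0.0):
    """Toy linear-logistic risk model: returns the predicted risk plus
    each feature's additive contribution to the log-odds, positive
    contributions raising risk and negative ones lowering it."""
    contribs = {name: weights[name] * value for name, value in features.items()}
    logit = bias + sum(contribs.values())
    risk = 1.0 / (1.0 + math.exp(-logit))  # logistic link
    return risk, contribs

# Hypothetical standardized inputs; higher HDL lowers risk (negative weight)
weights = {"age": 0.8, "calcium_score": 1.1, "hdl": -0.6}
patient = {"age": 1.5, "calcium_score": 2.0, "hdl": 1.0}
risk, contribs = risk_with_contributions(patient, weights)
```

In a linear model these contributions are exact; for the tree ensembles used in practice, methods in the same spirit (e.g. SHAP-style attributions) produce the per-patient waterfall plots the speaker shows.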
And here you see the relative importance of all the different biomarkers, with the coronary calcium score still holding strong here, and age being the highest predictor. And among the biomarkers, LDL is still a very strong predictor of these events. So again, per patient, we can look at what has caused the high prognostic risk in particular cases. And this can be very useful when used prospectively, because the machine learning models can be incorporated back into software. So this is a 74-year-old female patient in the EISNER trial with a high machine learning score, and an MI at about 8.8 years. The contributors to the high risk for this patient were age and coronary calcium score, while HDL was high, so that detracted from the risk. So to try to summarize: quantitative CT, we know, improves diagnosis and prediction of events, and it can enhance cardiac CT as a precision medicine tool. And AI, I think we have seen over and over, can, for the physician, eliminate repetitive activities and give the gift of time. It's important to think of AI as a tool, not a creature, and just be ready to use it in your everyday practice, because it will give you time. And are we there yet? This is a very interesting question, and I think I can give a very real-life analogy to state that AI is very young. From an example from my life, I know that much careful training and handling is needed till AI can be your helper and your constant companion. So we have a long way to go. We are not there yet. Thank you. Yes, so I'm sure some of you were expecting Dr. DeCecco here. He is unfortunately not able to come, so I'm presenting his slides. I made a few changes to them, but credit to him for the baseline of the slides. So we've seen a lot of AI applications being FDA approved. A lot of them were discussed in the previous two talks. So last night I spent some time; the FDA always releases an Excel file with all FDA approved applications.
So I took some time to go through it. This is from last night. So currently there are 693 applications in there. This is an older comparison, which shows that radiology has the majority of AI applications, followed by cardiovascular. I looked at it last night, and this is a very bad screenshot from my Excel file, but 531 out of 692 were radiology. And then there were another 70 that were cardiology. So there are a lot of applications already out there. And as we've seen, they are on every part of the workflow: from indication, paging, scheduling, acquisition, image reconstruction and quality, segmentation, classification, reporting, and prognosis. So if we look at AI and cardiac imaging, and this is gonna be a little bit of a summary of what we just heard, we see a lot of biomarker quantification. Calcium score, plaque quantification, volume, function, perfusion. And most of these quantified biomarkers are biomarkers we already know are prognostic or help diagnose. And we have some sort of visual way of quantifying them, or at least identifying them. And then we have a section on radiomics that we will discuss. And a lot of this biomarker quantification is being used for prognostication. So a few examples, and I'll go through this really fast because we've seen most of it, but this is from our own institute. We use a Siemens AI algorithm for automated CAD-RADS. What we see is that people use it as an assist, so this is not independent; they just use it as an assist in their decision-making. So in blue, we see the unaided and in yellow or orange, we see the aided. And we do see that there's more agreement. So we do see that AI helps standardize some of the measurement and reduce variability. Again, a lot of plaque quantification. As we saw, there are many algorithms out there. This is from Siemens. This is from Cleerly, where, I think, we saw the same example, where this patient was on a PCSK9 inhibitor.
And we saw that it actually was very helpful for this patient. So we can use these types of plaque quantification to select which medication is best for which patient, and to see whether it is actually working or we should try something else. We know that plaque volume in itself is a prognostic marker. Again, we've seen this slide, but an increase in overall plaque volume signifies worse outcomes. Here we see that total plaque volume has a higher hazard ratio than obstructive stenosis, and that's something that we use, as does the ASSIGN score, which is a more clinical score. And then something novel that we haven't seen a lot of: we see more and more follow-up CTA, where patients are followed up with a CTA scan to see if their plaque volume is going down or if we see changes over time. And with that comes the use of kinetic or dynamic features. So what this study did is look at a baseline and a follow-up. Here on the bottom we see baseline measures, follow-up measures, and then the change. They looked at the dynamic changes of CT-FFR, of calcium volume, and of some of the other high-risk plaque features, and they showed that the change was very predictive of acute coronary syndrome events. So by adding more data and looking at dynamic longitudinal changes, we were better able to predict outcome. And this is something we'll see more of with lower-dose CTAs, as we do more of them and take over some of the invasive functional testing. We also see a lot of MR work coming out. This was a systematic review of cardiac MR applications, where we see a lot of function AI: automatic slice selection and LV segmentation. We also see a lot of automated perfusion, where we used to do a lot of tracer kinetic modeling; AI is taking that over, creating higher resolution in the curves to quantify myocardial blood flow. And then radiomics.
This hasn't been talked about that much, but with radiomics what we basically do is create a very big data set based on voxel data. So we have first-order statistics and then second- and higher-order features. These are very abstract features that don't necessarily correlate with what we see, but it's data that is in the images, and we use it to create profiles. So, for example, based on visual assessment of no plaque and then different types of plaque, we see that some of these features show very different profiles. So we can use radiomics to identify high-risk plaque profiles, and use that to see which plaques are at high risk of causing an event and intervene on those. There are some studies showing, and this comes back to reference standards, that compared to sodium fluoride PET, OCT, or IVUS, radiomics was really good at finding plaque vulnerability and significantly outperformed conventional quantitative and qualitative high-risk plaque features. So it performs better than a radiologist saying, hey, this is a high-risk plaque because of remodeling or a necrotic core. Radiomics creates better predictive, more personalized profiles. There are also some non-plaque-related radiomics applications. One of the big ones is perivascular fat; this example came from one of my own papers. We've seen fat around the heart with AI, but fat around the coronaries is used as an indirect marker of inflammation, because inflammation prevents fat cells from maturing. So we see differences in adipose tissue even before a plaque develops. There are different ways of measuring it, but it's very time-consuming to do by hand, so with radiomics or AI we can now do that automatically.
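As a concrete aside, the "first-order statistics" mentioned above are simple summaries of the voxel intensity distribution inside a segmentation. A minimal sketch in plain numpy, with simulated intensities and a function name of my own invention (not from any radiomics package):

```python
import numpy as np

def first_order_features(voxels, bins=32):
    """A few first-order radiomics features from a 1-D array of voxel
    intensities (e.g. HU values inside a plaque segmentation)."""
    v = np.asarray(voxels, dtype=float)
    z = (v - v.mean()) / v.std()
    hist, _ = np.histogram(v, bins=bins)
    p = hist / hist.sum()          # discrete intensity distribution
    p = p[p > 0]
    return {
        "mean": v.mean(),
        "variance": v.var(),
        "skewness": np.mean(z ** 3),          # asymmetry of the distribution
        "kurtosis": np.mean(z ** 4) - 3.0,    # excess "peakedness"
        "entropy": -np.sum(p * np.log2(p)),   # intensity heterogeneity
    }

# toy example: 1000 simulated voxel intensities around 50 HU
rng = np.random.default_rng(0)
feats = first_order_features(rng.normal(50.0, 10.0, 1000))
```

Higher-order features (texture matrices such as GLCM) build on the same voxel data but describe spatial relationships between intensities, which is what makes the profiles so abstract.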
And with radiomic profiles, so here they looked at low fat-attenuation risk profiles and high fat-attenuation risk profiles, we see differences in adverse outcomes, especially in the low-risk profiles. And when they added the high-risk plaque features, spotty calcifications, necrotic cores, and remodeling index, we see that we are even able to identify different risk profiles based on these radiomic features. We see it less in MR, but MR radiomics does exist. They tried to create radiomic signatures, for example, on the UK Biobank, where they used different segmentations and derived radiomics from them. And you can see that it actually predicted clinical risk factors. So instead of using it to predict outcome, they tried to see if they could predict risk factors, and with more radiomics features than clinical indices, they could. That is very interesting. There is one major disadvantage of radiomics so far: it is very operator-dependent. We see, for example, in this study, where they had an AI-based segmentation and four readers, that due to segmentation differences the plaque volumes varied. When they looked at total plaque volume and non-calcified plaque volume, they saw wide ranges. We would think that an AI or automatic approach would help standardize this, but when they compared across readers, the deep learning model actually had more variability against the readers than the readers had among themselves. So this is one of the cases where AI is just not performing as we would hope, and one of the issues with this variability is that it makes it really hard to get reproducible radiomics. So external validation, or standardization of your radiomics procedures, is definitely needed to make this work.
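One common way to put a number on this reader-versus-algorithm variability is the within-subject coefficient of variation across repeated measurements. A minimal sketch with made-up plaque-volume numbers (the data and the helper name are invented for illustration):

```python
import numpy as np

def within_subject_cv(measurements):
    """Average within-subject coefficient of variation.
    `measurements` has shape (n_subjects, n_readers): one row per patient,
    one column per reader (or per repeat of an AI segmentation)."""
    m = np.asarray(measurements, dtype=float)
    subject_means = m.mean(axis=1)
    subject_sds = m.std(axis=1, ddof=1)          # sample SD per patient
    return float(np.mean(subject_sds / subject_means))

# hypothetical total plaque volumes (mm^3): 4 patients, 3 readers each
volumes = [[120, 131, 118],
           [310, 295, 322],
           [ 45,  52,  49],
           [210, 204, 215]]
cv = within_subject_cv(volumes)   # smaller means more reproducible
```

Reporting a metric like this per reader and per algorithm is one way the standardization the speaker calls for could be checked in practice.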
I think that especially with plaque quantification, where some of the volumes are very small, or with perivascular fat, where the difference between patients isn't that big, a standardized approach is definitely needed to make this clinically applicable. There are also a bunch of other markers. We've seen body composition, which is also done on non-cardiac scans: we do body composition from abdominal scans and can use it as a risk predictor. And we see a lot of non-imaging factors too, like the EKG. We do a lot of chest CTs in general, and they can be used for a sort of opportunistic screening. So this study did a deep learning analysis of chest radiographs to triage patients with acute chest pain syndrome. They trained a deep learning model to predict which patients had acute coronary syndrome based on their X-rays, and they were actually pretty successful. Another study did a validation study to classify cardiac function. What we see here is that they gave the model the X-ray, and with very high accuracy it was able to predict left ventricular ejection fraction, but also some of the mitral valve regurgitation, and it showed pretty good accuracy across the board for functional measures. So this could be used as a very early screening tool, or for opportunistic screening in patients who get chest X-rays for other reasons. There is a little bit of an issue with this, though. This is from Dr. Gichoya, one of our coworkers at Emory. What they looked at, using chest CT and X-ray databases, is whether AI was able to identify self-reported race, when most radiologists are not able to tell race from just an X-ray or a CT. AI was actually able to do that, and they couldn't find out why; there are no direct features that would lead to it. So even though it's not in itself important that AI can detect race, it will impact model deployment.
So we need to be careful with how we train these algorithms in the future, and representative training datasets are definitely an issue I think we need to work out. This also plays into explainable AI. In this study, they were not able to explain why, or what the model picked up on, but there is something in the images that allows AI to make that decision. So it's really important to get some insight into those decisions. And we've talked a lot about new imaging developments, but we shouldn't forget what we already have. We have a massive number of clinical risk scores available. This is an older study, but it's interesting: 81% of people between 45 and 79 had a risk factor, which is a massive amount. But when they corrected for that, asking what would happen if all traditional risk factors were eliminated and these patients became completely healthy according to those factors, they could only explain roughly half of the events. So this definitely shows that imaging or other biomarkers could play a role in explaining the remaining events. They tried machine learning and were only able to improve that by 3.6%, which is not that much. So that's where fusion modeling comes in, and we've seen some examples. This is from a publication in Circulation; the idea is to combine imaging studies, plaque quantification, MR-based function, all of the information we get from imaging, with reports, and this is largely the domain of large language models: pathology reports, radiology reports, but also physician notes about symptoms. And we can combine that with our EMR data, which contains the traditional risk factors but also medication, comorbidities, and lab results. We can use all of that for risk prediction and outcome, but also for drug development: we could use all of the data to see which drugs would work best for which patients, which drugs are candidates for which groups, and for personalized treatments.
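Combining imaging-derived and clinical features into one risk model, the fusion idea described above, can be sketched in plain numpy with synthetic data. Everything here is invented for illustration: the feature blocks, the outcome, and the tiny hand-rolled logistic regression stand in for whatever models a real pipeline would use.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Plain-numpy logistic regression via gradient descent; returns weights."""
    Xb = np.hstack([X, np.ones((len(X), 1))])        # append bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict_proba(X, w):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

rng = np.random.default_rng(0)
n = 400
X_img = rng.normal(size=(n, 3))    # hypothetical imaging features (plaque volume, ...)
X_clin = rng.normal(size=(n, 4))   # hypothetical clinical/EMR features (age, LDL, ...)
# synthetic outcome driven by one feature from each block, plus noise
y = (X_img[:, 0] + X_clin[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(float)

# Early fusion: concatenate all features into a single model
X_all = np.hstack([X_img, X_clin])
p_early = predict_proba(X_all, fit_logistic(X_all, y))

# Late fusion: one model per data source, then average the predicted risks
p_late = 0.5 * (predict_proba(X_img, fit_logistic(X_img, y))
                + predict_proba(X_clin, fit_logistic(X_clin, y)))
```

On data like this, the early-fusion model sees both drivers of risk at once and separates the classes better than either single-source model alone, which is the intuition behind fusing modalities.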
So there are different ways to do it. I won't go into detail, but there is late fusion, where you have two models, one for imaging and one for clinical data, and you merge the outcomes; and there is early fusion, where you put all parameters into one model and let that model predict outcomes. Graph neural networks are something more novel that is being used for this. And there are a bunch of examples; I'll go through them very fast because time is running out. This one used clinical variables, calcium score, and non-calcium variables that came from imaging, such as thoracic aortic calcifications, and it showed that machine learning using the clinical and imaging parameters together outperformed either alone. They showed the same using CTA, for machine learning but also against the Framingham Risk Score and the Modified Duke Index. And here we see, again, the information gain: green is clinical, blue is imaging. We see a mix of both at the top, meaning that we probably need to use a combination. We see the same for perfusion imaging with SPECT: clinical in green, stress in blue, and imaging in red. A variety of parameters pops up, again indicating that fusing more modalities is probably the best way to do risk prediction. We also see it with opportunistic screening in abdominal imaging, where combining data works, and even when we combine EKG and PCG data we see improved risk prediction. I think the future will be to combine EKG signals, opportunistic screening, and lab values all into one risk model, and use data from multiple modalities to diagnose and perform prognostication. The last thing I'll talk about is digital twins. With the information we have now, we're able to create digital twins. This is a Siemens example, but there are more examples out there.
So you can take your imaging, do some anatomical modeling, add your electrophysiology data, and then, in a digital twin of your actual patient, identify targets: plan ablations, for example, visualize outcomes, and see whether the plan is successful or you need to reevaluate it. And then you can take that virtual twin into your procedure and see if you can mimic exactly what you planned. There are a bunch of challenges, data accessibility among them, but for the sake of time I'll skip through some of the barriers. There's a lack of infrastructure to collect all of these data and create solid data sets. Regulations are lagging, but I think we're definitely seeing progress there; we see a lot of statements coming out. Funding and reimbursement: in the end the question is how we are going to pay for all of these AI algorithms. Will it be through reimbursement, or will we see so much improvement in patient care that we can fund it another way? Healthcare personnel training: a lot of AI requires some instruction on how to use it. And patient education: when AI shows up in reports, and we see this with ChatGPT but definitely with other AI as well, it's nice if patients know where it comes from. And finally, data protection and cybersecurity, especially when we work with algorithms that come from other centers or from industry, where we have to send data and get it back, or when we do large collaborative projects. So, for the conclusions: machine learning and radiomics can play a significant role in the identification and clinical application of novel biomarkers for outcome prediction. What we need is multi-center trials and clinical validation in a real-world environment before broad implementation is possible. Thank you.
Video Summary
The video provides an overview of advancements in AI and cardiac imaging, focusing on cardiac MRI, CT, and future implications involving radiomics. The discussion begins with the transformative potential of AI in streamlining processes such as disease characterization in cardiac MRI by automating complex workflows and reducing laborious tasks. It emphasizes how AI enhances radiology by replacing the need for deep computational expertise with more intuitive algorithmic training. Examples like deep learning for myocardial strain assessments show improvements in interpretative accuracy and efficiency without human intervention.

Further, the video explores how AI has been integrated within cardiac CT workflows, impacting image acquisition, quality improvement, segmentation, and automated plaque analysis. AI algorithms, such as those used for coronary plaque quantification, drastically reduce expert time from minutes to seconds while maintaining accuracy. Future possibilities include personalized risk assessment and management by combining AI-derived imaging insights with traditional clinical data.

Radiomics, the extraction of vast data sets from imaging voxels, offers another layer of predictive analytics, enabling the identification of high-risk features beyond the scope of human recognition. Despite the promise, challenges such as variability in data integration, the need for standardized procedures, and infrastructure limitations are highlighted as barriers to the widespread adoption of AI in clinical settings.
Keywords
AI advancements
cardiac imaging
cardiac MRI
cardiac CT
radiomics
deep learning
automated workflows
image segmentation
predictive analytics
Copyright © 2025 Radiological Society of North America