Advanced Techniques for Abdominal Imaging (2021)
R6-CGI12-2021
Video Transcription
I'm Meg Lubner from the University of Wisconsin, and I'll start us off here. I'm going to be discussing quantitative CT imaging. I just want to acknowledge several authors. We did an educational exhibit on this for RSNA 2019 and have included some of the concepts from that exhibit. So for the next 10 minutes or so, I want to discuss the spectrum of CT imaging metrics and, if there's time, touch briefly on dual-energy CT at the end. We know that CT is a common exam, surpassing abdominal x-ray as the most common exam in Medicare-aged adults in 2012, and this trend has continued, particularly in older adults, through 2016, as shown in both of these recently published plots. The more data we can wring from our CT images, the more value we can add, and there are lots of places where quantitative CT data is valuable. Regardless of the indication for the scan, a variety of body composition markers can be used for screening and risk assessment, and my colleague Dr. Pickhardt, an expert on this topic, will discuss this in more detail later in the session. We can also use quantitative imaging to guide therapeutic decisions about whether a patient is eligible for or needs surgery, whether additional imaging or image-guided biopsy may be helpful, or whether conservative therapy may be most appropriate. We can use it for longitudinal monitoring of conditions over time, ranging from benign conditions like renal stones to complicated oncologic conditions in patients on multisystemic therapy. My colleague and co-presenter, Dr. Smith, will speak in more detail on this later in the session as well. And finally, we can use quantitative tools for quality improvement, to make sure our dose prescription and our contrast delivery are optimized to give us the best images. There are a variety of CT metrics that we can evaluate. Many are very basic and have been around for a long while, like those in the blue circle: things like size, attenuation, and morphology.
There are some that are newer and more advanced, like some of the imaging and post-processing techniques delineated in the orange circle: things like radiomics, deep learning, and liver surface nodularity. Most of these can be obtained retrospectively after the images are acquired, but some require a hardware solution or prospective acquisition, as depicted in the pink circle: things like multi-energy CT or CT perfusion. And in addition to new tools for quantification, a lot of innovation has come in improving or automating existing tools and in compiling better reference data around what the numbers mean. So size is a metric that's been around for a long time, but there are lots of ways that we can quantify it. In many scenarios, a unidimensional measure may be sufficient, but for objects that are more complex in shape, volume may give us a better representation of size. We can look at the liver as an example. I always find it challenging to assess for hepatomegaly on CT; even though there are some unidimensional measures out there in the literature, it just seems like volume may better capture the size of the liver. And there are a variety of organ segmentation tools that have come out, some semi-automated, some more fully automated. This is a study that our group did using a fully automated AI-based tool developed by Ron Summers and his group at the NIH. This allowed us to assess the livers of over 3,000 asymptomatic healthy adults, to look at what normal size is and which demographic factors are most impactful in liver size. We found that this automated tool performed very well compared to some of our manual or semi-automated segmentations, and that weight was really the most impactful factor in determining normal liver volume. So this allowed us to create a simple nomogram for normal liver volume stratified by patient weight category.
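To make the volumetry step concrete: once a segmentation mask exists, organ volume is simply the voxel count times the voxel volume. A minimal sketch in Python; the function name and toy mask are hypothetical illustrations, not the NIH tool itself:

```python
import numpy as np

def organ_volume_ml(mask, voxel_spacing_mm):
    """Volume of a binary segmentation mask in milliliters.

    mask: 3-D boolean array from an (automated) organ segmentation.
    voxel_spacing_mm: (dz, dy, dx) voxel dimensions in millimeters.
    """
    voxel_ml = np.prod(voxel_spacing_mm) / 1000.0  # mm^3 -> mL
    return float(mask.sum() * voxel_ml)

# Toy example: a 100 x 100 x 100 block of "liver" voxels at 1 mm spacing
mask = np.zeros((120, 120, 120), dtype=bool)
mask[10:110, 10:110, 10:110] = True
print(round(organ_volume_ml(mask, (1.0, 1.0, 1.0)), 1))  # 1000.0 (mL)
```

In practice the mask would come from the segmentation tool and the spacing from the DICOM header; the arithmetic is the same.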
In addition to total organ volumes, we can look at fractional volumes, or volume redistribution, in the setting of developing liver disease, keeping with our liver theme. We know that as the liver becomes cirrhotic, there's hypertrophy of the left lateral section and caudate with atrophy of the medial left and the right lobes: a redistribution of volume into segments one through three compared to four through eight. We can capture this as a fraction, the liver segmental volume ratio (LSVR). Looking at a cohort with intermediate stages of fibrosis, you can see that LSVR increases as fibrosis stage increases, as more of the liver is accounted for by segments one through three compared to the other segments. In addition, there may be slow development of portal hypertension, so you can see a similar trend with splenic volume across fibrosis stages in the same cohort. And when we look at these two volumetric metrics together, we get a pretty good AUC of 0.91 for significant fibrosis. When we first started doing the liver segmental volume ratio, it was highly time consuming and labor intensive, so it was hard to imagine it translating into the clinical setting. But Ron Summers' group has now totally automated this tool. Here's an example of a patient with stage four fibrosis and the automated liver segmental volume ratio shown here. His group has also automated the splenic volumetry, so we can capture both of these in a totally automated fashion from imaging, giving us a fuller picture of fibrosis stage. And both of these automated tools have performed extremely well compared to our prior manual or semi-automated segmentations. So this really makes the measurement much more feasible in the clinical workflow than it was previously. We know that there are other morphologic changes in liver disease, including changes in parenchymal attenuation, parenchymal texture, and liver surface morphology.
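The liver segmental volume ratio described above is a simple fraction once the segmental volumes are known: segments I-III over segments IV-VIII. A sketch, with invented segment volumes purely for illustration:

```python
def liver_segmental_volume_ratio(seg_volumes_ml):
    """LSVR = volume of Couinaud segments I-III / volume of segments IV-VIII.

    seg_volumes_ml: dict mapping segment number (1-8) to volume in mL.
    """
    caudate_left_lateral = sum(seg_volumes_ml[s] for s in (1, 2, 3))
    remaining = sum(seg_volumes_ml[s] for s in range(4, 9))
    return caudate_left_lateral / remaining

# Hypothetical segment volumes (mL): LSVR rises as volume redistributes
normal = {1: 30, 2: 120, 3: 150, 4: 250, 5: 280, 6: 260, 7: 300, 8: 310}
cirrhotic = {1: 90, 2: 220, 3: 240, 4: 160, 5: 180, 6: 170, 7: 190, 8: 200}
print(round(liver_segmental_volume_ratio(normal), 2))     # 0.21
print(round(liver_segmental_volume_ratio(cirrhotic), 2))  # 0.61
```

The toy numbers mirror the pattern in the talk: caudate/left-lateral hypertrophy with right/medial atrophy pushes the ratio up.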
We know, for example, that patients develop liver surface nodularity in the setting of worsening hepatic fibrosis. And while previously we may have subjectively assessed whether we thought this was present, my co-presenter, Andrew Smith, developed a tool specifically to quantify it. The way it works is that you just paint along the liver surface, and edge detection in the software measures the distance between the detected edge of the liver and a generated smooth polynomial line to produce a liver surface nodularity (LSN) score. This is a very robust measurement, easy to make on lots of different types of images. In an HCV cohort at Dr. Smith's institution, it was an independent predictor of liver-related outcomes like hepatic decompensation, and actually performed better than common clinical scores like the MELD score. We also looked at this in an intermediate-stage fibrosis cohort. On the left, in a pooled cohort of mixed causes of liver disease, you can see an area under the curve of 0.93 for advanced fibrosis, greater than or equal to stage F3. We also looked at a NAFLD cohort, which is the plot on the right; it didn't perform quite as well, perhaps because liver surface nodularity develops slightly differently in this group. And now Andrew has put together a diverse multi-institution cohort, 200 patients from five different institutions. You can see that this metric really holds up quite well: you still see it increasing across liver fibrosis stages. And when you combine it with a simple clinical measure like a FIB-4 score, you can see an area under the curve of 0.9 for significant fibrosis. And of course, many of our cirrhotic patients unfortunately develop hepatocellular carcinoma, which is a hypervascular tumor, and they're often treated with locoregional therapy.
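The edge-versus-smooth-polynomial idea behind the liver surface nodularity score can be sketched as follows. This is a simplified illustration of the concept, not Dr. Smith's actual software: a polynomial is fit to detected edge positions, and the mean deviation of the edge from that smooth fit serves as the nodularity measure. The toy edge data are invented.

```python
import numpy as np

def liver_surface_nodularity(edge_y, poly_degree=3):
    """Mean absolute deviation of detected liver-edge points from a smooth
    polynomial fit -- a simplified stand-in for the LSN score idea.

    edge_y: 1-D array of edge positions (e.g. pixels) sampled along the surface.
    """
    x = np.arange(len(edge_y), dtype=float)
    coeffs = np.polyfit(x, edge_y, poly_degree)
    smooth = np.polyval(coeffs, x)
    return float(np.mean(np.abs(edge_y - smooth)))

rng = np.random.default_rng(0)
x = np.arange(200, dtype=float)
smooth_edge = 0.001 * (x - 100) ** 2                  # gently curved, smooth surface
nodular_edge = smooth_edge + rng.normal(0, 2.0, x.size)  # bumpy, "cirrhotic" surface
print(liver_surface_nodularity(smooth_edge) < 1e-6)   # True: smooth liver scores ~0
print(liver_surface_nodularity(nodular_edge) > 1.0)   # True: nodular surface scores higher
```

The real tool works on painted 2-D edge segments and averages over multiple measurements, but the core contrast (deviation from a smooth reference line) is the same.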
Dual-energy CT enables the creation of virtual monoenergetic datasets across the energy spectrum from 40 to 140 keV, as well as the generation of material basis image datasets that allow the identification and quantification of iodine. Low-keV virtual monoenergetic datasets close to the K edge of iodine, and iodine material density images, can be really helpful in this patient cohort: for example, in this patient, in delineating and separating the devascularized locoregional ablation zone from residual or recurrent hypervascular tumor, which we see later here in the same patient, with improved conspicuity of these hypervascular lesions on the low-keV virtual monoenergetic and iodine material density images. But unlike the metrics I've discussed so far, this has traditionally been a hardware solution, and there are a variety of hardware solutions out there, requiring prospective acquisition on a scanner with these capabilities. However, our medical physics group, in conjunction with our informatics group, has designed a deep learning network that allows the theoretical extraction of dual-energy-type data from conventional CT images. We presented some of the initial results in the scientific session this morning, but this slide is a very high-level overview of the AI-based network they've designed. Single-kV (conventional) CT images and the associated projection data are fed into a trained material basis image generator, which creates two material basis pair images; these are then fed into a virtual non-contrast CT image generator. The output is essentially a slate of images similar to what you would see with a dual-energy acquisition. This is some of the preliminary data from the validation cohort, looking at water and iodine material basis images. There were 74 patients in the validation cohort, which are across the x-axis here; each number is one patient.
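The material basis idea underlying dual-energy CT can be illustrated with a toy two-material decomposition: attenuation measured at two energies is modeled as a linear combination of basis materials (here water and iodine), and a 2x2 linear system is solved per voxel. The coefficients below are invented for illustration; real decompositions use calibrated values and are performed in projection or image space.

```python
import numpy as np

# Hypothetical effective mass-attenuation coefficients (cm^2/g) for the two
# basis materials at a low- and a high-energy measurement; illustrative only.
#             water  iodine
A = np.array([[0.25, 12.0],   # low-energy channel
              [0.18,  4.0]])  # high-energy channel

def decompose(mu_low, mu_high):
    """Solve A @ [rho_water, rho_iodine] = mu for one voxel."""
    return np.linalg.solve(A, np.array([mu_low, mu_high]))

# A voxel containing water plus 5 mg/mL (0.005 g/cm^3) of iodine:
mu = A @ np.array([1.0, 0.005])
water, iodine = decompose(*mu)
print(round(water, 3), round(iodine * 1000, 1))  # 1.0 5.0 (g/cm^3, mg/mL iodine)
```

This is the quantity the iodine material density images report; the deep learning approach described above tries to produce equivalent basis images from a single-kV acquisition.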
You can see the heat map: the colors represent the frequency of the differences, and the aqua line is the mean across the patients. And in this table, you can see that the difference was fairly small. For example, on the iodine images, there was a difference of 0.22 milligrams of iodine between the AI-generated images and the true GSI images. Here's a Bland-Altman plot of the virtual non-contrast images, with measurements taken in varying tissue types. You can see that there is some variability: on average, for example, in soft tissue, the virtual non-contrast generated by the AI system was within about five to 10 Hounsfield units of the true non-contrast. So there's room for improvement, but it's similar to what's been reported in the literature. Here's an example: a 65-year-old male with a BMI of about 40. He had a true non-contrast exam followed by a portal venous conventional CT, and the virtual non-contrast images in the middle panel were derived using the AI-based system. You can see that there's subtraction of the iodine with preservation of the aortic calcium. And when you look at the plot of the CT numbers on the right, the virtual non-contrast images generated by the AI system are in blue, the true non-contrast in orange, and you can see that the numbers are fairly close. Here's another example. The top row shows spectral CT images that were generated from a fast kV-switching dual-energy acquisition. The bottom panel shows the corresponding images generated by the AI network, and you can see that they're fairly similar in appearance. And here's just one quick clinical example. This was a patient found to have an incidental right adrenal nodule on a conventional CT; it measured about 50 Hounsfield units, so it would have been indeterminate and would have needed some additional study.
We retrospectively ran this through the AI network and generated a virtual non-contrast image, which showed that this lesion measured less than 10 Hounsfield units, suggestive of an adrenal adenoma. The patient did go on to get a confirmatory MRI, which showed loss of signal on out-of-phase imaging, compatible with intracellular lipid. So in summary, there's a lot of quantitative data in our CT images that we can extract with the improved tools and image processing algorithms currently in rapid development. The more streamlined, automated, and standardized they become, the more feasible they become in our busy clinical workflow. And tools that have historically required a hardware solution or a prospective acquisition, like dual-energy data, may be able to be retrospectively generated from conventional CT using AI-based solutions. Thanks very much for your attention. So we're going to talk about quantitative MRI. The goal of medical imaging, in the context of this image that's kind of lurking in the background, is to be able to diagnose, monitor, and treat disease in a non-invasive or minimally invasive manner. It seems to me that biopsies are not making that possible, and they should be considered imaging failures. So one of the questions I've been asking myself is: can we be definitive, and can we measure our uncertainty in our diagnosis? When I talk about quantitative MR in this context, I'm talking about pixelized maps of interesting physical or physiological properties. I'm not touching today on radiomics and some of the other topics that are also quantitative. Now, when we do these kinds of measurements and these maps, we have a responsibility to know our variability in measurements: test-retest, intra- and inter-scanner, across vendors, and so on. There are a number of areas we need to worry about when we do this. So who's responsible for this?
Well, to a first approximation, the people writing these papers need to worry about it, and I find a sad lack of due diligence in some of the papers I see come across my desk. The vendors, I think, also have a responsibility to try to standardize across their platforms. These are physical properties we're measuring, and there's not enough of a standardization effort on the part of the vendors. The second question that arises when you have these maps is: how do you use these numbers? If you look at these three states, blue could be a disease and the greens could be two different states of that disease, or blue could be one disease and the two greens could be two different diseases. If you use the red cutoffs, the overlap between the groups is too great to use them meaningfully. But if you use cutoffs as in the yellow, then you might be able to separate, say, blue from most of green, or the dark green from the light green, at least below that level. So you have to be very thoughtful about getting actionable cutoffs that make clinical differences. Now I'm going to give you some examples of quantitative techniques that are out there, and perhaps my favorite example is elastography, because Dick Ehman has done such a wonderful job of taking it from start to finish. In an MRE exam, you use mechanical waves to jiggle the liver and then quantify its elasticity, or mechanical properties. This can be done in a very short acquisition imaging time of 15 seconds; you measure the stiffness of the liver and then use it for clinical staging, in this case staging of hepatic fibrosis. This is the most commonly used technique, and Dick Ehman has done an incredible job of applying it to fibrosis. But before it got to that point, there was a whole lot of due diligence on repeatability and reproducibility.
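The point about choosing actionable cutoffs can be made concrete with Youden's J statistic, one common way to pick a threshold that balances sensitivity and specificity. The stiffness values below are invented toy data, not published MRE cutoffs:

```python
import numpy as np

def best_cutoff(values_disease, values_healthy):
    """Pick the threshold maximizing Youden's J = sensitivity + specificity - 1.

    Toy illustration of choosing an actionable cutoff between two groups.
    """
    candidates = np.unique(np.concatenate([values_disease, values_healthy]))
    best_j, best_t = -1.0, None
    for t in candidates:
        sens = np.mean(values_disease >= t)  # diseased correctly at/above cutoff
        spec = np.mean(values_healthy < t)   # healthy correctly below cutoff
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_t = j, float(t)
    return best_t, best_j

# Hypothetical liver-stiffness values (kPa): healthy vs. fibrotic
healthy = np.array([1.9, 2.1, 2.3, 2.4, 2.6])
fibrotic = np.array([3.4, 3.9, 4.4, 5.1, 6.0])
cutoff, j = best_cutoff(fibrotic, healthy)
print(cutoff, j)  # 3.4 1.0 (perfectly separable toy data)
```

With overlapping distributions, as in the slide's red-cutoff scenario, J drops below 1 and the cutoff choice becomes a real clinical trade-off.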
This includes taking the technology to a biomarker committee at QIBA and getting it properly aligned as a biomarker. He has also done an incredible amount of work to standardize these measurements across the MRI industry, and all three major manufacturers have it on their platforms. The standardization means that you can use it anywhere. That's a massive contribution to the field, in my opinion. He has taken it one step farther and worked to get a CPT code so we can get reimbursed for the work we do. So really, from soup to nuts, this technique has gone all the way to being reimbursable, and I think this is a kind of gold standard for where we might be able to take some of this work. Now, the next example is courtesy of Scott Reeder, another person whose work I respect a lot. This is on fat, water, and iron quantification. One of the things you worry about is the progression from normal liver all the way to cirrhosis, or the dreaded complication of a fatty liver, which is hepatocellular carcinoma; you need to be able to image various stages of progression from normal to non-alcoholic fatty liver disease to non-alcoholic steatohepatitis and then to fibrosis and cancer. Now, biopsy is really problematic in this situation because you can have geographic differences in fibrosis. This is a biopsy in the same patient from two different parts of the liver, and you see that you can have stage one and stage four fibrosis in the same person. So there's a great degree of sampling error. Not only is it invasive, but it is not really the best test; imaging can be superior to biopsy. Now, Scott has taken a multi-echo chemical shift imaging approach, and he uses this to get both the PDFF map, which is proton density fat fraction, and the R2* map, R2* being one over T2*, which can be used for iron quantification.
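The two quantities this approach yields can be sketched numerically: PDFF is fat signal over total signal, and R2* = 1/T2* can be estimated from a monoexponential fit of multi-echo magnitudes. This is a deliberately simplified, noiseless, confounder-free illustration; the real methods correct for the confounders mentioned above.

```python
import numpy as np

def pdff(fat_signal, water_signal):
    """Proton density fat fraction: fat / (fat + water)."""
    return fat_signal / (fat_signal + water_signal)

def r2star_per_ms(echo_times_ms, magnitudes):
    """R2* (= 1/T2*) from a monoexponential model S(TE) = S0 * exp(-R2* * TE),
    fit by linear regression on log(S). Units 1/ms; multiply by 1000 for s^-1."""
    slope, _ = np.polyfit(np.asarray(echo_times_ms, float),
                          np.log(magnitudes), 1)
    return -slope

te = [1.0, 2.0, 3.0, 4.0, 5.0]                  # echo times in ms
true_r2s = 0.05                                 # 1/ms, i.e. T2* = 20 ms
sig = 100.0 * np.exp(-true_r2s * np.array(te))  # noiseless toy decay
print(round(r2star_per_ms(te, sig) * 1000))     # 50 (s^-1)
print(pdff(18.0, 72.0))                         # 0.2, i.e. 20% fat fraction
```

Elevated R2* (shorter T2*) tracks with iron, which is why the same multi-echo acquisition serves both fat and iron quantification.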
So this is two birds with one stone, in effect. A lot of work has gone into correcting for confounders in PDFF and T2* mapping, and correcting for these confounders allows for more accuracy and more repeatability. Another piece of what he has done is to look at multiple manufacturers and make sure the work is reproducible across platforms, exactly like what Dick Ehman has done with elastography. A lot of work has gone into even doing meta-analyses, putting all of these data together across thousands of patients, and this allows for better validation and better use out in clinical practice. Here's an example: a 41-year-old with diabetes and dyslipidemia, where you see the heterogeneity in the liver and then the improvement in the fat fraction after treatment. Here again, monitoring the fat fraction during weight loss from bariatric surgery. Taking it one step farther, you can use R2* mapping as a biomarker for iron quantification, and here, monitoring treatment for iron overload. So a complete body of work that I have really come to respect a lot. Now I'm going to share with you our work on fingerprinting, which is earlier in the march of science; it's not quite as far along. In fingerprinting, we collect raw images that you would never tolerate in a clinical setting, and these raw images are used, with a dictionary-based approach, to quantify maps that look quite nice. Those can then be used to further characterize tissue. In our case, we're using it for T1 and T2 mapping, but really any property of interest is possible. A philosophical decision here is that we're after numerical tissue characterization: we do not care about the raw images, we are going right at the quantification of numbers. Down the road, you could think about specialized contrasts or calculating contrast-weighted images, but that is up to you.
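The dictionary-matching step at the heart of fingerprinting can be sketched as a normalized inner-product search: each dictionary entry is a simulated signal evolution for a candidate (T1, T2) pair, and the measured evolution is matched to the closest entry. The signal model and parameter values below are invented toys, not a real Bloch simulation:

```python
import numpy as np

def match_fingerprint(signal, dictionary, params):
    """Return the (T1, T2) whose simulated signal evolution has the highest
    normalized inner product with the measured signal evolution."""
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    s = signal / np.linalg.norm(signal)
    return params[int(np.argmax(d @ s))]

# Toy "dictionary": decaying-oscillation evolutions for a few (T1, T2) pairs.
t = np.linspace(0, 1, 64)
params = [(800, 40), (1000, 80), (1400, 160)]  # hypothetical (T1 ms, T2 ms)
dictionary = np.array([np.exp(-t * 1000 / t2) * np.cos(t * 1000 / t1)
                       for t1, t2 in params])

# A noisy "measured" evolution generated from the middle entry:
measured = dictionary[1] + 0.01 * np.random.default_rng(1).normal(size=t.size)
print(match_fingerprint(measured, dictionary, params))  # (1000, 80)
```

Real dictionaries contain many thousands of entries simulated from the actual pulse sequence, but the matching principle is this same maximum-correlation lookup.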
I'm going to talk about prostate as an exemplar tissue, because we've done a lot of work across a whole lot of tissues. We've gone to great lengths to use phantoms and human volunteers to get at the repeatability and reproducibility of our numbers. We've been using the NIST phantom, which I think is the gold standard because it is made by the National Institute of Standards and Technology, and it allows us to really standardize our numbers across sites. In the case of prostate, I'm not going to go into the detail of this work, but we have worked on multiple scanners in three cities on two continents, on two platforms, and it is still a single-vendor effort, although we're thinking about how to do this across multiple vendors. And this may be the pathway: I actually discovered this paper while preparing for this talk, on reproducibility at 1.5 and 3T for prostate on a GE platform, so I might reach out to those people and talk to them about it. In this work on prostate, we have looked at a multidimensional space, T1, T2, and ADC together, because we don't feel that T1, T2, or ADC alone is enough to really give us the separation we're after. These are the initial results, and we've gone on to validate our work using targeted biopsy. What we find is that in the peripheral zone, we're able not only to discern cancer from not cancer, but also, with very high AUC, to discern intermediate- and high-grade cancer versus low-grade cancer, a clinically important decision. And in the right population, you might even be able to consider using these cutoffs to separate out some people who may not need a biopsy; that's down the road, and it's a tantalizing possibility. In the transition zone, we have T1, T2, and ADC, and we find, interestingly, that T1 and ADC are more accurate and more useful than T2, which is an unexpected finding.
And again, using smart cutoffs based on what sensitivity and specificity one might be after, we start to be able to separate tissues of interest, i.e., cancers from not cancers, which is a big dilemma in transition zone tumors. And there's another tantalizing possibility of thinking about whether we could separate cancers from not cancers in PI-RADS 3 lesions; this is work that we intend to do. We also have ongoing work on treatment monitoring in bone mets, which I won't get into at this time. There are several major outstanding issues, and we're working on these; this is by no means complete work. A big question for us is also: can we improve dimensionality in quantitative imaging? And this is beyond prostate, using MR properties and non-MR properties together to get at better quantitative results. There are many open needs in quantitative MR, but the biggest one is: please, please, please publish your numbers, and do the phantom work and normative studies that really constitute the due diligence we need. With this, I thank you, and I want to acknowledge my colleagues who provided slides and have collaborated with me for a lifetime, particularly Mark Griswold and Nicole Seiberlich. Nicole, being also my wife, has collaborated with me in many different ways. So with that, I thank you and I stop. Good afternoon. Thank you for the kind invitation. In the next few minutes, I shall discuss with you the PET-MR imaging protocol that we use for abdominal indications at our institution. As a case in point, I shall discuss the salient utility of PET-MR in the context of pancreatic cancer, which is the most common indication for PET-MR at our institution and is also rapidly emerging as an indication at other institutions that have access to PET-MR. The PET-MR protocol that we use has two salient components. The first is what we call the survey PET-MR.
Now, this is analogous to the whole-body PET-CT, where we acquire multi-bed PET extending from the vertex to the thighs while co-acquiring axial T1 MR attenuation correction and anatomic colocalization sequences. However, the main difference compared to PET-CT is that in PET-CT there is sequential acquisition of the CT followed by the PET, unlike in PET-MR, where the PET and MR components are co-acquired simultaneously. This is immediately followed by what we call the focused PET-MR of the abdomen. This includes a 15-minute respiratory-compensated single-bed liver PET-MR with co-acquisition of the diagnostic MR sequences, including the 20-minute hepatobiliary phase images. Due to the prolonged acquisition over a single bed, the PET component has a very high signal-to-noise ratio, which is amplified by the respiratory compensation. How this translates to diagnostic performance is what we shall see in the next slides. In the context of pancreatic cancer, the two main indications where we use PET-MR are, first, assessment of response to neoadjuvant therapy, which can potentially guide a chemotherapy switch, as well as prediction of pathologic response to decide the optimal timing for surgery; and second, detection of metastasis, particularly subtle liver lesions, because, as you know, in the context of pancreatic cancer these would often render a patient palliative. They would also be responsible for early post-operative recurrence or relapse if they are not detected a priori. Some of you may wonder: then why not do a PET-CT? Why go for a PET-MR? Well, compared to conventional PET-CT scanners, the PET-MR is equipped with silicon photomultiplier detectors, which, combined with the scanner geometry, a 25-centimeter axial field of view, give it three times higher sensitivity than conventional PET detectors for those photons.
This is particularly relevant in the context of pancreatic cancer, which is often not as FDG-avid as other soft tissue neoplasms. And due to the co-localization of PET and MR data, there is an exceptional ability to pick up tiny lesions in the liver and the peritoneum. Finally, we know that MR is the reference standard when it comes to evaluation of the liver, and therefore an imaging protocol that includes PET, diffusion-weighted, and hepatobiliary phase images together provides almost a one-stop-shop imaging solution for these patients. Let's see the utility based upon some case examples. On the top row, we have the pre-neoadjuvant therapy images in a patient with pancreatic cancer, and we can see that the index lesion in the pancreas was intensely FDG-avid. Subsequent to the neoadjuvant therapy, the patient had what we call complete metabolic response, which is defined as complete resolution of FDG uptake in the index lesion compared with the subadjacent pancreatic parenchyma, and lower than the hepatic background FDG uptake. In comparison, there was only partial response on the contrast-enhanced MR component of the PET-MR, as well as on the multi-phase CT scan, as you can see from these images. Based upon the PET-MR findings, the patient went on to surgery and was found to have complete pathologic response on pathologic evaluation. Now, major pathologic response in the context of pancreatic cancer is one of the key goals of neoadjuvant therapy, because it is one of the most robust prognostic factors in patients with locally advanced and borderline resectable pancreatic cancer. We recently published the results of our experience using PET-MR for patients with locally advanced pancreatic cancer in AJR, which also received the Editor's Choice Award, and we were invited to present at the AJR Live webinar series.
In case some of you wish to know more details about the study, I welcome you to go and look at that webinar. The salient findings of our study are summarized in this slide. What we found is that patients with major pathologic response had significantly longer overall survival compared to those with limited pathologic response, which validates major pathologic response as one of the key prognostic factors in these patients. In addition, in our cohort, PET-MR biomarkers such as complete metabolic response and the relative change in SUVmax and SUVglu (which is simply SUVmax normalized to the blood glucose) had high negative predictive value for the prediction of major pathologic response. They also had excellent inter-reader agreement, which, as we know, is very critical for clinical translation. In contrast, CA 19-9, the most commonly used biomarker in these patients, was not associated with response or with overall survival. These data formed the basis of a recent R01 award that we received from the NCI, along with our collaborators from UCSF, Drs. Zhen Jane Wang and Eric Collisson. So we are in the process of prospectively evaluating these biomarkers, along with KRAS ctDNA liquid biopsy, in patients with pancreatic cancer, to define the thresholds that could be used in clinical practice. Most of these patients who undergo PET-MR at our institution also undergo triple-phase pancreatic CT for evaluation of the locoregional vessels, as well as for surgical planning. In one such patient, what we see on the CT scan is an ill-defined hypodensity, along with hyperperfusion in the subadjacent liver. For whatever reason, this was interpreted to be of vascular etiology when the CT scan was read in the practice. Two weeks later, the patient had the diagnostic PET-MR, and as we can see here, the lesion in the liver had intense FDG uptake.
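The SUV-based biomarkers mentioned above reduce to simple formulas. A hedged sketch: SUV is tissue activity concentration normalized by injected dose per body weight, and one common glucose-corrected convention scales SUVmax by blood glucose relative to 100 mg/dL (conventions vary; the reference level and all input values here are assumptions for illustration).

```python
def suv(tissue_kbq_per_ml, injected_mbq, weight_kg):
    """Standardized uptake value: tissue activity concentration divided by
    injected activity per unit body weight (1 g of tissue ~ 1 mL assumed)."""
    return tissue_kbq_per_ml / (injected_mbq * 1000.0 / (weight_kg * 1000.0))

def suv_glucose_corrected(suv_max, glucose_mg_dl, reference_mg_dl=100.0):
    """SUVglu: SUVmax normalized to blood glucose (100 mg/dL reference
    assumed here; published conventions differ)."""
    return suv_max * glucose_mg_dl / reference_mg_dl

# Invented example values:
suv_max = suv(tissue_kbq_per_ml=8.0, injected_mbq=400.0, weight_kg=70.0)
print(round(suv_max, 2))                                # 1.4
print(suv_glucose_corrected(5.0, glucose_mg_dl=150.0))  # 7.5
```

The relative change in SUVmax or SUVglu between the pre- and post-neoadjuvant scans is then just (post - pre) / pre.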
In addition, what was even more surprising is that the patient had another intensely FDG-avid lesion in the right hepatic dome, as well as one in a subcapsular location, both of which were actually present on the CT scan when we looked back but were almost occult on that scan. This patient went on to have biopsy confirmation of these metastatic lesions, which shows that this is one of those patients who, had they undergone surgery, would have ended up with early post-operative recurrence. And this is exactly the reason why our referring providers are keen on getting every patient on the PET-MR scan. We discussed earlier the second component of our imaging protocol, which is the focused liver PET-MR. Let's see how that helps us in our clinical practice. This was a patient who had a focal enhancing lesion in the right hepatic dome on the diagnostic MR component of the PET-MR. When we looked at the whole-body survey component of the PET-MR, there was no focal FDG uptake to correlate with that lesion. However, there was indeed focal FDG uptake that correlated with the lesion on the focused liver PET-MR, and it went on to have biopsy confirmation of metastasis. This shows the value of the significantly increased signal-to-noise ratio that we get with the focused liver PET-MR. The next one was a very unusual case: an unfortunate young gentleman who had a variant of pancreatic cancer called pancreatoblastoma. As you can see on morphologic imaging, this looked exactly like any other pancreatic cancer. In addition, the patient also had a lesion in the right hepatic lobe, which, based upon its MR imaging features, was clearly called a hemangioma at the outside institution. And when we reinterpreted the images here at Mayo Clinic, we agreed with them and called it the same. The patient was ready for surgery.
However, prior to surgery, our referring providers, as they like to do, ordered a PET-MR for this patient. As we can see, the index lesion in the pancreas was intensely FDG-avid. But lo and behold, that lesion in the right hepatic lobe also had focal intense FDG uptake, which on biopsy was confirmed to be metastasis rather than a hemangioma. And therefore, in this patient, the imaging completely changed the management, because this patient was no longer a potential surgical candidate. One of the other indications where we sometimes do PET-MR is for evaluation of potential post-surgical recurrence. This was a 55-year-old male patient who had undergone a Whipple procedure and adjuvant chemotherapy for pancreatic cancer. During the postoperative period, about a year later, he was found on a CT scan to have a focal hypodensity at the surgical resection margin, which, in combination with an elevated CA 19-9, was clearly deemed to be recurrence. The patient was referred to our institution for consideration of salvage surgery in view of his favorable performance status and young age. Actually, before he was referred, I must mention that he also underwent a PET-CT at the outside institution, which showed focal uptake in that local recurrence and no other site of metastasis, which was another reason the referring providers wanted to consider potential salvage surgery. When the patient came here, our providers asked for a PET-MR, and the PET-MR confirmed the findings seen on the PET-CT, with focal uptake in the resection bed. But in addition, on the PET-MR there was also focal intense FDG uptake in the right hepatic lobe, with corresponding findings suggestive of metastasis on the simultaneously acquired MR. When we looked back at the PET-CT, we could not find any lesion to correlate with that lesion on the PET-MR.
And we also looked at the CT that was performed around the time of the PET-CT. These are the liver window images, and these are the portal venous phase images. Again, we had a hard time finding a correlate for that lesion. Nevertheless, this patient went on to have biopsy, which confirmed that the lesion was actually metastasis. And again, in view of that, the patient was no longer deemed to be a surgical candidate. However, it's not always straightforward. This was a 66-year-old female with newly diagnosed pancreatic cancer. And on the PETMR, she had a very tiny lesion in the right hepatic lobe, which had all the imaging findings suggestive of metastasis. This patient underwent laparoscopic wedge resection, which was negative for metastasis and actually showed granulomatous and chronic inflammation. As luck would have it, the same patient presented with another liver lesion during the course of her neoadjuvant therapy. This lesion was even more suggestive of metastasis based upon the findings on the FDG component, as well as the imaging features on the simultaneously acquired MR component. Again, however, the biopsy in this case showed that this was actually an abscess due to Klebsiella. So this is one of the common confounding factors: metastasis versus cholangitic abscess is a tough distinction, not only on PETMR, but on any other imaging modality. And symptoms, as you know, are often not helpful, because these patients tend to have low-grade cholangitic inflammation ever since they've had pancreatic cancer. One thing to remember is that despite the appealing features of PETMR, not all patients in our practice can undergo PETMR. One of the main reasons for that is that the PETMR is one of the more claustrophobic scanners.
Compared to a standalone 3-Tesla MR, the bore size of the PETMR is only 60 centimeters due to the PET detectors that are present in the MR gantry. Secondly, due to the hour-long imaging protocol, patients need to be able to sustain this imaging protocol, and some of them may not due to their comorbidities. And finally, somebody with standard contraindications for MR, such as an MR-incompatible pacemaker, would also have a contraindication for PETMR. I'd also like to take this opportunity to discuss with you some of the ground rules that we have tried to follow since we started our clinical PETMR practice almost five or six years ago. One of the items that we've particularly focused on is that we keep our imaging protocol restricted to 60 minutes. And this is done to balance patient comfort, scanner throughput, as well as diagnostic accuracy. And the second important thing is that we've gone only after those indications where both the MR and the PET components have well-defined and synergistic roles. For instance, we did not go for HCC, where the PET component often does not have a good role. We did not go for something like RCC, where PET, again, doesn't have a good role. Instead, another common indication for PETMR in our clinical practice is patients with neuroendocrine tumors. And in these patients, instead of FDG, as you know, we use gallium DOTATATE. And this is the imaging protocol for these patients. And as you can see over here, it's quite similar to the FDG PETMR, except that we have gallium DOTATATE as the radiotracer instead of FDG. To be able to fit both the PET and the MR into 60 minutes, one of the critical components is the ability to trim down the MR components. And this is essential because we have to take into account the complementary information that is provided by the simultaneously co-acquired PET data. Now, this is easier said than done, especially in a practice like ours, where we have a large MR radiology group.
And as we all tend to do, we sometimes get emotionally attached to some of the sequences. And therefore, this required us to have some tough conversations in terms of which sequences could be left out, taking into account the complementary data that was provided by the PET. In summary, what we have seen is that PETMR has the potential to meet real clinical needs in oncology, particularly in the context of pancreatic cancer. In our practice, we use PETMR for baseline staging of these patients, particularly to look for occult metastasis, as well as to evaluate response during neoadjuvant therapy, to see which patients would require a chemotherapy switch in case they're not responding to the first-line chemotherapy. We've seen how prediction of major pathologic response prior to surgery is critical for optimizing the decision for surgery as well as the timing of it. And we've also seen how inconclusive findings on other modalities like CT can be very well evaluated on PETMR. Some of the confounding factors include cholangitic abscess, which is sometimes a tough distinction from metastasis. And finally, with the expected emergence of novel radiotracers such as Gallium-68 FAPI, we expect that PETMR will continue to have a larger role in our abdominal imaging practice. With that, I thank you for your attention. Have a good day. So I'm going to be focusing on body composition with AI. Andrew Smith will follow me with an oncologic focus. In terms of opportunistic screening, we all know that CT scans contain a lot of data. This has unfortunately led to concerns, and appropriately so, for incidentalomas, and these do need to be managed and handled responsibly. But beyond that, I think we can also ask ourselves: are there additional findings that we can use in every patient for potential benefit? And that's the glass-half-full approach that I've taken with this. And part of this is because CT is so robust in terms of objective measures.
There are so many CT scans done each year throughout much of the developed world, and so there's a lot of opportunity here to take advantage of this. I'm going to focus mainly on what I would call cardiometabolic body composition measures that can address a lot of these indications that we're seeing here in terms of osteoporosis, cardiovascular disease, sarcopenia, metabolic syndrome, fatty liver, and so forth. And there's a lot of overlap here, of course. I'll start with just an example. This is a 52-year-old man who had a normal CT colonography, a normal virtual colonoscopy, way back in 2005. Presumably if he had had a colonoscopy, he would have had the same result. It was a nice study. But as we all know, there's more to a CTC study than just the colon. We see, obviously, the abdominal and pelvic contents, and as we scroll through this case, I think you can maybe get a sense that there are some other things brewing that may be pertinent to this gentleman's health and well-being. We'll get back to that case in a moment. In terms of what tissue biomarkers we're talking about, all you have to do if you're here at RSNA is walk around the exhibit hall and see all the different vendors providing body composition measures. Obviously, these are readily available; some are more automated than others, perhaps. With work done with Ron Summers, as Meg mentioned, at the NIH, we've put these together into a single toolkit, focusing on what I would call the big five: fat, with both subcutaneous and visceral fat, shown in blue here; muscle measures, where I think myosteatosis or density tends to perform better than muscle area or myopenia measures; vascular calcification; liver fat; and, of course, bone mineral density. All of these things can be combined together, and this is how we're packaging it now and viewing it.
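To make the "big five" idea concrete, here is a minimal sketch of how tissue compartments can be separated on CT by attenuation alone. The Hounsfield-unit windows below are commonly cited approximate ranges, not values given in this talk, so treat them as illustrative assumptions:

```python
# Sketch: crude tissue labeling of abdominal CT voxels by Hounsfield
# units (HU). HU windows are commonly cited approximations (assumed
# here, not stated in the talk); real toolkits use segmentation models
# on top of attenuation, not thresholds alone.

def classify_voxel(hu: float) -> str:
    """Return a rough tissue label for a single CT voxel."""
    if -190 <= hu <= -30:
        return "fat"            # subcutaneous or visceral adipose tissue
    if -29 <= hu <= 150:
        return "soft tissue"    # includes skeletal muscle
    if hu > 150:
        return "bone/calcium"   # trabecular bone, vascular calcification
    return "other"              # air, lung, artifact

def mean_attenuation(voxels: list[float]) -> float:
    """Mean HU of a region; muscle HU falls with myosteatosis."""
    return sum(voxels) / len(voxels)

print(classify_voxel(-100))  # fat
print(classify_voxel(45))    # soft tissue
```

The same mean-attenuation idea underlies the muscle density and liver fat measures discussed below: fattier tissue pulls the mean HU down.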
These are all different patients at the L1 level, and you can see that with a single slice, just for QA purposes, you can verify that the segmentation was done appropriately. This is work we're now doing with John Garrett at the UW, one of our informatics faculty. Going all the way back to the virtual colonoscopy days of extracolonic findings, we've been focusing on opportunistic screening for many years, and I won't have time to go into all of that, but we summarized a lot of it in this RadioGraphics article that was published earlier this year. Now I'd like to focus on a couple of papers that looked at actual outcomes, because this is where it gets interesting and potentially valuable, looking at cardiovascular events, osteoporotic fractures, and overall survival; it's the same cohort for both of those papers. If we just stand back and look at those who died or didn't in the follow-up interval after their CT, which was, on average, about a decade or so, there are obvious differences in things like aortic calcification, looking at Agatston score; muscle density in Hounsfield units; visceral to subcutaneous fat ratio; a small difference in liver steatosis, although it is statistically significant that a fattier liver is less healthy; and bone mineral density as well. So those are pretty obvious, but these are mean values that really don't tell the whole story. So if we look at ROC-type performance, and I'll focus here on the five-year area under the curve, the Framingham risk score does fairly well, you can see essentially a 0.69 area under the curve, whereas BMI had almost no predictive value, partly because there's sort of a U-shaped response curve with BMI, where both underweight and overweight can counteract.
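As an aside on the metric itself: the area under the ROC curve quoted throughout this section (e.g. the 0.69 for Framingham) can be read as the probability that a randomly chosen case outscores a randomly chosen control, which gives a direct way to compute it. A minimal sketch with made-up scores, not study data:

```python
# Sketch: AUC computed as the Mann-Whitney probability that a case
# (someone who had the event) scores higher than a control.
# All scores below are invented for illustration only.

def auc(case_scores, control_scores):
    """Probability a random case outscores a random control (ties = 0.5)."""
    wins = 0.0
    for c in case_scores:
        for k in control_scores:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))

# e.g. hypothetical aortic calcium scores in those who died vs survived
died     = [900, 1500, 400, 120]
survived = [0, 150, 80, 300, 20]
print(auc(died, survived))  # 0.9
```

An AUC of 0.5 means no discrimination (the U-shaped BMI problem), while 1.0 means perfect separation.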
However, if we look at some of these automated, and these are fully automated, single-CT-based measures, things like aortic calcium and also muscle attenuation, by themselves, actually beat the Framingham risk score, just as a single automated hands-off tool, so I think that's very promising. And if we combine these, of course, here combining aortic calcium with muscle and liver, we can get even higher values. There's a lot of signal even from visceral to subcutaneous fat, liver steatosis, and bone mineral density as well; there's some overlap, but there's also a lot of complementary interaction here. And if we look at Kaplan-Meier-type plots, these are time-to-event plots by quartile, you can see that Framingham risk does separate things out, but the automated aortic calcium measure does a better job of separating out those at greatest risk, as does, I think, muscle attenuation. Even for bone mineral density, the worst quartile does poorly, whereas the others are clustered together. And down here in the lower left, if you look at BMI versus visceral to subcutaneous fat ratio, you can see that there is hidden value when you start looking at where the fat is distributed. BMI just clouds a lot of that data, so it's a very blunt instrument, but there's a lot more refinement we can do with these CT-based measures. And here's liver steatosis too; there's some separation between the quartiles. So back to this gentleman who had the negative study, the healthy exam; he's an outpatient. As we can see, he has moderate to severe steatosis, he actually has a fair amount of visceral fat, just visually, compared to the amount of subcutaneous fat, and he has a lot of vascular calcification in that aorta, which is normal in caliber. But how does this turn out?
If we look at these automated body composition measures, he's really off the charts with some of them: he was in the 97th percentile or worse for visceral to subcutaneous fat ratio, for liver steatosis, and for aortic vascular calcification. And if you put all that together, even though his Framingham risk score put him at a very low 10-year risk for a cardiac event, and his BMI was just a little bit above normal, which actually probably is normal for us, but slightly overweight, if you look at just these three measures combined, and I won't go into the details on the modeling, it would suggest that the risk of death is much higher, even out to almost 40% at 10 years, with a much higher risk of a cardiovascular event than the low Framingham risk score would have suggested. So what actually happened? Three years after that normal CT colonography study, he had his first heart attack; five years later, he had a stroke. In 2017, he had a negative CT for minor trauma, I think he fell from standing, and it was a normal study, except that you can see his progressive aortic calcification, he has even more bulging visceral fat here, and his liver is very fatty, even on this post-contrast study. So clearly all these things were progressing, and he actually died later that year, essentially of metabolic syndrome-related causes. The same can be done with predicting osteoporotic fractures, and I'll just note that the FRAX assessment tool, which is another multivariate measure like the Framingham risk score, requires you to enter 12 different variables into an online calculator, done one patient at a time, and it's very onerous. It does fairly well for predicting those who are at risk, but so does a single region of interest at the L1 trabecular level; muscle does quite well too, and if you put those two together, you can basically match FRAX, either by themselves or in tandem.
And then here are those same time-to-event plots, just to give you the same idea. Again, BMI doesn't do very well. As a case example, this is another patient, a 59-year-old woman who had a negative, normal CT colonography study in 2017. Her FRAX score was well below the treatment threshold of 20%, so pretty normal, and her hip-specific fracture risk was well below the 3% treatment threshold. But if you look at her bone mineral density in terms of the spine, this is something we do clinically on the fly, but this automated tool is giving us a very low measure. Certainly anything under 100 is concerning; under 90, the fracture risk really goes up, so 63 is quite low. And having a negative muscle attenuation gives her a very poor sarcopenia measure in terms of myosteatosis. So what happened? Well, we didn't have time to intervene, even if we had commented on this in a clinical sense. Three months later, she had this catastrophic hip fracture. And then, of course, at that point, they do the other side with the DEXA, and lo and behold, she has osteoporosis, but it's a little too late for that after the fracture has occurred. I think even more interestingly in this case, this was her CTC study in 2017, but she had multiple prior non-contrast studies for urolithiasis going back many years. And if we apply that automated BMD tool, you can see she starts to fall off the curve somewhere between 2007 and 2009, and certainly by 2014 she's at this critical osteoporotic threshold, with plenty of time to intervene, perhaps with treatment, to try to prevent that fracture. Of course, this wasn't done, but going forward, maybe we can learn from these sorts of missed opportunities in the past.
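A hedged sketch of how an opportunistic BMD flag might encode the rough cut-offs just mentioned (under 100 HU concerning, under 90 HU a marked jump in fracture risk). These thresholds are quoted from the talk as approximations, and this toy function is illustrative, not a validated clinical rule:

```python
# Sketch: flagging an automated L1 trabecular attenuation measurement
# using the rough HU cut-offs mentioned in the talk. Illustrative only;
# not a validated diagnostic threshold scheme.

def bmd_flag(l1_trabecular_hu: float) -> str:
    """Categorize an L1 trabecular ROI attenuation in HU."""
    if l1_trabecular_hu < 90:
        return "high fracture risk"
    if l1_trabecular_hu < 100:
        return "concerning"
    return "not flagged"

print(bmd_flag(63))   # high fracture risk -- the case in the talk
print(bmd_flag(95))   # concerning
print(bmd_flag(160))  # not flagged
```

Run serially over a patient's prior scans, a flag like this is exactly what would have caught this woman's decline years before the fracture.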
And interestingly, when you look at muscle attenuation in terms of hip fractures specifically, and this is something we're looking at more closely in an enriched hip fracture population, muscle attenuation alone actually performed better in terms of hazard ratio and area under the curve than the multivariate FRAX for predicting hip fractures specifically, and I'm very interested to see how well this can do in a larger cohort. So, future directions: we're now measuring all of our UW cohort, not just that small asymptomatic outpatient cohort. We're expanding to a multi-center study to look at more racial, ethnic, and geographic diversity, to try to figure out what normal is for all of these body composition measures. Everything I'm saying really could apply to the chest for the most part, but we're focusing on the abdomen to start with this opportunistic screening consortium trial that we're calling OSCQR, and I've met many of the participants this week here at RSNA. Beyond that, I want this in practice. I want to be able to use this clinically in real time, with a dashboard readout: by the time I pull up the CT, all of these measures could be there in the future. And then taking it a step further, this is all opportunistic, where the CT was done for some clinical reason, but if we can prove the value of these measures, like we're suggesting, maybe it's reasonable to have a patient come in at age 50 or 60 for an intended CT screening exam, maybe in conjunction with some of these other cancer screenings. So in summary, abdominal CT is performed for many indications, as we know. It's a very robust test with a lot of reproducibility. These are all explainable or understandable measures that I'm showing you. They could be done manually, but now that we can turn to automated tools, it allows for scaling up to the population level.
And the performance is really quite exciting in that it exceeds many of these clinical predictors that are the current clinical standard. And in the end, we're adding value to what we're already doing, which is critical. So don't view your CT scan as the hammer to nail the diagnosis, but more as a full toolkit to take care of the entire patient. So with that, I thank you. I'm Andrew Smith, Professor of Radiology, Chief of Abdominal CT and Co-Director of AI at the University of Alabama at Birmingham. And I'll be discussing artificial intelligence in advanced GI cancers. For this lecture, I will discuss current practice for longitudinal evaluation of advanced GI cancer, strengths of AI, augmented intelligence for longitudinal evaluation of advanced GI cancer, and augmented intelligence for LI-RADS. Let's quickly review current practice for longitudinal evaluation of advanced GI cancer. Patients with advanced GI cancer often undergo repeated CT imaging to evaluate response to therapy. Different radiologists evaluate images and dictate findings into separate text-based reports that are read by the oncologists and patients. Multiple steps, including measurements, are performed manually. All reporting is by dictation using voice recognition software and located on a different computer screen. This is prone to errors, inefficient, provides no longitudinal data, and lacks standardization. It can be difficult to locate target lesions on follow-up exams. The final report is text-based, and this is not an effective form of communication. An augmented intelligence CT and MRI viewer could overcome these deficiencies. The AI algorithms can be combined with standardized guided workflows, with direct extraction of data from the images, to automatically generate advanced patient-level reports that may include a graph or other forms of communication. Now let me review the strengths of AI, particularly for advanced GI cancer. AI has multiple strengths.
AI can be used to segment and quantify various structures, including tumors. AI is strong in image classification; this example is classifying the anatomic location of this liver tumor. AI is good at detection of liver masses, free intraperitoneal air, and more. AI can be used to improve image reconstruction, by either improving image quality or the speed of acquisition if we're talking about MRI. NLP can be applied to our radiology reports to improve reporting and adherence to best practice guidelines. AI is also good at simplifying complex data sets or big data; AI often finds trends that routine methods of data analysis cannot. This is also true of radiomics, which refers to extraction of imaging features that are used to predict an outcome or perform a categorization or classification. The types of AI algorithms above the dotted line can be considered transparent or glass box AI, while those below the line are commonly opaque or black box AI. Transparent or glass box AI refers to simple AI algorithms that provide outputs that are easy to verify at the time of use. These are what we are currently seeing in clinical practice. An example of a transparent AI algorithm is shown here. What we have are CT images of a tumor in the liver. The CT image and a user-guided click on the tumor are the inputs. The output from the AI algorithm is the tumor segmentation. We don't necessarily know exactly how the AI is doing this segmentation, but the end result is certainly easy to verify in real time. Conversely, opaque or black box AI refers to AI algorithms that provide outputs that are not visible or verifiable at the time of use. The input for this AI is the patient data from the EMR and the radiomics analysis of the liver tumor. This AI algorithm provides an output that this patient is a responder to some therapy, but there's no way to verify this prediction in real time, which is why this is an opaque AI algorithm.
When we look at FDA-cleared AI algorithms in the abdomen, almost all are transparent, meaning they're simple to use and understand. About half of the AI algorithms in clinical use for advanced GI cancer are for quantification, and the other half are for detection. Now let's discuss augmented intelligence for longitudinal evaluation of advanced GI cancer. Augmented intelligence refers to the use of AI algorithms to improve human performance. In general, the hope is that an augmented intelligence platform will increase accuracy, decrease major errors, decrease interpretation and reporting time, and increase standardization or inter-observer agreement. To do this, tasks that are currently manual will need to be more automated. To be successful, an augmented intelligence system will need to be integrated into the normal radiology workflow. An augmented intelligence viewer can leverage a variety of AI algorithms. In this example, there are AI algorithms for tumor segmentation and quantification, anatomic labeling, and longitudinal tracking of image findings over time. The AI algorithms are combined with guided workflows, which are standardized and efficient means of completing image analysis for a particular use case. Examples of use cases could include advanced cancer, LI-RADS diagnosis, LI-RADS treatment response, tumor staging, and more. A guided workflow refers to a user interface that walks the user through critical steps. This is designed to adhere to best practice standards, reduce errors and omissions, and improve accuracy, efficiency, and standardization. Finally, advanced reports with a graph, table, key images, and structured text can be automatically generated with no loss of quantitative data and limited dictation. The user keeps their eyes on the images the entire time, including when reviewing prior analyses. Now let's look at how this could be integrated into a baseline evaluation for a patient with metastatic colon cancer.
The Augmented Intelligence Viewer has workflows for longitudinal advanced cancer response evaluation that allow the radiologist to measure target lesions, track response of non-target lesions, and track other findings. With a single click, the AI measures this target lesion. A second AI algorithm determines the tumor location, to be verified by the radiologist. All of this data automatically flows into the reports, eliminating data transfer and mathematical errors. The AI-assisted tumor measurements and labeling work anywhere in the body. The measurements and labeling are easy to edit on zoomed images, and the radiologist never has to take his or her eyes off the images to check for dictation errors. The AI also assists with labeling the location of non-target lesions, and the user indicates whether they are single or multiple. Target lesion lymph nodes are segmented and labeled in a similar fashion, with automated extraction of long axis diameter for masses and short axis diameter for lymph nodes. Other findings, like the large amount of ascites, are also tracked over time. The process is even faster at follow-up. A co-pilot feature allows the radiologist to quickly cycle through all findings arranged in anatomic order from head to toe. This is particularly useful in working with a trainee or technician. The user activates co-pilot to initiate AI-assisted image co-registration, which facilitates tumor tracking. The radiologist clicks on the target lesion on the current exam on the left to measure it. There's no need to relabel, since the name is retained. Co-pilot rapidly cycles through all remaining findings. The radiologist is actively in control of this process. All data is transferred into reports, and mathematical calculations are automated. Co-pilot also moves the user through non-target lesions. The radiologist indicates whether the non-target lesions are absent, present, unequivocally progressed, or not evaluated.
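The measurement bookkeeping described above — long axis for masses, short axis for nodes, automated percent change and response category — can be sketched roughly as follows. This is a simplified RECIST 1.1-style calculation with invented lesion values; full RECIST 1.1 also compares against the nadir for progression and has additional rules for nodal complete response:

```python
# Sketch: simplified RECIST 1.1-style response bookkeeping of the kind
# the viewer automates. Lesion values are invented; full RECIST 1.1
# compares against the nadir for PD and has extra nodal rules for CR.

def sum_of_diameters(lesions):
    """lesions: list of (kind, long_axis_mm, short_axis_mm) tuples.
    Nodes contribute short axis; all other targets contribute long axis."""
    total = 0.0
    for kind, long_ax, short_ax in lesions:
        total += short_ax if kind == "node" else long_ax
    return total

def response(baseline_mm, current_mm):
    """Map percent change in the sum of diameters to a response category."""
    if current_mm == 0:
        return "CR"  # complete response (simplified)
    change = 100.0 * (current_mm - baseline_mm) / baseline_mm
    if change <= -30:
        return "PR"  # partial response
    if change >= 20 and (current_mm - baseline_mm) >= 5:
        return "PD"  # progressive disease (>=20% and >=5 mm absolute)
    return "SD"      # stable disease

baseline = sum_of_diameters([("mass", 32, 20), ("node", 25, 16)])  # 48 mm
follow   = sum_of_diameters([("mass", 20, 14), ("node", 18, 11)])  # 31 mm
print(response(baseline, follow))  # PR
```

Automating exactly this arithmetic is what eliminates the data-transfer and mathematical errors mentioned above.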
A similar process is used to track other findings, like this large ascites. The user can indicate the amount of change and optionally force additional follow-up in the workflow at the next time point. New sites of disease can also be added as possible or definite, but these are not relevant in this case. The automated reports are immediately available. The graphical report displays all relevant information, including individual measurements, total tumor burden, percent changes in tumor burden, non-target lesion response, and final response category, per RECIST 1.1 or any other chosen criteria. A complementary table, key images, and structured text are also available. It's also possible for different subspecialty radiologists to work independently from one another on the same patient and still generate a single patient-level report. Further, the information is digital and can be translated into reports in different languages. The data can also be expressed in sixth-grade language for patients with low health literacy. Collectively, the AI-guided workflows generate crystal-clear reports and improve communication. If you think about it, all of the data coming out of the Augmented Intelligence Viewer is structured. Data including patient demographics, tumor type, treatment, tumor measurements, radiomics, annotated images, and final response are all digitized for incorporation into new AI algorithms. How does an Augmented Intelligence Viewer compare to current practice? In a landmark multi-institutional trial with 24 radiologists and 20 oncologists, comparing the AI-guided workflows and automated reports to current practice methods with manual measurements and dictated text reports, the AI-guided workflows and reports increased accuracy by 25%, reduced major errors by 99%, were two times faster for radiologists, and increased inter-oncologist agreement by 58%.
In a post-study survey, 96% of radiologists preferred the AI-guided workflow, and 100% of oncologists preferred the advanced reports. This Augmented Intelligence Viewer, with AI algorithms, guided workflows, and automated reports for advanced cancer response to therapy, is overwhelmingly more effective than current practice methods and establishes a new standard of care. An augmented intelligence system for LI-RADS could utilize similar AI algorithms for tumor segmentation and quantification, anatomic labeling, and longitudinal tracking. But again, we can include additional AI algorithms. These AI algorithms could be integrated into specific workflows for LI-RADS diagnosis and LI-RADS treatment response. This process is currently in the planning phase and will soon move into development. The proposed workflows will walk the user through all key steps of LI-RADS, including assessment of underlying liver disease and complete characterization of liver observations. After the workflow is complete, the reports will be automatically generated and structured to specifically meet the needs of LI-RADS. Let's take a closer look at the LI-RADS observation report. Key components of the LI-RADS observation report will look something like what we see here. These reports will adhere to LI-RADS guidelines and include all relevant information in structured format. For each liver observation, the reports will include key images and major features, including arterial phase hyperenhancement, non-peripheral washout, enhancing capsule, and threshold growth. All 21 ancillary features can be incorporated and contribute to the final LI-RADS category. The radiologist would not have to remember all of these steps but could be guided through them. These reports are far superior to text reports; they avoid omissions and errors and improve accuracy and standardization. For this lecture, I started by talking about the strengths of AI. Then I discussed multiple deficiencies in our current practice methods.
I discussed the goals of augmented intelligence. We described combining various AI algorithms with guided workflows and automated reports. The use of augmented intelligence for longitudinal advanced cancer evaluation has been validated in a multi-institutional trial. Lastly, we briefly discussed how AI, guided workflows, and automated reports will soon be applied to LI-RADS. Thank you for the opportunity to share some ideas on AI and advanced GI cancers.
Video Summary
The transcript contains presentations on the cutting-edge use of AI and advanced imaging techniques in medical diagnostics, specifically focusing on CT and MRI systems in detecting and evaluating diseases such as pancreatic cancer, advanced GI cancer, and body composition. Dr. Meg Lubner from the University of Wisconsin discusses the immense potential of quantitative CT imaging in enhancing the value of diagnostic scans by assessing body composition and guiding clinical decisions. A significant part of the presentation is dedicated to the utility of dual energy CT and AI in deriving valuable diagnostic data retrospectively.

Dr. Richard Ehman's work on elastography in MR imaging is highlighted as a prime example of effective clinical application, guiding fibrosis staging, followed by Scott Reeder's innovations in fat, water, and iron quantification for liver assessment. The transcript further delves into the application of AI in multi-phase CT and MRI imaging, specifically in pancreatic cancer and advanced GI cancers, as presented by Dr. Andrew Smith. These AI systems aim to improve accuracy, reduce errors, and streamline diagnostic processes through automated lesion segmentation, quantification, and longitudinal tracking, ultimately resulting in standardized reports that enhance communication between radiologists and oncologists.

Finally, the discussions underscore the significance of opportunistic screening using CT for early disease indication and prognosis, emphasizing the integration of AI to analyze large datasets and predict outcomes more reliably than traditional methods. This innovative approach demonstrates potential enhancements in clinical practice efficiency and patient care outcomes by implementing advanced quantitative imaging and AI technologies.
Keywords
AI in medical diagnostics
advanced imaging techniques
CT and MRI systems
pancreatic cancer detection
quantitative CT imaging
dual energy CT
elastography in MR imaging
AI in multi-phase imaging
automated lesion segmentation
opportunistic screening
Copyright © 2025 Radiological Society of North America