Meet imaging challenges with AI and deep-learning solutions
WEB05-2024
Video Transcription
Welcome, and thank you for joining us today. I am thrilled we have the opportunity to bring together this exceptional group, who bring their experience and expertise to discuss an important topic, addressing key challenges we know radiology departments face every day. We'll be discussing the topic of meeting imaging challenges with AI and deep learning solutions. Allow me to introduce our panelists for today's discussion: Dr. Paquette from Shields Health in Massachusetts, Dr. Lyme-Gruber from SMN in Switzerland, and Greg Verburg from GE in the Brussels office, and I'll be your moderator, Melissa Burkett. Thank you all for being with us today. Before we begin, I have to pause on our disclaimers. We all know the importance of precision imaging to clinical outcomes, but staffing challenges and inefficient workflows can get in the way. We understand that within a radiology department, there's a high need to improve operational efficiency. Today, burnout and turnover rates continue to rise, with approximately one in three physicians experiencing burnout at any given time, which can pose challenges around staff and patient scheduling. Also, variation in staff experience levels with legacy imaging equipment can lead to inefficiency and reduced image quality. Every day can be demanding, but imaging doesn't have to be. Deep learning can help you meet imaging challenges of today and tomorrow. We wanted to understand if we could achieve exceptional image quality without any additional manual steps. Effortless imaging isn't an illusion; it is possible today. To support the overall success of the radiology department, it's important to integrate workflow solutions along with the software to gain the clinical insights needed across the whole department. GE Healthcare has effortless solutions across imaging to help achieve higher efficiency and high image quality that can enable you to gain the insights needed to support clinical decision making. Allow me to introduce effortless imaging. As a global leader in deep learning-enabled image acquisition, reconstruction, and processing software, GE Healthcare is helping to deliver exceptional image quality across multiple modalities, with both reconstruction and post-processing software along with workflow solutions to enhance the full exam process from pre-scan through post-scan. Today we will discuss how these solutions are coming to life across MR, MI, and CT to address some of the inefficiencies currently prevalent in imaging departments. So let's discuss how advancements in clinical workflow and image quality are bringing efficiencies across multiple modality exams, from pre-scan to during the scan to post-scan. Starting with the bigger picture of the full clinical workflow, we understand it starts prior to the scan with patient positioning, system setup of protocols, and more, and after the scan there are important steps such as image processing and reconstruction. Greg, could you start us off by sharing how automation and other advancements are streamlining the CT clinical workflow?

Yes, absolutely, Melissa, thank you very much for that question. When we examine the complete workflow for performing a CT scan, we can actually divide the process, as you can see on this slide, into three main stages. With today's technology, various AI (artificial intelligence) and automation tools can be integrated within these three stages, which I will walk you through.
We do this to really enhance efficiency and standardization across the workflow. If we can move to the next slide, please. When we zoom in on the pre-scan stage of the workflow, we try to assist the operators by using intelligent protocoling to select the appropriate protocol for each patient. Using machine learning, our system will actually learn which protocol has been used the most, or should be used in this particular case, depending on the RIS codes. Also, by implementing a 3D camera on the CT scanner, we can do accurate automatic positioning, as we know there may be flaws in positioning, which can impact the results, and the planning of the scan otherwise requires additional clicks and so on. So this really ensures that we have consistent results across all patients and operators.

If we move to the next slide, we jump into the scan phase of the workflow. During the scan phase, we implement numerous intelligent systems in the background, which you can see here on this slide. A few examples: auto-prescription, where we automatically select the kV depending on the patient's morphology; smart plan, which sets the field of view based on the scout views; and auto-ROI, which automatically places the ROI in the region of interest for contrast triggering. So again, this is to avoid mistakes and to be more consistent in our examinations. The goal here is really to reduce the operator effort and ensure standardization across all examinations, ensuring that every patient receives the same high level of care. And of course, we also apply deep learning to improve our image quality, which you will find under the section on Effortless Recon DL. At this point we have made the acquisition and created the images, but then comes the interpretation.

So if we move to the next slide, the last stage of the workflow is the post-scan phase. This is a very crucial phase for analyzing images, creating reports, and establishing a diagnosis, so high quality and standardization are essential for efficiency and diagnostic confidence. Since image manipulation is very operator-dependent, we at GE Healthcare try to automate this process as much as possible. Just as an example, take spine autoviews. Processing the spine manually is very time-consuming. By using spine autoviews, the spine is automatically reformatted: we no longer need to manually angle the spine, find the disc spaces, and label the disc spaces. Artificial intelligence immediately takes over and performs this task that takes quite some energy, and after the AI has performed it, it sends the images to the PACS immediately, so they are available for the radiologists to read and to perform the reporting. By doing this type of automation, we can save time, we can reduce operator stress, and we can improve standardization across patients. The tech immediately has time again for the next patient and can focus on patient care. Spine reformatting is just one example; as you can see, we have numerous solutions, for example for straightening head scans, stroke assessment, spectral imaging, cardiac imaging, and radiation therapy, and we will definitely continue to innovate in this domain.

Thank you, Greg, for sharing. And to further this, I'd like to welcome Dr. Lyme-Gruber, molecular imaging expert. Welcome, and thanks for joining us today.
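To make the pre-scan automation Greg describes more concrete, here is a minimal, illustrative sketch of the two ideas: protocol suggestion from historical RIS-code usage, and a simple size-based kV rule. It is not GE's implementation; every name, threshold, and value in it is a hypothetical placeholder.

```python
# Illustrative sketch only -- not GE's implementation. It mimics the two ideas
# described above: (1) suggest the protocol most often used with a given RIS code,
# and (2) pick a tube voltage (kV) from a simple patient-size rule. Every name,
# threshold, and value below is a hypothetical placeholder.
from collections import Counter
from typing import Optional

# Hypothetical history of (RIS code, protocol) pairs the system has "learned" from.
protocol_history = [
    ("CT-ABD-01", "Abdomen routine"),
    ("CT-ABD-01", "Abdomen routine"),
    ("CT-ABD-01", "Abdomen triple phase"),
    ("CT-HEAD-02", "Head non-contrast"),
]

def suggest_protocol(ris_code: str) -> Optional[str]:
    """Return the protocol most frequently paired with this RIS code, if any."""
    counts = Counter(p for code, p in protocol_history if code == ris_code)
    return counts.most_common(1)[0][0] if counts else None

def suggest_kv(water_equivalent_diameter_cm: float) -> int:
    """Toy size-based kV rule: smaller patients -> lower kV (better iodine contrast)."""
    if water_equivalent_diameter_cm < 24:
        return 80
    if water_equivalent_diameter_cm < 30:
        return 100
    return 120

print(suggest_protocol("CT-ABD-01"))  # -> "Abdomen routine"
print(suggest_kv(27.5))               # -> 100
```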
Dr. Lyme-Gruber, I have heard that you have built an amazing department in Switzerland, equipped with brand new digital technologies, both in PET and in SPECT. Can you please share with us how StarGuide and Omni Legend have helped you make your procedures and workflow more efficient, and how the new technology has improved the overall experience for technologists and patients?

Thank you very much, Melissa. I will start by showing you how we organized the department in Switzerland, in the Lake Geneva area. We are working in a private setting, in a network of 21 hospitals, and we tend to focus on small departments that provide direct care to the patients rather than large academic departments. Nevertheless, with the help of GE, we are building innovation through our innovation hub in our main clinic in Genolier, which you see here, where, if you can start the animation, Melissa, we are constructing our Swiss Theranostics Ward, which is our brand for performing theranostics. With the help of GE, we have installed an academy for the region of Austria, Germany, and Switzerland, so that visitors can come to our MI Center of Excellence, and GE is helping us build integrated radiation delivery and diagnostics in combination with radiation oncology on this site, in collaboration also with Acura, which has a training site for the rest of the world outside the USA.

If you go to the next slide, our focus in terms of effortless imaging is on high-impact factors for the patient experience. We have the luxury of relatively low volumes compared to other centers, which helps us develop complex imaging protocols, and we use effortless imaging not only to achieve a high throughput, but also to use short scan times, which allows us to perform more complex imaging, in particular in the PET domain. We're also interested in dosimetry and multimodal radiation, as you know, and we are a teaching center. If you go to the next slide, we also want to show you that an effortless imaging department can be small and still very interesting in terms of patient workflow, and can be the small brick of a bigger department. Here you see that we have a central control room with a lounge for GE visitors, and I hope that some people in the audience will come to visit us. We have built this central room across the two MI modalities, the Omni Legend, in a room that is ready for a total-body PET, and the StarGuide. It's literally within 20 meters, or a few seconds, of the parking lot in terms of the flow of patients coming in. If you continue the animation, the patients come in from one side, and we separate that flow from the flow of radiation oncology patients, who come from a dedicated elevator between the two departments, and finally we can separate it from the flow of visitors, who come from the GE Academy, which is the last arrow to show on the slide, if you can go to the next. Thank you. This is our department, and in the next part of the meeting I will discuss with you, with clinical examples, how we take advantage of effortless imaging to improve scan times for the comfort of patients, but also to do complex imaging; how we are able to scan every patient on a StarGuide machine, because it's a 3D scanner, which is not classical for a small department; and what the impact of a deep learning algorithm is on our daily practice. Thank you, Melissa.

Thank you, Dr. Lyme-Gruber.
Dr. Paquette, would you like to share your experience with clinical workflow improvements in MR, and how advancements in technology and workflow automation are benefiting your department, technologists, and possibly even your patients?

Sure, thank you. Like CT, MR workflow has really improved, from the time the patient gets on the table to the time the radiologists are reading the images. Even before the patient is slid into the magnet, the technologists are getting the patient ready and positioning the coils, and AIR Touch is what allows the coils to be positioned and combined, with elements automatically selected for the field of view. This is different from the old days, when the technologists had to pretty much guess which elements needed to be activated. Now it's automated and intelligent, which makes the coil placement less error-prone, and it has really eliminated the repeat series that used to be due to suboptimal coil selection. We just don't see that anymore. So that's the technologist getting the patient set up and ready to go, but even more exciting is AIRx, which is a GE deep learning technology that automates the process of setting up the scans from a single three-plane localizer. The technologist runs the localizer, and as soon as that localizer is done, the first MR series is prescribed and presented for the tech to verify. It's really very good, and, you know, I see that the techs typically want to tweak the prescription a little bit, and we try to get them to run it just the way it is, but, and by the way, these slides are for the second part, so you can take them off or whatever, so we try to get the techs to run the AIRx prescription just the way it is, and as soon as they start doing that, they realize that they've got a lot of other time to do many of the tasks that we're asking them to do while they're scanning. While they're scanning, they're supposed to be getting paperwork ready, checking the RIS, making sure that the previous scan got to PACS. So we did a little study to see how much time was freed up by using AIRx. Our typical knee takes maybe eight to nine minutes of scanning time, and it's in a 20-minute slot, so we timed technologists who ranged in experience to see how many seconds they took setting up these scans, which is now automated, and our super tech took about 75 seconds on the eight-to-nine-minute scan. That would be time that she could do other things with.
Our average techs were taking about a minute and a quarter to a minute and a half, and some of the newer techs were taking two and a half minutes, and that's all found time that they can use either to do the other things during that scan that they're being asked to do, or just to de-stress, which is, you know, pretty important at the pace we're going. And my final point is that GE is really aware of the workflow and the increasing pressure that's being put on the technologists by improvements in scanning technology and how fast we're going, and they've designed a new workflow experience called SIGNA One, which is already available on some models, and we're doing early clinical testing at one of our sites. Our hope is that, when we compare against the reduction in scan time, so techs have the same amount of work to do in less time, all these technologies that GE is putting together will bring the amount of tech effort necessary to achieve a great scan down in line with the reduction in scan time. And at the tail end of the chain, the reading: for the radiologists, reading great images is just less stressful and a better experience, so that's how it works through the entire chain.

Got it. Thank you for sharing. It's great to see that not only are you seeing a reduction in workflow steps, but you're gaining efficiencies in acquiring and processing the images. I understand that you were one of the world's first users of AIR Recon DL in a clinical setting. Given your experience with the technology, could you share the impact that AIR Recon DL has on image quality and the benefit to your daily practice?

Oh, absolutely. So, AIR Recon DL is the biggest advance in MR technology since the introduction of parallel imaging. We are able to serve more patients per day, increase patient comfort by making each examination shorter, and at the same time create images that are really significantly better. It's a kind of trifecta, and that doesn't happen in our world very often. For those of you who might not have seen AIR Recon DL, I'm just going to show, yep, we've got it there, the first image, how this is done. These are some of the images that were acquired during the earliest days of our trial with AIR Recon DL at Shields. That first image is how we used to do things with our standard reconstruction. That was from a 3T Pioneer, and it's a sagittal STIR image of the lumbar spine, and I think it says down there it's about four minutes long. In order to do this demonstration, we then modified that protocol, and if you click to get the next one up, we modified it to intentionally introduce some objectionable noise. Hopefully you can see on the screen that the second image is noisier, it's a tiny bit blurrier, and the scan time has gone down, not quite to half, but from four minutes and 10 seconds to two minutes and 14 seconds. Honestly, I wouldn't love to read that image. Could I make a diagnosis? Maybe. But now, let me show you the third, which is the exact same data as the center image, processed using AIR Recon DL. You can see that the time is also two minutes and 14 seconds, and I hope you can appreciate that the two-minute-and-14-second image on the right is actually better than the original four-minute scan. It has higher signal-to-noise, and the edges are sharper. And it's really amazing that you can do these things at the same time.
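A brief aside on the physics behind the comparison just shown: MR signal-to-noise scales roughly with the square root of acquisition time, so the shortened STIR protocol starts from a noticeably noisier dataset, which is what the deep-learning reconstruction is then asked to clean up. A back-of-the-envelope check using the two scan times quoted above:

```python
# Back-of-the-envelope only: MR signal-to-noise scales roughly with the square
# root of acquisition time, so the shortened STIR protocol above starts from a
# lower-SNR dataset that the deep-learning reconstruction is then used to denoise.
import math

t_full = 4 * 60 + 10   # 4:10 original protocol, in seconds
t_fast = 2 * 60 + 14   # 2:14 shortened protocol, in seconds

relative_snr = math.sqrt(t_fast / t_full)
print(f"Raw SNR of the fast scan relative to the full scan: {relative_snr:.2f}")  # ~0.73
```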
And it's sharper because AIR Recon DL is able to use all of the data that we acquire in the raw data in k-space, so we're increasing the sharpness without making up any data or using any edge sharpening. So that's just what I wanted to show you, and you can imagine that this applies to virtually every anatomy now. The other thing I wanted to point out is that at our institution we keep pretty careful metrics, and when we compare similar machines without and with AIR Recon DL across our fleet, we are able to serve maybe 25 to 30 percent more patients per day. So imagine your center, just to take a rough number, were scanning 20 patients per day. You get four, five, six additional slot times, depending a little bit on your patient mix. And that's huge in terms of providing patient service, keeping backlogs low, and of course the success of the business. So those are really the critical things. Do I have a couple more minutes to go on? Sure. Okay. Just to give you an idea of this overall change in speed: when we started out, before AIR Recon DL, we had about 15 minutes of scanning time for a lumbar spine. When we got two-dimensional, 2D AIR Recon DL, we cut that from about 15 to 8 minutes. And now, in the last year with 3D AIR Recon DL, we're doing 4 minutes and 45 seconds for a complete lumbar scan. From that we get a sagittal T1, a sagittal STIR, and a really beautiful 3D T2 CUBE sequence. We then reformat that 3D sequence, and it covers all of the disc spaces and allows us to see foraminal discs and little tiny fragments that we just cannot see with conventional 2D imaging. We've expanded that across our network because it's just so much better. Reformatting the 3D images has really significantly improved with AIR Recon DL; it just didn't work very well before that. So, three sequences, next patient. And the final point I would make is that, for the technologists, AIR Recon DL is really important, because the increased signal-to-noise and sharpness means the techs have much more leeway in how they can change a prescription. For example, if a patient has a different body size and you want to expand or shrink your field of view, you really don't need to change a whole lot of other things, which you used to have to before AIR Recon DL. So, it has been a huge win across the board.

Thank you. That is great to hear, Dr. Paquette, the impact for your patients as well as the technologists. Thank you so much. Dr. Lyme-Gruber, are you also seeing the impact of high image quality in molecular imaging? I believe you have experience with deep learning image processing both on PET-CT and SPECT-CT. How are you seeing a difference with these advanced solutions, and what are the benefits to your daily clinical activities?

Thank you, Melissa. Indeed, there are significant improvements in image quality, but also in the investment of time and energy that the techs have to make to obtain the image datasets. And of course, improving the workflow is highly beneficial to the patients. I must underline that this is the very first time that patients come back to their referring doctors and comment on their scan to say that it was significantly better for them to stay, let's say, 15 minutes in a PET-CT with a contrast-injected CT, a breath-hold CT, an arterial phase, and whole-body imaging.
And then they go back to their referrer and they say it's actually very nice to do a PET-CT. And as you know, the workflow in PET-CT is not the fastest in the imaging world. So, it's a significant improvement for us, especially as we see a lot of oncology patients who are uncomfortable and tired of going through all those examinations, but also, for example, patients with neurodegenerative diseases who cannot stay in the scanner for a long time and are disoriented. With the images, I also want to show you that, with the help of those technologies, we can provide advanced imaging in centers worldwide that have staff limitations, and regardless of the size of the center.

If you go to the next slide, I will go through examples. At this specific center we have only one SPECT camera, so no planar imaging at all. And that's a very specific setup, because it forces us to do every kind of imaging with it, rather than only the exams that are the highlight of 3D scanners. So, here you see, if you start the animation, that it's a dynamic acquisition, and for months now we have had access not only to the deep learning algorithm but also to dynamic imaging with 3D scanners. What we will work on in the future is dose reduction, of course, because we have doses that are way too high for the sensitivity of the scanner. Beyond what patients say about the scanners, there are also the comments of the techs, and the techs say it's not only the fact that we are fast in positioning the patient in the machine, you can go already to the next slide, but also that once the patient is installed in a 3D scanner, you basically don't have to go back and forth in the room many times, as you usually do with scintigraphic imaging, to position the patient one way and then the other, etc. So, the time you invest in positioning the patient very comfortably pays off, because afterwards you can acquire comprehensive imaging at the whole-body level. You see here a classical three-phase bone scan for a complex regional pain syndrome patient, where, in spite of acquiring only in SPECT, we're able to do flow images, blood pool images of the whole body, and also bone-phase imaging on the delayed scans. So, it's entirely possible to manage those patients this way, and here we will also drastically reduce the dose, which is an asset for the tech, because it means that once they have installed the patient they don't have to expose themselves several times, and a lower dose matters because many indications in the scintigraphy space concern patients who are not cancer patients.

I will show you the next slide, which is a better example. It shows two patients, and you see we used to scan for 45 minutes, which is a very long time for a patient, but keep in mind that it's not just a sweep of the whole body on a planar image; here we actually have a full 3D dataset of the entire patient. And then, when you click and get the animation, you see that with the upgrade we are able to reduce time by 40%. Don't go that fast; it's the scanner that is fast, not the slides. So you see that we were able to reduce the acquisition time by 40%, and you can see that the image is nowhere near the same.
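A quick arithmetic check on the whole-body SPECT times just quoted, using nothing beyond the two figures mentioned above:

```python
# The two figures quoted above: a 45-minute whole-body SPECT acquisition,
# reduced by 40% after the upgrade.
original_min = 45
reduction = 0.40
print(f"{original_min} min reduced by {reduction:.0%} -> {original_min * (1 - reduction):.0f} min")  # -> 27 min
```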
So not only do we almost halve the time and come to a timeframe that is acceptable to the patient, and to myself, but it's also 25 minutes and there is no other image you need to take, because you have the entire patient, right? It's really a paradigm change. To be honest, we have to adapt ourselves, because we are looking at MIPs and not pseudo-static images. It's not the same, because it's not an average between anterior and posterior views and things like that. We have the liberty to adapt reconstruction parameters on a per-bed or per-region basis, which may be interesting to separate the reconstruction algorithm for the legs from the torso, for example, and we can use CT for attenuation correction. In this image the patient had no CT at all, so there is no attenuation correction, and it's also interesting because once you have that dataset you can zoom in and tailor the CT to the region of interest. The patient is not moving, so it's an extraordinary asset.

If we move now to the Omni Legend, because I want to talk to you about the PET as well, there is a deep learning algorithm and an effortless workflow in it too. We may want to see an image here, because it's so much nicer to see my team, and an animation if you click one more time. Here what we want is to help increase patient throughput, but also to use this bet that GE made on BGO crystals to enhance sensitivity, in order to improve scan time and scan quality and leave us time for more imaging on the same patient, because that's what we always have in nuclear medicine: the radioactivity is on board, so we can keep imaging. If you just click on the animation, I underline, oh, that's fast, again, that both scanners use optical recognition to place the patient in the scanner, and that's a huge asset, not only in nuclear medicine, and it's very important for two reasons. Anyway, let's go to the next slide; sorry, the animation will not play. That's okay. For the tech, it's an asset to be able to place the patient in the scanner quickly, because the optical recognition knows the shape of the patient and places the patient at the isocenter of the scanner. But I also have to emphasize that I've been struggling my entire career to convince the techs to place the patient at the isocenter of the scanner, also because that's where you have an impact in terms of radiation, in terms of dose for the patient. So when you see CT scans that are somewhat off-center, you should really have the warning lights blinking, and this is an asset. It's not deep learning in the image chain, it's intelligence in the optical camera, and it's already something.
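As an aside on the CT-based attenuation correction mentioned a moment ago, here is a conceptual sketch (not the vendor's implementation) of how a CT-derived mu-map is used to scale measured counts back up; the numbers are illustrative.

```python
# Conceptual sketch of CT-based attenuation correction (not the vendor's
# implementation). Emitted photons are attenuated on their way to the detectors;
# the CT-derived mu-map says by how much along each line of response, so the
# measured counts can be scaled back up.
import numpy as np

def attenuation_correction_factor(mu_per_cm: np.ndarray, step_cm: float) -> float:
    """ACF = exp( integral of mu along the line of response ), approximated as a sum."""
    return float(np.exp(np.sum(mu_per_cm) * step_cm))

# Toy line of response: 20 cm of water-like tissue. 0.096 /cm is the familiar
# value for water at 511 keV (PET); real maps vary with photon energy and tissue.
mu_profile = np.full(20, 0.096)   # 20 samples, 1 cm apart
acf = attenuation_correction_factor(mu_profile, step_cm=1.0)
measured_counts = 1000
print(f"ACF ~ {acf:.1f}; corrected counts ~ {measured_counts * acf:.0f}")  # ACF ~ 6.8
```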
Here I will show you a few algorithms or technologies in the PET space that are so useful to us. One challenge that we have: you see here the same patient, two PSMA studies with DCFPyL, so a prostate cancer patient. You see that it's already a very low activity, 2 MBq per kilo, it's 80 seconds per bed, and we're talking about a 32-centimeter bed, not 16 or 18 or whatever, and the scan time for the patient on the left, on his first study, was eight minutes. And now we also start to see an issue: if the patient has irregular breathing during a one-minute bed over the liver, you can start to see artifacts. There is an answer for that, which is called Q.Static technology; it increases the scan time a little bit on each bed where motion is detected and drops the frames that fall in the unfavorable phases of the cycle, and it's not that bad when an eight-minute scan becomes a 12-minute scan, right? And you see the quality; you could easily go lower.

If we go to the next slide, we can also see that in addition to Q.Static we have the Precision DL technology, of course, and here it's a normal-BMI patient, still with 2 MBq per kilo, FDG this time. Probably on your screen they look pretty much the same, same study, same patient, but without deep learning on the left and with deep learning on the right. What you see is that you also recover better statistics for your lesions, and you have an SUVmax that goes from 5 to almost 9 on a centimetric lesion. If we go to the next slide, we can see how impressive the combination of deep learning and high-sensitivity crystals is, because here we have a whole chest in one single bed acquired for 10 seconds, 20 seconds, 30 seconds, and our standard 80-second protocol. See how spoiled we are: we can do an 80-second scan, which is already very fast, and we don't even need to, because we could even scan for more time, but at some point we have to accept that the scanner is just good, and you can see that we recover very good statistics whether that's at 10, 20, or 30 seconds. If we go then to the next and last slide, because I think my five minutes are running out, Precision DL is also there for high-BMI patients, and see here how good the scanner already is with 2 MBq per kilo on the left. Nevertheless, you see an improvement in noise, or maybe you don't, because of the issue of showing it on a slide rather than on a diagnostic screen, but you can perhaps see that there is a better delineation of the bowel. And here we're talking about 2 MBq per kilo, and maybe we could go to half that, and still it's just a 12- or 8-minute scan, so it's pretty impressive. And so that concludes my talk. Of course, I cannot avoid an advertisement for my wonderful team: small departments have small teams, but they also have the challenges that come with these same technologies that are used in big departments, so please visit us if you want to; you're more than welcome.

Thank you, Dr. Lyme-Gruber, and thank you for sharing your experience and your team with us. It's been wonderful to hear how efficiencies can be gained not only across the workflow but also within the image acquisition itself: achieving superior image quality that improves confidence during interpretation, with faster scan times that provide the needed clinical insight while avoiding the need to re-scan patients.
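Before the discussion moves back to CT, a rough-numbers note on the PSMA example above. The dose per kilo, bed time, and axial coverage are as quoted; the patient weight is an assumption for illustration, and the SUV helper simply spells out the standard definition behind the SUVmax figures mentioned.

```python
# Rough numbers for the PSMA example above. Dose per kilo, bed time, and axial
# field of view are as quoted; the 80 kg patient weight is an assumption.
dose_mbq_per_kg = 2.0
weight_kg = 80.0          # assumed for illustration
bed_time_s = 80
scan_time_s = 8 * 60
axial_fov_cm = 32

injected_mbq = dose_mbq_per_kg * weight_kg
beds = scan_time_s / bed_time_s
print(f"Injected activity: ~{injected_mbq:.0f} MBq")
print(f"Beds in an 8-minute scan: {beds:.0f} "
      f"(up to ~{beds * axial_fov_cm:.0f} cm of coverage, ignoring bed overlap)")

# Standard definition behind the SUVmax figures quoted above (decay correction
# and a tissue density of ~1 g/mL are glossed over for brevity):
def suv(tissue_kbq_per_ml: float, injected_kbq: float, weight_g: float) -> float:
    return tissue_kbq_per_ml / (injected_kbq / weight_g)

print(f"Example SUV with toy numbers: {suv(10.0, injected_mbq * 1000, weight_kg * 1000):.1f}")
```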
Greg, I know you shared a bit about this with the CT workflow earlier during the discussion. Is there anything you would like to add about the impact of obtaining high image quality and further clinical insights from a CT perspective?

Yes, absolutely. As I mentioned earlier, within our effortless workflow we have a solution that we call Effortless Recon DL, so I'll be happy to expand a little bit on that. Within Effortless Recon DL we actually have True Fidelity DL, and later on I'll talk about True Enhance DL, and they have a significant impact on CT imaging today. Both technologies, which are part of our deep learning image reconstruction suite, help us to significantly enhance image quality and gain clinical insights. If we start with True Fidelity DL, it's a deep learning-based technology that helps us produce images with remarkable clarity by reducing noise, as you can see in the animation, and it also improves the overall image quality. What is important to know is that here we actually use filtered back projection as our ground truth; our aim is to achieve the image quality we had a long time ago, but at the CT doses that are accepted today. The result is highly detailed images that can boost the radiologist's diagnostic confidence, and of course clearer images can help with more accurate detection of abnormalities, reducing the potential need for follow-up scans and streamlining workflows.

If we jump to the next technology, True Enhance DL, this is another technology where we apply what we have learned from our experience in spectral imaging, and here it helps us refine image contrast. It helps us distinguish subtle differences in tissue types, improving the overall diagnostic quality of the images. With True Enhance DL, what we do is virtually lower the kilo-electron-volts at which we are looking at the image, meaning that we can boost the contrast inside the image. To conclude in terms of clinical insights, both True Fidelity DL and True Enhance DL are particularly important and beneficial in fields like oncology, where we want precise imaging. They can help us better delineate a tumor, for example. And of course, with a growing population of obese patients globally, it's important that we can also serve this population, and what we've learned with True Fidelity DL is that it enables us to work with even lower kVp, meaning we still have signal and less noise in those patients where we struggled in the past. In cardiology in particular, True Fidelity DL also helps us get a clearer visualization of the coronary arteries, for example. So in general, we can say that these technologies really improve our workflow efficiency, because we have fewer artifacts and less noise in the image, so less doubt, and as I mentioned earlier they may reduce the need for repeat scans. And of course consistency is pretty important as well, as we want high-quality images across all our patients. We should serve them all with the same quality, and that is why we try to create these types of technologies.
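To illustrate the idea Greg refers to, viewing the image at a virtually lower keV to boost iodine contrast, here is a conceptual sketch of virtual monoenergetic imaging built from a two-material decomposition. It is not True Enhance DL or any GE algorithm; the maps and attenuation coefficients below are placeholder values, and real coefficients would come from tabulated physical data.

```python
# Conceptual sketch of virtual monoenergetic imaging, the general idea behind
# "virtually lowering the keV" -- this is NOT True Enhance DL or any GE algorithm.
# After a two-material decomposition into water- and iodine-equivalent maps, a
# virtual image at energy E is a weighted sum of the two maps; lower E sits closer
# to iodine's K-edge (33.2 keV), so the iodine term, and hence the visible
# contrast, grows. The coefficients below are placeholders, not tabulated values.
import numpy as np

def virtual_monoenergetic(water_map, iodine_map, mu_water_E, mu_iodine_E):
    """Synthesize a virtual monoenergetic image as a linear combination of material maps."""
    return water_map * mu_water_E + iodine_map * mu_iodine_E

water_map = np.ones((2, 2))                         # toy water-equivalent map
iodine_map = np.array([[0.0, 0.01], [0.0, 0.02]])   # toy iodine concentrations

vmi_50kev = virtual_monoenergetic(water_map, iodine_map, mu_water_E=0.23, mu_iodine_E=12.0)
vmi_70kev = virtual_monoenergetic(water_map, iodine_map, mu_water_E=0.19, mu_iodine_E=5.0)
print(vmi_50kev - vmi_70kev)   # iodine-containing pixels change the most at lower keV
```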
So from a patient experience perspective, of course, it's clear that as we reduce noise, it opens the door to reducing, or at least optimizing, the radiation dose, and eventually maybe contrast doses, without compromising image quality, making the scan safer, especially with frequent scanning and in at-risk patient populations, patients who have chronic diseases, pediatric patients. We really want to avoid scanning them too much, so if we can go lower for them, that's the way to go, right? So in summary, again, True Fidelity DL and True Enhance DL can really help us transform CT imaging, which has always been seen as a high-dose modality, but nowadays with these types of technologies we can feel more at ease, more safe, and we are assured that we get the quality and detailed clinical insights needed to serve our patients in the best possible way. That's it from my side, Melissa.

Thank you for sharing. At this time, I'd like the audience to please enter your questions for our experts, and I would like to thank our panelists and experts for sharing their experiences and helping us understand a bit more of how AI and deep learning solutions are impacting their clinical workflows today, for the patients as well as the technologists, and the potential to help across many modalities to achieve the efficiency and high image quality needed for confident clinical insight. So, taking a look at the questions that have come in, let's start with Dr. Lyme-Gruber. How do you manage scans that were planar with the tomographic StarGuide SPECT system?

So, to be honest, it was more of a change of philosophy than I would have expected initially, but as always, you can survive most of these innovations because things don't really get worse than before; they get better. We do have a few small limitations that have to be taken into account in terms of high-energy imaging, but since we can do dynamic imaging and 3D imaging of the whole body, we don't have many limitations. When you plan your protocols, you have to really decide the sequence: whether you want to do one field of view and center on that, or whether you want to scan the whole body and then focus. You have the possibility to scan with or without CT, and to decide whether you do the CT before or after. So, for example, you can do the CT before, so that when you do a dynamic acquisition you are able to reconstruct attenuation-corrected images on the fly, which is quite useful because you don't want to compromise the decision process, such as interrupting the scan because you're doing a renal scan or lymphoscintigraphy and the drainage is fast, so you don't want to continue scanning forever. So all those things are possible on those scanners, for sure. I think I've tried to answer your question; hopefully the audience will ask more. Yes, we'll see. Thank you.

Dr. Paquette, how has using AIR Recon DL increased your staff's confidence when they're interpreting MRI images?

Yeah, so first of all, the main function of AIR Recon DL is to remove noise from the images, and it also removes certain kinds of artifacts, and that's what we as radiologists are fighting against all the time: noise, artifact, motion, those kinds of things. So as we look at images and as we learn, a part of my brain is always trying to do its own denoising. I'm looking at an image and mentally removing the noise. If AIR Recon DL does that for me, that's less that I have to do, and it's a piece of my brain that just isn't working on that problem all the time.
So the images are more pleasing. We all like to look at pleasing images, but if they're easier to get through and your eye goes directly to the pathology rather than being distracted by all the artifact and the noise, which, yes, we can usually get around, but it's just much more work, then that decreases my stress level. It increases my confidence. The other thing that increases confidence is that with AIR Recon DL we are now getting far fewer artifacts. Why? Well, partly because some of the artifact is removed, but also because we're going faster. When we take an image faster, the patient has less time to move. I think I showed an example of us doing a lumbar spine sequence in two minutes rather than four, so you have half the motion and half the problems that are generated. And finally, we are able to confidently prescribe and run high-resolution 3D sequences where you can really look at very, very tiny structures. We're doing a lot of MR neurography now, and I'm looking at nerves in the hands, suprascapular nerves, confidently, because I know those images are going to come out and I can see things that are a millimeter or so with no noise. Prior to AIR Recon DL, those things just weren't possible, and you'd be sort of guessing: well, I don't see that nerve, I guess it's not abnormal. Those are just some of the ways.

Thank you. There's a question that just came in on the topic you were just discussing. It says: how do you reassure technologists and radiologists who are concerned about having to handle more patients or read more images per day due to the speed of AIR Recon DL?

That is a great question, and it's a challenge. The patients love it, because they're in and out faster and they really notice. We have many patients who have many exams, and they're like, yeah, is it done yet? The technologists are being asked to do most of the same work in less time, and that's why GE and we are working so hard to figure out ways, for example with AIR Touch, with AIRx, with the enhancements in the operating system and the user interfaces, to align those workflows. So ideally, if the scan is, say, half as long, the technologists will have about half as much to do. The number of scans is of course a challenge for the radiologists. I guess we need more of us. And one of the things we can do, and that we're actually looking at really hard now, is this: it's easy to generate thousands of images in an MR scan. Ten years ago it wasn't, but now some of the scans I look at easily have two to five thousand images, which is too many. So what we can do now with the new technologies is confidently get the best images that we need to make the diagnosis, and not all the extraneous ones. That is an ongoing project. It's a great question, and it's one that we work on every day.

Thank you so much for sharing, and good luck with those new technologies and adventures. Dr. Lyme-Gruber, there's a question that came in for you, asking how many patients per day you image with the new PET-CT and the AI-optimized acquisition.

To be honest, I have no clue, because I could probably do many more patients than I actually do. So there are two aspects to this question. It's certain that we could easily do 20-minute or maybe even less than 15-minute slots if we really wanted. At some point you will run into a shortage of injection rooms, right? Because you have to inject the patient and keep the patient for an hour before you actually image. So currently, in our small centers in Switzerland, we do around 10 patients a day, but we could really do 30. What we use the time for instead is that we still limit ourselves to imaging the patient for 20 or at most 25 minutes, because we don't want the patient to be uncomfortable. But within that time frame, we're able to acquire a full-body CT in venous phase, or even in triple phase on the upper abdomen, as a routine scan for 90 to 95% of our patients. In addition to the scan of the whole body, we're able to acquire additional images, such as a CT of the chest in inspiration, in breath hold, so that we can actually detect lung nodules and have a one-stop shop. So we actually generate all those images. We also add respiratory-gated datasets so that we can see the motion of lesions in the chest, for our radiation oncologists to already evaluate what technology they will use once they deliver EBRT. So for all those reasons, we stay with relatively long slots of 30 minutes for our patients, and then it's only a question of how long you are open and how many injection rooms you have.
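A toy capacity model makes the bottleneck Dr. Lyme-Gruber describes concrete: the scanner slot length matters less once the uptake rooms, where injected patients rest for about an hour, are saturated. The opening hours and room count below are assumptions, not the clinic's actual numbers.

```python
# Toy capacity model (assumed numbers, not the clinic's): daily throughput is the
# smaller of what the scanner slots allow and what the uptake rooms can stage,
# given that each injected patient occupies a room for about an hour.
def daily_capacity(open_hours: float, slot_min: float, uptake_min: float, uptake_rooms: int) -> int:
    scanner_limit = open_hours * 60 / slot_min
    uptake_limit = uptake_rooms * open_hours * 60 / uptake_min
    return int(min(scanner_limit, uptake_limit))

print(daily_capacity(open_hours=10, slot_min=30, uptake_min=60, uptake_rooms=2))  # -> 20
# Shortening slots alone does not help once the uptake rooms are the bottleneck:
print(daily_capacity(open_hours=10, slot_min=20, uptake_min=60, uptake_rooms=2))  # -> 20
print(daily_capacity(open_hours=10, slot_min=20, uptake_min=60, uptake_rooms=3))  # -> 30
```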
And then there is a second question on the StarGuide: have you seen an increase in the consistency of your image quality, because the detectors position themselves automatically based on the patient scan rather than the technologist moving the detector heads? If it's within a single patient, I can't answer, because we've been open for just a few months, so most of the patients have not come back yet. If we look between patients, I would tend to say yes, because the placement of the detectors is, as you point out, automatic. An easy example is brain or even cardiac imaging, where we see very little variability in the quality of the scans between patients.

That's great. Greg, there's a question for you that came in. It says: how do you reassure people who are still hesitant about the fact that DL potentially adds non-existent information?

Right. Yeah, that's also a very good question. I think for that, we need to refer to our validation procedures. And again, on our scanners you're not obliged to use DL all the time. It's a matter of taste and of reviewing what you like. We also still have iterative reconstruction available on the scanners, so you can actually choose from different applications which one you prefer, which one you trust the most. But of course, on the majority of the CT scanners we have installed, people just use deep learning, because it's a nicer image to look at, as Dr. Paquette has described. It's less fatiguing than looking through the noise; you just see more clearly. So it's a matter of adoption, which has always taken a bit of time, but we are pretty confident about that technology, for sure.

And does True Enhance DL have the different choices that True Fidelity DL has in its settings, for low, medium, and high, to help with those preferences?

Yeah, right, that's a good question as well. Currently, at this very moment, for True Enhance DL we reconstruct using 50 kilo-electron-volts, so we have one energy level to choose from, and we do not have the option to choose low, medium, or high at this very moment.

Okay, thank you. And then there's a question for Dr. Paquette. You mentioned the compatibility of AIR Recon DL with 3D acquisitions. How has this changed your confidence in reading MR multiplanar reconstructions?

Oh, this is very big for us.
We do a tremendous amount of 3D imaging, although we did it before 3D AIR Recon DL was available. Initially, AIR Recon DL for MR was available for 2D, and then it was expanded to other things like PROPELLER and Flex. And I can't remember, it was about a year or 18 months ago that it came out for 3D imaging, and that has changed a lot of how we practice. Because prior to 3D AIR Recon DL, if you, for example, acquired an image in a sagittal plane, the axial reformats might not be great. They were a little bit challenged; they were always a little blurry. For the most part, 3D AIR Recon DL has fixed that, and if you properly acquire your 3D volume, it can be very challenging to tell what the original plane of acquisition was. So that problem is gone, and we will just acquire a volume and the radiologist will manipulate it in real time and move through it. If you want to, for example, figure out where the cervical neural foramina are, it's pretty easy to do, and the results are great. And that is pretty much thanks to 3D AIR Recon DL.

Great, thank you so much for sharing. And that brings us to the end of the questions in the channel that I can see, and we're just about at the end of our time, so it looks like perfect timing. Once again, thank you all so much for joining. Thank you to our experts for sharing and for being open to questions on the fly. We really appreciate your insights and expertise. And everyone, thank you. Have a great rest of your day. Bye-bye.
Video Summary
The discussion focused on addressing radiology department challenges by leveraging AI and deep learning to improve imaging workflows and image quality. Panelists included Dr. Paquette, Dr. Lyme-Gruber, and Greg Verburg, with Melissa Burkett as the moderator. The conversation highlighted the role of AI in enhancing operational efficiency amidst staffing issues and burnout. Solutions from GE Healthcare, such as deep learning-enabled imaging, aim to deliver exceptional image quality and streamline the entire clinical workflow, reducing manual steps and enhancing decision-making capabilities.

Greg Verburg emphasized the automation of CT workflows using AI to standardize processes and reduce operator workload. Dr. Lyme-Gruber discussed the integration of advanced technologies in a Swiss hospital network to improve PET and SPECT imaging and workflow. Dr. Paquette highlighted improvements in MR imaging workflows and image quality through automation, such as GE's AIR Recon DL, which significantly enhances image quality and reduces scan time.

The panelists agreed that these advancements lead to improved patient care by reducing scan times and increasing image consistency, while also addressing potential concerns about increased workload for technologists and radiologists by streamlining processes and decreasing the need for repeat scans.
Keywords
radiology
AI
deep learning
imaging workflows
image quality
automation
GE Healthcare
operational efficiency
patient care