Session 1: The Future is Now: Advances in Endoscope Technology
Video Transcription
Hi, everybody. Good morning. Well, I would like to welcome you to the American Society for Gastrointestinal Endoscopy Postgraduate Course. On behalf of myself, Michelle Anderson from Mayo Clinic Arizona, and my co-directors, Dr. Nalini Gupta from GI Associates in Milwaukee, Wisconsin, and Dr. Irving Waxman from Rush University Medical Center in Chicago, Illinois, we would like to thank everyone for joining us for this course, entitled Everyday Endoscopy: Expertise Meets Evidence. Can we get our slides? We would also like to thank our esteemed faculty for helping us put together the content for this postgraduate course. We have, as you can see, a star-studded lineup in store for you today. Next slide. So we'll be taking questions and answers. You can scan the QR code here and submit that way, or you can do it simply by joining the text stream and texting ASGE2024 to 22333. Next slide. You should have received, prior to joining the course, a list of all faculty disclosures listed on the accreditation statement document. Please note that all of the relevant financial relationships listed for these individuals have been mitigated. Next slide. We would like you to fill out the online evaluation because it helps us to plan better future courses for you. Also note that this course has been approved for eight AMA PRA Category 1 credits and eight ABIM MOC credits. You should have received an instruction sheet when you walked into the room; if you did not, see anyone at either door and they can give you one. Next slide. So just a couple of housekeeping items. I would like everyone to please silence your mobile phones if you've not already done so. Restrooms, if you need them, are to the right. And all of the sessions will be held in this room, including some really interesting TED talks that we have for you this afternoon during the lunch break, so you'll go out and get your lunch and then come back. So without further ado, we'll jump in. Our first session today is The Future is Now: Advances in Endoscope Technology. And I am honored to present our first speaker. Amandeep Shergill is a professor of medicine at UC San Francisco, and she'll be leading off with What's All the Fuss About Endoscopes Today and Tomorrow. Thank you. Got it. Thank you. Hello, everybody. Good morning. Thank you to the course directors for inviting me to speak. I'm honored to kick off the course. I'm going to take a moment to review my disclosures because they're actually relevant to the talk today. I consult on ergonomics for Boston Scientific, Neptune, and Dragonfly. I have received a research gift from Pentax and a visiting professorship from Intuitive Surgical and Pentax. I think it's important to note that I use Olympus scopes, and illustrations provided by the scope companies have been used during this presentation to help explain design features. The goal of this talk is to cover technology advances in endoscopes across all manufacturers. We're going to review imaging, and really it's just going to be a high-level overview of available technologies. I won't have time to get into details about the yield or efficacy of these different technologies; there's a robust literature supporting many of them. This is a recent article that covers even some of the newest technologies, if you're interested. For ergonomics, I will present data that objectively demonstrates ergonomic improvement, because this data is harder to find in the published literature.
We're all familiar with the reusable endoscope companies, and we will also cover the disposable endoscope companies. So I think it's helpful to level set when we're talking about imaging in endoscopy. To see those beautiful images on the screen when we're scoping requires first a light source, and across the board, all the processors now use LED. The camera lens provides your field of view, and the image sensor is what determines the resolution. Back in the day of flexible fiber-optic endoscopes, the pixels were determined by the number of optical fibers in the bundle. We then progressed with video endoscopes, first to standard definition, and now we have high-definition capabilities with CCD and CMOS sensors. This high-definition image is transferred via cable to the image processor, where any post-processing might happen, and then the video output and video cable send this image to the display, which may be a 4K resolution monitor, but the lowest-resolution component in the imaging system determines the resolution of the overall system, so this will be a high-definition image. It's important to note that 4K imaging is currently only available in rigid video endoscopes. The optical imaging technologies we're really going to focus on here are the virtual chromoendoscopy technologies. Again, this is a nice review article that covers a lot of these newer technologies. For virtual chromoendoscopy, we have pre-processing technologies, which are basically manipulating and filtering the wavelengths of light to create a different kind of image, and then the post-processing technologies, where the processors apply computer algorithms. It's also helpful to level set for ergonomics. Ergonomics is the scientific study of work, and it's important to note that our work shouldn't hurt. It really relies on user-centered design, designing tools and tasks within user capacity while understanding limitations, and in GI, it's important to note that our users are changing: fifty-one percent of active gastroenterologists are over the age of 55, and there's an increasing number of women in GI. Risk factors for endoscopy-related injury include posture, force, and repetition, so any time we're in a non-neutral posture, applying high forces, especially if that's repetitive over the course of a day, this can overcome the internal tissue tolerances of our muscles, ligaments, tendons, and joints, first leading to pain and then injury. And we know that there is an overall high prevalence of injury, 58 percent in the recent systematic review and meta-analysis, resulting in the first ASGE guideline on ergonomics in endoscopy. So we're going to be approaching this from the hierarchy of controls, really asking the endoscope companies what they are doing to prevent injury through their design processes, so either elimination or substitution, as well as what engineering controls they're introducing to help neutralize posture, minimize forces, and potentially minimize the amount of time we're spending performing procedures. So you're going to see these signs sort of pop up throughout the slides. The way I approached this was I asked the endoscope companies how hand size, reach, and grip strength have been considered in the latest design iterations, what the weight of their control section is, to try and understand static load, and what engineering controls are available. I emailed the scope companies in preparation for a talk before DDW.
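To make that "weakest link" point about resolution concrete, here is a minimal Python sketch; the component pixel counts are hypothetical placeholders, not any manufacturer's specifications.

# Illustrative only: the effective resolution of an imaging chain is capped by
# its lowest-resolution component, even when the display itself is 4K.
PIXEL_COUNTS = {
    "image_sensor_hd": 1920 * 1080,     # hypothetical HD CCD/CMOS sensor
    "video_processor_hd": 1920 * 1080,  # hypothetical HD processing path
    "monitor_4k": 3840 * 2160,          # 4K display at the end of the chain
}

def effective_resolution(chain):
    """The system can only deliver as many pixels as its weakest component."""
    return min(chain.values())

print(effective_resolution(PIXEL_COUNTS))  # 2,073,600 pixels, i.e. still an HD image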
I met with them all at DDW, and then I met with them again in September to understand their virtual imaging, and many of the illustrations are provided by the companies. Many of the factors are the same across the endoscope platforms, so we're just going to review those broadly here. Fuji has the ELUXEO system, Pentax the INSPIRA system, and Olympus the EVIS X1. Fuji's platform has been around since around 2018, but INSPIRA and EVIS X1 are new on the market in the last year or so, so backwards compatibility is a consideration, and it is available with both of those processors. It's LED light sources across the board. The cameras use CMOS sensors for Fuji and Pentax, and Olympus uses CCD, to produce high-definition images. Fuji does have an optical zoom in their 760Z series as well as a digital zoom on all scopes. Pentax also has a dedicated optical zoom scope as well as digital capabilities, and Olympus offers dual focus on their HQ scopes and electronic zoom capabilities. What we're going to focus on are these enhanced imaging technologies, their AI, as well as other features. To start with Fuji: ELUXEO is their platform, and the G7 and GI700 are their endoscopes. The main image-enhanced technology is blue light imaging, which is a preprocessing technology that filters the short wavelengths of light to enhance characterization of lesions. Their linked color imaging is both a preprocessing as well as a post-processing technology, so they are again filtering that short wavelength of light, but in a second step they're applying a post-processing algorithm which emphasizes slight color differences and provides better color contrast within the red color range, highlighting abnormal mucosa. They also have autophotometric control, which automatically optimizes image quality, and their AI is called CAD EYE, which is 510(k) cleared and in limited release currently. They also have a technology called ScaleEye. This again is 510(k) cleared and in limited launch. ScaleEye is available in laser-enabled scopes, so it can automatically determine distance and auto-correct the scale for that. The laser is placed on the left side of a lesion, and a virtual scale is then placed on the screen to help estimate polyp size. Their control section and insertion tube underwent a complete redesign in 2016. They have three scope models: an adult, a hybrid slim, and a therapeutic ultra-slim, which has 210-degree angulation. The prior version of their control section had a drum-and-wire or pulley mechanism with greater than 100 parts. Their G7 has fewer than 30 parts and uses a chain drive mechanism with a new lubrication method that may maintain responsiveness for longer, so the goal is to increase durability and control, and although the data were not available for review, this may result in fewer repairs and full angulation capability and responsiveness that's maintained over the life of the endoscope. They use a softer material in the flexible portion of their connector to reduce static load on the left wrist. This may result in less strain on the arm and wrist and improved control in various body positions. Fuji is one of the only companies that actually has a recommended way of holding the scope, so if you're interested in Fuji or have Fuji, make sure you talk to their representatives to understand their recommended means of holding the endoscope. This is the G5 prior version, and this is their new G7 version.
The user control updates include an initial straight position of the left-right knobs and more space allocated for knob access and operation. They've smoothed out the surface of the control section, designed to fit more comfortably in the hands, with easier access to the buttons as well as the angulation knobs, although data on the hand sizes or how this was studied was not provided. The air, water, and suction valves have been raised in height for easier fingertip operation, and they can be controlled from the side to accommodate smaller hands. Again, hand size data was not provided. And insertion tube engineering includes adaptive bending, advanced force transmission, and a flexibility adjuster. Their control solutions include this GI700 series, which has this distal balloon that, when inflated, can flatten the folds, in theory bringing those polyps that may be behind a fold into view, potentially reducing the need for tip angulation or torquing to work the folds, which may reduce strain on the left thumb, hand, and wrist. They also have a dial extender that's in development. This is an active R&D project; the timeline and pricing are to be determined. Moving on to Pentax, they have the INSPIRA processor and the i20c endoscopes. Their main image-enhanced technology is called iSCAN. They have iSCAN optical enhancement, which integrates both digital as well as optical filters. The digital filter, again, filters the short wavelength of light, and then the optical enhancement creates the same kind of image in a more true-to-white-light format. They also have iSCAN surface enhancement, tone enhancement, and contrast enhancement. Surface enhancement enhances the mucosal structure and natural color. Tone enhancement changes the vascular and mucosal structure with a color tone supporting pattern characterization. And contrast enhancement adds blue color to the edges, enhancing depressed areas. They have auto HDR, or dynamic range expansion, which enhances far-field illumination in bright and dark areas, and twin mode, where you can see the white light image and the optically enhanced image side by side. They also have AI that's 510(k) cleared and available for sale, although, again, in limited release. Their control section is described as lighter and redesigned. This was based on a 2019 evaluation of hand dimensions. They had 42 users, with a variety of users reflecting gender and geographic considerations. They looked at average hand, thumb, and finger dimensions, and then they engineered finger-optimized spokes that require less force to move the distal end and a lower, flatter steering wheel shape to improve access and operability of tip angulation. The anthropometric data was provided; the force data was not. You can see here, in purple, the prior generation, and in black, what the current generation looks like. The height of the right-left knob was reduced, and the diameter was increased. The shape was modified to ease operation. The umbilical cable was shifted outward for a better grip on the control body. And the third and fourth buttons were arranged horizontally instead of vertically and have side-triggering capability. The scopes have 210-degree tip angulation as standard. The endoscope connector is 40% lighter and includes a swivel for light and free movement. And their insertion tube technology includes iFlex, TrueTorque, and adjustable stiffness. In terms of control solutions, they are looking to put dial extenders on the market; this is pending 510(k) clearance.
They do have an endoscope support stand; that's what I received my grant for. It's FDA-approved but not marketed. And in terms of administrative controls, they do recommend scheduled scope maintenance, and they do offer personalized assessments with physical therapists if you are interested. Moving to Olympus, Olympus has the EVIS X1 platform as well as the 1100 series scopes. Most of us who use Olympus are very familiar with NBI. Again, this is a preprocessing technology that filters the short wavelengths of light to highlight surface mucosal architecture. They have now introduced RDI, or red dichromatic imaging, which filters the amber and red LED lights. These penetrate deeper and so can highlight deeper blood vessels, and in theory can facilitate, especially in bleeding cases, targeting bleeding vessels. They have a technology called brightness adjustment imaging with maintenance of contrast. It corrects the brightness of the dark portions of an image, and it's compatible with the white light and NBI technology. And they also have TXI, or texture, tone, and brightness enhancement. TXI images are captured first using white light, then this technology is applied post-processing. Your image is split, your texture and brightness are both enhanced and then stacked to create an image; this ends up being your TXI mode 2 image. And then when color tone enhancement is applied, that is your TXI mode 1 image. They have AI available that just received 510(k) clearance but has not yet been officially launched in the U.S. They have ScopeGuide, which helps you understand the scope configuration as you're inserting through the colon, and this will be compatible with their 1100 series scopes. Their control section is called ErgoGrip. I was very intrigued to understand what was ergonomic about their grip. So the yellow is the new scope contours and the gray is the old. They have tried to design this to provide easy reach to the dials, so they have snipped a little bit off the up-down dial and added a little bit to the right-left dial for reach. They've designed it to be easy to grasp with smaller hands by slimming the control section handle. The buttons have been raised in height and again have side actuation. And this was tested according to glove size, small, medium, large, although the number of users that were evaluated was not available to be provided. They did say that the control section is 10% lighter than their older scope, although they were not able to give me the weight; they did say that this equaled 30 grams and asked that we do our own calculations. The insertion tube has responsive insertion technology, including high-force transmission, passive bending, and variable stiffness. And their control solutions include ScopeGuide, which potentially can reduce procedure time by leading to less looping and decreased cecal intubation time. And the Endocuff Vision is a distal attachment cap which, in a meta-analysis, had almost a one-minute shorter mean withdrawal time. An important administrative control is scheduled endoscope maintenance. Customers with a full service contract can request an endoscopy support specialist to come out and check angulation on a regular basis. This has been a personal issue of ours at the San Francisco VA with our Olympus scopes. Over time, the angulation control wires can stretch and develop play.
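As a rough analogy for the "split, enhance, then stack" idea behind texture-and-brightness enhancement mentioned above, here is a generic base/detail decomposition in Python. This is not Olympus's TXI algorithm, only an illustration of the concept, and the parameter values are arbitrary.

# Generic base/detail decomposition: NOT the TXI algorithm, only an analogy.
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance(frame, detail_gain=1.5, gamma=0.8):
    """frame: single-channel float image scaled to [0, 1]."""
    base = gaussian_filter(frame, sigma=5)            # low-frequency "brightness" layer
    detail = frame - base                             # high-frequency "texture" layer
    base = np.power(np.clip(base, 0.0, 1.0), gamma)   # brighten the darker regions
    detail = detail * detail_gain                     # amplify mucosal texture
    return np.clip(base + detail, 0.0, 1.0)           # stack the layers back together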
And there's an interesting 2016 study that showed that, in their cohort of Olympus scopes, only 10% reached maximal angulation in all bending directions as prescribed by the manufacturer, even though 40% of them had come back from their yearly maintenance check just one month before the measurement. So this can be a big issue just in terms of our ability to perform procedures. Here, this picture is courtesy of Dr. Sanchez-Luna. You can see two upper scopes, and how much harder it's going to be to see the cardia in retroflexion with the scope on the left as compared to the right. This moves us on toward disposable endoscopes. One of the manufacturers, Ambu, has the Ambu aBox 2. This does provide high-definition resolution, and they do have an enhancement mode called Advanced Red Contrast. They have two gastroscopes on the market, a therapeutic, which has a 4.2-millimeter working channel, and a diagnostic with a 2.8-millimeter channel and 210-degree angulation. They have been able to demonstrate a 55% weight reduction compared to reusable scopes, which results in a 15% to 30% wrist muscle load reduction compared to reusable scopes, and this data was presented at UEG 2023. The aScope Duodeno 2 is their next-generation duodenoscope. They recognize that to apply ergonomic principles, you bend the tool, not the wrist, so they have added 20 degrees of angle to the aScope Duodeno 2 to favor neutral posture and position. This has been evaluated in 32 physicians, including seven women, of variable hand sizes, and this data is currently being processed for publication, so it was not available for my review. Boston Scientific has the Exalt Model D Generation 4. The Exalt controller for use with the single-use duodenoscope has a detachable power cable and DVI and HDMI cables to optimize resolution. This is a high-resolution digital camera system, and the LED light source here is actually built into the distal end of the scope; it's called in-scope lighting. In order to iterate on their designs, they've actually created a database of hand sizes to guide design decisions. They've improved access to the large and small wheels based on this data. And their ergonomic updates in Generation 4 include updating almost 25 percent of their device components, reducing friction, updating the working channel, and redesigning the elevator linkage. This has resulted in a force reduction of 35 percent in the left-right direction, 25 percent in the up-down direction, and 25 percent in the elevator force. These measurements are relative to the prior Exalt Model D and were made on biaxial test equipment for both the new and older versions of the Exalt Model D. This has been evaluated in a simulation study, and they've been able to demonstrate that a lighter scope has a promising effect in reducing upper arm muscle activity during ERCP. So that was a whirlwind tour of the scopes that are available today and sort of the newest and latest technology. Here's a summary again of all of the different platforms, both reusable and disposable. And I wanted to take a moment to just think about the common themes in ergonomics. It's clear in talking to industry that they want to do better, but we really need to hold them to data, because so much of what we're hearing now is ergonomic this and ergonomic that, but when we get down to what anthropometric or biomechanical considerations have been taken into account, that data is less robust.
For the reusable scopes, they are making smaller control sections that are lighter, and they're improving reach to dials and buttons. They're talking about decreased forces, but we don't see hard data to support that assertion. The insertion tube technology has variable and adjustable stiffness as well as torque transmission. And we're seeing that the disposable scopes are lighter overall, and they're able to iterate at a faster pace because they are disposable. So it's really interesting to see how they're looking to neutralize postures, and it'll be interesting to see what that data looks like as well. And with that, I thank you. All the speakers will take their questions at the end, so please send your questions by the text message system. It's my pleasure to introduce our next speaker. Dr. Shergill has led us into the future of the scopes, and now we are going to talk about artificial intelligence in endoscopy, especially in IBD: is it prime time? Dr. Sravanthi Parasa is here from the Swedish Medical Center. She's also a leader in the AI Institute at the ASGE. Thank you. Hi, thank you, Nalini, for that introduction. These lights are bright here. Probably need a wake-up call here. So what I'm going to talk about today is AI in inflammatory bowel disease. A lot of us already know what's going on in the colonic space in terms of CADe devices. How many of you have used any kind of, or experimented with, CADe devices here? Nobody? Okay, a few. Okay, so moving from that base, I'm going to take the talk over to how we are using artificial intelligence in inflammatory bowel disease, and also, is it prime time or are we getting closer to using some of these technologies? And these are my disclosures. So what are artificial intelligence and machine learning doing in IBD, right? When we think about artificial intelligence, most of us as endoscopists are very much in this box of computer vision or pattern recognition. So we'll start with some of those pattern recognition applications that are relevant to inflammatory bowel disease, like dysplasia detection, cancer detection, inflammation scoring systems, and how these inflammation scoring systems are driving next-generation clinical trials and the discovery of new drugs and so forth. The other important piece that we as clinicians do, other than just doing endoscopy, is also trying to predict which patients respond to certain medications and why we choose certain medications for certain types of patients. Do we have a precise way of getting to that, and how can machine learning or AI help in that process? The same thing with treatment response, disease monitoring, and also care delivery. How do we deliver care? How do we improve patient education tools and so forth? So this was a recent study that was published in CGH by Dr. Silverman and team, and what they talk about is essentially these big buckets as to where we are going to see the next impact of AI in inflammatory bowel disease. So you have the IBD provider, the IBD patient, and then you have the AI in between, right? It can kind of help both the patients as well as the clinicians, and in different aspects, as I discussed before: clinical trial optimization, trying to understand the disease mechanisms better, how you deploy these AI tools at a population level, and how it augments our clinical decision support. So we are endoscopists, so we'll start with the endoscopy. What does computer vision, a sub-branch of artificial intelligence, bring to inflammatory bowel disease?
So it's AI-guided endoscopy in IBD; using AI for the development of improved assessment of IBD endoscopic activity; AI in enhanced endoscopic imaging, like confocal laser endomicroscopy; AI in capsule endoscopy in IBD; and AI-aided histology interpretation for IBD assessment, which we know has a lot of variation. And I'll go over all these in a second. And AI in radiomics. So what are we talking about when we are talking about advanced imaging and AI in IBD, right? All of us know that if we are scoping a patient with ulcerative colitis, we have to use chromoendoscopy or virtual enhanced endoscopy. But we also know we have several other great tools in our pocket, like pCLE and so forth, that we can use to improve the dysplasia detection rate. But in reality, a lot of these technologies do not penetrate into community practice because there is a knowledge gap or an implementation gap. Now, how does AI reduce that gap so that we are able to use all these tools at the point of care? And the most recent one is your intestinal ultrasound, right? All of us are grappling to learn how to do IUS. What if there is an AI tool that will help us learn these tools or enhance our diagnosis at the point of care? So there are several of these technologies in different forms that are already being evaluated, and there are several publications, more so in engineering journals, that are looking at continuous scoring of inflammatory bowel disease activity, including your Mayo endoscopic score, trying to identify how you correlate histology with endoscopic findings so that your AI-guided endoscopy findings could be a proxy for the ground truth, which is your histology, and then how do you develop new tools for enhanced imaging that will correlate with the histology at the point of care. And of course, there's a lot of work going on in radiomics as well, and I'll go over some data regarding how AI is performing in radiology as well. Okay, so this image talks about how we can use AI for enhanced imaging. A lot of times when we are talking about inflammatory bowel disease and dysplasia detection, finding those subtle lesions and making sure that they're resected completely, we need to know that the margins are clear and we need to use advanced tools. How do we get those tools to the point of care so that we are not worried that we missed something, right? So AI augmentation with a performance of, say, 90 to 95% accuracy will allow us to work together with the AI and get to that point. Okay, so there are many studies, and there are ongoing studies. A lot of these studies, predominantly in the gastroenterology journals, are looking at how artificial intelligence, more specifically computer vision, helps with the diagnosis of Crohn's disease, severity of Crohn's disease, severity of ulcerative colitis, and so forth. Several of the initial studies were looking at fixed still images, meaning you take a picture and then that's how the algorithms are developed. And the current technology that we have, at least from an AI standpoint, will allow us to improve the diagnosis, the accuracy of diagnosis, just by the model itself. So the diversity of data on which you can develop and deploy these algorithms is what is key. We do not know how these algorithms actually perform in the real world yet, because most of these are in small centers or multi-center studies where the data is carefully curated and then it's deployed.
In this study, they basically used around 7,700 images from 38 patients with Crohn's disease, and of course the way AI training happens is that you have an internal data set for internal validation, the machine learning algorithm kind of validates itself, and then you have a test set that is kept completely aside, which the AI model has never looked at, and that's how it's validated. Now, when we talk about generalizability or reproducibility of these results, what we are asking is: if we take this algorithm to your data and deploy it in your endoscopy room, does it perform with the same accuracy that it did at the point of model development? So several studies went on, and there's a recent meta-analysis that looked at whether AI can help or assist us with assessing mucosal healing in ulcerative colitis. And this piece is very important because we use this data, one, for guiding our treatments, and two, for clinical trials and drug development, as well as when we think about clinical trial recruitment. What they found, across the 12 studies included, was that the diagnostic performance for finding and detecting mucosal healing was about 0.9 for images, and for videos it was about 0.86 to 0.91, with good accuracy. So, bottom line, at this point we have several studies and several models that are available for mucosal assessment of inflammatory bowel disease, but they're still not at the point of accuracy that we want for deployment in clinical practice, though I think we will be there very soon. So, another example for AI in endoscopic diagnosis of ulcerative colitis: this is a paper published in Nature a couple of years ago based off the HyperKvasir dataset from Norway. They used a convolutional neural network, and the beauty of this study is that not only did they develop a model that will help diagnose the severity of ulcerative colitis, but they also have what you see here, these class activation maps. The last picture that you see, with all the neon-ish colors, is basically a way of representing where the AI algorithm thinks there is an area of abnormality. The downstream implication of this kind of algorithm is that it helps us with what we call explainability of the model. A lot of times when we look at AI algorithms, it is not very clear to us why they made a prediction. For example, as an endoscopist, let's say you're doing a colonoscopy and you see this inflammation, you think there is probably some dysplasia, and the AI algorithm now tells you that this has a probability of dysplasia of 0.8 or something like that. How do you trust that this algorithm is telling you the truth? So there are different ways in which this explainable AI is now being integrated, and under GDPR, it's a requirement. And very soon in the United States, most AI algorithms might need this explainable AI component for clinical diagnosis and prediction as well. So, anyway, this study had an accuracy of about 88%. Again, a little more detail on different types of explainability models, but in the middle figure here you see this ulceration and you have a bounding box saying that there is this erythematous lesion, there is this fibrin-covered ulcer. And on the class activation map, which you see on the left, there is this redness down there that kind of tells you that's the area of active inflammation.
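For readers who want to see how such a class-activation-style heatmap can be produced, here is a minimal gradient-based (Grad-CAM-style) sketch for a generic PyTorch image classifier. The ResNet backbone and preprocessing are placeholders, not the model or data from the study being discussed.

# Minimal Grad-CAM-style saliency sketch for a generic classifier (illustrative only).
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

store = {}
model.layer4.register_forward_hook(lambda m, i, o: store.update(act=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: store.update(grad=go[0]))

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def grad_cam(image_path):
    """Return a 224x224 heatmap showing which regions drove the top prediction."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    scores = model(x)
    model.zero_grad()
    scores[0, int(scores.argmax())].backward()               # gradient of the winning class
    weights = store["grad"].mean(dim=(2, 3), keepdim=True)   # average-pool the gradients
    cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=(224, 224), mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze()              # normalized to [0, 1]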
So what we envision is that you can quickly switch the view and see why this algorithm made this prediction, and whether you agree or disagree with it is up to you. So this is a simple model that we built for ulcerative colitis, again just for fun as part of a DDW workshop that we do. What you will see, I don't know, can we play the video? Thank you. So this is a real-world example of how a colon looks, with all the bubbles and stuff like that. What you're seeing here is a continuous score; forget about the UI/UX, we didn't work on that during the workshop, but what you will see is that for ulcerative colitis there's a grade two that is constantly coming up. On the bottom you will see the gray, that is the dead areas, and the other ones are different areas of the colon. And what you see is that at the end you will get a composite score of the total area of mucosa that has a score of two or a score of one or a score of three and so forth. Currently, the way we diagnose or enroll patients for our clinical studies is the highest score at one spot. Let's say you have a highest score of three in a segment of the descending colon; your total score is three, right? It doesn't matter if the rest of the colon is completely healed. So that is, I think, where the future is going. We are becoming more granular in how we depict our data and how we assess how a drug is performing in a patient and so forth. Again, this is another study that was published, I think, in 2020, looking at an ulcerative colitis inflammation score, and again, it has about a 90% accuracy and a 93% accuracy for assessment. Several of these studies have also looked at chromoendoscopy, advanced imaging technologies, and so forth for assessing disease severity. This is an example of radiomic data where they used CT enterography, and the ground truth was actually a resected segment of the colon, and they saw that AI actually performed much better than radiologists: radiologists were at 0.6 accuracy and AI performed at 0.8 accuracy in detecting the area of fibrosis. So a lot of research has been going on in tackling clinical trials. A lot of data shows that central reading is a very important piece in clinical trials, so optimizing that using AI has been one area of active interest. Several areas of interest have been in R&D for big pharma, in terms of finding new molecules and enrolling patients effectively into clinical trials and so forth, and there are several ways in which AI can help with this whole process. I'm not going to go over the complete details, but that's what we call semi-centralized or decentralized trials that are powered by artificial intelligence. Another big piece of how we move into the next generation of clinical trials is: are we selecting the right patients for our clinical trials? Are these algorithms being fair in terms of how we recruit patients for clinical trials? And I think AI is playing a major role in how these clinical trial recruitments happen. Several centers across the country, at least in the United States, have centralized data sets where you can actually pick and choose which patients would be appropriate for your clinical trial. Again, moving from the major part, which is the endoscopy, to actual clinical management, where is AI having a big impact? One is in treatment algorithm selection and clinical assessment; we talked about that in terms of histology, endoscopy, and radiology. And how does it impact lifestyle management?
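To make the contrast described above between today's "highest score at one spot" and an area-based composite concrete, here is a small illustrative sketch; the per-frame grades are invented, and this is not the workshop model itself.

# Illustrative only: conventional "highest grade anywhere" versus an area-weighted composite.
from collections import Counter

frame_grades = [0, 0, 1, 1, 1, 2, 1, 0, 3, 1, 0, 0]   # hypothetical per-frame Mayo grades

conventional_score = max(frame_grades)                  # one hotspot drives the whole score
counts = Counter(frame_grades)
composite = {g: round(counts[g] / len(frame_grades), 2) for g in sorted(counts)}

print(conventional_score)   # 3
print(composite)            # {0: 0.42, 1: 0.42, 2: 0.08, 3: 0.08} -> share of mucosa at each grade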
These are AI-powered applications or digital tools that we currently use for managing these patients. Another important buzzword, and I'm not going to go into details, we just don't have the time here today, is precision medicine. There is a lot of information and a lot of enthusiasm about precision medicine within inflammatory bowel disease. Is it prime time yet? Not yet. But there's a lot of genomic data, along with the clinical parameters and radiomics and vision data, that's being collected to kind of put all these together, what we call multimodal AI, to help us predict which patients will fare better or which patients will respond to a certain treatment and so forth. Another piece, and again I won't go into a lot of details: if you are reading the news and following this AI cycle, you will know that AI has a big role in drug development, and it basically shortens a major part of finding how a molecule will interact with a person's DNA or with the biome of a patient. That timeframe drops from five years to about one month when we use computational methods for finding which molecule is working versus not. That's the whole hype about AlphaFold, if you have ever heard about Google DeepMind's AlphaFold protein technology. All right, so in the future, I like this slide because it's very confusing: this is what our lives will look like as clinicians. We'll have dashboards from all aspects of the patients, including real-life data, which comes from sensors. You will see biomes, you will see radiomics, you will see vision, you will see histology, you will see proteomics, you will see genomics. And we as clinicians will be the people who collect all this and get the wisdom out of it to kind of help guide the patients. But to do all that, we need AI, because we cannot process this information; it's just very high-dimensional data. And that is where AI will help us understand and get us through some of these problems. Another point-of-care solution that currently exists, and this was a slide from more than five years ago, when we first envisioned what we call point-of-care solutions for the literature: medical literature doubles every three months currently, so for us to keep up with all that information, of course, we have postgraduate courses like these, but we cannot go through every single literature point. How does AI help with some of these? So there are several solutions where you can just type in and ask what the best management is for this patient within a particular body of guidelines, and then you should be able to actually retrieve that information, including clinical trial data. So AI will help with improved search, quantification of results, and meaningful data points as well. Several other companies are working from a digital health standpoint to kind of create dashboards for your IBD patients. We all know that getting these inflammatory bowel disease markers, vaccinations, the skin checks, and several other things is a tedious thing that we cannot coordinate effectively with our primary care colleagues. How do we optimize that, and how do we do that at scale for your entire hospital or system? The last piece I want to touch on, which ties in with AI, and again, AI is basically a tool that will help us get the wisdom from all the data points that we collect: more and more, we are trying to move care from hospitals and clinics to the patients at home, with home health at home.
So one of the things that a lot of people are looking at is how we use sensor data to help us understand if a patient will go into a flare or whether they have a flare currently, instead of them reaching us on a Friday evening on MyChart, us ordering a calprotectin, and getting them steroids and so forth; how do we predict some of these ahead of time? Some solutions that are in the pipeline right now are sweat sensors, skin sensors, and so forth. This is an example of a sweat sensor that is looking at CRP and its correlation with inflammation in patients with inflammatory bowel disease, and then you have a validation curve. Now, why did I show this slide? I know it's not prime time yet, but the role of gastroenterologists in this space is trying to connect this particular data to the ground truth. So if the ground truth is actual inflammation or an elevation of the calprotectin, how well does this sensor actually correlate with that information, and what kind of frameworks do we need to say this is something that I will use for my patients in clinical practice? Another example, and this just came out in Nature this month, is a temperature sensor. Of course, you can see it's definitely not prime time for humans, because that's a rat in that picture. So what they did was they embedded the sensor into the abdominal fat, and then tried to create algorithms from a computational standpoint, that's again AI, trying to predict at what time the temperature decreases, and whether that temperature decrease correlates with a flare of inflammatory bowel disease in this particular rat, which was actually in the terminal area. Again, another sensor example is your classic toilets; there are several computer vision algorithms that help with that as well. And I will stop there. I know there's a lot of information going on, but in conclusion, AI in inflammatory bowel disease is multi-pronged. Right now, we are seeing a lot of solutions in the endoscopy space, but there are several others in the risk prediction, diagnostics, and genomic space as well. Thank you for your attention. Thank you. All right, we'll move a little bit to how we use technology in pancreaticobiliary disease. Are we reaching the holy grail? Dr. Shyam Varadarajulu probably doesn't need a lot of introduction; most of you know him. He's at Orlando Health. So, Shyam. Good morning. Thank you, Nalini, Michelle, and Irving for this opportunity to speak. I've been asked to talk about the next technologies in pancreaticobiliary diseases, and it all really boils down once again to artificial intelligence, and these are my disclosures. So first, I thought I'll talk about AI in the bile duct, where most commonly we use it at ERCP, and second, what might be the use of artificial intelligence in endoscopic ultrasound. It's also very important to note that none of these technologies are commercially available, and they are not relevant to day-to-day clinical practice as of now. These are the publications that have come out on AI and ERCP. You can see that in the last one or two years, there has been a prolific number of publications in peer-reviewed journals. I thought I'll talk about a couple of things. One, how can AI help with interpretation of fluoroscopic images, because several of the companies are working on it. Two, can AI be used to predict clinical outcomes in patients during a procedure?
And finally, and most importantly, what is the use of this technology for analyzing cholangioscopic images, in other words, indeterminate biliary strictures? So when we do ERCPs, we always get these MRI reports or CT scan reports telling you there's a filling defect in the right intrahepatic duct on the anterior branch, or there's a stricture. And when you do an ERCP and perform a cholangiogram, it's very difficult to correlate that image with what we find on an MRI. And a lot of companies, including Omega Imaging and some of the new companies, want to integrate CT findings into the fluoroscopic technology before we perform the procedure so that we can target the correct duct and perform the relevant intervention, whether it is extracting a stone or placing a stent. But this remains a common problem because our existing fluoroscopic technology is not very accurate, and very often, even when we do simple things like stenting a duct, we may have a problem because of overlap of images. This is a video. This is now available in real time: assessing stricture length. When we do ERCPs, it's not uncommon to have a wrong assessment of a stricture's length and to place a stent that is either too long or too short. So in this particular case, you can see this is a patient with obstructive jaundice. You're performing a cholangiogram after passing a guide wire into the bile duct, and you see a stricture somewhere in the proximal half of the common bile duct extending into the liver hilum. This patient obviously will need a stent of appropriate length, and now we have got a technology that will allow you to measure the length of the stricture and thereby help you determine what stents can be placed. This is now available; not commercially available, but it's available in clinical practice. The second is managing bile duct stones. This is the ESGE guideline. As you know, when a stone is more than 15 millimeters, when there is a stone-duct size mismatch, and when the distal common bile duct is narrowed and so forth, there is difficulty in extracting the stone. The current recommendations from the societies include performing a sphincteroplasty, and then it's pretty much left to us to decide whether you want to do a single-operator cholangioscopy or perform a mechanical lithotripsy or some sort of laser treatment to fragment these stones. And if you don't have a good roadmap when you perform a procedure, the procedure can get very prolonged and it can be very cumbersome, and sometimes we have to perform a second ERCP to overcome this problem. This is a simple assessment. When you look at bile duct stones, there are really only two things that matter, starting with your stone-duct mismatch ratio. If the diameter of the stone is larger than the diameter of the distal duct, we have a problem extracting the stone and we will require additional technology. So let's play some videos here. In this particular case, this is a patient who presents with obstructive jaundice. You can see that the anatomy is a little complicated, and after you perform a cholangiogram, you see numerous stones. There are many ways you can manage these stones. We are trying to assess that stone-duct ratio in this particular patient, and then you realize that the stone is larger than the distal duct, so we are performing a sphincterotomy. And then after the sphincterotomy is done, you will see that we are attempting to fragment these stones, after performing a sphincteroplasty, using a mechanical lithotriptor.
But unfortunately, the mechanical lithotriptor cannot go past the stone because of the fulcrum effect. The stone compresses the mechanical lithotriptor, so it is not able to open completely to entrap the stone. So obviously, the default mechanism in this case will be to do a cholangioscopy with a laser. And in the second video, this is the same patient. Again, the scope position is a little difficult, but obviously we are just going straight in with a laser to fragment the stones, and the treatment is quite successful. Should we be wasting our time in these procedures going through alternate treatment modalities, or is there a prediction model where you can go straight to the treatment of appropriate choice, so that we are conserving our time and also limiting resource utilization? So this is a simple protocol that we usually adopt in our clinical practice, and for this, in some form, we use AI. If the stone-duct ratio is less than one, then you just need a simple ERCP with sphincterotomy and stone extraction. If the stone-duct ratio is more than one, obviously you are going to require some sort of stone fragmentation. If the ratio is exactly one, which means the size of the stone is equal to the size of the bile duct, a mechanical lithotriptor will work. But the larger the stone, and if the ratio is more than one, these patients will eventually require single-operator cholangioscopy, and you can see from the scatter plot that if you use this formula, you just don't go wrong: you have a ductal clearance of 99.9% for single-session stone clearance in the vast majority of these patients. The final wrap-up on the bile duct is the evaluation of indeterminate biliary strictures. You can see there are numerous studies and meta-analyses that have been published looking at how well we perform in assessing these strictures, but the first line on the table is important: digital single-operator cholangioscopy visual impression. If you look at it, your sensitivity is 95% and your accuracy is 94%. Below that, you've got several other options: you can perform biopsies, you can perform cytology, you can use FISH, you can perform EUS-FNA, but still, just a visual impression appears to be more accurate than all of those other technologies. So therefore, AI is all about the visual impression, and this could probably be the correct thing to do. There have been several classifications for biliary strictures, and people have come out with five different categories on how to evaluate a stricture, and each of them has its own accuracy in determining what a stricture could be. And a combination of these factors probably will yield the best prediction model. So if you use all the five features that were suggested and you develop your AI model for evaluation using single-operator cholangioscopy, you find a sensitivity of 95% and an accuracy of 95%, which again is very similar to the table that I had shown before, where visual impression appears to be more accurate than any other technology that is currently available. And this is another model where they tried to compare AI with forceps biopsy and brush cytology, and once again, you can see that the CNN model, based on just visual impression, is much, much more accurate than cytological or histological correlation. This is one other study.
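The stone-to-duct ratio triage described above can be captured in a few lines. This is a sketch of the speaker's rule of thumb, not a validated clinical decision tool, and the tolerance for a ratio "near one" is arbitrary.

# Sketch of the stone-to-duct ratio triage (illustrative, not a validated tool).
import math

def suggest_approach(stone_diameter_mm, distal_duct_diameter_mm):
    ratio = stone_diameter_mm / distal_duct_diameter_mm
    if math.isclose(ratio, 1.0, rel_tol=0.05):   # stone roughly matches the distal duct
        return "Mechanical lithotripsy is likely to succeed"
    if ratio < 1:                                 # stone smaller than the distal duct
        return "Sphincterotomy with standard stone extraction"
    return "Single-operator cholangioscopy with intraductal lithotripsy"   # ratio > 1

print(suggest_approach(10, 14))   # ratio < 1  -> standard extraction
print(suggest_approach(18, 12))   # ratio > 1  -> cholangioscopy-guided fragmentation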
This came out of the Hopkins group, where what they did is they took all the patient information, including patient demographics and clinical presentation, fed it into the machine, and then they thought, well, this is probably going to give an even better performance. I would have thought that when you combine all the clinical features along with the visual impression, your performance would be higher. It did not turn out to be so. The area under the curve was about 0.88, the negative predictive value was 0.61, and the sensitivity was about 0.88, which once again tells you that, at least in indeterminate biliary strictures, it looks like visual impression could be the way to go and you may not need anything else. And this is a meta-analysis, once again, of all the studies that have been done on indeterminate biliary strictures using AI, and you will find that the accuracy is anywhere between 90% and 95%, which is what we are getting if we go by visual impression on cholangioscopic images. When you combine clinical information, that number drops down to about 89%. So what do we do with it? The problem is none of our surgeons are going to go with AI and perform a Whipple procedure or a biliary reconstruction. Let's face it, at least in the United States, we still go by cytology and histology, and they want some sort of a tissue diagnosis. This is a randomized trial from our group. It was done a few years back, and you will see that in a good center, if you are going to perform three biopsies and use onsite assessment with a final cell block, you get an accuracy of 90%. All the AI approaches that we are talking about give you maybe, at best, a 5% edge. But again, the criticism is that this is probably from an expert center with a lot of experienced endoscopists and may not be applicable to somebody with less experience. But I think, if we are going to direct clinical decisions for biliary strictures, which means surgery and chemotherapy and so on, 95% may not cut it. We need to be somewhere at 99% for our surgeons to accept it, along with correlation with histology. We are not there yet. None of these things, once again, as I have told you, is commercially available, and they will need to be shown to be significantly better in a randomized clinical trial. So to wrap up this section on biliary diseases: I think the technology will potentially help us to access the desired duct; treatment outcomes can be improved by determining the complexity of what we are treating, whether it is a stricture or a stone; and finally, this is hopefully going to help us with determining the underlying nature of an indeterminate stricture, with the goal being that we have to be not just 90%, but close to 95% to 99% accurate in characterizing a stricture. Finally, I will move on to pancreatic lesions at EUS. I think AI can help with detecting a pancreatic lesion and, second, with quality improvement. And these are the publications on AI and EUS in the last few years; you can see we have probably had about 50 or 60 papers come out. Again, most of them are not performed in real time in patients. So, to provide a little background: when a tumor in the pancreas is small, less than 10 millimeters, you can see that cytology and histology are inadequate. People think you must combine this with pancreatic juice cytology in order to improve your operating characteristics.
This is a very old study from my group in Birmingham, but we have shown that in a patient with a pancreatic mass, if you have coexisting chronic pancreatitis, your diagnostic sensitivity is less than 75%. On the other hand, if you don't have chronic pancreatitis, it could be 90%, as long as you have rapid onsite assessment in the room. So if you have a small tumor in the pancreas, less than 15 millimeters or less than 10 millimeters, we don't perform very well at EUS, and if you have coexisting chronic pancreatitis in conjunction with the mass, then our performance is even worse. This is a new technology; it's called Pancreas. It has been developed through a grant from an institution in collaboration with a group in Eastern Europe, where AI is being used to look at EUS for detecting solid pancreatic masses, and we are now also trying to see what its role could be in evaluating cystic lesions in the pancreas. So this is your normal pancreas; it's going to appear green. It takes approximately 80 procedures for an endoscopist to learn to identify the normal pancreas, but if you're an endoscopist not trained in EUS and you have access to endoscopic ultrasound, at least you'll be able to recognize the organ. The second and third videos are solid tumors and cystic tumors: solid tumors appear red, cystic tumors appear blue. And this is a very new concept that we have been working with. We have done about 308 patients. There are two examiners in the room. One performs the procedure; the other is about three meters away at a different monitor, has no access to the conventional images, and is just interpreting the AI images. You can see that the groups are divided into three categories with an approximately equal number of patients, and a majority of patients are aged more than 65, as is typical for patients presenting with pancreatic complaints. So we looked at normal pancreas, pancreatic mass lesions, and cystic lesions. And you will see that the performance of conventional EUS compared to the AI EUS is quite similar across all sizes for pancreatic masses, whether less than 15 millimeters or more than three centimeters. And what we find is that AI is pretty good at detecting pancreatic mass lesions, and the smallest one we found was a five-millimeter adenocarcinoma. So for pancreatic masses, it looks like this works. For cyst lesions, we do have a problem. I think for large cysts, more than 15 millimeters, the technology is as good as an expert. But when the cysts are small, I think we have difficulty differentiating a main duct IPMN from a pancreatic cyst, because the duct is very dilated and it looks similar to a cyst. And unlike a CT or MRI, you don't get complete cross-sectional imaging with EUS; you get small views from time to time, and a dilated duct will look like a pancreatic cyst. Again, this could be one of the very early issues. This probably can be sorted out, but this is a major limitation that we are having in distinguishing a dilated duct with a focal dilation from a pancreatic cyst lesion. But otherwise, when the cyst is more than 15 millimeters, AI and an expert endosonographer appear quite identical. These are the operating characteristics for the pancreatic masses. Absolutely no problem: no matter what the size of the mass, I think AI is pretty much as good as an expert endoscopist. We missed only three lesions, many of them less than about 10 to 11 millimeters in size.
But for pancreatic cysts, as I outlined earlier, AI does have a problem evaluating cysts less than 15 millimeters. But again, the AGA guidelines will tell you that only when the cyst is more than three centimeters do you have to be concerned and then sample them, and for smaller cysts, we don't have to worry too much, because the risk of these being neoplastic is quite low. But nevertheless, when you build a technology, as I've told you earlier, it cannot just be 95%; it needs to be closer to 99%. The problem that we are having with all these studies is that the FDA is insisting that if you're going to call a lesion a lesion, then you need histological or cytological correlation, and we are finding it difficult to sample cysts as small as five millimeters to prove what they are. These are my last few slides. This is diagnosing solid mass lesions in the pancreas, and this is a study from China where they looked at multi-modal AI, which means they had clinical information in conjunction with what they were finding at imaging. And what you will find is that, for a senior expert endoscopist, AI really doesn't make much of a difference: they perform equally well. Also for senior endoscopists, people who have done more than 3,000 or 4,000 EUS procedures, the performance of AI is comparable to an expert. And finally, for a novice endoscopist, this is where the real difference is. These are people who have performed fewer than 500 cases, and across the cross-section, you can find that their performance when they use AI is quite similar to that of an expert. So, significantly, this technology can improve the learning curve and enhance the performance of beginners. This is a study from Mike Levy's group differentiating autoimmune pancreatitis from chronic pancreatitis. Once again, I need to caution that this is not done in real time; these are sequenced images that are run through a testing model. And what they find is that EUS is pretty good at differentiating the two, because I think autoimmune pancreatitis does look like a solid pancreatic mass lesion. And when you're not using AI, for an endoscopist without histological correlation, the sensitivity is only about 55%, whereas for a CNN model, it was almost twice as much. And finally, with EUS, if you don't do a good exam, it's the same as colonoscopy: if you don't reach the cecum, you will miss polyps, and if you don't evaluate all the parts of the pancreas, then you're not doing a good exam and you will miss a solid tumor. I mean, you can miss a solid mass lesion. So this is a study from China where they developed a model, and they wanted to compare the performance of the model with that of the endoscopist. This model will tell you whether you have performed a thorough examination or not, and you can see that the AI model helped identify all the parts of the pancreas, starting from the uncinate process all the way to the tail region, compared to a beginner or somebody with limited experience. So therefore, when you use this technology, you will be able to perform a more thorough examination, because you will be alerted to the fact that a complete examination has not been performed. Consequently, when you do a good exam, you will identify more lesions, and this technology can help improve your performance. So finally, this is my summary slide on pancreatic assessment at EUS. I think it will really help to identify solid mass lesions, particularly lesions as small as five millimeters, when performed by a non-expert.
It has the potential to improve the quality of the examination, because you will not miss any station. We still need to work on this technology for pancreatic cystic lesions; evaluating small cysts could be a problem. There is very limited data, if any, on differentiating a neoplastic from a non-neoplastic mass or cyst lesion, because that will require histological correlation, and we are not there yet. And finally, we need real-time assessments. This technology has been used with a limited number of processors and echoendoscopes; unlike colonoscopy AI, it is not broadly available. We have very limited data, and I think it is going to be about 10 years before we see this in prime-time commercial use. Thank you. That was fantastic. Thank you. So, I can't see because the lights are shining so brightly in my eyes, but I hope our next speaker is here. It is my pleasure and honor to introduce Dr. Prateek Sharma, Professor of Medicine at the University of Kansas in Kansas City and the current president of the ASGE. He will deliver the keynote address for this session of our meeting, How AI Will Shape the Future of Endoscopy. Thank you. Okay. Thank you very much, and good morning, everyone. So, Michelle, Nalini, and Irving, thank you for setting up this wonderful course. You can see the theme: so far you have been hearing the words artificial intelligence all along. I'll walk you through how this will be, and already is, part of our endoscopy practice. And our endoscopy practice is not just doing procedures. It starts before the procedure, with the scheduling of the patient and the pre-procedure input that needs to go in. Then you perform the procedure, and part of the procedure involves doing a high-quality examination besides detecting lesions. And then, just as importantly, there is the post-procedure follow-up: the note you need to write for the procedure you have just done, the instructions given to the patient, and how you schedule the patient for follow-up. So when we talk about our endoscopy practices, what we have heard so far is just one-third of it, which is the procedure itself. Let's start with how AI is affecting the initial part of our practice, prior to the procedure. You can use it to schedule your procedures by taking into account the patient profile, the physician profile, the procedure type, the post-procedure follow-up that may be required, and the current availability of the physician, and the software can suggest and schedule the procedure for you. Maybe it's a female patient with IBD who wants to see a male endoscopist with expertise in IBD. How does that happen? The scheduler doesn't know that, but this is how it can happen. And you are already seeing this: transforming colonoscopy scheduling with artificial intelligence. This is here; this can be done today. Chatbots can be applied prior to the procedure. All of us get these phone calls the day before from our patients: well, I'm having bloating after taking my first jug of prep, or after the tablets for the procedure. Now a lot of this can be answered by AI-powered chatbots that have been trained to give specific answers to very specific questions from the patient. These are not the FAQs you can find on the internet; these are AI-powered chatbots that can do that.
And then there is a study that looked at how good these chatbots are at answering patient queries compared with humans. This is not specific to gastroenterology but to medicine in general, and it compared licensed healthcare providers with chatbots. You can see that the quality rating of the chatbot answers was very similar to that of a human answering the patient. But more importantly, on the right, you can see that the empathy these chatbots provide is now very similar to what we provide as humans, and in certain cases maybe better, because after you've received the fifth phone call, maybe you're not as pleasant to the patient as the computer would be. So you can see why the empathy rating keeps getting better over time; these chatbots are really as human as they can be. Also in the pre-procedure setting, don't forget the power of large language models, which both the patient and the physician can use. This is something I did: I simply typed in, can you explain to me how to take a bowel prep for colonoscopy? And this is exactly what ChatGPT produced. This is version 3.5, the free version. So in our pre-procedure handouts, you can provide links to this, because in the vast majority of cases these instructions are reasonably good for our patients. This is something that is here, that can be used, and perhaps should be used. We recently wrote an article in Nature Reviews looking at these large language models, a form of generative artificial intelligence, to guide physicians on where they can be used in medicine. These are the areas where they can probably be used appropriately. The areas on the left are the concerns, and those on the right are what is needed for the future. And always remember that these models are not trained specifically on the medical literature, so please don't use them to look up guidelines or to make management decisions for your patients, at least not as of today. But a lot of the mundane tasks, both pre- and post-procedure, could probably be done using these large language models, which are available. So that's the pre-procedure aspect of endoscopy. Now let's look at screening, detection, quality, and so on. You've already heard about it: all of us, when we travel, especially internationally, encounter cameras and facial recognition, and this will very soon be in our endoscopy suites. As soon as the patient walks in, facial recognition pulls up their records automatically, and while you're sitting in your dictation room or your office, those charts are already coming up, and you can start reviewing their previous colonoscopy, their CT scan, their list of medications, and so on. You don't have to type the patient's name anymore; with facial recognition software, this will happen as soon as the patient checks in for endoscopy or for our outpatient clinics. So after the patient is checked in and you're ready for the procedure, why do we need AI? Let's look at some of the challenges we still face as endoscopists. These are miss rates for upper GI cancer across all EGDs, including experts and non-experts: seven studies, 122 endoscopists, so this is endoscopist-level data. The overall miss rate for upper endoscopy is 18%.
And if you compare the East to the West, Eastern endoscopists do a much better job of detecting upper GI lesions than Western endoscopists, and experts do a much better job than non-experts. So can we make all endoscopists Eastern endoscopists? That's one concept. Or can we make everybody an expert? That's another. Can we reduce the miss rate of the endoscopists who have higher miss rates? That's one area where computer vision, endoscopy plus AI, comes into play. Let's see an example in a patient with Barrett's esophagus undergoing AI-assisted endoscopy. You can see the segmentation in purple, and as you examine this segment carefully, a red heat map starts showing up on the left side of the screen. That's because there is a subtle, millimeter-sized area of high-grade dysplasia being detected by the software, which even some expert endoscopists could miss. So this is one concept of how, in the upper GI tract, we can increase our detection rate and thereby reduce our miss rate for upper GI neoplasia. The second concept is improving the quality of upper endoscopy. All of us take pride in doing a 60-second upper endoscopy, right? It's natural, because there's a colonoscopy waiting right after it. So here's something that will guide you to make sure you've examined all the segments of the upper GI tract. It automatically takes endoscopic pictures, so you don't have to worry about documenting the GE junction or the bulb or the pylorus. The column on the right gives you a checkbox: a checked box means you've examined that part of the stomach or esophagus; if it's unchecked, you still need to go back and examine that area. So it could be used as a training tool as we teach our fellows, making sure they examine the incisura, which is an important area for detecting early gastric cancer. Sometimes we forget to document important landmarks, so it will take automatic photos for you in case you've forgotten. So it's simply improving the quality of our endoscopic procedure, which I think is important. Let's move to the lower GI tract, and here is a study I found very eye-opening. This is a study that Doug Rex conducted back in the 1990s, introducing the concepts of adenoma detection and the adenoma miss rate. How do you measure that? By doing back-to-back colonoscopies. And we all know and remember this figure of 20%: there is an adenoma miss rate of 20%, right? That was back in the 1990s. A similar study looking at AI software was done a few months ago, in 2024. Nalini, what's the adenoma miss rate in that? 5%? 10%? Probably about the same. Yeah, it's probably about the same. So in 30 years, scopes have gotten better, and we think we've gotten better; at least I think I've gotten better, I can't speak for Michelle. But if you do these studies, they still show that we are probably still missing lesions. So have we made any progress during this time period? These are the data; I'm not making them up. So perhaps we do need something for the lower GI tract as well, just as we do for the upper GI tract.
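To make the arithmetic behind these tandem (back-to-back) studies concrete, here is a minimal sketch in Python, assuming the usual definition that the miss rate is the share of adenomas found only on the second pass. The counts and the function name are illustrative, not taken from the studies discussed.

# Minimal sketch of the adenoma miss rate computed from a tandem
# (back-to-back) colonoscopy study. Assumes the standard definition:
# miss rate = adenomas found only on the second pass / all adenomas
# found on either pass. The counts below are made up for illustration.

def adenoma_miss_rate(found_first_pass: int, found_second_pass_only: int) -> float:
    """Per-study miss rate from tandem colonoscopy counts."""
    total = found_first_pass + found_second_pass_only
    if total == 0:
        return 0.0
    return found_second_pass_only / total

# Example: 80 adenomas seen on the first exam and 20 more found only on the
# immediate repeat exam gives 20 / 100 = 20%, the figure quoted from the 1990s.
print(f"{adenoma_miss_rate(80, 20):.0%}")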
Looking at polyp detection during colonoscopy, here is one way to do it: you see these subtle lesions, and the software alerts you to the presence of a polyp in that area. It can characterize the polyp further, telling you whether it is hyperplastic; you can see the green heat map coming up, which indicates a hyperplastic polyp. That may help you leave it behind, but if it is in the right colon, it may be a sessile serrated lesion, so you do need to remove it; so characterization is there. And then you can get characterization along with sizing. Here on the left, you can see that it is telling you it is a neoplastic polyp, and it is also giving a size estimate of between six and nine millimeters, and we know that as endoscopists we are not very good at sizing. And here is a polyp that is less than five millimeters and non-neoplastic. So all of these tools could be helpful to us during endoscopy. Bowel preparation score: it is a quality metric; we need to follow it, and we should follow it. Unfortunately, at least the way I do it, after I'm done with the procedure and typing my report, I try to remember what the bowel prep was. Well, I think in the transverse colon it was a two; let's give it a two, right? This could be automated as well, and there is actually one software product, now FDA approved, that automates the Boston bowel preparation score. It can tell you in real time whether a segment is a zero, one, two, or three, and it can populate your note for you, so you don't have to rely on our fading memories. How well does it perform? Reasonably well. If you look at the data here, the overall accuracy of AI was 93%, better than even expert colonoscopists at 75%. Accuracy is important, but more importantly, this is one of those tasks that could be automated rather than us spending time trying to remember what the score was. Similar to the upper GI tract, quality metrics for the lower GI tract are now established and followed, and there are financial incentives tied to them. Landmarks: if you reach the landmark, the software will tell you, yes, you have reached the cecum, it is time to start withdrawing. And when you start withdrawing, on the left is the concept of a speedometer. It can tell you how fast or slow your withdrawal is: if you start withdrawing too fast, it goes toward the red; if you are too slow, it goes toward the green; and you want to be somewhere in between (a small sketch of this speed-zone idea follows at the end of this passage). It is also automatically capturing images for you. In the lower left corner, you can see in pink that the software has captured the landmark for you: you have reached the cecum, and now you are in the ascending colon. So it is almost like one of those guides. As you are withdrawing, if it does not see the lumen but starts seeing redness, which typically happens in July and August when you are proctoring the first-year fellows, you are always telling them, go to the lumen, show me the lumen, right? I still remember Dick Sampliner telling me, I don't see the lumen. So that is important: if the view is lost, it tells you the view is lost, please return. These are simple tasks, but a second voice, a second pair of eyes, doesn't hurt.
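As an aside on the withdrawal speedometer described above, here is a minimal sketch of the speed-to-zone mapping. It assumes a withdrawal speed estimate in centimeters per second and threshold values that are purely illustrative; a real system would infer the speed from the video feed rather than take it as an input.

# Minimal sketch of the "withdrawal speedometer" idea: map an estimated
# withdrawal speed to a colored feedback zone. The thresholds are
# illustrative assumptions, not values from any commercial product.

FAST_LIMIT_CM_PER_S = 0.5   # assumed upper bound before the gauge turns red
SLOW_LIMIT_CM_PER_S = 0.1   # assumed lower bound before the gauge turns green

def speed_zone(withdrawal_speed_cm_per_s: float) -> str:
    """Classify the current withdrawal speed into a feedback zone."""
    if withdrawal_speed_cm_per_s > FAST_LIMIT_CM_PER_S:
        return "red"      # withdrawing too fast, slow down
    if withdrawal_speed_cm_per_s < SLOW_LIMIT_CM_PER_S:
        return "green"    # very slow, per the speaker's description of the gauge
    return "target"       # somewhere in between, where you want to be

for speed in (0.05, 0.3, 0.8):
    print(f"{speed:.2f} cm/s -> {speed_zone(speed)}")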
And indeed, studies have shown that having a second person looking at the screen improves your adenoma detection rate, so we all know that effect. So that is the second piece of the pie, which is during endoscopy. How about post-endoscopy? We forget about this part, in which the note now needs to be written. That is one of the tasks we all dislike; we wish the note could be done while we were doing the procedure. And there is software now with which the note can be done. This is an example from, let's say, your visit prior to the procedure or a clinic visit: you are talking to the patient, and these commercially available ambient artificial intelligence tools are listening to your conversation. After the conversation is over, the software is smart enough to put it into a SOAP note format for you, and this can be done with endoscopy notes as well. It has generated the assessment and the plan, and the CPT and ICD-10 codes are all in there for you. So this type of AI is also available right now, and you can start using it in your practice. ADR measurement: all of us need to do it, it is a quality metric, but it is difficult. How do we do it? We employ a nurse; at least that is what I do in my unit. Every month or every three months, she or he goes in and pulls out the ADR. And how is it done? You look at the endoscopy report, then the pathology report. It is time consuming, and it requires an FTE. Here is a study that compared such manual retrieval with NLP, or natural language processing, retrieval. What is NLP? Simply put, it is a way for a computer to understand written or spoken language. They used two different AI approaches, shown in the non-blue bars; the blue bar is manual retrieval. What the study found was that for the PDR, the ADR, or any quality metric, the computer was as good as a human at pulling it out; the accuracy was very similar. So what was different between the two? NLP was able to extract the data in under 30 minutes for every single procedure done in the institution over 10 years. And what do you think the human took? Six to eight minutes per patient, so a random sample of 600 patients took 160 person-hours. Simple tasks like this should be automated; we don't need anybody to tell us how useful that is. I don't think any of us enjoy doing it, but more than the enjoyment, these are the mundane tasks that should be taken over by AI for us. The other piece is assigning surveillance intervals and making sure that our patients don't fall through the cracks. All of our practices handle this differently. You find five polyps, and the patient needs to come back in three years; if I asked all of you, we would have 10 different ways of handling that, whether by assigning five people to do it or a single person. Here is how it was done in this study: they took patients with upper GI conditions and trained the computer with rules stating that if there was no dysplasia, the patient needs to come back in three to five years; if there was low-grade dysplasia, within one year; and if there was high-grade dysplasia or cancer, the patient should come in for a resection. And they trained it based on the different types of algorithms shown here.
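Before looking at the results, here is a minimal sketch of the rule logic just described. The study itself trained a model against these labels; this sketch encodes only the rules, and the label strings, function name, and fallback behavior are illustrative assumptions rather than details from the study.

# Minimal sketch of the surveillance-rule logic described above for upper GI
# findings. Label names and the fallback are illustrative assumptions.

def surveillance_recommendation(worst_histology: str) -> str:
    """Map the worst histology on an exam to a follow-up action."""
    rules = {
        "no_dysplasia": "repeat endoscopy in 3-5 years",
        "low_grade_dysplasia": "repeat endoscopy within 1 year",
        "high_grade_dysplasia": "refer for endoscopic resection",
        "cancer": "refer for endoscopic resection",
    }
    try:
        return rules[worst_histology]
    except KeyError:
        # Anything the rules do not cover should go to a human for review
        # rather than silently dropping the patient from follow-up.
        return "flag for manual review"

print(surveillance_recommendation("low_grade_dysplasia"))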
Here are the intervals that were assigned, and when they tested it on the internal validation set, the computer correctly identified, 100% of the time, which patients needed surveillance, and the assignment of surveillance intervals was 99% accurate, with similar results on external validation. There are patients who need to come back within six months after a piecemeal EMR in the colon, but sometimes they don't, for different reasons. So how can we automate this process so that the patient keeps getting calls and letters and actually shows up, rather than showing up in five years with an advanced lesion? I think this will improve patient outcomes as well. So I've shown you all the great things, but why are we not using all of this today? Well, there are definite challenges. There are device limitations, and that is why not all of them can get approval: they need benchtop testing, initial pilot studies, and then randomized controlled trials, so there may be limitations for some of these devices. For real-world data related to NLP, there may be data access issues as well, which still happen. Most of these, if you want to use them as a medical device, need to undergo FDA approval, or in the EU, MDR approval, and that is sometimes difficult because the regulations today are not as nimble as they probably should be, so there are barriers. There may be misaligned incentives as well: the manufacturers may have some other motive, and the patient may want something entirely different, so we need to align all of those things. And of course, there are financial implications. I've shown you 10 software products that are available today. Can your institution afford them? Maybe the Mayo can afford them, but are they willing to buy them? Where is the ROI? That is what administrators want to see. So those are all barriers to implementation. So while we as endoscopists, researchers, and clinicians are busy taking care of patients, there needs to be a complementary movement on the innovation side, and also on the support side on the left, so that all of these issues can be resolved in a way that lets us start using these tools meaningfully tomorrow. So this is how I see GI practice under AI. On one side you have the endoscopy piece: you have optical biopsy, you can recognize lesions, you can reduce your cancer miss rate, there may be a cost-saving strategy with optical biopsy, and you can have improvement in your quality metrics along with automated documentation and automated endoscopy report writing. I think this is great, and it will help our practices. On the tech side of things, you have your endoscopy report, your pathology report, your EMR, and NLP, and I think that will allow us to improve the quality of care as well as provide more precision care to our patient population. So I've taken you through this entire circle of how we perform endoscopy, and I hope I've shown you that AI can improve each and every piece of how we practice today. So here is one way to look at it. This is how traffic reporting was done in 2009, right? You had the guy in the chopper.
If you were driving to work, you could listen, and the person in the helicopter would say, there's a jam on I-35, right? But now it happens like this. So this tells us that in 10 years, two people lost their jobs. We have to upgrade ourselves, and I think this is the time for artificial intelligence and endoscopy. Thank you very much for your attention. Can we get, oh, they are on. Okay. Can we get all the speakers back up to the table here so we can do some questions and answers? Again, if you joined late and want to ask a question, you can text it to 22333, starting with ASGE 2024, and you'll be able to send us any of your questions. Okay, the first question, which anybody on the panel could answer: could you explain why Olympus doesn't offer a method to measure the size of a polyp, since it is available with Fujinon? This continues to be a problem because of significant variability in polyp size estimation by gastroenterologists. Yeah, I can answer that. Even for polyp size determination, right now the software has to undergo FDA approval, so you have to submit it for regulatory approval, and the companies that want it simply have to submit. If a feature is not available, it's perhaps either because they don't have the data or they have not submitted it for approval. Do all of the major manufacturers have laser-equipped technology? No. The one from Fuji is based on a laser, and I think part of the reason they were able to get FDA approval is that they claimed they were not using artificial intelligence as part of the determination: the laser is just a marking tool, and the physician is still making the call, and I think that's how they were able to get approval. There are a couple of companies that have submitted to the FDA using AI, or the computer, to make that determination, and that is where the regulatory agencies need to settle whether we are okay with the machine making the call versus the human making the call. Dr. Shergill, did you want to add something? Yes. Olympus may or may not have the technology; when I met with them, they did not indicate that it was even going to be available. And Fuji, even though they have the FDA approval, are actually wondering whether there is a market for it. So it's a limited release, and they're really trying to understand whether this is going to sell in the U.S. If this is of interest, you should certainly stop by Fuji and let them know you're very interested, because they are pondering what the release should look like. Thank you. The next question: do endoscopists tend to over- or underestimate the adequacy of bowel prep compared with AI? In the study, the vast majority of the inaccuracies came from over-calling the bowel prep, meaning that a score of one or a score of two was recorded as a three. So endoscopists usually over-called it. That it was better than it actually was. Exactly. There's a follow-up to this, maybe from a different person, but closely related. When AI assesses bowel prep adequacy, how does it know to score after you've washed, since we should apply the score after cleansing, not before? How does the software know when to do the assessment? Two ways. One is that it's a real-time assessment.
So it will give you a score of 0 if there is stool completely obscuring your view. Once you start washing and are able to get that segment to a 3, the score changes to a 3 in real time, and the system is set to report the best score. So if at any point a segment of the colon reached a 3, that is the score that gets recorded for that segment. So I haven't upgraded myself yet, but if anybody wants to stand up and ask a question, that is perfectly fine; I will have to repeat it for you, because we have a large online audience as well. So: there's a 20% miss rate, which is obviously concerning for all of us as endoscopists. But are the polyps that are missed much smaller? Are they advanced adenomas? Because sometimes you look at a polyp and say it's hyperplastic, it's probably meaningless. So how do we as endoscopists deal with this and go to bed knowing that we haven't missed a cancer? If you look at even the contemporary AI studies on improvement in ADR or in adenoma miss rate, the majority of the lesions are small polyps; I think that's a well-known fact. But again, if you go by the guidelines and by the Doug Corley paper on ADR, we know that for every increase in the ADR there is a reduction in interval colorectal cancer rates. The same issue came up when CT colonography came along: the majority of the lesions missed by CTC were very small lesions, and do we really care about small lesions? As of today, we care about small lesions. It's part of our multi-society task force guidelines, and every lesion counts. So Dr. Parasa, I have a question for you. All of us have spoken so much about AI and all the wonderful technologies that are coming. Is it time for us to standardize these things? Yes, definitely. When we talk about AI, it's a broad term, and we just say it's AI with an accuracy of 95% and so forth. But the devil is in the details, right? Just like any other clinical study, we have to understand why this algorithm made this prediction, what data it was trained on, and what the composition of that data is. Does it make sense to apply this to our clinical patients, and so forth? For me, AI is just like any other tool, like any other statistical software: you need to understand those details, and coming up with frameworks for understanding them is the most important thing, and that's something the AI Institute is working on. So are there any benchmarks or guidance documents we could use? Because there are so many companies and so many software products coming, and your IT department will only let you add so many extra things to the computer, so it's a problem, right? We're telling everybody it's getting there, but how do we regulate it? How do we standardize it? And more importantly, if I can pick all of your brains for 10 seconds each in a rapid round: how do you go to your administrator and say, this is what I need? Right. I'll take the piece on evaluation, and in that respect, the ASGE task force, now the ASGE AI Institute, has put out guidance in the form of frameworks. It's a simple model card that I highly encourage all of you to take a look at.
It has questions to ask whenever you're reviewing any AI software you want to bring into your practice: how was the training data assembled, what version is it, has it been tested on real-world data, how does it perform post-market, and several other important clinical considerations for evaluating an algorithm. The second part was how you get this into practice. Once you are convinced that it is going to standardize practice across all your endoscopists and clinicians, and you see value in what it brings, that's when you approach your administrators, and administrators these days are very well versed in AI technologies. The only issue is that there is a barrage of these technologies; administrators are being approached from every subspecialty and from the administrative side as well, so you need to make your case for why this is important and how it is going to improve your performance, patient care, and return on investment. Yeah, I think we've heard this term a few times now, return on investment. It is going to come down to convincing administration that it really is worth the investment. When it comes to something like colonoscopy and miss rate, you can imagine using some kind of distal attachment, either a cap or a balloon that flattens the folds and really exposes all the mucosa, and pairing that with AI that helps detect these lesions, and then future iterations of AI that help us diagnose and decide what needs to be resected. If you can decrease procedure time while maintaining or even increasing ADR, combining that efficiency with quality is, I think, going to be key to the return on investment. Shyam, how do you explain the ROI to your administrators? I'm from a community health system, so resource allocation is very important, and in the big picture I don't think endoscopy is a priority for the health system. The priority for them is radiology: they don't want the radiologist to miss a big cancer, which happens all the time, so they want AI there. They want AI in the primary care clinics for dictation and notes. They want AI to pick up cases for referral to cardiology if there's chest pain or an EKG abnormality. We are looking at small polyps. The only argument I have found useful is to tell them, well, if you detect more polyps and send them for histology, there is downstream revenue, and so on. Unless it is really commercialized and there is a lot of data, it is going to be difficult to get AI into endoscopic practice uniformly. There might be some high-end practices, there might be some academic institutions, but for community health systems, where the majority of us practice, the priority for administration is way down the list unless there is data and the product is commercially available. So, Dr. Shergill, I'm in the market for a scope. Should I wait another two years, since you just showed that a lot of changes are happening? The Olympus and Pentax scopes were just updated, but there are certainly a lot of similarities between scope platforms. One thing I took away from this is that leasing scopes may be the way to go, just so that you actually have the option to switch.
Sometimes we get locked into scopes just because of the processors we already own, especially if you have multiple generations of the same scope; that can be difficult. But if you lease a scope, you can always have the latest and greatest. So, one quick question, and then I have a question for the group. This one came in from the audience, and I think, Prateek, this is for you: what is the name of the AI software that summarizes a patient encounter for the HPI? The one that is available, I think, is DAX; that was the software I showed. Sravanthi, which one do you use? The market is flooded with these, but we use a Microsoft product called Nuance DAX, which was one of the first ones on the market. There are several others; the second one that is popular, or getting popular, is Abridge. And then there is Suki. Suki is actually a Seattle-based tech company that recently partnered with Zoom, and since a lot of telehealth runs through Zoom, it is integrated there. In general, depending on your practice, some of those decisions are made by administrators at the hospital level; if it's an independent practice, you can just choose which one you want. And there is software for nurses as well. Starting with this may be the easiest way to bring AI into your practice, because it can be applied uniformly across all subspecialties. The DAX ones? Yeah, exactly, or any of these; DAX and Abridge are probably the most common. We don't want to promote any particular one, just to say that you can use them across all subspecialties, so it's not just for GI. And it's DAX? Yes, it's actually in the Epic upgrade. Oh, okay, great. Okay, so this is for anybody who wants to take it on; maybe everybody will. One of the things I've been hearing in the lay press is how much impact AI is going to have on the environment because of its energy needs. Can the panel comment on that? Have you thought about it? What are the solutions? Yeah. Whenever you're using all that compute and those GPU resources that are widely available, one thing everybody thinks about is the carbon footprint. A lot of companies, and it's mainly big tech, are looking at how to build data centers that are carbon neutral, and similarly, hospitals are looking at being carbon neutral with respect to waste. In general, when a technology first starts, the compute demand is high, and that generates a lot of carbon dioxide. But the technology is now moving from large language models to small language models and edge devices, which take the knowledge and reasoning from the large language models but can still run on your phone. So there are technology trends that will also reduce the carbon footprint in the future. So, Shyam, in Europe and Asia you can get contrast-enhanced EUS, and now you're showing AI doing a similar thing. We still don't have contrast-enhanced EUS 15 years after the rest of the world adopted it. How long do you think it will take for what you've shown to come into practice? It's going to be very long. I still remember, in Calcutta, maybe in 2012, seeing an Olympus program for colon polyp detection running in real time. That was 2012, and 13 years later we still don't have it in the United States. I think many of these are investigator initiated.
If the hospital is willing to fund it, we can do some studies. But to get this FDA approved, we went through that process, and it was an incredibly painful process to get it through the FDA. As I told you earlier, I don't think we are going to see AI in EUS in practice for the next 10 years. I just want everybody to know that while it sounds exciting, and some of these tools initially look very promising, it takes a long time for them to get into practice. So you don't quite need to worry about your jobs yet. I think that's it; we'll stop a couple of minutes early. Just let me repeat the question, because I'm not sure everybody could hear it, and I know the people at home can't hear what Dr. Keswani just asked. The question is: will we need to update guidelines to incorporate the use of AI and bring us up to speed to 2024, given the advances that have happened, specifically things like polyp sizing and the accuracy of bowel prep assessment? Of course, the short answer is yes. In terms of timing, I think the first document from the AI Institute will be on detection, because that's what is available, right? As for your question about metrics, within the Institute we will first have to define what the AI-based metrics are and then produce a document on them. So it will happen as each of these becomes available: the CADe software is available right now, so the first document will be on CADe, and then, as we define AI-based metrics for a procedure, those documents will be produced. All right, if there are no further questions, we'll take a short break for 20 minutes and reassemble here at 10 a.m. sharp. The speakers will be around, so if you have any questions, they'll be happy to answer them. Thank you.
Video Summary
The video features a comprehensive postgraduate course on gastrointestinal endoscopy hosted by the American Society for Gastrointestinal Endoscopy. Led by Dr. Michelle Anderson, Dr. Nalini Gupta, and Dr. Irving Waxman, the course is titled "Everyday Endoscopy: Expertise Meets Evidence." Its objective is to explore advances in endoscopic technologies and their clinical applications.

Key presentations include Dr. Amandeep Shergill's review of endoscope technology advances such as improved imaging, chromoendoscopy, and ergonomic enhancements, innovations that aim to optimize procedure efficiency and practitioner health. Dr. Sravanthi Parasa discusses the evolving role of artificial intelligence (AI) in inflammatory bowel disease, particularly in predictive diagnostics and treatment response monitoring. Dr. Shyam Varadarajulu highlights AI applications in pancreaticobiliary disease and endoscopic ultrasound (EUS), noting its potential for lesion detection and quality enhancement, though its clinical integration remains distant.

Dr. Prateek Sharma's keynote explores AI's transformative role across the entire endoscopy workflow, from scheduling to quality improvement and automated documentation. He emphasizes AI's potential to improve detection rates, automate bowel prep assessment, and streamline post-procedure tasks, despite implementation challenges.

Panel discussions address the need for AI standardization and regulatory processes before widespread clinical adoption, as well as the potential environmental impact of AI's energy consumption. Finally, the session identifies the need for updated guidelines that incorporate AI advances to improve clinical practice outcomes.
Keywords
gastrointestinal endoscopy
American Society for Gastrointestinal Endoscopy
Dr. Michelle Anderson
endoscopic technologies
Artificial Intelligence
inflammatory bowel disease
pancreaticobiliary diseases
Endoscopic Ultrasound
Dr. Prateek Sharma
AI integration
procedure efficiency
predictive diagnostics
regulatory processes
clinical practice outcomes