Plug and Play, Computer Vision Devices, Motility and Capsule
Video Transcription
For Session 3, our rapid-fire session showcasing what is here and now, I'd like to introduce our moderators, Shyam Thakkar and Helmut Messmann. Shyam is a professor of medicine and the director of advanced endoscopy at West Virginia University. Helmut Messmann is the director of the medical department at the University Hospital Augsburg, Germany, president-elect of the ESGE, and secretary of the German Society of Endoscopy. Thank you, Helmut and Shyam.

Thank you, Mike. It's my pleasure to introduce the next talk, Plug and Play, Computer Vision Devices, Motility and Capsule, by Dr. Seth Gross. As many of you may know, Dr. Seth Gross is a professor of medicine at the NYU Grossman School of Medicine and clinical chief of the Division of Gastroenterology and Hepatology at NYU Langone Health. Dr. Gross specializes in advanced endoscopic procedures. His clinical practice focuses on the prevention, diagnosis, and treatment of gastrointestinal precancerous conditions and cancers, such as esophageal cancer, colorectal cancer, and pancreatic cancer. His research interests lie in gastrointestinal malignancies as well as quality and innovation in endoscopy. Welcome, Seth.

I'd like to thank the course organizers and the ESGE for giving me the opportunity to present at the second annual Artificial Intelligence Summit. My talk today is Plug and Play, Computer Vision Devices, Motility and Capsule. Over the next several minutes, I'm going to cover some of the plug-and-play AI options, whether there is a role for artificial intelligence in GI motility, and whether artificial intelligence has helped us in reading capsule endoscopy.

When we think of plug-and-play devices, none are currently approved by the FDA for the United States, but they are commercially available in Europe, and we see two of them right here: one is GI Genius, on the left, and the other is EndoAid. The concept is that these devices plug into your processor or the back of your endoscopy monitor, giving you artificial intelligence support while you're doing a screening or surveillance colonoscopy. That's the main area where this is being used today. I do expect future iterations to cover other areas of the luminal GI tract and even go beyond that, and I will show an example in Barrett's esophagus.

When we think of the workflow, we have the endoscopy system connected to an artificial intelligence computer processor or module. The image is captured and then processed by the machine-learning software, and all the way to the right you see that green circle, which cues the endoscopist that there could be a polyp there. The endoscopist then needs to interpret whether this is a true finding and, if it is a polyp, decide on the best modality to remove it.

This is a typical setup. Right now, most of these systems are not on a single monitor, so you have your regular endoscopy monitor, in this case on your right, and your artificial intelligence monitor on your left. Again, you see a yellow square box marking where a potential polyp was identified during this colonoscopy.

So where will plug and play fit in, and what will it look like? How will it potentially help us in both upper endoscopy and colonoscopy?
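That capture, detect, and cue loop can be sketched in a few lines of code. What follows is a minimal, hypothetical illustration of the workflow just described: a stub stands in for the vendor's CNN, and the box format, threshold, and function names are all assumptions rather than the actual GI Genius or EndoAid interfaces.

```python
# Minimal sketch of the plug-and-play frame loop described above.
# detect_polyps() is a stub standing in for the vendor's CNN inference;
# the Detection fields and the 0.5 threshold are illustrative assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    x: int          # bounding-box origin, pixels
    y: int
    w: int          # bounding-box size, pixels
    h: int
    score: float    # model confidence, 0..1

def detect_polyps(frame) -> List[Detection]:
    """Placeholder for the AI module's per-frame inference call."""
    return [Detection(x=120, y=80, w=60, h=60, score=0.93)]  # stub output

def overlay_cues(frame, detections: List[Detection], threshold: float = 0.5):
    """Cue the endoscopist for each confident detection; the endoscopist
    still decides whether the finding is a true polyp."""
    for d in detections:
        if d.score >= threshold:
            print(f"cue: possible polyp at ({d.x},{d.y}), "
                  f"{d.w}x{d.h} px, confidence {d.score:.2f}")

for frame in range(3):  # stands in for the live endoscopy video feed
    overlay_cues(frame, detect_polyps(frame))
```

The key design point is the last step: the system only overlays a cue, and the interpretation and removal decisions stay with the endoscopist.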
And could we be at a point in the future where this supports the entire procedure, not only improving our ability to diagnose abnormalities in the luminal GI tract but helping the procedure as a whole?

This is an example of artificial intelligence from Docbot during surveillance for Barrett's esophagus. Just to orient you, we're looking at the distal esophagus, and you see a clear segment of Barrett's esophagus. One of our goals when evaluating a patient like this is to identify any evidence of dysplasia. What we're going to see is the scope move closer to that Barrett's segment and start to analyze the mucosal vascular and pit patterns using narrow-band imaging. Now we see that green box, highlighting an area in this segment of Barrett's esophagus with high suspicion for dysplasia. This is really important, because being able to target biopsies and identify the highest degree of dysplasia in a segment of Barrett's esophagus, if dysplasia exists, will certainly impact our risk stratification. We would also likely offer endoscopic therapy to someone who has dysplasia, especially high-grade dysplasia, whether endoscopic mucosal resection or ablation. So the artificial intelligence system identifies the area that is most abnormal, and this could certainly help the endoscopist in the future when sampling and evaluating a patient with long-segment Barrett's esophagus.

But most of the work we're seeing today is in colonoscopy. What would a fully integrated colonoscopy look like? Here you see that the AI records the start time of the procedure at the level of the anus. It identifies bowel preparation; here it's a Boston Bowel Prep score of one. Could we move it to the green part of that bar on the left? With washing and irrigation, the system changes it to a two, and when the segment is fully cleaned, it gives it a three. That's really our goal, because that puts us in the best position to identify polyps. Imagine that when you reach the cecum you get cues that the key landmarks are found, such as the appendiceal orifice with the purple box and the ileocecal valve with the yellow box. It tells you the start time and the time to cecum, which in this case was a little over three minutes. It identifies a polyp, with that green box around this flat polyp, gives you a rough size of 12 millimeters in the upper left corner, and even gives us a polyp classification of Is. Another possibility we're seeing with machine learning is that, after finding the polyp, it can give us a real-time histopathology interpretation: a type 1 would be a hyperplastic polyp based on the NICE classification, and a type 2 would be an adenomatous polyp, and the confidence here is 100%. When tools are put out to remove polyps, in this particular scenario you see a flat polyp with the green box, a snare is deployed, and it tells you the type of snare in the right upper corner as well as the size of the polyp. The AI records the last image, so you actually have your complete withdrawal time.

This is what it would look like if we put it all together. You're in the cecum, then in narrow-band imaging there's a box around a polyp with high confidence about what this polyp might be, and it gives you the Boston Bowel Prep score of three in this segment. Forceps come out, and the system labels the tool as forceps, telling us what we're using. It could even pick up the tips of the cuff. So it's pretty amazing that we could be in a position where the whole procedure is completely mapped out for us: our insertion time, our withdrawal time, the quality of our bowel preparation, confirmation that we reached the cecum with the appropriate landmarks, and, when we do encounter polyps, the real-time histopathology, adenomatous versus hyperplastic, and the tools we are using.
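One way to picture all of those integrated outputs is as a single structured procedure record. The sketch below is purely illustrative; the class names, fields, and example values are assumptions, not any vendor's actual reporting schema.

```python
# Hypothetical structured record aggregating the AI outputs described above.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PolypFinding:
    size_mm: int              # AI-estimated size
    paris_class: str          # e.g. "Is" for a sessile polyp
    nice_type: int            # NICE 1 = hyperplastic, NICE 2 = adenomatous
    confidence: float         # model confidence, 0..1
    removal_tool: Optional[str] = None   # e.g. "snare", "forceps"

@dataclass
class ColonoscopyRecord:
    time_to_cecum_s: Optional[int] = None
    withdrawal_time_s: Optional[int] = None
    landmarks_seen: List[str] = field(default_factory=list)
    boston_segment_scores: List[int] = field(default_factory=list)  # 0-3 each
    polyps: List[PolypFinding] = field(default_factory=list)

# Example values loosely mirroring the demo in the talk.
record = ColonoscopyRecord(
    time_to_cecum_s=190,   # "a little over three minutes"
    landmarks_seen=["appendiceal orifice", "ileocecal valve"],
    boston_segment_scores=[3, 3, 3],
    polyps=[PolypFinding(size_mm=12, paris_class="Is", nice_type=2,
                         confidence=1.0, removal_tool="snare")],
)
print(record)
```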
Now we're going to shift gears and talk about machine learning in the area of motility. The functional luminal imaging probe, which many of you know as FLIP, uses high-resolution impedance planimetry to measure luminal dimensions. FLIP detects esophageal contractions, which correlate with motility disorders; the main one being studied is achalasia. There are different types of achalasia, types I through III, characterized by LES relaxation and by whether there are contractions within the esophageal lumen, and determining the type matters because it can dictate the long-term management for the patient.

So this was a study of 180 patients with the different types of achalasia, types I through III: 140 were in the training set and 40 were in the test cohort. The goal was to see whether the artificial intelligence model could identify spastic type III achalasia, which it did 90% of the time, and non-spastic types I and II, which it did 78% of the time, really just by identifying the image characteristics that the FLIP generates. This slide shows the FLIP image above a high-resolution manometry image for each type of achalasia, types I through III; the machine-learning model looks at those FLIP images to help the person interpreting the test determine what type of achalasia they're dealing with.

Certainly this is a preliminary model, but it goes to what I mentioned earlier: even though the main focus of machine learning in gastroenterology and endoscopy has been on identifying polyps and interpreting their histopathology, we're starting to see expansion beyond the colon. We saw that with Barrett's esophagus, and now we're seeing it with motility testing. I think there's going to be more to come in this area and many others.
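The 90% and 78% figures quoted for the FLIP model are per-class recognition rates. As a small, purely illustrative sketch, assuming made-up prediction pairs rather than the study's actual data, the metric can be computed like this:

```python
from collections import defaultdict

def per_class_accuracy(pairs):
    """Fraction of cases of each true class the model labeled correctly."""
    correct, total = defaultdict(int), defaultdict(int)
    for truth, pred in pairs:
        total[truth] += 1
        correct[truth] += int(truth == pred)
    return {c: correct[c] / total[c] for c in total}

# Hypothetical test-cohort predictions for achalasia types I-III,
# invented so the rates land near the talk's reported figures.
test_pairs = (
    [("III", "III")] * 9 + [("III", "I")] * 1 +   # spastic: 9/10 = 90%
    [("I", "I")] * 8 + [("I", "II")] * 2 +        # type I:  8/10 = 80%
    [("II", "II")] * 8 + [("II", "III")] * 2      # type II: 8/10 = 80%
)
print(per_class_accuracy(test_pairs))
```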
The last area I want to talk about today is capsule endoscopy and artificial intelligence. Capsule endoscopy has been around since 2001 and was really groundbreaking: the black box of the small bowel no longer exists, and it has really helped physicians and patients with unexplained symptoms, most commonly occult gastrointestinal bleeding. There have been advances in the software for capsule endoscopy, such as a suspected blood indicator, with a sensitivity of 60%, to flag active bleeding during the capsule study; an adaptive frame rate to improve resolution; and quick view. Remember, there are 50,000 to 60,000 frames taken during a capsule study, and quick view attempts to identify the top 10% where pathology could be present. But keep in mind that readers are not perfect. Sometimes things show up in only a single frame and can be missed, and we also have a limited attention span. So could we improve on what we're doing with capsule endoscopy to improve our diagnostic yield? One of the most common findings is angioectasias; this is probably the main finding during a capsule study.

And could computer-aided detection improve our lesion detection for angioectasias? There was a study looking at capsule videos, with 20,000 normal images and close to 3,000 with vascular lesions, and the question was whether machine learning could identify angioectasias. For the 300 images evaluated, the sensitivity was 100%, specificity 96%, positive predictive value 96%, and negative predictive value 100% for identifying these lesions and making sure that normal images are not deemed to show angioectasias. As for how this is done: the original image is all the way to the left, where you see classic angioectasias. These images are annotated and then fed to a CNN platform; the lesions are marked, and the CNN computer-aided detection is then able to identify them on the original image. When you compare the left side of the screen with the right side, these angioectasias are picked up.

The other type of lesion we find in capsule endoscopy is, I think, actually more challenging. It's certainly easier to see flat red spots or angioectasias in the lumen of the small bowel; but the small bowel is always moving, contracting with its normal motility, and sometimes normal folds of the small bowel can be confused with polyps. The types of protruding lesions we see range from normal variants to polyps, nodules, epithelial tumors, submucosal tumors, and venous structures. Again, sometimes these are found on only a single frame and can be missed, or we see something but can't figure out whether it's real pathology that would lead to deep enteroscopy or something the patient or the physician should not worry about.

So there was a trial looking at 30,000 images of protruding lesions in 292 patients. These were annotated by endoscopists for a CNN system: 7,500 had protruding lesions, and there were 10,000 normal images in 93 patients. The whole goal, again, is to classify these polyps more accurately. Toward the bottom of the screen you can see a protruding lesion, with a box from the computer-aided detection software suggesting that a protruding lesion is present; that one is quite obvious. So how accurate was the convolutional neural network at identifying these lesions? It detected protruding lesions 90% of the time, and it also identified normal mucosa: the sensitivity was 90%, and the specificity was lower, just under 80%.

This is just an example of the different types of lesions detected by artificial intelligence. On the left you have polyps, followed by nodules, followed by epithelial lesions. Submucosal lesions can certainly be the most challenging, because they can also represent a normal fold, and the motility of the small bowel at the time those images are captured can make it confusing for the person reading. And lastly, venous structures, all the way on the right.

The challenge we have to overcome for better accuracy is to eliminate both false positives and false negatives with machine learning. False negatives can relate to a color pattern similar to the surrounding normal mucosa, small lesion size, poor demarcation due to darkness, debris, or foam, or simply poor focus from the camera when capturing the images. False positives, the area we really want to improve on, mainly come from normal mucosa, foam, debris, and vascular dilatation.
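Sensitivity, specificity, PPV, and NPV, the numbers both capsule studies report, all fall out of a single confusion matrix. Here is a brief sketch; the counts used are illustrative assumptions chosen only to roughly reproduce the angioectasia study's reported values, not the study's actual data.

```python
# Standard confusion-matrix metrics; the counts below are illustrative
# assumptions, not the study's actual data.
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),  # lesion frames correctly flagged
        "specificity": tn / (tn + fp),  # normal frames correctly passed
        "ppv":         tp / (tp + fp),  # flagged frames with real lesions
        "npv":         tn / (tn + fn),  # passed frames that are truly normal
    }

# e.g. 300 lesion frames all detected, 12 of 300 normal frames falsely flagged
print(diagnostic_metrics(tp=300, fp=12, fn=0, tn=288))
# -> sensitivity 1.00, specificity 0.96, ppv ~0.96, npv 1.00
```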
But keep in mind, this is a tremendous step forward: having machine learning and computer-aided detection help us better evaluate the small bowel, which we know is quite long, 18 to 20 feet, with, again, 50,000 to 60,000 frames that a reader has to review.

So to summarize: for plug-and-play devices, the goal is easy integration with the current endoscopy processor, and I think having this on a single monitor in the future, rather than our regular monitor plus a separate computer-aided detection monitor, will make it easier for us. We're starting to see the indications for machine learning in gastroenterology and endoscopy expand, into motility with the interpretation of FLIP images, and AI shows tremendous promise in the small bowel. I think we're going to see expansion into inflammatory bowel disease and endoscopic ultrasound. We're really just at the beginning of how artificial intelligence can help us in both gastroenterology and endoscopy. Thank you very much for your time.
Video Summary
In this video, Dr. Seth Gross discusses the use of artificial intelligence (AI) in gastroenterology and endoscopy. He begins by explaining the concept of plug-and-play devices, which can provide AI support during screening and surveillance colonoscopies. He mentions two commercially available devices, GI Genius and EndoAid, which can be integrated into the endoscopy system to aid in the diagnosis of polyps. Dr. Gross then demonstrates how AI can assist in various aspects of the colonoscopy procedure, such as identifying landmarks, detecting polyps, providing real-time histopathology, and suggesting appropriate tools for polyp removal.

Next, he explores the potential applications of AI in motility testing, specifically in identifying different types of achalasia based on esophageal contractions. He also discusses the use of AI in capsule endoscopy, where it can improve the detection of angioectasias and classify protruding lesions more accurately. Dr. Gross emphasizes the promise of AI in gastroenterology and endoscopy, predicting its expansion into inflammatory bowel disease and endoscopic ultrasound. He concludes by expressing that the field is only beginning to explore the full potential of AI.
Asset Subtitle
Seth Gross, MD, FASGE
Keywords
artificial intelligence
gastroenterology
endoscopy
polyp detection
motility testing