Current applications of AI in Gastroenterology
Video Transcription
The next part of my talk relates to the clinical applications of AI. Tyler has already laid the groundwork for this in his talk, so there may be a little overlap in some of the data we share, but let's see how the concepts he introduced translate into applications of AI in gastroenterology, and specifically in endoscopy. So, AI in medicine: if you look at the expenditure of the US healthcare system, it is about $4 trillion every year, and a lot of effort is now being put into digital health, with roughly an 80% increase in digital health expenditure over the last decade or so. From our perspective, most of the time we look only at the clinical applications, everything from imaging to patient intake to in-hospital care of the patient. But there is another aspect of healthcare expenditure, which is operations and data infrastructure, and a lot of AI effort is being directed there as well; we'll discuss that a little bit too. So just keep in mind that we sometimes lose sight of the administrative aspect of healthcare, but that is also where artificial intelligence can play a big role. Now, if you look at the applications of artificial intelligence in gastroenterology, they are listed right here: computer vision in endoscopy, which we have already heard about; quality assessment, meaning how AI can impact the quality of the care we provide to the patient; natural language processing, which gets into documentation and automation of reports; and finally, clinical care coordination and education, of both our patients and our trainees. There is a whole slew of applications here, and I'll try to spend a little time on each one, highlighting the areas where applications have already started. The first is computer-aided detection, or CADe, and this is one of the RCTs I'm sharing with you: patients undergoing colonoscopy for screening or surveillance were randomized to high-definition colonoscopy either with or without AI. The device used here was the GI Genius, and this is the output you see from the machine. With adenoma detection rate (ADR) as the primary outcome, the data showed a significant difference between the AI and non-AI arms, and the difference was significant for adenomas per patient as well. Importantly, they found a significant difference between the two arms both for diminutive polyps and for large adenomas, which has not been shown in other randomized controlled trials. This was a European study published last year in Gastroenterology. And so this led to the first AI device being approved in the U.S.
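To make these two endpoints concrete, here is a minimal sketch in Python of how an adenoma detection rate and adenomas per colonoscopy are typically computed from per-procedure counts; the data structure, field names, and numbers below are purely illustrative and are not taken from the trial.

```python
from dataclasses import dataclass

@dataclass
class Procedure:
    """One screening or surveillance colonoscopy (illustrative fields)."""
    patient_id: str
    adenomas_found: int  # histologically confirmed adenomas in this exam

def adenoma_detection_rate(procedures):
    """ADR: fraction of procedures with at least one adenoma detected."""
    if not procedures:
        return 0.0
    return sum(p.adenomas_found > 0 for p in procedures) / len(procedures)

def adenomas_per_colonoscopy(procedures):
    """APC: mean number of adenomas detected per procedure."""
    if not procedures:
        return 0.0
    return sum(p.adenomas_found for p in procedures) / len(procedures)

# Toy example: 2 of 4 exams had at least one adenoma -> ADR 0.50, APC 1.0
exams = [Procedure("a", 0), Procedure("b", 2), Procedure("c", 0), Procedure("d", 2)]
print(adenoma_detection_rate(exams), adenomas_per_colonoscopy(exams))
```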
in April of 2021. There are several devices already approved in Asia and in Europe, but at least for us in the U.S., this was the first. And this is just the start, not the end: a number of other CADe devices are being tested right now. That also takes us to the next step, which other companies are looking at: computer-aided diagnosis, or CADx. This may be characterization of neoplastic polyps or of neoplastic Barrett's esophagus, but also assessment of the depth of invasion of a cancer or delineation of lesions, which are also important issues to address. Here is one way such a device works: it analyzes whether the particular polyp seen on the left is a tubular adenoma or a hyperplastic polyp. The polyp on the left was an adenoma. Now this is a diminutive colorectal polyp in the sigmoid colon, and again, the software is trying to characterize it and figure out whether it is an adenoma or a non-adenoma. So what is the evidence that this works? This is actually one of the earlier studies, from two years ago, by Michael Byrne and his colleagues, looking at real-time characterization of polyps using the NICE classification, with high-definition white-light endoscopy followed by narrow-band imaging with near focus. Here you can see the initial training and validation sets that were used, and what the output looks like: it gives you the probability, according to the NICE classification, of a polyp being an adenoma or a non-adenoma. And this is the diagnostic performance of the algorithm in this setting, with a negative predictive value of 97% for differentiating adenomas from hyperplastic polyps. So that is one application of AI in endoscopy: characterization. The other is having the algorithm look at images and differentiate whether there is deep submucosal invasion, that is, SM2 or deeper, and here is the probability score, which goes from zero to one. The closer you get to one, the higher the probability of this being an invasive cancer. I think this would be extremely important when you are performing endoscopy with the intent of resection, such as ESD: should a lesion actually undergo endoscopic resection or not? This study looked at gastric cancer, and the images were evaluated by white-light endoscopy, narrow-band imaging, and indigo carmine chromoendoscopy, and the AI-based assessment of the depth of invasion was highly accurate with any one of those modalities. So that tells us how this can apply to predicting the depth of a cancer. Now, there are other domains with active AI research that I won't be going into: large datasets in inflammatory bowel disease, models to predict GI bleeding and readmission rates, computer vision for pancreatic lesions, as well as algorithms for chronic liver disease outcomes. These are all being explored as applications of AI in gastroenterology. Now let's move on to the next setting, which is quality assessment in endoscopy. And I think that's key: how frequently are you reaching the cecum? What is your adenoma detection rate? What is your Boston bowel prep score, for example?
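As a rough illustration of how a CADx output like this is evaluated, the sketch below thresholds a per-polyp adenoma probability into an optical diagnosis and then computes the negative predictive value against the pathology gold standard; the probabilities, labels, and 0.5 threshold are invented for illustration and are not taken from the Byrne study.

```python
def classify(prob_adenoma: float, threshold: float = 0.5) -> str:
    """Turn a model's adenoma probability into an optical diagnosis (illustrative threshold)."""
    return "adenoma" if prob_adenoma >= threshold else "non-adenoma"

def negative_predictive_value(calls, pathology):
    """NPV: of polyps the model called non-adenoma, the fraction that truly are non-adenomas."""
    negatives = [(c, p) for c, p in zip(calls, pathology) if c == "non-adenoma"]
    if not negatives:
        return float("nan")
    return sum(p == "non-adenoma" for _, p in negatives) / len(negatives)

# Toy example: model probabilities for four polyps versus pathology ground truth
probs = [0.92, 0.08, 0.15, 0.71]
truth = ["adenoma", "non-adenoma", "non-adenoma", "adenoma"]
calls = [classify(p) for p in probs]
print(calls, negative_predictive_value(calls, truth))  # NPV = 1.0 in this toy case
```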
And this is one such tool, software developed in China, that recognizes landmarks during colonoscopy. Here it is telling us that the cecum has been reached, recognizing the ileocecal valve as well as the appendiceal orifice. As a quality metric, it also assesses the withdrawal time; you can see the withdrawal times right there. So rather than having to go back to the chart and look up the insertion time or the removal time in the nursing assessment sheet, this is automated for you. That is one way quality assessment can happen. The other is assessment of bowel prep by artificial intelligence, shown right here, in which each segment is given a Boston bowel prep score from zero to three; a total score of six to nine is usually what we consider good. Here the software is assessing, in real time, the Boston bowel prep score of the segment as you do a gradual withdrawal, and the score is calculated every 30 seconds. This again used an initial training set and a validation set, which are listed here, and then the overall accuracy of the AI device in assessing the Boston bowel prep score was compared to five early or inexperienced colonoscopists, to senior endoscopists, and to expert endoscopists. You can see that it outperformed all of them, and it did so across all grades of the Boston bowel prep score from zero to three. So again, automation of the quality metrics we need in endoscopy: can we do it? Can AI do it all? That is what is being looked at. It records the start time. It can grade the bowel prep. It can identify the different landmarks. It can help recognize polyps. It can probably also suggest the tool you should be using to remove a polyp once it has identified one, and then it records the end time as well. So there are different algorithms and software tools, in various phases of development, that can have these applications in endoscopy. That was quality, focused on endoscopy. Now let's move on to something Tyler did discuss, which is natural language processing. But before that, automated recognition and labeling of landmarks in images. Right now, after you are done with a procedure, you have all the images there, and you go in manually, click each one, and label it: that's the cecum, that's the gastroesophageal junction, here's the antrum, here's the pylorus, this is the incisura. We are supposed to do that manually at this stage, but here is one software package that simply tells you how many sites in the stomach were observed. It evaluated 32 sites, which the algorithm was designed to detect, and it would automatically label them: this is the incisura, this is the antrum, that's the pylorus. So again, automated labeling of landmarks will make this easier. And this is something you saw earlier: calculating ADR, for example, with natural language processing, which in a retrospective study was compared with manual data extraction by two reviewers.
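As a sketch of what the automated prep scoring might look like under the hood, the example below assumes the model emits a 0-to-3 Boston bowel prep score for the current segment every 30 seconds; how the real software aggregates those predictions is not described in the talk, so collapsing each segment to its worst (minimum) score is simply one plausible choice, and the segment names and numbers are illustrative.

```python
SEGMENTS = ("right", "transverse", "left")

def segment_score(interval_scores):
    """Collapse the per-30-second 0-3 predictions for one segment.
    Using the minimum is an assumption: the worst view seen drives the segment score."""
    return min(interval_scores)

def total_bbps(per_segment_intervals):
    """Total Boston bowel prep score: sum of the three segment scores (0-9)."""
    return sum(segment_score(per_segment_intervals[s]) for s in SEGMENTS)

# Toy example: model outputs during withdrawal, keyed by colon segment
predictions = {
    "right": [2, 3, 3],
    "transverse": [3, 3],
    "left": [2, 2, 3],
}
total = total_bbps(predictions)
# The talk treats a total of six to nine as good, so flag anything below six
print(total, "adequate" if total >= 6 else "inadequate")  # 7, adequate
```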
And you can see that NLP alone performed as well as two reviewers manually pulling the data, for the polyp detection rate (PDR), for ADR, for the adequate bowel prep rate, and for the cecal intubation rate. What is interesting, and important for understanding why we need AI, is that NLP did this in under 30 minutes for every single procedure that had been done in the institution since the inception of the database, whereas manual collection took six to eight minutes per patient; 160 man-hours were required to collect this type of data for 600 patients, whereas the algorithm could do it in under 30 minutes for the entire dataset. So you can see the strength of AI in doing this for us. The other thing that was looked at was AI-assigned colonoscopy surveillance intervals, just recently published in Gastrointestinal Endoscopy, in which they looked at more than 500 colonoscopy and pathology reports from 320 patients, stratified by the number of polyps found and the categories assigned. You can see the outcome here of how good the software was at assigning the surveillance interval using NLP; the overall accuracy in this early study was more than 90%. So in these clinical applications we are getting there: not only can the computer recognize a polyp and characterize it for us as an adenoma versus a non-adenoma, it can also calculate your ADR and, in the future, help assign colonoscopy surveillance intervals. Finally, let's end with clinical care and coordination. This is an app that was developed to deliver bowel prep instructions; after the initial generic instructions given to all participants, the software also looked at the diet of the patient and was able to modify its instructions accordingly. And you can see that the outcome was significantly better in the app group compared to the control group, which just received standard instructions from the nurses. So this can also help with patient education as well as clinical outcomes. The other application is using AI as a virtual scribe. This is ambient AI, in which the computer listens to the clinical interaction and the doctor-patient conversation and automatically documents the visit, so that the majority of the time we spend on EHRs goes away and we can spend that time with the patient rather than spending 30 minutes trying to write a report after we are done. And this is true for endoscopy reports as well as for our clinic visits. This next part is more for patient education as well as trainee education: here is an example of AR and VR in which you can give a good anatomy lesson by looking at the different parts of the body. And this is a virtual clinic visit with a patient in which the physician is not physically present, but it is almost as if you are looking at the images together and telling the patient, you have a pneumonia, and this is the location of the pneumonia.
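As a very rough sketch of the kind of rule-based extraction such an NLP pipeline can start from (real systems are considerably more sophisticated, and the report phrases and regular expressions below are invented for illustration), this scans free-text colonoscopy reports for cecal intubation and polyp mentions and aggregates them into two of the rates mentioned above.

```python
import re

def parse_report(text: str) -> dict:
    """Extract two simple facts from one free-text colonoscopy report.
    The patterns are illustrative and handle negation only in the crudest way."""
    t = text.lower()
    return {
        "cecum_reached": bool(re.search(r"\bcecum was (reached|intubated|visualized)", t)),
        "polyp_found": bool(re.search(r"\bpolyps?\b", t)) and "no polyps" not in t,
    }

def rates(reports):
    """Aggregate per-report facts into cecal intubation rate and polyp detection rate."""
    parsed = [parse_report(r) for r in reports]
    n = len(parsed)
    return {
        "cecal_intubation_rate": sum(p["cecum_reached"] for p in parsed) / n,
        "polyp_detection_rate": sum(p["polyp_found"] for p in parsed) / n,
    }

reports = [
    "The cecum was reached and intubated. A 6 mm polyp was removed from the sigmoid.",
    "The cecum was visualized. No polyps were identified.",
    "Exam limited by poor prep; the cecum could not be reached. No polyps were identified.",
]
print(rates(reports))  # cecal intubation rate 2/3, polyp detection rate 1/3
```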
This is the same technology applied to a different field, interventional radiology; here a femoral catheterization is being performed, and you can see exactly how the interventional radiologist can look at those images while performing the procedure. So the applications will continue to expand. And this is what the task force looked at: how do we put all of this together? What are the use cases that should be defined? I think that was done nicely by Tyler and the task force in this document. What are the data science priorities, and what are our research priorities? Because we feel that, as a society and as physicians, we should identify these and lead this field, rather than the other way around, where we are left struggling with different tools and then trying to find applications for them. So finally, we have to build this ecosystem, and that is where we are now: bringing together the physicians, the different societies, the software developers, and the data scientists. That, I think, is what will lead the way for AI in gastroenterology and endoscopy. So thank you again for your attention, and I will stop right there.
Video Summary
In this video, the speaker discusses the use of artificial intelligence (AI) in gastroenterology, specifically in endoscopy. They highlight the increasing expenditure on digital health in the US healthcare system and the potential applications of AI in both the clinical and administrative aspects of healthcare. The speaker then focuses on the applications of AI in endoscopy, including computer-aided detection and diagnosis, quality assessment, and natural language processing. They reference studies showing the efficacy of AI devices in detecting and characterizing polyps, assessing bowel prep quality, and assigning surveillance intervals. The speaker also introduces the use of AI in patient and trainee education, such as virtual anatomy lessons and virtual clinic visits. They conclude by emphasizing the importance of collaboration between physicians, societies, software developers, and data scientists in building an effective ecosystem for AI in gastroenterology and endoscopy.
Asset Subtitle
Prateek Sharma, MD, FASGE
Keywords
artificial intelligence
gastroenterology
endoscopy
clinical applications
administrative applications