Demystifying the Black Box: Where AI Meets GI
Webinar Recording
Video Transcription
Welcome, and good evening. The American Society for Gastrointestinal Endoscopy appreciates your participation in tonight's webinar. My name is Ed Dullard. I'm the Chief Publications and Learning Officer, and I will be your facilitator for tonight's presentations. ASGE is grateful to CDx Diagnostics, which is the sponsor of tonight's presentation. Our program, entitled Demystifying the Black Box, Where AI Meets GI, will address the evolution of AI in medicine, emerging technologies in upper and lower GI endoscopy, and an AI-powered diagnostic platform currently in the ASGE guidelines for the screening and surveillance of Barrett's esophagus. Before we get started, just a few housekeeping items. There will be a question and answer session during this presentation, so please make sure that you submit your questions at any time online by using the question box on your screen. Before we get started, please note a number of features in tonight's platform so you are aware of the many resources available to you during and after tonight's program. Currently, you are located in our auditorium. Please feel free to visit ASGE's resource room. You will find a number of options, including Video GIE, Meet the Master Videos, a History of Endoscopy section, a Gaming section, as well as access to ASGE's guidelines and GI Leap. In the networking lounge, you will find access links to complete an evaluation survey for tonight's webinar. This will also be available in GI Leap after the webinar. We would appreciate you completing this, and it only takes a couple of minutes or less. Thank you for noting all of these features available to you, and please enjoy them during and after the webinar at any time throughout the program. Please note that this presentation is being recorded and will be posted on GI Leap, ASGE's online learning management platform. You will have ongoing access to the recording in GI Leap as part of your registration. We have an exciting program tonight. Our objective is to review and discuss how AI is currently being used for endoscopy, colonoscopy, and gastrointestinal pathology. Now, it is my pleasure to introduce our faculty for tonight's program, and we have a great threesome of presenters for you. Let me first start with Dr. Seth Gross. He's a professor of medicine at NYU Langone Health. He graduated from Tel Aviv University in 2001. He has well over 20 years of extensive experience in the field of gastroenterology and endoscopy, with extensive experience in managing and treating esophageal disorders. Next, we have Dr. Prateek Sharma, a professor of medicine at the University of Kansas. He originally hails from India. He graduated with his MBBS from the University of Baroda in 1991. His professional focus has been on improving the diagnosis and management of GI diseases and cancer, specifically in esophageal diseases, GERD, Barrett's esophagus, advanced imaging, and endoscopic treatments. He is also a primary investigator currently working on the detection of colorectal polyps using artificial intelligence and is serving as chair of the ASGE's AI Task Force, developing AI systems in collaboration with various software and computing companies. And finally, Dr. Robert Odze. He is the former chief of the gastrointestinal pathology service at Brigham and Women's Hospital. He is an internationally recognized authority, lectures extensively, and works at one of the world's leading institutions for pathology. 
He's an author of Surgical Pathology of the GI Tract, Liver, Biliary Tract, and Pancreas. Dr. Odze holds his MD from McGill University in Montreal, where he was awarded the pathology anatomy and surgery prize. He has served as associate editor for the American Journal of Gastroenterology and associate editor of IBD and has published more than 300 peer-reviewed research articles, reviews, editorials, and book chapters. We will begin tonight with the presentation from Dr. Seth Gross, entitled Outlook of AI in Endoscopy. Take it away, Dr. Seth Gross. Thank you very much, Ed. Welcome, everybody. We do have a fantastic program tonight. As AI continues to grow in medicine, but especially in the field of gastroenterology, you're going to get some amazing updates. And I do encourage you to type in your questions as you hear the presentations, because we will have a robust Q&A towards the end. So my task is to talk about the role of artificial intelligence in upper endoscopy. So we're going to go over the background. We're going to talk about where artificial intelligence is being looked at for use in upper GI diseases, mainly Barrett's esophagus, squamous cell dysplasia and cancer of the esophagus, and gastric cancer. So it's important to get a sense of where AI falls in the history of medicine. And I'm not going to go through the whole timeline, but just highlight some key areas. And it began in 1950 with Alan Turing. And the term AI was actually coined in 1956. And there was some work being done in AI during those years. But then there was a period where there was really no activity in the 70s and 80s. And then again, we saw another gap in the late 80s, early 90s. Fast forward to after 2000, we started to see deep learning come into play. And for those of you that use Amazon, we were introduced to Alexa in 2014. And just before that, individuals were working on computer-aided detection for endoscopy. Fast forward to where we are today. There are a number of AI platforms that are available globally. You'll hear about the first one that was approved in the United States about two weeks ago. And I really do think we're at the beginning of an amazing journey of how artificial intelligence is not going to impact just medicine. But I think procedural-based specialties like endoscopy potentially have a tremendous amount to gain. There's a lot of terminology in the world of AI. There's artificial intelligence, and that's where computer systems perform tasks requiring human intelligence. As you move down, there's machine learning, where machines learn with some guidance, and then deep learning, where machines rely on networks capable of learning unsupervised. And what you're going to see over the coming hour is the role of AI in action as it is inspecting parts of the luminal gastrointestinal tract, picking up areas of abnormality. So what can artificial intelligence do? It could classify, it could localize for object detection. You could see here that you're seeing object differentiation, the cat, the dog, and the duck with the different boxes, recognizing the different animals. And they took this and they moved this towards medicine, which you're going to see in a little bit. This is a fun slide that I really enjoy, trying to differentiate the dog versus the food. You have the blueberry muffin and the chihuahua. And you could see the difference between the muffin and the dog. But if you look at it really quickly, you may get confused. 
And here's another example of the dog and the chicken. But artificial intelligence, when the systems are developed and ready, they are able to differentiate the muffin from the dog, just like we would be as humans. So right now we're going to start seeing plug-and-play devices be available clinically. And essentially what will happen is that it will plug into the back of the monitor, of the endoscopic monitor that you're using, and you're going to get real-time feedback. And you're seeing here in this example, all the way to the right, you see that green circle to focus your attention there because that's where a polyp might be. So where are we in terms of humans and AI? We're sort of at that crossroads where AI is starting to match what we do as humans in terms of performance. And then in the future, you know, the thought would be that AI will start to exceed our performance in certain areas. So what are some of the functions that you're going to hear about? There's computer-aided detection and computer-aided diagnosis. I'm going to focus on computer-aided diagnosis for upper endoscopy on the diseases that I mentioned, and you'll hear about computer-aided diagnosis and detection in the lower GI talk and the pathology talk. So where is artificial intelligence in Barrett's esophagus? Even though Barrett's esophagus often involves just a small area of the distal esophagus, it can be quite challenging to differentiate nondysplastic Barrett's from dysplasia. And as we all know, the highest degree of disease in a segment of Barrett's esophagus is certainly going to impact clinical management for that patient. If there's no dysplasia, we'll continue to watch them. And if there's high-grade dysplasia or early cancer, we're going to recommend intervention. Mathematical models predict that esophageal cancer is on the rise, and we've seen this year after year. And staging impacts prognosis. And the earlier we could find the disease, the better the outcome. But most patients present late, often associated with poor prognosis. We do surveillance using the Seattle Protocol for Barrett's esophagus every one to two centimeters, and you can see the Xs here. But the challenge is that oftentimes it's random, where the endoscopist doesn't really see an area to target, and it's certainly possible that dysplasia and early cancer could be missed. One core concept in artificial intelligence is this model of intersection over union, where you have the ground truth. In this case, you have a pony. And then there's a prediction. And the more these two areas overlap, the higher the accuracy. But what's quite interesting is that ideally we want to be around 0.5 of this overlap, the truth versus prediction. But even if you're at 0.3, there is a good possibility that the artificial intelligence system will be able to highlight an area for you that's abnormal. And this is just an example of this. This is a segment of the esophagus, and you're seeing the truth versus the prediction in that segment where there's an abnormality present where you should do a target biopsy. There was a paper put out in Gastrointestinal Endoscopy in 2020 looking at AI with white light and narrowband imaging. We have different focus settings on the scope that some of us will use, standard versus near focus. And just to summarize it, the AI diagnosis sensitivity was 96.4% with a specificity of 94.2%, so quite accurate in identifying nondysplastic and dysplastic lesions of a Barrett's segment. 
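To make the intersection-over-union idea concrete, here is a minimal Python sketch of how the overlap between an annotated ground-truth box and a predicted box can be scored. The boxes and coordinates are invented for illustration; this is not code from any of the systems shown tonight.

```python
# Illustrative sketch: intersection over union (IoU) between a ground-truth
# box and a predicted box, each given as (x_min, y_min, x_max, y_max).

def iou(truth, prediction):
    ax1, ay1, ax2, ay2 = truth
    bx1, by1, bx2, by2 = prediction

    # Overlapping rectangle (zero width/height if the boxes do not intersect).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)

    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1)
             - inter)
    return inter / union if union else 0.0

# A prediction that only partially covers the annotated lesion still scores
# well above the 0.3 level mentioned in the talk.
print(iou((100, 100, 300, 300), (180, 120, 340, 320)))  # ~0.43
```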
In this video, what we're seeing here is we're looking down into the esophagus. And so now we're going to transition to narrowband imaging. And sure, you could see there's an area here that's a little bit raised, certainly suspicious. And you're seeing this in near focus, and that green box is giving you a percentage of how accurate it feels that dysplasia is present. And this could be really helpful for the endoscopist to target biopsy and pick up high-grade dysplasia in a segment of Barrett's esophagus, because we know that will certainly impact management. So there were 20 patients, 10 nondysplastic, 10 dysplastic, and the accuracy was 90%, sensitivity was 91%. And I suspect over time, as these AI systems continue to mature and grow, these numbers are going to only increase. I'm going to share with you another example of how artificial intelligence could work in a segment of Barrett's esophagus. There are three cases in this video. It just seems to be jumping a little bit tonight. Maybe it's best to just let it play through. And what you're seeing is the current setup where you have your regular endoscopy monitor on the left and the artificial intelligence system on the right. I suspect in the future, these two images will be merged so the physician could just look at one screen. This is just an example of nondysplastic Barrett's, C6M8 by the Prague classification. The endoscopist is scanning the area, and you can see on the right-hand side, this particular system offers a grading scale of how suspicious there's concern for dysplasia being present. And this is just an example of nondysplastic Barrett's, and so the probability score, going 0 to 100, is going to have a low probability for dysplasia. And then in a few seconds, we're going to move towards a dysplastic case, and I don't want to move the video forward because I'm afraid it's going to just jump to the next slide. So I just want to show you one more example, and this is another example of a C4M6 segment. There was high-grade dysplasia on random biopsies, but the physician who referred the patient to this center did not notice a visible lesion. And so here again, we're down at the cardia of the stomach where they're pulling back, and you see that there's a transition from normal squamous, and you can see the AI system highlighting an area that looks abnormal, and then within that area, there's that red circle to say this is where you should biopsy, because that's really important: delineating the area that's most suspicious for dysplasia, which certainly will impact the patient clinically. Another area is squamous cell dysplasia. Here is an area of the esophagus, and there's a lesion right in the field of view, and the AI system is identifying that, actually mapping out the exact borders of this lesion. This certainly could be beneficial for physicians that are not comfortable doing dye staining when they're looking for squamous dysplasia. They compared the AI system versus experts in this study. They looked at lesion size going from 10 millimeters or less to greater than 51 millimeters, and as you could see, as the lesions got bigger, the AI system performed better than the experts, and that's really important. 
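As a purely hypothetical illustration of the kind of per-frame probability score shown in those videos, the sketch below turns a classifier's raw output for one frame into a 0-to-100 number and averages it over recent frames so the displayed value does not flicker. The model outputs, the two-class assumption, and the window length are stand-ins, not details of the actual system.

```python
# Hypothetical sketch: derive a smoothed 0-100 "probability of dysplasia"
# display value from per-frame classifier logits. The logits below are
# invented stand-ins for a real network's output.
from collections import deque

import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

class DysplasiaScore:
    def __init__(self, window=15):                  # average over recent frames
        self.recent = deque(maxlen=window)

    def update(self, logits, dysplasia_index=1):
        prob = softmax(np.asarray(logits, dtype=float))[dysplasia_index]
        self.recent.append(prob)
        return round(100 * sum(self.recent) / len(self.recent))

scorer = DysplasiaScore()
for logits in [(2.0, 0.5), (1.5, 1.8), (0.2, 2.4)]:   # stand-in per-frame outputs
    print(scorer.update(logits))                       # 18, 38, 55
```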
Then we talk about cancer invasion, the ability of the AI system to give a sense of how deep the cancer is going, because that's also going to impact the course of treatment and clinical management, whether it's endoscopic therapy versus surgery or chemotherapy and radiation, so this is quite important, that the AI system is able to outperform the experts. Again, this is a real-time video of the AI system at work looking at this area, and you could say watching this video, I wouldn't have missed that, and that's certainly possible, but having the ability to have something with you to give you a high degree of accuracy of where to place your biopsies, again, to identify the highest degree of disease, could certainly improve the outcomes for our patients. And lastly, we'll talk about AI in gastric cancer. There were 98 cases in this trial. They had 87 lesions. They looked at different videos, cancerous versus non-cancerous, and as many of you know, most of the AI systems are populated looking at thousands of images, normal and abnormal, for them to make a real-time diagnosis. It was an interesting study in the sense that the computer-aided system compared itself to 11 experts, and the accuracy of these experts ranged anywhere from 58 percent to almost 90 percent, and you could see that the computer-aided detection system was at 85.1 percent, with 87.4 percent for sensitivity and 82.8 percent for specificity, so it was outperforming experts, and this type of scientific information is really important and encouraging because it could certainly be beneficial for what we do on a daily basis. And what's really important here is to look at this image right here. You see the blue box and the red box, and the blue box represents a cancerous lesion right next to an area that's non-cancerous, and you could see how focal these lesions could be and how we could inadvertently miss that when we're doing a forceps biopsy. And this is just, again, another video looking at the accuracy here in narrowband imaging for gastric cancer. You could see from this that the vascular and mucosal patterns are completely abnormal, and it's sort of grading that in this area there's a high prediction that there's a malignancy there. In summary, I do believe that we're just at the starting point for AI and endoscopy. This space will continue to grow, and not only will it be helpful to accurately diagnose GI disease, and a lot of this right now is in the luminal GI tract. You're going to hear a lot about what's happening in the colon, but ultimately, in the long run, I think it's going to positively impact the whole endoscopy procedure, not just for the patient with better outcomes, but also for the endoscopist. Please put your questions in, and I'm going to turn it over to Dr. Sharma, who's going to talk about AI and colonoscopy. Okay. Thank you. Good evening, everyone, and thank you to the ASGE and CDx for arranging this AI program. Hopefully, we'll have a lot of good discussion towards the end. Seth's nicely covered the upper GI tract, and in the next 15 minutes or so, I'll be discussing the role of artificial intelligence in colonoscopy. Let's look at that. One of the things as we look at AI in medicine, the question that comes up from what we heard earlier from Seth is that this has been around since the 1950s, but why are we more interested in it today rather than over the last several decades? 
First and foremost, we all are aware of how much we spend on healthcare, and that is driving a lot of digital health and innovations on how we can actually improve the care that we are providing to our patients. When you start slicing this down, it can be into both clinical aspects, such as imaging. That's what we're talking about today, but it goes right from patient intake, remote health, telemedicine that we've all been doing right now, as well as precision medicine. There are a number of areas on the clinical side where AI could be helpful, but something that we don't talk about as much is also on the administrative side, such as operations, big data, medical documentation, so having a smart scribe or a voice-recognized scribe for our endoscopy report. That's also a role where AI would be helpful in the field of gastroenterology and endoscopy. When you start looking at this in GI, what are the potential innovations or reasons or ways how AI can help us? Number one is speed. It can reduce delay in diagnosis. It can help automation of tedious and repetitive tasks, and colonoscopy for polyp detection is a good example of it that we do every day. Fatigue can set in at some point after you're doing multiple procedures, and so that's where I think AI can really help us. We can also expand access to services, and I'll talk about colon capsule, for example, and then, of course, we also would like to deliver consistent diagnosis for our patients whenever we are doing procedures. When you look at the potential applications in colonoscopy, and we are specifically talking about image recognition, right from polyp recognition, so improving our adenoma detection rate, polyp characterization, so differentiating between a tubular adenoma versus a hyperplastic polyp, looking at any abnormal areas within a lesion, such as the depth of invasion of a cancer, and, of course, marking the extent of the lesion during colonoscopy. These are all potential applications. Let's look at these and see how this works. The way AI works in colonoscopy is that you have this use case of, let's say, polyp detection. The way this would work is that you then take several thousand images and videos of colon polyps, and then they are fed into the computer, so as to say, and you can see how this is being done in a cartoon format. These are then trained and retrained mainly through deep learning, which includes CNN. Ultimately, you come up with an algorithm, which is tested, validated, and then is revalidated until finally it is ready for clinical use. This is the basic concept of how CNNs work and how AI works in the field of colonoscopy. Let's look at these areas. Bowel prep assessment. As you know, we do grade it using the Boston Bowel Prep Score, and so can AI help us with that? Number two, cecal intubation. We do record when we reach the cecum. Can that be automated? Withdrawal times, and can that be automated as well? Rather than us looking at the clock or somebody clocking this down, let's look at each one of this in detail. This is assessment of the Boston Bowel Prep Score in real time by a computer, so you can see the Boston Bowel Prep goes from zero to three, zero being the worst, three being the best, and you can see that as you're withdrawing on the right of the screen, the Boston Bowel Prep is automatically being calculated by the machine, so it's telling you that in this segment, the Boston Bowel Prep score is more of a one rather than a two or a three where you would like for it to happen. 
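As a rough, self-contained sketch of the train-and-validate loop just described, the Python/PyTorch code below fits a toy CNN that scores frames into the four Boston Bowel Prep classes. The tiny network, the random stand-in frames and labels, and the five-epoch loop are all invented for illustration and are far simpler than anything used clinically.

```python
# Minimal sketch of "feed labeled images in, train a CNN, check it on held-out
# data" using Boston Bowel Prep scoring (classes 0-3) as the use case.
import torch
import torch.nn as nn

class PrepScoreCNN(nn.Module):
    def __init__(self, n_classes=4):                    # BBPS 0, 1, 2, 3
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = PrepScoreCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

frames = torch.rand(64, 3, 224, 224)                    # stand-in labeled frames
scores = torch.randint(0, 4, (64,))                     # stand-in BBPS labels
train_x, val_x = frames[:48], frames[48:]
train_y, val_y = scores[:48], scores[48:]

for epoch in range(5):                                  # training loop
    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(train_x), train_y)
    loss.backward()
    optimizer.step()

    model.eval()                                        # validation check
    with torch.no_grad():
        val_acc = (model(val_x).argmax(1) == val_y).float().mean()
    print(f"epoch {epoch}: loss={loss.item():.3f} val_acc={val_acc.item():.2f}")
```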
That video is a real-time assessment of the Boston Bowel Prep score by artificial intelligence. In this publication, again, as I mentioned, you use a training, a validation, and a test set, and what they did is that they compared the overall accuracy of the machine in predicting the score to novice endoscopists, senior endoscopists, as well as experts, and you can see that the machine beats them all. Simple automated tasks like this can be done in real time by AI rather than a scribe or somebody else noting this down in order for us to do it. The second is recognizing the IC valve and assessing the withdrawal time. We all look at the appendiceal orifice, which is right there in the center, and the machine's recognizing it, and then on the left in the cartoon at the bottom part, you can see that it's turned magenta-colored right here, which is telling you that the cecum has been reached, so now you can start your withdrawal. So, cecal intubation can be recognized by the computer rather than you saying in your report that you did reach the cecum. I mean, this is going to be automated. Then, the speedometer on the left is telling you about your withdrawal time. Are you too fast? It will go into the red zone. If you're doing it appropriately, it will stay in the green zone, so informing you what your withdrawal time should be, and in a way, you have a coach in the room who is telling you what to do during the withdrawal time as you start coming back. So, let's look at this after two minutes. If you're coming back too fast, you can see it's going a little bit towards the red, so it's telling you that, hey, slow down a little bit so that you can start examining the colonic mucosa much more carefully. And so again, as you're coming back, you know, a little bit fast, the other thing it gives you is that if there are too many red-outs, it will guide you and tell you that the view is lost. Please return to the lumen. So, here another issue comes up, which is training our fellows, and how this can also be another guide to tell them what is the best way of performing a colonoscopy. Let's look at other aspects from those quality metrics. ADR, or polyp detection, looking at a polyp and then characterizing it, whether it's a tubular adenoma or a hyperplastic polyp, and if there is a cancer, assessment of invasion, because if it's going deep into the muscle, that's something that you should not be resecting, not even by EMR or ESD, if that's what you were planning on doing. This is how it typically works, just very similar to the upper GI tract as you saw from Seth. These are the bounding boxes which appear over the lesion. This is a flat polyp, which is seen in the right colon, and then these software tools are making you aware that this polyp does exist. Where's the evidence for this? There are at least five randomized controlled trials which have been published. This is the very first one from China, in which patients were randomized to either HD colonoscopy alone versus HD colonoscopy along with a CADe device, or the AI device. What they looked at, the output on the right, is again highlighting these polyps with the help of a CNN-based software. The primary outcome was the ADR, or the adenoma detection rate, which improved by nine percent, a statistically significant difference, as did the adenomas per patient. Not only do you detect a higher ADR, which is a patient having an adenoma, but also more adenomas within a given patient which could be detected in this situation. This was from China. 
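For readers who want the two endpoints spelled out, here is a small illustrative calculation of ADR and adenomas per colonoscopy (APC); the per-exam adenoma counts are invented and are not data from either trial.

```python
# Illustrative calculation of ADR (share of patients with at least one
# adenoma) and APC (adenomas per colonoscopy) from made-up exam records.

def adr_and_apc(adenomas_per_patient):
    """adenomas_per_patient: list with the adenoma count found in each exam."""
    n = len(adenomas_per_patient)
    adr = sum(1 for count in adenomas_per_patient if count >= 1) / n
    apc = sum(adenomas_per_patient) / n
    return adr, apc

standard = [0, 0, 1, 0, 2, 0, 0, 1, 0, 0]     # HD colonoscopy alone (invented)
with_ai  = [1, 0, 1, 0, 3, 0, 1, 1, 0, 2]     # HD colonoscopy + CADe (invented)
for label, exams in [("standard", standard), ("with AI", with_ai)]:
    adr, apc = adr_and_apc(exams)
    print(f"{label}: ADR={adr:.0%}, APC={apc:.1f}")
# standard: ADR=30%, APC=0.4   with AI: ADR=60%, APC=0.9
```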
Last year, there was this randomized controlled trial from Europe, which was published in Gastroenterology, with a very similar design, using a screening and surveillance population and comparing again HD colonoscopy alone versus HD colonoscopy with an AI device. And in this situation, they showed a 15% improvement in the adenoma detection rate and also significant improvement in the APC, or the adenomas per colonoscopy, which were detected. And they showed a significant difference both in the recognition of diminutive polyps as well as of large adenomas. So where do we stand with detection for colonoscopy? And this is the meta-analysis published a few months ago, looking at close to 4,500 patients who have undergone colonoscopy and have undergone randomization to either AI or no AI. And what this does show is a significant improvement of at least 11% with AI as compared to the non-AI group, which can be seen here. So there's robust evidence for this. And I think it's based on data like this that we've seen the first AI detection device approved by the FDA. And it's a matter of time as additional devices for polyp detection get approved and would be ready for use in our clinical practice. One of the things that is an issue with these devices is the false positives. What does that mean? It means that the endoscopist is alerted to images incorrectly. So the box appears or there's a beep, but there is no real polyp. And this does lead to additional time. This was evaluated in this study from Gastrointestinal Endoscopy, in which they looked at close to 1,100 false positives, and they tried to determine whether these false positives were coming from the bowel wall or within the lumen, i.e. the bowel content. And what they showed was that the majority of them were related to abnormalities within the bowel wall, such as a fold being recognized as a polyp by the machine. Next, they looked at the time which was spent. And you'll be surprised to see that the majority of these false positives need less than three seconds for an endoscopist to look at it and say, ah, this is probably not a polyp, and they move on. Whereas 20% required more than three seconds to look at it. And the rate of false positives was about two and a half per minute. And this was higher for the bowel wall as compared to the content, which makes sense as I've described earlier. So this is one of the challenges of looking and detecting polyps with the help of AI machines. The second step is to look at characterization. And can the machine help us and aid us in predicting it? This is a system which was published, you know, in Gut a few years ago. And this is looking at a polyp with narrowband imaging. And the machine is looking at the probability, that this is a 95 to 100% chance that this is a type two polyp, which is a tubular adenoma. And you can see that here is the adenoma diagnosed there. And it's also giving you the NICE classification, that it is a type two polyp, which was recognized. This was also a CNN-based algorithm. And in this study, what they found was that the sensitivity and the negative predictive value of using this for real-time characterization were extremely high, greater than 95%, meeting the ASGE's PIVI criteria, which have been put forth for the real-time characterization of colon polyps by a machine. Here's another way of looking at it. On the left is a machine which is analyzing it and predicting this area as being a tubular adenoma. 
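As a hedged illustration of how those characterization metrics are computed, the sketch below derives sensitivity and negative predictive value from a confusion matrix. The counts are invented, and the 90% figure is the commonly cited PIVI NPV benchmark for the diagnose-and-leave strategy rather than a number from this study.

```python
# Illustrative sketch: sensitivity and NPV for optical "adenoma vs. not"
# calls, computed from made-up counts.

def characterization_metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)   # adenomas correctly called adenomas
    npv = tn / (tn + fn)           # "not adenoma" calls that are truly non-adenomas
    return sensitivity, npv

sens, npv = characterization_metrics(tp=120, fp=14, tn=60, fn=4)
print(f"sensitivity={sens:.1%}, NPV={npv:.1%}, meets 90% NPV benchmark: {npv >= 0.90}")
# sensitivity=96.8%, NPV=93.8%, meets 90% NPV benchmark: True
```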
Back to the example: you can see that there is a few-second delay as this happens. And on the right is again a flat polyp, which is being looked at, and you can see the box appearing around it, the machine analyzing it, and then it gives you the answer or the suggestion that this is not an adenoma. So devices to assist us in making these diagnoses over time. The next issue is treatment. Can we predict invasion of a polyp there? And this is what was done. So imagine you're doing a procedure, you see this laterally spreading tumor or LST, and it's a granular tumor, and we would probably resect this with EMR. Sometimes you may not be sure, and can the machine actually help you with this? And this is what this study showed, with close to 8,000 test images using white light endoscopy only, published a month ago in Gastrointestinal Endoscopy, showing a very high sensitivity and specificity for predicting non-invasive superficial cancer, telling us that the machine would agree with us or help us in identifying that this lesion could be resected by EMR. Also recently, we have this colon capsule for CRC detection. And as we know, colon capsule can be done in certain situations where the patient does not want to undergo a colonoscopy, maybe is too sick to undergo a colonoscopy, or has had a failed colonoscopy. And again, in looking at these images, AI could probably also help us, as in the small bowel, in pointing out where the polyp is, so you're not spending hours looking at capsule endoscopy videos. And this is a proof of concept study showing a reasonable accuracy, so we can start evaluating this. So once we start looking at all these papers, we need to look at performance metrics, comparison with humans, and external validation, and this should all be looked at in all these papers. I've told you all the great things about AI and colonoscopy. There are certain limitations, and this is not just for gastroenterology or endoscopy, but for medicine in general. We need more data. We need studies with external validation, comparing it against endoscopists, as some of the studies have already done. And we need to show that this does improve patient outcomes in the long run. So artificial intelligence and colonoscopy is here to stay. It's going to be part of our practice. I've talked to you about the right side, which is enhancing the diagnosis. There will be EHRs, which will help gather information for us. They'll link it with the endoscopy system. This will all help us establish better treatment plans for our patients and finally improve patient outcomes. So Seth, again, thank you for inviting me to participate in this wonderful session. I'm going to turn it back over to you for the next talk by Dr. Odze. Thanks very much, Prateek. That was a fabulous talk, a great overview of what's happening in the space. Please be sure to type in your questions. We have some good questions to discuss at the Q&A. Now, Dr. Odze is going to talk about the application of AI in the esophagus. And what we're going to learn is that WATS, from CDx Diagnostics, has been offering AI in endoscopy now for at least a decade, and you're going to see how. Rob? Thank you, Seth. Thank you, Prateek. Also, thank you to CDx and the ASGE for giving me the opportunity to talk about something that I'm so passionate about, and that's, of course, pathology, and specifically that related to Barrett's esophagus, which I've spent most of my career researching. 
So in this 20-minute period or so, I'm going to talk to you about some limitations of the current surveillance protocol, which you've heard something about already. I'm going to spend a great deal of time on unraveling the black box of the WATS AI technology and showing you how that works in the most detailed way that I can. I'm going to give you some pathology examples, talk a little bit about a literature review on WATS and its performance as an adjunctive tool for detection of Barrett's and dysplasia, and then I'll summarize and talk about some of the things that we need to do in the future. So just as a brief introduction, of course, I'm preaching to the choir here. Everybody on this call knows that Barrett's esophagus is a metaplastic condition of the esophagus, whereby squamous epithelium converts to columnar epithelium, and that the diagnosis is based on a combination of what you guys see endoscopically and what we see pathologically. And what we see pathologically, essentially, is that we need to detect goblet cells to confirm the diagnosis, which is our way of detecting intestinal metaplasia. So ultimately the goal of pathologists in helping clinicians treat patients is to, number one, detect intestinal metaplasia, which for the most part means detection of goblet cells, and of course detection of dysplasia and or early cancer. Those are our two main goals, and none of these tasks are actually very easy. I'll talk a little bit about dysplasia later in the talk, but right now I want to tell you something that you may not know, and that is even the simplest form of detection of Barrett's, aka detection of goblet cells, is not necessarily an easy exercise. In this particular example, of course, this is easily recognized as a goblet cell. It's a distended cell. The cytoplasm is bulging out, and the nucleus is compressed to the bottom of the cell by this large vacuole of mucin. There are a lot of pseudogoblet cells and transitional cells in the esophagus that mimic goblet cells. You can see quite a few here. And in this particular example, because there are so many rows of them, it's easier to differentiate these from goblet cells, but not all cases are so easy. This is actually an example that I used in a test set for an inter-observer study with 10 GI pathologists, and I asked them the simple question, point out the goblet cells and point out the pseudogoblet cells. And I think everybody would recognize that this is a pretty clear goblet cell, but what about all these guys down here, and some of these off on the side? You probably wouldn't be surprised to learn that the inter-observer value of that exercise was less than 0.1. And so, in fact, some of these cells that actually look like goblet cells are, in fact, pseudogoblet cells. So even the detection of goblet cells is something that pathologists themselves are not very good at, and certainly can use some help. Just briefly, what do we know about adenocarcinoma? Of course, most of the people on this talk know this information better than I do. We know it's a cancer that's increasing dramatically in Western populations, but what's interesting is that we're not keeping up with the improvements in survival that we see with other cancers where we have very good and robust and efficient detection methods for precancer and dysplasia, like we do in the cervix and the colon, which I think probably contributes to the greatly improved survival rates that we've seen in the last 20 to 30 years. 
Unfortunately, with the esophagus, this has not occurred, and I think personally, and I think there's a lot of literature to support this, that one of the main problems there is that we have poor detection mechanisms. So, Seth talked to you a little bit about one of the problems with the Seattle Protocol in Barrett's esophagus. We know that it's time consuming. We know that it samples a very small proportion of the esophagus, particularly in patients who have long segment Barrett's esophagus. We know that pathologists continue to struggle with this. I know I have in the last 30 years, and unfortunately, we haven't seen much improvement in inter-observer variability since I started my career in the early 90s. There's low physician adherence because of a combination of all these, and we do see high false negative and high false positive rates. So, it's a procedure that certainly has a lot of limitations, and the main reason is because dysplasia and cancer are typically focal and undetectable with the naked eye. So, this is an example of an excised esophagus. You're seeing here the esophagogastric junction. This is the neo-squamo-columnar junction. This is the top portion of the gastric folds, and in these particular areas of dysplasia and cancer and even invasive cancer, the naked eye can't possibly detect what we're looking at in terms of the gravity of those diagnoses. So, with that, I want to tell you about the WATS diagnostic platform, which really consists of two basic technologies, and I want to cover each of these individually, but it's important for you to know that it's the whole platform itself that ultimately enables pathologists to, what I think, make more accurate diagnoses. So, the first part of the diagnostic platform, of course, is the brush, which enables wide-area tissue sampling, which increases the percentage of mucosa that is sampled. Here you can see an example of what you can acquire by four-quadrant biopsies compared to a brush technique, which sweeps up and down over the esophagus and grabs a lot more of the cells. And then the second part of the platform is what happens to the tissue once it's received in the laboratory. And with that, I'm going to spend a little bit of time further on in the lecture, but essentially what we're talking about is a 3D imaging analysis system, which is then followed by artificial intelligence slash machine learning. And in this mechanism, it really serves to increase the screening potential of pathologists. The system identifies and ranks atypical epithelium for pathologists, but ultimately it's the pathologist who makes the diagnosis. We're still not at the point yet where the AI technology itself is making independent diagnoses. So, with that, I want to tell you just a little bit about the biology of dysplasia, and that is, in this cartoon depiction of the columnar lining of the esophagus, I've colored in here in red where dysplasia actually develops. And we know dysplasia develops in the crypt bases initially, and eventually with time it encompasses all aspects of the epithelium from the base all the way to the surface. But in the earlier phases, it does develop in the deep bases. For the last 20 or 30 years or so, we've had superficial cytology brushes, which, as you can see here in this cartoon depiction, scrape off superficial cells. And part of the reason why this technique has not been satisfactory is simply because it's not targeting and not acquiring tissue from exactly where the dysplasia develops. 
And that's why we haven't seen much efficacy in terms of the old-fashioned conventional superficial cytology brushes in Barrett's pathology. The WATS brush is composed of longer and stiffer bristles, if you will, which are designed to acquire deep portions of the mucosa, big chunks of mucosa from the superficial aspect all the way down to the muscularis mucosae, similar if not identical to what you would see with a forceps biopsy. And in this overlay here, you can see the type of samples you get from the superficial cytology brushes. You're kind of scraping off superficial cells in a discohesive manner. You don't have much structure. You don't have much architecture. And as a result, it's very hard to determine which, if any, of these cells are dysplastic versus not. With the WATS brush, because you're acquiring deep chunks of tissue, and these are cohesive chunks of tissue, you're getting a lot more structure with the cells. You're maintaining the cohesiveness. And because of that, it enables us to not just evaluate cytology, but it also allows us to evaluate structure and architecture of the epithelium. As for the type of samples acquired with the brush I'm showing you here, there are two basic types of samples. One is a brush that goes through a conventional formalin fixation and paraffin embedding process and is stained with H&E. And you can see the amount of tissue here on this side. And then another brush is swept onto a slide. It's sprayed with a Pap stain and it's coverslipped. And this is more what you would see with a traditional sample. However, of course, as I mentioned before, the size of the aggregates is much larger, and at just higher power, you can see what the cell block specimen looked like. I refer to these as little micro biopsies because this is histologically identical to anything you'd see in a forceps biopsy. It's just that the fragments of tissue are often a little smaller. Whereas the smear specimen, again, is acquiring these larger, more cohesive fragments, which allow us to not only judge the individual cells, but the individual cells in relationship to each other. And that's a key part of how pathologists and cytologists evaluate and discriminate reactive lesions from dysplasia. Okay. So I'm going to segue into the computer part of the analysis. And by way of introduction, I just want to inform you that the background of the WATS artificial intelligence program was actually software that was originally designed for the Star Wars missile defense program in an effort to try and identify missiles and discriminate missiles that were potentially devastating from missiles that were not devastating. And since there would be potentially a lot of missiles in the sky, they would overlap with each other. And they were trying to develop a software program that would discriminate good missiles, if you will, from bad missiles in terms of their destructive capacity. And this technology was actually applied to Pap smears with PAPNET before the GI tract. And so this is a system that's been around for more than 15 years or so. So let me tell you about this 3D WATS imaging system, which is essentially the first part of the artificial intelligence system. So when a brush takes a sample of mucosa, it acquires these three-dimensional pieces of tissue. As a pathologist, of course, we can't look at three-dimensional pieces of tissue. We have to slice them and look at two-dimensional samples. And that loses some detail in the process. 
The WATS 3D imaging system essentially performs cuts through the tissue, almost like a CT scan, and applies an artificial intelligence system in order to develop 3D images from all these individual slices, while avoiding an overlapping of cells, which may give you a false impression of dysplasia when it's not really present. So just a quick video, excuse me, of what happens in this process. I think this shows it pretty nicely. The slide goes into the computerized system. The slide is screened in completion, so no cells are missed with this total screening process. The individual aggregates are identified by the computer, and then slices, or individual photographs, if you will, are composed of this 3D tissue from top to bottom. And I think the image will show you that. And then the computer will synthesize a 3D image of these aggregates of tissue, which are then subjected to the artificial intelligence neural network and then presented to the pathologist for evaluation at the microscope. So with that, let me show you what happens to the scanned 3D slides once they enter the computer system. Here's just an example of the 3D-like imaging that you get of individual crypts. This is the base of the crypt now. This is the superficial part of the crypt. And you can see you're not just seeing one two-dimensional surface of the crypt. You're seeing this crypt in almost three dimensions. And that enables a much higher degree of clarity of the cell nuclei, and particularly the ability to look at cells in relationship to each other. 
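The 3D synthesis just described is essentially a focus-stacking, or extended depth of field (EDF), problem: many focal planes of the same aggregate are combined into one sharp composite. Below is a rough, hypothetical sketch of that general idea, keeping at each pixel the focal plane with the highest local sharpness; it illustrates the technique in general and is not the vendor's actual algorithm.

```python
# Rough focus-stacking / EDF sketch: collapse a stack of focal planes into
# one all-in-focus image by picking, per pixel, the sharpest plane.
import numpy as np
from scipy import ndimage

def edf_composite(z_stack):
    """z_stack: array of shape (planes, height, width)."""
    sharpness = []
    for plane in z_stack:
        lap = ndimage.laplace(plane.astype(float))               # edge response
        sharpness.append(ndimage.uniform_filter(lap ** 2, size=9))  # local focus measure
    best = np.argmax(np.stack(sharpness), axis=0)                # sharpest plane per pixel
    return np.take_along_axis(z_stack, best[None, ...], axis=0)[0]

# Usage with a synthetic 20-plane stack of a 512x512 field:
stack = np.random.rand(20, 512, 512)
print(edf_composite(stack).shape)    # (512, 512)
```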
How is a neural network formed and then applied? Well, initially, a training set composed of thousands of specifically designed Barrett's dysplasia images is annotated by a pathologist on a slide. And those are used as the training image classifications for the neural network. In this particular case, you might see blood, squamous epithelium, Barrett's epithelium, characterized by the goblet cells, low-grade dysplastic epithelium, high-grade dysplastic epithelium, and cancer. Thousands of these images are used to train the computer into recognizing, or helping to recognize, new test samples. So here again is the training set. What happens after the training set is validated and the neural network has acquired its education, if you will? Then if you look at a new case on a slide, which is looked at in a microscope, it undergoes a whole slide scan and EDF synthesis, which is something I just showed you. That then is applied to the neural network. And the neural network then does its intellectualizing, if you will, to pick out fragments and clusters of epithelium which are most likely to be dysplastic, ranks those images, and presents those images to the pathologist on a screen so that the pathologist can use those images in conjunction with the slides to make a diagnosis. So let's delve a little bit further into the neural network itself and how this works. I'm particularly interested in this because this is the kind of detail that pathologists love with regard to pathology and depiction of individual features. So a neural network is really a complicated process that has multiple layers. And I use this simplified image here to show you what happens basically in the first couple of layers, and then later on in the deeper layers, and then ultimately what happens in providing a confidence level, if you will, in what the computer is seeing as potentially dysplastic. So the first layer of neurons, if you will, or sometimes these are referred to as nodes, looks at a variety of different features: texture, color, shape, nuclear size, N:C ratio. There's a whole bunch of more features I'm just not listing here. And these particular nodes, if you will, are looking for specific features, and if a node finds a particular feature it likes and it shows resemblance to the training set, which I've already shown you, then that neuron will, so to speak, fire and provide a confidence score of how much it feels the feature resembles the training set standard images. And then this is passed on to a higher level of neurons, if you will. And these neurons now are not looking for individual features, but they're looking at an evaluation of a summation of weights, if you will; they're evaluating multiple features and starting to put together the individual features into a composite image, with confidence levels for how likely the computer thinks it could potentially be a particular lesion. So with this particular example, that is a high-grade lesion. It's being evaluated at multiple levels. Some of these networks can have hundreds of different layers, and eventually a confidence level is ascribed to an aggregate. In this particular case, there's a 0.9 confidence level. The confidence levels actually go from zero to one. So one means the lesion perfectly resembles training set high-grade dysplasia, and zero means it doesn't resemble it at all. So if something acquires a value of 0.9, it has a very high resemblance to high-grade dysplasia, and then it's ranked as such for the pathologist. And of course, there's always going to be some overlap. So what the computer really wants to be able to do is give high ranks to things that resemble the lesion it's looking for, and at the same time give low ranks to other diagnoses, such as, in this particular example, low-grade dysplasia or Barrett's or squamous epithelium. Looking at this another way, if you look at the multiple layers, the first layers are looking at pixelated images. As you go through the neural network, it's becoming clearer and clearer. The nodes are looking at and identifying edges, then combinations of edges, then combinations of features, and ultimately giving a confidence value toward a final diagnosis. So I'm just summarizing here what the WATS pathologists are doing when they receive all this information. So right from the patient, the patient gets the EGD and the WATS brush, you get a smear that's stained with a Pap stain, and you get a cell block that's stained with H&E. The smear-stained specimen then goes through the neural network after 3D imaging, which then provides a screen display of high-ranked dysplasia images, which is then viewed by the pathologist on a screen. At the same time, the pathologist has a slide of the Pap-stained smear, so they're looking at the slide in the microscope. They also have a slide of the cell block specimen, as you would with a normal forceps biopsy specimen, and then they have potentially immunohistochemical stains that are applied to the cell block at the same time. So the pathologist is evaluating the images, the slide of the Pap smear, the slide of the formalin-fixed H&E stained specimen, and any immunohistochemistry in order to provide a diagnosis. 
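To make the ranking step concrete, here is a small hypothetical sketch of how per-fragment confidence scores could be turned into a review queue for the pathologist. The scoring function is a random stand-in for the trained network, and the class list and fragment names are invented.

```python
# Hypothetical sketch: score tissue fragments for each diagnostic class and
# rank the ones most resembling high-grade dysplasia for review first.
import numpy as np

CLASSES = ["squamous", "Barrett's", "low-grade dysplasia", "high-grade dysplasia"]

def score_fragment(fragment):
    """Stand-in for the trained network: one 0-to-1 confidence per class."""
    logits = np.random.randn(len(CLASSES))            # placeholder output
    e = np.exp(logits - logits.max())
    return e / e.sum()

def rank_for_review(fragments, target="high-grade dysplasia", top_n=10):
    idx = CLASSES.index(target)
    scored = [(score_fragment(f)[idx], f) for f in fragments]
    scored.sort(key=lambda pair: pair[0], reverse=True)   # most suspicious first
    return scored[:top_n]

fragments = [f"aggregate_{i}" for i in range(200)]        # stand-in 3D aggregates
for confidence, fragment in rank_for_review(fragments)[:3]:
    print(f"{fragment}: confidence {confidence:.2f}")
```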
I'm going to show you a couple of examples of Barrett's and various grades of dysplasia. So this would be, on the left side, the cell block image. This is exactly what you would see in a forceps biopsy specimen. On the right side, you see an image, a smear-stained image, of a Barrett's esophagus with regular columnar cells interspersed with goblet cells. This is an example of crypt dysplasia, early dysplasia, when dysplasia is limited to the crypt bases before it reaches the surface; this is what the cell block would look like. These are the deep crypts, which are showing dysplastic epithelium. I'll show you more carefully in the next image. But you can notice at the surface, there isn't any involvement. So this is what's referred to as crypt dysplasia, dysplasia in the base, not involving the surface. And on the smear, in this three-dimensional-like image of a crypt, here's the surface, here's the base, the dark blue cells here are the dysplastic cells, and you can see they're maturing to the surface and not involving the surface epithelium. Low-grade dysplasia shows large cells that are hyperchromatic, a little bit stratified, with an increased N:C ratio, a very homogeneous population of cells, a typical example of what you'd see in a forceps biopsy specimen, and then the smear showing you the equivalent. And then a higher grade of dysplasia, where the cells round up, the N:C ratio becomes really high, you perhaps may get some intraluminal necrosis, you get loss of polarity and pleomorphism, cells that are hanging out at the lumen, which is where they should not be, and then the corresponding smear specimen showing the equivalent of the cells. And if you really look at the cells in the smear, they're pretty much identical to what you see in the cell block, you're just seeing them with a lot better clarity. Here's a photo I wanted to show because I wanted to highlight that really what you're seeing in the WATS cell block is very much what you're seeing in a conventional forceps biopsy. So this is a classic or conventional forceps biopsy specimen showing high-grade dysplasia, and similarly a WATS cell block showing high-grade dysplasia. So just summarizing, what are the pathology problems that are overcome by WATS? Well, I mentioned that it reduces the potential for misses because the whole slide is screened. It documents the location of the most atypical aggregates and ranks them in terms of atypicality. It provides an en face, or unique 3D-like, view, which provides for greater cellular and nuclear detail. There has been one study that looked at inter-observer variability of WATS specimens, and the kappa values were quite high, particularly in comparison to standard biopsies, which over the past 25 years or so have not really improved much. They've always been in the 0.3 to 0.4 range. In the next few slides, I'm just going to summarize some of the literature on the increased adjunctive yield of WATS when it's used in conjunction with forceps biopsies. So this histogram shows, in the green bars, the increased adjunctive yield of detection of Barrett's esophagus, or goblet cells, if you will. The blue bars show you increased detection of dysplasia, all grades. And then there's one particular study showing you increased detection rates of up to over 400% for detection of high-grade dysplasia in particular. And that's a study I'll show you in more detail on this slide. This was a study by Vennalaganti. This was a multicenter study of 160 high-risk cases. 
And in this particular study, seven cases of high-grade dysplasia were detected by forceps biopsy, but the WATS method detected an additional 23 cases over the seven cases, greatly increasing the adjunctive diagnostic yield in this particular study by over 400%, as I mentioned before. Do the WATS-detected lesions have the same biology? This is a question I often hear. In this particular study, which was presented by Nick Shaheen at the ACG meeting in 2018, this was a long-term outcome study of WATS samples that were diagnosed as either Barrett's or crypt dysplasia or low-grade, and the progression rates to high-grade dysplasia and adenocarcinoma. And you can see here, the rates are fairly similar, if not perhaps a bit higher than the conventional studies that you see with forceps biopsies. With this data, I'm sure most of you know that the 2019 ASGE guidelines have included WATS in the guidelines, in addition to white light endoscopy with Seattle Protocol biopsy sampling, and the American Foregut Society and SAGES have also endorsed the procedure as well. So, just summarizing, WATS is commercially available, and it's been commercially available for, I think, probably about a decade now, or perhaps longer. It's the first guideline-included AI application in the esophagus. It's safe and effective. It adds minimal time to endoscopy. I'm told that it adds maybe three or four minutes. It is key in reducing sampling error, and obviously, I've shown you data that supports its increased sensitivity of detection of Barrett's, aka goblet cells, as well as its neoplastic complications. It reduces pathology misses, which actually is quite a bonus point, because for pathologists, when you're looking at a lot of cases over the course of the day, it's really tough to be able to see every cell on every slide. We cut corners because we have no choice but to do that. It would take a long time to look at every cell on every slide. So, having a computer do that for you is really quite a nice advance, and I've shown you that it enhances diagnostic detail. I think there's quite a lot of future research that we need to do. It would be important to know if the WATS platform can be useful in patients who are post ablation, in detecting recurrent goblet cells and dysplasia. I think it's important to know whether WATS has as good an application in longer and shorter lengths of Barrett's esophagus, perhaps even its use in detecting Barrett's. Of course, we'd like to see potentially some long-term outcome studies on the risk of progression as well, even longer than the ones that I've shown you here today. With that, thank you for your attention, and I think we're probably quite ready for questions. Are we not? We are, Rob. Thank you very much. Prateek and Rob, if you could put your cameras back on, that would be great. We're going to spend the next 15 to 20 minutes answering questions. First, I'd like to thank CDx Diagnostics and the ASGE for supporting this informative session on artificial intelligence. The first question is going to go to Rob, but any of us could jump in after the first person gives their response. The question was, the forceps biopsy may go to one lab and the WATS brush goes to the CDx lab. How do they compare the tissue when there's a discrepancy, if one finds something and the other one doesn't? Yeah, that's a great question. 
It is true, the forceps biopsies will go to the lab of the hospital or endoscopy suite that they're associated with and be reviewed by pathologists in those locations. The WATS specimens are sent to the CDx facility in New York and diagnosed by pathologists there. Those reports are then generated and sent by computer to the physicians. In some cases, they're sent to pathologists as well. I think it depends on the individual hospitals or units which are doing WATS and how they design their viewing of the reports. All the physicians get the reports. I think certainly pathologists are capable of getting those reports if that can be worked out with the physicians that they work for, so that they compare notes between the forceps biopsy and the WATS. Thank you very much. Prateek, specific to AI and colonoscopy and endoscopy, the question was, when do you think AI is going to be available to the endoscopist in community practice? We know that one system was recently approved, but who bears the cost, and do you think that the endoscopist is going to get paid more if they use AI? So yeah, thanks Seth. Multiple questions, and of course not many clear answers at this stage, but the first one's pretty clear cut now. So if we had the same presentation a month ago, we would say that we are still waiting for it to hit our doorstep and it's being evaluated in clinical trials, but now you have the first system which is commercially available. So to that question of when you can use it in your clinic: well, tomorrow, if that's what you want to do in your practice. So it is available right now commercially, and as you and I both discussed in our presentations, there will be several more available, as many of them are being tested right now. So today you may have a choice of one device, but probably by the end of the year you may have several devices to choose from. The second one's a little bit more difficult, Seth, in terms of reimbursement, in terms of who's going to pay for it. So you'll have to look at it in your own practice, either in your ASCs or at your institution. Is it going to improve your efficiency? Is the throughput going to be higher? So maybe you can justify your cost based on that. So that's one way of looking at it. Of course, through the ASGE task force, Seth, that you and I are part of and where we are spearheading certain efforts, we will be looking at working through the advocacy groups and seeing whether there is reimbursement for it. AI getting reimbursed has just started, and again, we've just got the first AI device in GI approved. So we have a little bit of work to do in this field before there'll be a code for it or somebody will get paid extra for doing that. Right now, I think the capital cost is on you, whether it's your institution or your ASC. Thank you, Prateek. Yeah, the one thing I will say is that, as mentioned earlier this evening, we certainly need more data. And if the data is really compelling over the next couple of years and it is better than what we're currently doing, there are pathways to get reimbursement. I'm not certain that we'll ever get additional reimbursement for the physician side, but there are pathways to get the practice expense covered, where the device is covered. And recently, this was accomplished with hemostatic powder, where you can get reimbursed at the ASC and the hospital. Rob, I have a question for you. When we use the WATS brush, is the AI making the diagnosis, or is it the pathologist, or both? No, it's the pathologist. Recall that slide. 
The pathologist ultimately, when he's sitting at the microscope, has one slide of the smear, which he's looking at in the microscope, one slide of the cell block specimen, which he's looking at in the microscope, and potentially immunohistochemical slides performed on the cell block if that's necessary. And then on the computer screen, he has the images that are presented to him by the computer, ranked in order of atypicality from most atypical to least atypical. So using the images and using the microscope, the pathologist makes the diagnosis. And Prateek, you have a lot of experience in this area of detection and surveillance using WATS. Does the order matter? Do we need to do the Seattle protocol first, followed by WATS, or could we do WATS first, followed by the Seattle protocol? So actually, it doesn't make a difference. I think you can go either way. And this comes actually from the study that Rob was showing in his presentation, the multicenter US study by Vennalaganti and his colleagues. In those patients, the order of the procedures was randomized, either WATS first followed by biopsies, or biopsies first followed by WATS. And we did not see any difference in the yield of dysplasia, whether the WATS was done first or second. So there's good data to show that doing either one first is fine. So Seth, I mean, since you're the one asking questions, let me ask; I looked in the chat box too. The question to you was pertaining to your talk and looking at confocal endomicroscopy, or just getting into deep magnification. How does that compare to AI? And would one be better than the other? So that's a really good question. And my thought is that for the person in practice seeing patients, regardless of your setting, whether it's private, hospital-based, or pure academic, the AI systems that were shown tonight are very practical. We need detection in order to interpret. And I think ultimately, if we ever move to a remove-and-discard strategy, we're going to really need to know those granular details, as you nicely showed with the NICE classification, type 1 and type 2. The Cellvizio system is a confocal endomicroscopy system. We have to inject the patient with some fluorescein beforehand. Certainly, that is probably a bit harder to learn, not impossible. Whereas the AI system, think of it as an enhanced second pair of eyes while you're doing a procedure. So I think it's probably going to be more user-friendly and more practical in clinical practice. Okay, so let's move back to some additional questions. And Rob and Prateek, if you see something in the chat that you'd like to bring up, please do so. I'll certainly try to capture everything. There was a question of, have there been any tandem studies? I was just part of one that finished, raising that question of miss rate, where we did a tandem study of standard colonoscopy versus AI. The data has not been presented yet; it's certainly in process, but I can tell you that there was a benefit with the AI system for reducing the miss rate for adenomas and sessile serrated lesions. And I think there's a potentially great benefit of artificial intelligence to help us with our sessile serrated detection, which often lags our adenoma detection rate. Prateek, there was a question, of course in multiple parts for you, but I know you've got this. So the first part is, if there's a false positive from AI, is it recorded by the software?
And if this missed polyp or cancer comes up, could this be a medical-legal issue for the endoscopist? Yeah. So again, I mean, great points about the medical-legal part. And it's not just specific to colonoscopy or to GI endoscopy, but I think with AI in general, there's always this big discussion: who's to blame? Is it the machine or is it the physician? And, you know, if you look at the tack that the FDA has taken on this for these approvals, it's looking and saying that these are devices which are assisting the physician. So they are not there to replace your clinical judgment or your ability to say whether this is a polyp or not, whether this is something that needs to be removed or not. You know, you make the final call. These are all tools to assist you; these are adjunctive tools. So at the end of the day, at least for the device which has been approved and any of the foreseeable approved devices that I see, which are in the pipeline and are being evaluated and tested in FDA-approved trials, that's what I see happening: these will all be devices which we will have to use with our own clinical judgment. So you make the final call, Seth. And so if you want to remove a polyp, or you think it's not a polyp and you leave it behind because that's what the AI device is saying, it's still on you. Right. And I don't believe that these devices are going to record your whole procedure and store it. You know, there are practices that currently, without AI, record endoscopy and colonoscopy procedures, similar to what happens in surgery, but I don't believe they store them. It's just a real-time highlight for the endoscopist, as you point out, to make that determination if it's a real finding or not. Rob, I have a question for you, a very detailed question, and it says the following: got a report back that just shows rare MUC2-positive cells that even WATS didn't catch. What are we doing about this type of finding in our patients? Yeah, so that's a good question. So, well, first of all, what is MUC2, for those who don't know? MUC refers to our mucin glycoproteins, which are produced by mucin-producing cells, and there's a bunch of different MUCs that are expressed in different mucin-producing cells in the GI tract. So, for instance, MUC2 is an intestinal-specific mucin glycoprotein, which is produced only in goblet cells, whereas different MUCs are expressed in the stomach versus other areas of the GI tract. MUC2, for instance, is an intestinal-specific immunohistochemical stain that detects mucin in goblet cells, but it also detects mucin in various stages of goblet cell development. And so, theoretically, you could see MUC2 positivity in the background of Barrett's mucosa in patients who have not yet developed goblet cells, because it's one of the stages of maturation. This is referred to as non-goblet intestinal metaplasia. There is some emerging data, not a lot of it, to suggest that those patients are clearly at risk for the development of, I would say, full-blown Barrett's esophagus, but that really means goblet cells, and that's just a late stage in the development of goblet cell metaplasia. But there are no guidelines on how to treat patients who are MUC2 positive but don't yet have goblet cells.
Like I said, there's emerging evidence to suggest that those patients, a great deal of those patients, already have goblet cells, but because they're few in number, they've been missed due to sampling error, or they will develop traditional goblet cell metaplasia in the future. So, for now, we don't really have guidelines on how to treat those, but I'm hoping in the next year or so, we're going to see some papers come out that would support that. Thank you. Seth, about the AI training part, do you want to take that? What do you think? I mean, I think that's one of those common questions on AI: do you think it will make us lazier endoscopists? And what do you think of the impact on trainees? Do you think our training programs will start becoming more hands-off? Do you think it's going to help them or make them bad endoscopists? What are your thoughts, Seth? So, when I think of AI, I sort of think of narrow-band imaging years ago, which, as you recall, was thought to be the holy grail for luminal endoscopy when it came out in the early 2000s. And I think at that time, I actually was a trainee. And, you know, it sort of taught me to appreciate things in white light that I was easily able to see in narrow-band imaging. And when we think of AI and training, I think it's actually going to help the endoscopist pick up subtle findings. And again, as you mentioned, physicians are doing many cases a day; there is a fatigue factor as you go through your day. So, I think there's going to be value. But I do think that AI, in a subconscious way, is going to teach us; it may improve, for instance, sessile serrated detection for endoscopists, not just the trainee, but the person in practice. And you also have to remember that for the trainees over the next five years or so, there's no guarantee that when they leave a program they're going to go to a practice or hospital that has AI. But they'll definitely have a better appreciation of subtle lesion detection, and I think AI could certainly complement that. Prateek, a question for you has come up, and there were a few of them, around how you move a concept forward to the FDA if you're developing an AI system. But what's the role of the ASGE, for instance? As the chair of the AI task force, if someone's developing an AI system, what do you think is a good pathway? Should they go about it on their own? Is there value in partnering with a society for guidance? We've had long discussions about this. What do you think? So, yeah, I think all of the above, Seth. I think there is no easy or straightforward pathway for something like that. As you know, through the task force, we've been discussing how we work with not just industry, but with researchers as well. So number one is obviously through the ASGE: somebody who's interested in AI, whether developing it or working with the ASGE, can reach the AI task force through the ASGE portal, and the experts are available to sort of help guide them. The FDA process, as you know, is quite complicated.
I mean, simply put, which sounds very straightforward, they have this Q-Sub process, or pre-submission meeting, in which anybody in the US can approach the FDA and say, hey, I want to have a meeting with you because I have a proposal for this device, an AI device or any device, and I need some questions answered on how to get it approved. And that's a free meeting with the FDA which is available to anybody and everyone. So that's obviously a pathway that you can take through the FDA directly and avail yourself of. It also depends, and the question as I'm reading it is, you have an AI development or an AI system; again, that comes in different forms. An AI system is a very generic term. If it's a quote-unquote device, the FDA does categorize them into Class I, II, and III, depending on the risk they bring. So the current devices fall under Class II, and so they go through the de novo pathway of application, such as the current machines for CADe, or detection of polyps, are going through, but there are several software products which don't and fall under Class I, and so it all depends on what that AI system is and what it does. So, I mean, it's a very complicated question in terms of how you can get it FDA approved. As you know, even large companies struggle with how they can get it done. The society, or specifically the ASGE if it's an endoscopy-related question, I mean, you can reach out to the ASGE and to the task force, and we'd be happy to sort of guide you in that process. Thank you. So I'm going to tackle two questions. One was, does AI have a role in warning of complications? And, you know, as Prateek showed during his presentation, there's the ability to identify the appendiceal orifice, or to make sure that the bowel prep score is good in terms of the Boston bowel prep scale, or to flag if you're having too much red-out so you can get more to the center. I don't see why future iterations of AI wouldn't be able to tell the endoscopist, you just did a large resection and I'm concerned that there may be a target sign in this part of the resection field, or you're giving too much pressure or force with the instrument and you're at risk of causing a perforation from excess looping. In terms of whether physicians will get paid less with AI, that's a really good question. It will all depend on what happens with AI and whether it makes the procedure more efficient. Most of medicine is time-based, and every several years CMS will reach out to the AMA and ask for codes to be re-evaluated. I encourage you, if you do get those surveys, please fill them out, because they are very important for those of us that defend the values of existing codes and try to get new codes approved. Rob, I'm going to finish up with a question for you. You showed the adjunctive benefits of WATS for Barrett's esophagus. What other areas in the GI tract do you foresee WATS potentially being beneficial for? So, potentially any area of the GI tract that, number one, has a tough time being sampled or is subject to sampling error, and then two, where there's diagnostic difficulty.
So that pretty much encompasses the whole GI tract and biliary tract, but I do know that there is R&D being done in the biliary tract in particular, where I think it's particularly amenable to the benefits of WATS because of the difficulty of acquiring samples in the biliary tract and the focality of the lesions; also potentially in the stomach in patients with chronic gastritis, for detection of intestinal metaplasia and dysplasia; and then potentially also in IBD, where we know most of the dysplasia that occurs is typically in the distal rectosigmoid area and, if it's not polypoid, is typically invisible to the naked eye. So potentially the stomach, the biliary tract, and even the colon. Thank you. So I want to just thank both Rob and Prateek for their presentations and for a very interactive and engaging Q&A. Thanks to CDX Diagnostics for supporting this program in conjunction with the ASGE. I do believe that this will be available on GI Leap. Thank you and have a good night. Good night, everybody. Okay, thanks. Thank you, everybody. Thank you, Dr. Gross, Dr. Sharma, and Dr. Odze, for some fantastic information and presentations, and also to our participants tonight. We would again like to acknowledge our appreciation for the support of tonight's webinar by CDX Diagnostics. You can still learn more about CDX Diagnostics and their technologies by visiting their website at www.cdxdiagnostics.com. Finally, just a reminder that as a registrant, you can access a recording of this webinar to listen to the presentations again or to engage your peers by sharing those recordings and the lessons learned. Those will be available in GI Leap by going to learn.asge.org in the next couple of days. Our next webinar will be ASGE's Endo Hangouts for GI Fellows. This will be next Thursday, May 6, 2021, at 7 p.m. Central, moderated by Dr. Todd Marin on EUS-guided transluminal interventions for pancreaticobiliary disease. This concludes our presentation. We hope this information is useful to you in your practice, and have a good evening.
Video Summary
The video is a recording of a webinar titled "Demystifying the Black Box: Where AI Meets GI" presented by the American Society for Gastrointestinal Endoscopy (ASGE). The webinar discusses the role of artificial intelligence (AI) in medicine, specifically in the field of gastroenterology, focusing on emerging technologies in upper and lower GI endoscopy.

The first presentation by Dr. Seth Gross discusses AI's ability to classify and localize abnormalities in the gastrointestinal tract, using examples of AI systems identifying abnormal areas in Barrett's esophagus and squamous cell dysplasia.

The second presentation by Dr. Prateek Sharma explores AI's role in colonoscopy, highlighting its potential to improve speed, expand access to services, and deliver consistent diagnoses. He discusses the use of AI in polyp detection, characterization, and predicting invasion in polyps.

The third presentation by Dr. Robert Odze focuses on AI's application in the esophagus from a pathology perspective. He introduces the WATS diagnostic platform, which combines wide-area transepithelial sampling with a 3D imaging analysis system and AI/machine learning to improve diagnostic accuracy in Barrett's esophagus.

Overall, the webinar provides insights into how AI is being used to improve diagnostics and outcomes in gastrointestinal endoscopy, specifically in the detection of dysplasia in the esophagus using the WATS brush and AI-driven 3D imaging analysis.
Keywords
Demystifying the Black Box
AI Meets GI
American Society for Gastrointestinal Endoscopy
artificial intelligence
medicine
gastroenterology
upper GI endoscopy
lower GI endoscopy
Barrett's esophagus
squamous cell dysplasia
colonoscopy
polyp detection