Gastroenterology and Artificial Intelligence: 4th ...
Panel Discussion
Video Transcription
I just wanted to introduce Nick Petrick. Nick is Deputy Director of the Division of Imaging Diagnostics and Software Reliability at the FDA, so we thought that when we're talking about the endosuite and its applications, his presence and input would be much appreciated. So can we go back up to Zoom here and have Mike Wallace online as well, and we'll get started with our panel discussion. So, Mike, are you on? And any of the other moderators who can be there as well. While we're waiting for Mike, perhaps I can get started. Irving, you had mentioned a little bit, and we started talking about fellows and training and de-skilling. I know there's a section Raj is going to be talking about that. And for all of you in the audience, as well as the virtual audience: at the end of the day we'll be having an industry panel discussion, the goal of which is to discuss education and training in the field of AI for GI and endoscopy.

Mike, can you hear me? Yes, I can hear you well. Can you hear me? Yes, your volume is just coming up. Mike, if you want to take this on, I think you're it. So if you want to start with moderating the panel discussion, you can ask your questions. I see there are some questions in the Q&A box; if you want to bring those up and have our panelists answer them, we can do that as well. And if there are any AV or video glitches, I can take over. So, Mike, go ahead.

Hi Prateek, I'm also on. It's Amrita. Oh, hi Amrita. Sorry, we can't see you. Is your camera on, Amrita? It is. Yep. Okay. Can we figure out a way to get Dr. Sethi on as well? So Amrita, while we're waiting for your camera, Mike, do you want to get started first?

Yeah, I have a question for Dr. Petrick at the FDA. I know that so-called software as a medical device, which most of these AI systems fall under, is regulated in a different way.
Can you just give us a quick overview of how these systems go through the process? How do we get all these wonderful tools that we're seeing in the research space to reach a level where the Food and Drug Administration would consider them acceptable software medical devices?

Yeah, good question. So I will give a talk later this afternoon where I'll give you a little more of a feeling for what the regulatory process is, or at least some of the main approaches that we use, and I'll talk more specifically about the CADe types of devices that are on the market. But in general, for software, this comes down to the risk and benefit profile associated with it. If you're going to do diagnosis, or automated analysis without the clinician, those will be higher-risk types of devices, so the regulatory burden is likely going to be higher for them. Some software as a medical device may be administrative, or used in a particular context of integration, like natural language processing and so forth; those may have a different bar and a lower level of oversight associated with them. And some software, again, if it's used in administrative systems, may not have any direct regulatory process where we do a review or require supporting data. So it's really a very large variety, and it's really based on the risk and benefit associated with these devices.

So a tool that helped you generate a report obviously should have negligible risk. That would be seen very differently than one that helps you detect and classify a polyp and eventually tells you what the right treatment is. That's right. Those would carry different risks and have different data requirements associated with them.
So again, filling in a report is a little bit of a gray area, depending on what we really mean by that. It could be a device that pre-fills in the diagnosis, or it could just be natural language processing that tries to interpret what's being said in the clinical setting and put it into a report that will then be reviewed. In those scenarios, those would be lower-risk types of devices.

Nick, just following up on that: at the end, the physician is still involved in the decision making, right? None of these are totally automated like self-driving cars as yet. We still have our hands on the wheel; we are still making the diagnosis or treatment decision, whatever it is. Where does that fall in? Why should that be considered high risk if the physician is still part of the decision-making process? Any thoughts on that, Nick?

Yeah. So part of this is associated with how you interact with these devices. We know that once they are introduced into the clinical setting, they can change practice. Maybe that's all for the better, and that's what we want to see, but it can also be that you start to rely on the automation, so it's not used the way we might want it to be, or the way that it's labeled, and it starts to become more of a primary diagnosis rather than a secondary one. So there needs to be at least enough data to support that these aid devices are actually going to be helpful in the clinical context. But you're right, none of the ones in endoscopy yet are anything more than aid devices. We do have some devices that do pre-screening, say in Pap smears, where a subset of slides is read by the AI, determined to be normal, and never looked at by clinicians. So depending on what the device is and in what area, some are a little more automated than others.
I don't know if there are any devices that are really handed the whole task, with nobody ever looking at the output. But there are some that are starting to say, these are the normalest of normal, we might not need to read those. And that's probably the evolution as we get better at these things. Now, endoscopy is a little bit different, because the procedure has to be done as well, so the separation between the AI and the procedure isn't so easy in endoscopy, whereas in radiology or reading slides you may have the ability to acquire the data and then have the AI assess it after the fact. That's not so easy in endoscopy these days.

Amrita, you're on screen as well. Amrita, do you want to take any questions from the virtual audience?

Sure. There is actually a question about the effect on things like reimbursement. Raj, I may throw this your way: think about the time for reading capsules, or even clinic visits, where we're now down to 15 minutes and being reimbursed based on that. If we start judging things like time to read a capsule based on what AI can do, are we looking at reductions that aren't going to be feasible for those of us who don't have AI, or who still, as you said, have to do the endoscopy? Are we now going to be judged against computer rates? How is that going to affect reimbursement?

It's a great question, and I have no idea what the answer is, so I'm curious if Prateek or Mike or anyone else on the panel does. It's a concerning question. The biggest thing you highlighted is that AI reduces the read time or the procedure time for some people, but not everyone has AI. Does the entire reimbursement for the procedure go down for everybody, even if it's taking you as much time as it did before? I'm not aware of any insight into that, but I'm curious if others know anything.
Actually, Chuck, do you want to make a comment on that? It's the same concept with radiologists reading based on AI: does the reimbursement there differ at all from a radiologist reading an image without AI?

Okay. So Amrita, we couldn't hear that, so let me relay it. The response from Dr. Khan, who is an experienced radiologist and will be giving a lecture on the role of AI in the field of radiology, is that right now, even in radiology, there's no differentiation. I think your question is a good one, Amrita, because it raises the issue of time, and much of the CMS reimbursement is based on the time spent. So the bottom line right now is we don't know. I think Mike had a response as well.

Yeah. We saw this a little bit with the anesthesia issue, when we knew as gastroenterologists that someone else was doing the anesthesia for us, namely a nurse anesthetist or anesthesiologist. Medicare, after some years, recognized that as the new standard and said, we're not going to pay the gastroenterologist for something that we used to assume was part of their job. So I suspect that as these systems reach the level of standard of care, like propofol did for anesthesia, there'll be a reassessment of the work value for our role. Obviously, if we're reading a capsule in six minutes but being paid based on a 90-minute read, that will at some point get reassessed. But I doubt that will happen until it becomes widely practiced and really standard of care.

I'm going to follow up that question on time and reimbursement with one about how this might translate into regulation, particularly when we're talking about quality metrics, like, for example, the bowel prep or recognizing the appendiceal orifice.
Do you anticipate that one day an alarm will go off if somebody starts withdrawing before actually reaching the cecum, or if the prep is not good enough to continue with the colonoscopy? Or is it just going to be something that is measured and reported, so that people may not get reimbursed, or may have dings against them as endoscopists, for reporting that as a complete colonoscopy? Just curious about that translation.

Yeah, I think that's an excellent question. The way I think we're seeing this develop is that, while it's all new, it has to come into some kind of framework that makes sense. One of the things that comes to my mind is an endoscopist who has a high-quality exam, for example: they're consistently making it to the appendiceal orifice, the preps are being documented appropriately, mostly through an automated system, but most importantly, their ADRs reflect a high-quality exam. We'd like to see a shift toward value-based care. In other words, value is actually assigned based on ADR, so that endoscopists with high ADRs are reimbursed better than those who have traditionally been lower ADR finders. I think that would be a way you could potentially quantitate or capture how AI is helping, because now it's actually impacting the ADR.

I just want to make a quick comment and then a question for Dr. Waxman. A lot of these AI algorithms rely upon a close examination, and as you and I well know, for esophageal or gastric lesions, the time spent in a normal esophagus can sometimes be measured on the order of milliseconds, if not seconds. Did you come across anything in your reading about whether you got a high-quality examination of the esophagus or stomach? Because that's going to impact whether these algorithms can work or not.
And if you haven't seen it, has anyone else worked in that area? I've seen it in the colon, but nothing on the upper side.

Well, there are actually published papers looking at quality in upper endoscopy and the minutes you spend in the upper GI tract with regard to detecting gastric neoplasia. I don't remember the exact number; I suspect it's around seven or eight minutes that you actually have to spend to really detect early neoplasia. I think that's what the British guidelines suggest. There was one paper I came across where what AI does for quality exams in the colon was applied in exactly the same way to upper endoscopy, but I only found that one paper suggesting that AI can actually measure quality by how much time you spend in the esophagus, whether you retroflex, et cetera, which was very interesting.

And the other thing along those lines, Raj, was not just the time but the landmarks and which areas you have actually examined. So rather than just requiring seven minutes in the stomach, because Irving is right that you could be sitting in the antrum for seven minutes, it actually looked at the landmarks and checked off a box as each landmark was seen.

Nick, one last comment before we wrap up. There are systems out there, not in endoscopy yet, but in ultrasound, that are aimed not at making a better diagnosis directly but at improving the acquisition. This would be the type of AI where, whether it's based on time or, as you're saying, on whether you're visualizing the right areas, it may be a good use of AI to help improve the overall acquisition of the data and make sure you're at least looking in the right places. And it would be low risk, right? A little lower risk. Okay, I'm just trying to push him on this one. So that's okay. Yeah. One last question, sorry, from the audience. So, Irving, sorry.
There is a full session on training right after Raj's talk, so my apologies, but we'll take your question at that time with Raj and the panel. Mike and Amrita, that question was related to training and de-skilling of endoscopists; we'll get to it in session two. Amrita and Mike, again, thank you very much for being with us. And again, thanks to our panelists and Nick, and excellent talks by each one of you. Thank you very much for this session.
Video Summary
In this video, a panel discussion is held about the use of AI in the field of gastrointestinal (GI) endoscopy. The discussion covers topics such as the regulatory process for AI systems, the involvement of physicians in decision making, the potential impact on reimbursement and quality metrics, and the importance of training and the risk of de-skilling for endoscopists. The panelists include Nicholas Petrick, Deputy Director of the Division of Imaging Diagnostics and Software Reliability at the FDA, Mike Wallace, Amrita Sethi, Rajesh Keswani, Prateek Sharma, and Irving Waxman. There are also questions from the virtual audience.
Asset Subtitle
Irving Waxman, MD, FASGE, Rajesh Keswani, MD, Shyam Thakkar, MD, FASGE, Nicholas Petrick, PhD
Keywords
AI in GI endoscopy
regulatory process for AI systems
physicians in decision making
reimbursement and quality metrics
training and de-skilling for endoscopists