ASGE Annual Postgraduate Course: Clinical Challeng ...
Panel Four
Video Transcription
Okay, so my first question is for Michael, because we really liked the relationship you drew between value and outcome. However, in cancer prevention it may take a long time, even 10 years, to show the impact of AI on the main outcomes, such as cancer incidence and mortality. What we can have in the short term is only evidence on intermediate outcomes, such as polyp detection, which is what you showed for mammography. So my question to you is: can we define the value of AI in endoscopy only on intermediate patient outcomes, or do we need to wait 10 years in order to have information on the main outcomes, cancer incidence and mortality?

That's a great question. Sorry, I was having some camera problems, but I agree. Diabetic retinopathy is a chronic disease in itself; diabetes is a chronic disease. So we have the same problem. The good thing is that there was a lot of evidence from decades ago about what you see in the image or images and can measure. There were actually clinical trials where people were left untreated, so we know what happens to them even decades later. That may be the lowest-hanging fruit, and that's probably why we were a little bit ahead. But the hierarchy of reference standards that I showed includes other proxies, and I'm sure there is evidence relating biopsies, or even what a clinician sees in other studies, to outcome; I actually looked it up a bit. So it matters how consistently you can relate what clinicians see, both in terms of intra-observer agreement and evolution over time. Where we used a proxy, what I call the prognostic standard, there's actually data spanning decades, where people keep rereading decades-old images to ensure that they're consistent and that we're not deviating over time, right? You also see deviation over space: I came from Europe, and there were some systematic differences from clinicians here in what you call a hemorrhage, et cetera. So I couldn't agree more. I think where FDA is going is this: there's a hierarchy now, and if there's a higher level, try to do that. But they also look for undue burden, meaning if it's going to be too expensive and would lead to an AI never going on the market, there's an argument to be made there. So it's a little bit back and forth as they develop their thinking on this, especially in your field.

Okay, this is very reassuring, because at least an adenoma is an adenoma, whether in Europe or America, so we have a strong intermediate endpoint. But I guess, Sravanthi, you have a question also?

Yeah, I have a couple of questions for Michael, and then I have a bunch of questions for you guys too. But Michael, one of the things I noticed when listening to your lecture is that you make the statement that AI developers should also accept liability. We've been kind of dodging that, so I want you to explain that a little more. And the second question I have is about the CPT code 92229 you mention. Can you explain a little: is it the retina scan that gets reimbursed, or how does it work, so that we can see how to apply that concept in endoscopy?

You ask great questions. I don't want to dominate the entire panel, but these are deep questions, right? So to start with the second one: the CPT code is for a patient with diabetes, so it's very narrow.
It's for a specific diagnosis that needs a specific diagnostic process, in this case a diabetic retinopathy exam through an autonomous AI. So it describes a service, right? One of the back-and-forths CMS is going through is: how much is an AI just an algorithm hanging somewhere in a cloud, versus a complete service with image quality, and I saw the previous speaker discussing that as well, right? What is the quality of the image, how is it acquired, the hardware, et cetera. So part of that is that the FDA evaluated it as a system, and CMS likewise looks at it as a service for a process, not just an AI. And now I forget the first question, sorry. Oh right, the first question was about AI developers taking liability. So this is for autonomous AI; the American Medical Association has put it into their policy, so it's very narrow, right? Because I think most of the AI being discussed today is assisting a GI doctor, or me as a retina specialist. If it assists me in a clinic, I make my medical decisions, the AI helps me, and I may override it. But if you're a primary care doctor using an AI for something where you don't have the expertise, I wouldn't use that if I were carrying the liability. So in my view, we're never going to move the needle on all these advantages that AI has, autonomous AI especially, unless someone says: I stand for what I do, which is to assume liability for the performance of the AI. No one knows how juries and judges will handle this, or how it will be litigated; that is unknown at the moment. We'll see how that goes, but I think it is good to take a stance. And of course I'm happy to discuss it further, but I don't want to dominate the discussion.

All right, Cesare, any questions? Otherwise I'll jump on to mine.

There was a very interesting question in the Q&A: what should we, as clinicians, know about the training data set of a CADe or CADx? For instance, just imagine that I'm scoping a Lynch syndrome patient. I should know how many non-granular LSTs were put in the training data set of the CADe. But at this point, as clinicians, we don't have any information; companies don't tell us what patients and what diseases were put in the training data set. So I'm curious to know from Shyam, because he gave the talk on CADe and CADx, what he thinks. And then I'm curious to know from Nicholas why we, as clinicians, don't have the clinical information that we might use in our clinical field.

So, Shyam, I think what he's getting at is: what kind of data, and from what kind of populations, do we need as we're training these algorithms?

Well, I think that's been a major question of the day. What we really need to look at is being able to standardize that training set, so to speak, across the populations that the clinician, or that question, is addressing. And that's been the difficult part. There have been studies out there on how to properly conduct AI studies and how to properly select and develop those training sets, and I think that's really the key algorithm to follow if you're going to be evaluating any type of AI system. Otherwise, it remains a black box.

Nicholas, do you have anything to add? Yeah, I would just add, from FDA's perspective, we're trying to learn what data makes sense to put out in the public domain.
Of course, we see a lot of information that comes in with a submission, and some of that is proprietary and won't go into a summary of the device. But we're trying to understand more about getting things like demographics and information about the training and test data sets into the 510(k) summaries and other documentation. So as we learn what might belong there, we may make some adjustments. I think the other side of the equation is that, the way the system works, the companies do the studies, the studies come to the FDA or any other regulatory agency, and they review that information. There may be some public information, but it's up to the clinicians to decide what devices they want to buy and what information they feel they need in order to make a decision about buying or using that device. And so the more the clinicians and the other parts of the medical system push on the companies and say, we need this and that, the more likely that information will show up in some of these summaries, in the documentation for the device, or in other areas.

So Nicholas, what you're suggesting is that we need to demand transparency of data in these studies; although FDA reviews that, it is only to a certain extent.

Right. And again, we review certain types of studies, which may not answer the fundamental question of whether this is going to reduce the rate of cancer 10 years from now. Those are not necessarily studies that we'll see for every device that comes in, so our reviews may not answer all the questions. The other aspect to think about is: what do you really need to know about the AI to use it? We use our phones every day; we use natural language processing at meetings and in some of these areas. We don't ask how it was trained or how it was built; we just use it because it works pretty well. From a clinical perspective, that may not be good enough, but you're also not going to become an expert on whether the training set had this population or that. So you need to figure out: what pieces of information do I, as a clinician, really need to understand in order to be effective with this, and what types of evaluations, when I buy a device, do I need to implement from my QA/QC perspective before I put it into my clinics and start using it?

So, Naina. Maybe one of the other parts of this is, Sravanthi, you talked about an audit of your data once you've used a product, whatever it may be. And maybe that's something that, as newer versions of products come up, the FDA might have to look at. Would you consider that?

Absolutely. Getting real-world data and looking at data over time, because even if you have a great AI to begin with, acquisition technology changes. You may get a new scope that does something different, and all of a sudden that AI isn't compatible anymore. All things are changing. So the ability to track performance over time would be a panacea for lots of us who are interested in understanding how these systems work and how they evolve.

So on that note, Naina, I know at Mayo, with the IBD patients, you're collecting data. For other people who are trying to build those data sets: do you have, built into your protocol, some kind of an audit before you start processing that information, or any checkpoints or guidance? Sure.
So, because we have so many providers doing these procedures, we don't always get standardized imaging of polyps. But as you know, if you don't have a video recording, which is really the gold standard you should get, the variability in sizing based on how far away you are, et cetera, makes a difference. I think it's also not only having video capture, but having it at the correct resolution, whether you're collecting it at 720 or 1080 or whatever those numbers are; I'm not a technical person, but they tell me there are different fidelities, and those things make a difference. If you don't decide that upfront, which is a mistake we made initially when we started this process of collecting data in IBD dysplasia, it is not easy; we had to go back, discard some of those patients, and then use only standardized criteria. So if you can, set some of these parameters upfront before you start recording. And it's quite difficult, because even a small switch in your recording system will change your fidelity. So it's something that requires ongoing audit as you go, to make sure you keep up with those standards.

So, you have a comment? Michael? Dr. Abramoff? You had your hand raised.

Yeah, a short comment, related to liability. If you're a clinician using this, you had better know the populations it was used on. And as FDA and CMS talk about guardrails, a big thing is equity; racial and ethnic bias will be a big problem if we're not upfront about it right now, because, I don't know whether you've seen it, but OCR, the Office for Civil Rights in HHS, is coming down on AI bias. So be very, very attentive to that is the word.

You know, Sravanthi, just to comment on the liability piece as well. I think one sector that we're all kind of jumping over, which is worth considering, is the malpractice insurance standpoint. Perhaps this will open up a sector of malpractice AI coverage that allows physicians and clinicians to take on that liability, or, even if it's transferred to the developer, there's an AI piece to malpractice that might now become an insurance option in care. And I think that's something we'll probably see in the next five years.

That's a brilliant idea. I think Charles also mentioned previously that you could have this AI malpractice. Cesare, questions?

Yeah, I want to come back to Michael, because Michael, you were very clear: there is AI-assisted on one side, and AI-autonomous on the other side. But endoscopy is always a bit odd, so we can have some grey area. For instance, I can have a software for sizing the polyp. My eye, and you are an ophthalmologist, is very bad at sizing a polyp. So even if I'm there, even if this is an AI-assisted procedure, I am passive; I can only trust the computer, because I cannot challenge its diagnosis. So this is a hybrid, where the procedure is AI-assisted, but the operator must fully trust the computer, because the computer is more competent than the endoscopist in the first place. How would you classify this hybrid scenario?

You're muted, Michael. Yeah, sorry about that. For the CPT Editorial Panel, there's now Appendix S, which was just published, and that separates autonomous, augmented, and assistive. And augmented is exactly where you would position it.
So we're starting to be more sophisticated about exact classification, which will of course have regulatory, but especially reimbursement, consequences. It's ongoing, and I completely agree with you; what effect it will have, and how it will work out for reimbursement, especially for the physician fee schedule in the US, remains to be seen. We could talk for hours about that. But yeah, I agree, and there's more sophistication now about the different types of AI.

So Pratik is here next to the mic, and I think that's a cue for us to stop arguing and discussing. Oh no, I mean, Cesare, Michael, excellent conversation. As they say, all good things have to come to an end, so we'll put your debate to an end, but it was great. Good to see both of you on camera; hopefully we'll see both of you live in person. And again, I wanted to thank Nick and Michael; sorry we missed you here in person, but your recorded lecture came out really well. So thanks for that. And thanks to Naina and Shyam. So guys, thank you very much for this wonderful session.
Video Summary
The video features a panel discussion on the value and outcomes of AI in medical fields such as cancer prevention and endoscopy. The panelists, Michael, Cesare, Naina, Shyam, and Nicholas, discuss topics such as the time it takes to see the impact of AI on main outcomes like cancer incidence and mortality, the value of AI in endoscopy based on intermediate outcomes, the need for AI developers to accept liability, and the transparency of data on training sets for AI algorithms. They also touch on the importance of standardized training sets and the need for real-world data and ongoing audits to track AI performance over time. The panelists note that clinicians should be aware of population biases, and they discuss the potential for malpractice insurance specific to AI. The discussion concludes with a classification of AI as autonomous, augmented, or assistive, depending on the level of trust in and reliance on the AI system.
Note: The summary does not include the Q&A portion of the video.
Asset Subtitle
Michael Abramoff, MD; Nicholas Petrick, PhD; Nayantara Coelho-Prabhu, MD, FASGE; Shyam Thakkar, MD, FASGE
Keywords
AI in medical fields
cancer prevention
endoscopy
transparency of data
AI performance