ASGE Annual Postgraduate Course: Clinical Challeng ...
Session 9 - Panel Discussion
Video Transcription
What we'll do is we'll just talk more about the practical aspects of integrating AI in endoscopy. So for the panel, my question is, what are the things that our gastroenterologists today should know when they are thinking about implementing these AI tools, whether it is administrative tasks or computer vision tasks or any of these bazillion EHR toppings that we'll get in the next year, maybe? What should the clinicians know about AI? So this is very difficult, because when you have your CADe in your unit, you just switch it on as you do with an app on your mobile. And actually, it works very well. So I would not force people to study something that is not critical in their clinical practice. I would pick only those two or three concepts that may be very relevant. For instance, in what patients can I use this software? If a software has been trained on screening colonoscopy, please do not use it in ulcerative colitis. You will end up removing a lot of nonsense lesions. And secondly, how do you deal with a false positive? Because when you scope a patient with a CADe, you have a lot of activations. You cannot do EMR for all of them. You should not do biopsies for all of them. So before you use a software, please consider whether you are good enough to differentiate between a true positive and a false positive. Having said that, of course you need to know much more. The rest of the panelists, two bullet points each. Yeah, I think there are a lot of interesting questions, and I'm going to answer your question with a question: should we be informing our patients before we actually examine them with GI Genius or another computer vision model? Should they be aware that that's happening during their exam? And I'll tell you, from my experience talking with patients in trials where AI is being used, almost all are actually extremely into it.
But I think they wanted to know ahead of time that, yes, computers will be taking part in their care. And in general, I think there is a huge amount of promise. The one thing that we do know is that current AI models are built on very limited data sets, often trained at massive academic centers that cater to particular populations. So it very well may not function very well in your community practice where the baseline demographics are quite different, and that's just something to be aware of. I'll take two points. Yeah, so I think it's just like how you evaluate a new therapy in your clinic, right? We have the reps coming by, and they'll be telling you, OK, we have this new drug, it works great. So how do you evaluate that? I would look at your AI tools in a similar fashion. What was the patient population on which the machine was trained? What was the training set? What was the validation set? How many patients were there? What were the demographics of the patients? See whether it pertains to your practice. The example, just as Cesare was saying: you may think your ADR is great, and what you really want help with is sessile serrated lesions. But if the software was not trained on sessile serrated lesions, it's not going to help you increase your detection rate of the right-sided flat polyps that you may be interested in. So just like evaluating any new therapy, you should go through the steps of evaluating the data set on which the model was trained. So Scott, a question for you, because big tech is now getting into this trillion-dollar market. As you bring in these clinical tools that assist physicians, like your ambient clinical intelligence and some of the things that we discussed that were toppings on the EHR, that's what I call it.
What should we look for when we are bringing those into our clinical space, especially when the clinician is still liable for some of those mistakes? What should you look for in the vendors? A couple of things. One is, I encourage you to get hands-on and try this. It's like when the web first hit a few decades ago, and people were like, what's the web? Get on there and go buy something. Same thing here: you've seen Bard and ChatGPT, so go play with these in other areas to familiarize yourself. And the industry wants to bring this to help you, but they want to know from you: how is it useful, like you just said? How do you make this fit into practice so it's not a burden and actually takes away cost? And we just had a really interesting discussion here before this about how, whenever something is introduced into practice, you have an RCT to validate it. In the software industry, we speak of alpha releases, beta releases, features, and functions. And she's like, no, no, I speak RCT. How do those communities come together to start talking about what an RCT for software looks like? It's really interesting to talk about how you bring this in so you have confidence and trust in the actual algorithm and you know where to use it. For example, what we're finding is that at NASA, they treat AI as an idiot savant, or as a first draft. Which means: it's interesting, let me go evaluate this in my own way to make sure, before I execute on what it gave me. And I think if you start to treat these tools like that, they may be very helpful. It's where you try to take the doctor out of the loop that we find these things historically just don't perform very well.
I think in total, what we are hearing from the panel is: know whether it is relevant to your use case, make sure that the data on which it's trained is good, make sure that the software you're going to use is reliable and reproducible at every single endoscopy visit or clinical encounter, and ask questions of the companies that are selling these. So moving on, I just want Anil to give us a quick one- or two-liner on what's happening in the pancreas and biliary world. What kind of solutions do we have? Two bullet points. Yeah, so it's interesting. The first computer vision model that I'm aware of to assist clinicians during EUS exams for pancreas cancer was actually in 2001, and it was trained on VHS tapes, which is amazing. But obviously, now with the advent of deep learning models and much larger video data sets, we had a publication where we created an EUS AI model that could differentiate AIP from PDAC, which is obviously a significant clinical dilemma for several of us. But I think the next step, and I think this is true for a lot of these AI models, is that the wastebasket of history is littered with unused models. A lot of models are considered valuable in a journal, they get a good impact factor, but they don't actually make it into clinical practice. And that includes models that we've made. So I think the big next step for us is true implementation: studying these in a rigorous clinical trial setting. Because I think we can all see a smoke-and-mirrors AI model that has a high AUC of 0.99, but is it actually going to work in a clinical trial? That's what we need to see. And in terms of biliary, cholangioscopy and this issue of indeterminate biliary strictures is, I think, ripe for something to address.
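The panel's point that a high AUC is no guarantee of clinical usefulness can be made concrete with a short calculation: at the low disease prevalence typical of screening, even a model with excellent discrimination can produce mostly false-positive alerts. The sensitivity, specificity, and prevalence figures below are illustrative assumptions, not numbers from the discussion.

```python
# Illustrative sketch: why a "0.99 AUC" model may still disappoint in practice.
# Assumed operating point: 95% sensitivity, 90% specificity (hypothetical).

def positive_predictive_value(sensitivity, specificity, prevalence):
    """PPV = P(disease | positive alert), via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# In an enriched retrospective case-control set (~50% prevalence):
ppv_enriched = positive_predictive_value(0.95, 0.90, 0.50)

# In a screening population where only 1% of cases have disease:
ppv_screening = positive_predictive_value(0.95, 0.90, 0.01)

print(f"PPV in enriched test set: {ppv_enriched:.2f}")   # ~0.90
print(f"PPV at 1% prevalence:     {ppv_screening:.2f}")  # ~0.09
```

Under these assumptions, the same model that looks superb on a curated data set would generate roughly ten false alerts for every true one in a screening setting, which is exactly the kind of gap a prospective clinical trial exposes.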
I think for us, we're very interested, and I know several other groups are interested, in trying to use cholangioscopy video, not just the clinical features that we as clinicians think are significant for diagnosing cancer, but could the AI teach us that these are actually the features you should be considering when evaluating a biliary stricture? Great, thanks for that excellent overview, because I think a lot of times we just focus on luminal gastroenterology. But shifting gears back to some of the administrative tasks, I want to ask Scott here, because he works with a lot of healthcare companies and healthcare organizations, and especially Google is helping with prior authorizations, because we are working with UnitedHealthcare and we have those battles ahead of us. How do you think some of these software tools would help the clinician and the insurance companies, fighting against each other, get the prior authorization and do the right thing for the patient? What are some of the solutions that big tech is putting out? I think it's emergent. I'm lucky enough to be able to talk to both sides of the table. On the clinician side, my brother led, at least for a year, a lot of the community cancer centers, and they're overloaded, like 60 or 70 patients a day. And where they want it is: they go home, they've had dinner, and after they're done playing with the kids, they have only so much time to do charting. At that point, they're saying, can you just give me a rough draft? I've got to go through 45 of these. So they want to use something there to generate what they have to fax the next day, because their EHR is basically a bespoke system. They fax that to the other side. Then if you talk to the payer, they're like, I get so many faxes from the community, right? Can you help me just understand red, green, yellow: where do I focus my attention?
So the ones that should just sail through, let's make those go through, and those that need attention, let's spend our human time on those. So we see that AI is helping on both sides. One is, can you write something well as a rough draft so they spend less time charting and more time with their family, because that's where the time comes from. And then on the payer side, they're saying, can you help us so that if something comes in that would just go through, it goes through, right? We'll still sample, trust but verify. But can you help us figure out how to let those go through and focus our attention on the ones where we do need to have a conversation? And so that's where I find AI is really coming in on prior authorization. There are all these tools around that, but that's the bulk of it. When is it coming to us? You want to work on a clinical trial for that? We should talk. We need a randomized controlled trial for that. It's the coolest thing. I saw the class AI, and it's really interesting. FDA may have something to say about that, but I think it's a really interesting area. And what we're trying to get back from y'all is: what's a good first draft? How do you build that trust so that what we're providing is enough and you know what to check? It's like having an assistant that frees up your time. Right. And moving on to another piece, I want to engage Cesare here. What do you think are the immediate challenges for adoption into clinical practice for any of these AI tools? Thanks, Salamati. Can we ask the audience: how many of you have a CADe at this time in your endoscopy unit? Please lift your hand. Okay. So probably not more than 2% or 3%. And this is the same in Europe. CADe has now been available for five years in Europe, probably three years in the United States. So the spread of this technology in the medical field is extremely painful. I mean, when you launch an app, in one second everyone has it; in medicine, it takes a lot of time.
So your future in 10 years is fascinating, but if in 10 years we can have one CADe for each of us, that would already be a bright future. So it's extremely painful to spread software in medicine. Secondly, there may be several aspects on the operator side. First, in our unit, all the trainees are trained with AI. This means, and this comes from mammography, that if you remove AI from them, there will be an immediate de-skilling. So one of the challenges in the future is de-skilling if you don't have availability of AI anymore. And the second issue that comes from radiology is that people don't switch on the CADe. You know why that is, Avanti? Because what is really frustrating is that you realize that the AI makes a mistake, but you cannot correct it. Different from your app, we cannot say, look, CADe, this is a polyp; next time, please recognize it. So the system always makes the same mistake. This is frustrating, you know? And so you don't use it. So my dream is that all physicians in the world could collectively, instantaneously correct any mistake. Then maybe we would have a super-algorithm that would be better than any of us. But once you know the defect of an algorithm, you are frustrated. It's like when you are with a person and you know that this person has a defect; you become so obsessed with the defect that you don't see all the rest of this person. And you just, what are you laughing at? So this is what I would like from a medical software. So Cesare, I think those are obviously reasons, but the biggest reason, I feel, at least in the US, is also reimbursement, right? Payment. I mean, I think that's a huge thing. I would definitely have seen 50% of people raise their hands if this thing was free, right? I mean, if we asked the same question, how many people use Google, or how many people use ChatGPT? That would be more than this number of people, whereas ChatGPT has been around only for three months.
So that's a major thing that all the societies and industry have to work on together: will we be able to get a CPT code for this, right? Will payers actually end up paying for AI-assisted colonoscopy versus a colonoscopy without AI, and how do we get there? So to me, the financial part is a huge driver of why it's difficult to incorporate this in practice. So it's the financial piece, but also the feedback piece, right? And that's also the data sharing. But there's a technique we've all read about, reinforcement learning with human feedback, right? Where you're actually saying, this is how you make it useful. How do you collect that from you all in a safe way that's privacy-preserving? That's an active area of research, but we need doctors like yourselves pushing to say, we'd like to give it back: that's a polyp, that's not, right? And all kinds of things. Thank you for a great presentation and a great panel discussion. I hope you all got a glimpse of what's happening today and what will happen in the future. And we'll move on to the next session. Thank you.
Video Summary
The video transcript discusses the practical aspects of integrating AI in endoscopy. The panel discusses what gastroenterologists should know when implementing AI tools in their practice, including considerations for administrative tasks, computer vision tasks, and EHR integration. They emphasize the importance of understanding the limitations and relevance of AI software and how to deal with false positives in order to make informed decisions about patient care. The panel also touches on topics such as informing patients about AI usage during exams, evaluating AI models based on data sets and demographics, and challenges in adopting AI tools. The discussion concludes by highlighting the potential benefits of AI in pancreas and biliary diagnostics, as well as its use in administrative tasks like prior authorizations. The panel acknowledges the challenges of integrating AI into clinical practice, including the slow spread of technology, possible de-skilling of trainees without AI, frustration with software mistakes, and the need for reimbursement and payment models for AI-assisted procedures.
Keywords
integrating AI in endoscopy
gastroenterologists
limitations of AI software
patient care decisions
challenges of integrating AI into clinical practice