ASGE Annual Postgraduate Course: Clinical Challeng ...
Introducing AI in GI Fellowship Education
Video Transcription
Thanks again for the opportunity to speak. In contrast to the prior lecture and my first lecture, there's almost no data here. I think Prateek and Mike know that I'm just good at pontificating, so they're letting me give a talk where we know nothing and we can all just think about what we believe the right answer is. That's introducing AI in fellowship education, but I want you to think about it not just in fellowship education but in remediation in general, because it's not just trainees who need to get better; it's all of us in practice who need to improve, and how does AI impact that? These slides were transferred over and they don't seem quite correct, so I'm hoping they work out. When we think about AI, how do we think about it in terms of measuring competency, and then how do we improve competency? What can AI do to help us in those areas, both in training and in remediation? I think all of us know, and this was data from our institution 10 years ago, right when we started measuring ADR, that quality measures vary widely: there was a fivefold variation in ADR between low performers and high performers. We've seen that data a lot, but I think about it a lot because we all start at the same place in training and then end up in vastly different places, where quality varies dramatically. None of us were high-ADR detectors when we started fellowship, but somehow we end up different. I think AI can really help us reduce that variation in practice. It's important for those of you who don't have trainees to understand that these studies talk a lot about the average trainee: the average trainee achieves colonoscopy competency at X number of colonoscopies. But the average trainee isn't really average; the point at which competency is achieved varies dramatically.
Some people get competent at 100 colonoscopies, some at 200, some at 500. So there is that variation, and we need some innovative solutions to try to reduce it. You'll see the same thing with upper endoscopy. I can plot the average trainee on the left: when do they get to the second portion of the duodenum? But it varies for each trainee; some people just need more time to get competent than others. And there's a limit to what we can do in training programs or remediation programs, because it's hard to get a second gastroenterologist in the room to actually pay attention to what someone is doing and give feedback. When we did a really simple survey study of trainees and asked, is your attending actually watching you when you do colonoscopy? — they're not. If you look at what's happening during the insertion phase, the phase where, most often, we just pray and hope there's not a lot of diverticulosis, the attendings aren't actively watching the screen. Even in the third year of fellowship, when you want to watch how trainees are performing polypectomy and give them pointers on how to be more efficient, we say, oh, they're third-years now, they can be on their own. So they're not really getting much feedback. They're getting the lowest common denominator of how we train: just keep trying and you'll do a good job. We need some sort of technology solution to help us improve the quality of care we're delivering. So I think what we know is that there's variation in attending competency and quality, and that variation likely begins during endoscopic training, which is still very focused on volume rather than competency assessment — how many procedures did you do, not how well did you do them. We have to find ways to flatten trainee learning curves and remediate trainees.
So how do we improve skill? There are a lot of options. In the interest of time, I'm obviously not going to talk about anything besides AI, but we can do things like education — programs like this one, where we teach people about high-quality care, what's an adenomatous polyp versus a serrated polyp, and so on. We can deliver quality metric feedback, which you've heard a little bit about; AI can help with that. Skills feedback — AI can help with that too. Hands-on training, which all of us have participated in. AI fits into all of these areas and can maybe accelerate our adoption of these interventions to improve the quality of care we're delivering and, again, help trainees acquire the skills they need. We wrote an article on this that's coming out soon, and it's purely our thoughts off the top of our heads — you can disagree with it and you would not be wrong. But how do we think we should introduce AI in training? First, I think it's incumbent upon us as a society, as trainers, and quite honestly as clinicians in general, to introduce didactics in a formal manner early in training. What do I mean by that? We need to teach early on: what is AI? How is data processed in AI? What are the strengths and limitations of AI models? Teach our trainees these things so they understand what model validation means and why there might be biased data sets — because otherwise, if we don't teach them formally, they're going to, for better or worse, go to YouTube and learn it there, which is actually a pretty good place to learn about AI, but we should do it in a more formal manner. So I'm suggesting we need a more structured way to train people in AI so they understand it. Then we move into the feedback and competency phase.
Once the trainees understand what AI is, use AI to give them competency feedback. Are they meeting quality metrics? Are they actually doing a good job of looking at the esophagus, looking at the colon? We'll touch more on each of these in the ensuing slides. And then finally, once they've gotten through those two phases — maybe in the second half of the second year, or into the third year — turn on the computer-aided detection and diagnosis systems. Have them help the trainees, for example with assessing polyp sizes; at that point, take off the training wheels and let them see the power of AI. Obviously, this is just our approach to it, but it's something we can work on as a society and as a field: how to introduce AI. And again, it's probably not just for training. We can't just introduce AI to a 75-gastroenterologist practice and say, here you go, if they don't understand the underpinnings of why it's there. So why do I think we need education? I look at radiologists — interventional and diagnostic — as maybe the most tech-savvy of our fields. When you ask radiologists what they know about AI — I like to think some of them are just trying to ruin the survey — 47 of them said they'd never heard of AI at all, which just seems rude. But about a quarter of people basically say they don't really know about AI. These are radiologists. I have not seen a similar survey in gastroenterologists, but I have to assume it's more than just a quarter of us who have only a passing familiarity with AI. So at a minimum, there is a significant minority of our trainees, and probably even more of our attending gastroenterologists, who don't really understand the concepts of AI.
And so I think that developing formalized didactics, for trainees and for practitioners who want to adopt AI, would be very helpful, because we can see in radiology at least that there's a need. And this is the pitch slide: I think we need industry, societies like the ASGE, and obviously training programs and trainers to come together to develop didactics. It doesn't have to be redone every year; it can be developed as online modules that trainees are expected to complete at the beginning of fellowship, because AI is going to become so much a part of our fellowships that we really need to do this early on, so trainees understand what's going on. Moving on from didactics, it really needs to be highlighted that we don't do a good job of giving feedback, both to trainees and to independent physicians. When we did a survey study years ago, we asked program directors how they give feedback. Most of it is procedure volume: oh, so-and-so is a great endoscopist, they did 3,000 colonoscopies last month. That's not enough to say how someone is actually doing. There are these great skills assessment tools, like the Mayo Clinic skills assessment tool — we've created some ourselves, like the T-SAT for ERCP and EUS — all these ways to measure competency, but maybe a quarter to a third of programs use them, because they're cumbersome. So we need better ways to measure outcomes in trainees and to deliver that feedback, and this is an area where I think AI can really help us. This is a tool we're using to help remediate low-performing colonoscopists: it essentially takes a colonoscopy video and calculates some quality metrics. And it's a dynamic process where I can look at things: how many times did you get to the cecum? How many polyps did you find?
So I click on the marker — it says you've reached the cecum — and I can confirm that, yes, that looks like the appendiceal orifice, so they've reached the cecum. I can say I want to see how they removed a polyp, and I don't have to scan through the entire video looking for it: the polyp is marked for me, using what are now relatively simple AI algorithms, because just finding polyps is by and large relatively easy. I can then grade the polypectomy quality. It tells me what the withdrawal time is and how much of that withdrawal time is actually interpretable, because I think we all know that withdrawal time by itself isn't particularly useful — you actually have to be seeing something. If your scope is up against the mucosa and it's red-out the entire time, that's probably not that helpful. This is something that is relatively easy for us to build, and we now use it to give feedback; I call this AI-augmented video review. I used to watch full colonoscopy videos, both for trainees and for independent physicians, and give feedback. This lets me give feedback much more quickly, because it cues up all the quality metrics for me and gives me some idea of how well people are performing. And the reason I think measuring skill is the future is a study we've done before: I can watch five of someone's colonoscopy videos — and you've seen this in the bariatric surgery literature, where watching one bariatric surgery video lets you predict the likelihood of that surgeon's patients going back to the ER or needing some sort of re-intervention — and, in the same way, from five colonoscopy videos I can predict with pretty high likelihood what that person's ADR is.
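The "interpretable withdrawal time" idea described above reduces to a simple per-frame computation. As a minimal sketch — assuming a hypothetical upstream image-quality classifier that labels each withdrawal frame "clear" (mucosa visible) or "obscured" (red-out, blur, debris); the label names, function name, and frame rate are all illustrative, not the speaker's actual system:

```python
# Sketch: withdrawal time and the fraction of it that is interpretable,
# from hypothetical per-frame quality labels ("clear" vs "obscured").

FPS = 30  # assumed frame rate of the recorded procedure video

def withdrawal_metrics(frame_labels):
    """frame_labels: one 'clear'/'obscured' string per withdrawal frame."""
    total = len(frame_labels)
    clear = sum(1 for label in frame_labels if label == "clear")
    return {
        "withdrawal_min": total / FPS / 60,
        "interpretable_pct": 100 * clear / total if total else 0.0,
    }

# A 6-minute withdrawal where one third of the frames are red-out:
metrics = withdrawal_metrics(["clear"] * 7200 + ["obscured"] * 3600)
print(metrics)  # withdrawal_min 6.0, interpretable_pct ~66.7
```

This is why a raw six-minute withdrawal time can be misleading: the same recording with a low interpretable fraction reflects far less actual inspection.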
So if I can do it while watching a video with Hulu on in the background, then AI, which is very focused on watching that video, can do it as well. I think we can get there: we can ingest videos and have AI not only give us quality metrics but give us an idea of what kind of skill is demonstrated during that colonoscopy. Dr. Tucker's group published a beautiful study in Gastro, I think a couple of years ago, showing that AI can measure some core colonoscopy quality metrics: how clear is the image during withdrawal? What does the bowel prep look like? How distended is the colon? So you can build algorithms that are meant to remediate, not just to measure — to say, hey, you're not doing great because you're not cleaning the colon, or because most of the time you're just staring at a red-out image. We can get those metrics and then try to help our trainees and independent physicians get better. The ideal future state is that trainees would get some sort of report card: here's how you're doing — you've reached the cecum 95% of the time, but your withdrawal times are three minutes, and your scope control during polypectomy is all over the place; you have no scope stability. I think we can do that with AI; we just have to invest in it. And then finally, once we get there, I think we can turn on computer-aided detection and diagnosis. Now that trainees have done a year or a year and a half of training and understand how they're doing on the core skills, help them understand that AI can augment our ability to find polyps. And it's important for those of us who are trainers to understand that trainees don't find polyps very well when they start training. I think we all know that.
We did a tandem colonoscopy study that just completed last year. Overall, about half of serrated polyps in the right colon are missed by trainees, and about a third of adenomatous polyps in the right colon are missed by trainees. That led us to think we need to do something else, because if you just let all these polyps be missed, it's obviously not good for patient care. I think AI can help here by having trainees understand what the morphology of some of these polyps is and why they're missing them. What do I mean by that? Trainees do get better over time — we miss polyps less frequently as we progress through training — but can we shorten or flatten that learning curve so trainees get competent earlier? I think we can, because trainees miss polyps and don't even know why they're missing them. This, I hope you'd agree, is a clinically relevant polyp from our tandem study that was missed by one of the trainees finishing their first year. I think no one should be missing this polyp. As you can see, it was found on the tandem colonoscopy when I did the second look. But it shouldn't be missed by our trainees, and we need to figure out ways to upskill our trainees so polyps like this aren't missed. And we know that AI can help us find polyps. We record all routine colonoscopies with and without AI. This is without AI: you can see there's a subtle finding there that all the experts in the room will recognize, but I don't think all of our trainees would be able to see it. I can't show it to you because I can't control this, I think — oh, maybe I can here. And with AI, you can see it's very hard for the trainees to ignore that, right?
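The tandem-study miss rates quoted above follow the standard definition: polyps found only on the second (tandem) pass, divided by all polyps found on either pass. A minimal sketch, with made-up counts purely to illustrate the arithmetic behind the figures in the talk:

```python
# Sketch: polyp miss rate from a tandem colonoscopy design.
# miss rate = polyps found only on pass 2 / (pass-1 polyps + pass-2 polyps).
# The example counts are invented, not the study's actual data.

def miss_rate(first_pass, second_pass):
    """first_pass: polyps found on the index exam;
    second_pass: additional polyps found on the tandem exam."""
    total = first_pass + second_pass
    return second_pass / total if total else 0.0

# 10 right-colon serrated polyps on pass 1 and 10 more on pass 2
# would give the ~50% miss rate quoted in the talk:
print(miss_rate(10, 10))  # 0.5
```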
You have to be a very adversarial trainee to keep saying, I'm not going to pay attention to the green box there, right? So I think it really does highlight to them what they need to be seeing, and I think that's important. When we looked at our tandem colonoscopy study, we saw all these clinically relevant polyps that trainees were missing, and a lot of them are actually detected by the CADe system. The trainees just don't even know that these are polyps. They don't understand some of this morphology; they don't understand what a serrated polyp even looks like. So when they look back at the video, at this polyp, they're not sure it's actually a polyp. They look at it over and over again and say, oh, that's just bumpy mucosa, because they haven't learned enough yet in their first year to know that this is what a serrated polyp looks like. So I think these AI systems can help trainees understand what subtle morphology looks like so they don't miss it in the future. Because, again, when trainees miss polyps, 80% of the time it's because they failed to identify a polyp that was on the screen. It wasn't that they didn't do a good job of looking at the colon; they don't know what to look for. So I really think we can make a difference by implementing AI into training and having trainees understand what some of these polyps look like. But that gets to the final question, which we just don't know the answer to: ultimately, does AI improve trainee recognition of subtle neoplasia, or does AI result in reduced development of independent skills? I think if we intelligently implement AI during training and into practice, we can hopefully push toward AI helping us improve recognition of subtle neoplasia in the esophagus, stomach, colon, everywhere — where we say, oh, I did not realize that was high-grade dysplasia in Barrett's.
I'm going to pay more attention to these features when I'm doing my next upper endoscopy — not just shove the scope in, not pay attention, and wait for the computer to beep. We can't fully control which way it goes, but I think we can at least try to push in the right direction. So, in summary and conclusion, I think we can intelligently implement AI into training and into practice in general, but we have to work together. I thank everyone for their attention. Thank you.
Video Summary
In this video, the speaker discusses the role of AI in GI fellowship education and in remediation across the medical field. He emphasizes the need for AI to help reduce variation in practice and improve competency, highlighting the wide variation in competency among trainees and the lack of feedback and guidance they receive during training. He proposes using AI to provide competency feedback and assist in skill development, and suggests introducing AI education early in training so that trainees understand its concepts and limitations. He also discusses the potential of AI to aid in detecting and diagnosing polyps during colonoscopy, and stresses the importance of industry, societies, and training programs working together to develop formalized didactics and better feedback mechanisms. Overall, the speaker advocates for the thoughtful implementation of AI in training and practice to augment and improve the quality of care.
Asset Subtitle
Rajesh Keswani, MD
Keywords
AI in fellowship education
AI in remediation in medical field
Reducing variation in practice with AI
AI feedback and guidance in training
AI in detecting and diagnosing polyps