ASGE Annual Postgraduate Course: Clinical Challeng ...
AI Solutions in Pancreas Esophageal Motility and Capsule Endoscopy Rajesh N Keswani
Video Transcription
So, moving along to our next presentation, which is about AI solutions in pancreas, esophageal motility, and capsule endoscopy, and that's Raj Keswani. Raj is an interventional endoscopist at Northwestern. Over the last few years, I've had the pleasure of working with him as he's put together the programs for DDW, and besides his ability to do that so seamlessly, he's also taken the lead on several initiatives on quality in endoscopy. So Raj, welcome, and looking forward to your presentation.

So I will start talking while they're pulling it up. Obviously, thank you all. I've been given a really broad list of topics to cover in a very short amount of time, so what I'm attempting to do here is distill down what I personally think is of highest interest in these three areas: pancreas disease, esophageal motility, and the small bowel. When you think about a broad topic like this, I like to approach it from the hat that Prateek mentioned, which is quality, where I think the biggest issues are in endoscopy and gastroenterology more broadly at this time. Using that quality hat, we have to figure out where we need to go. We had to define the clinical needs in these broad categories, and once we do that, we can think about what solutions are out there. And obviously, we hope that those solutions change the needs that we have and create new needs. So what do I mean by that? How do we define clinical needs? We have to think about gastroenterology not just in terms of simple things like detection, but in terms of very complicated concepts like value-based care and population health. When we look at these bigger topics, how can we focus on what is really impacting our patient care? Because it isn't always finding the small polyp; it may be other things that we're not thinking about.
So when we're trying to define the clinical needs, what do I think about in the pancreas? I can't talk about everything here, so I'm talking about the things that I think are really affecting our healthcare system. First, pancreas cysts: I think those of us who are therapeutic endoscopists would all agree that pancreas cyst management right now is a representation of inefficient care, very expensive care in the hope of finding one high-risk cyst. The current care isn't value-based. And then pancreas adenocarcinoma, which is ineffective care: we really just don't have opportunities right now to identify pancreas cancer at an early stage, essentially arrest the disease, and hopefully get a cure. Diving into those areas is what we're going to do today, but we can talk more in the panel discussion about other things as well. You'll see many slides like this, but for those of you who aren't in this field, pancreas cysts are basically an epidemic, the kind of finding that makes you wish your patient hadn't gotten the scan that showed the cyst. When you get to your 60s, if you get an MRI, there's about a 20% chance or greater that a cyst will show up on the scan, and then you're going to be asked to undergo a long series of MRIs or CT scans. So this is a real opportunity, and there's a lot of work in this area to really change the approach to how we manage these patients. Now, I heard we're going to have a little talk on radiology later, but this is a concept worth hearing, and I'm sure, given the audience, everyone's aware of it: radiomics. Radiology is always fancier than the rest of us, so this is their way of thinking about artificial intelligence. The idea is that there is information in radiology studies that we aren't capturing as expert readers. And the way this field works is that you have to work through the concept of segmentation.
How do you find the organ of interest, in this case the pancreas? That's its own set of algorithms. Then, how do you get quantitative data out of that organ? Not just saying the pancreas looks normal, but really getting the size of the pancreas, the texture of the pancreas, all of these features, some of which are imperceptible to us and some of which we just don't bother calculating. And then, using AI, you try to determine some potential outcome for this patient. There's a lot of work on this. Mike, who's obviously the course director, is one of the experts leading this work, and Ulas, who's at Northwestern, has also been working a lot on this idea. One of the key issues here is segmentation: you don't want to have to outline the pancreas for the AI algorithm yourself. The idea is that we can use algorithms to identify the pancreas, like you can see here, and then start to develop advanced algorithms to determine outcomes. This is an example of work where they are able to segment out the pancreas and then identify the diagnosis of IPMNs. That's useful because if you can identify the IPMN, you can then measure its size, look for high-risk features, and look over time at whether the cyst has changed or not, things that may not be easy for a radiologist. I have to think the 3-millimeter pancreas cyst is the boring part of the body radiologist's day; are they then missing the synchronous 1-centimeter solid mass in the tail? This lets you pull out all that information and identify whether something is more concerning. And even from this study a few years ago that Mike's team led, you can see that AI can actually perform as well as all of our clinical guidelines, the ones people spend many meetings in ballrooms like this creating, at identifying which cysts are most likely to progress to cancer, right?
The high-risk pancreas cysts. And this work has progressed a lot in the last few years already. The idea is that AI will help augment the radiologist to say, hey, this is a cyst that you probably don't need to scan again for a few years, or this is a cyst that probably needs surgery, because these algorithms, even a few years ago, were already doing as well as some of the clinical guidelines we already had. So, moving on from cysts: pancreas cancer is obviously the other end of the spectrum. We find it too late, and we just wish we had picked something up earlier. The reason this is such a challenging disease is that when you look at the curves of incidence and mortality from pancreas cancer, they are far too close to each other. We very rarely identify patients who can have a durable five-year response after a diagnosis of pancreas cancer. There are two parts to how AI can potentially help us reduce morbidity from pancreas cancer. One is that we need to identify these patients earlier; we can't just start at the radiologist stage, because that's often too late. That's the area of research around the electronic health record: using things like natural language processing, but also just pulling in all the lab values that are there and all the new diagnosis codes. The idea is that if you can bring everything in, you can use it, with some training, to identify which patients ultimately get pancreas cancer, and at the end of all that training you have a predictive model. You get all these alerts for patients in the hospital all the time; you can imagine a useful primary care alert being, hey, did you know that your patient has lost five pounds, their A1C has gone up 0.3 points, and they have a family history of pancreas cancer? This is someone you probably want to get a scan on.
And so this sort of AI work, I think, is very exciting because it lets us move towards earlier diagnosis, and obviously this is an area of intense research. This is a very nice slide that was created in a symposium based around this concept: we always see pancreas cancer on the right side of the timeline, within the three-to-six-month frame, which is when patients have abdominal pain or really significant weight loss. But if we can identify all the other factors beforehand, the sarcopenia, the familial risk, the patient being a smoker, the fasting glucose that just ticks up a little bit, then we can figure out which patients to scan. And there is already work, I think out of the Mayo group, which is just in press right now, showing that you can go back to MRI scans obtained months to years before the diagnosis of pancreas cancer was made, and if AI looks at those scans, it can tell you which patients ultimately were going to develop pancreas cancer and which weren't. Let me say that again. We take two groups of patients: normal patients, and those who are going to develop pancreas cancer in a year or two. If AI looks at the scans of the patients who are going to get pancreas cancer in a year or two, it can identify those patients. So it's seeing something that our radiologists are not: features that are going to predict the development of pancreas cancer. And what does that look like in this study? Reader one is actually AI, and its ROC curve is far better than reader two's and reader three's. These are radiologists looking at pre-cancer-diagnosis scans who cannot really see the cancer, but AI is able to tell that there's cancer there. So we can utilize these together, right?
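As an aside, the "reader one versus reader two" comparison just described is typically made with the area under the ROC curve (AUC). This is a generic sketch of that metric, not the study's code, using the pairwise-ranking interpretation of AUC and made-up toy scores:

```python
def roc_auc(labels, scores):
    """AUC as the probability that a randomly chosen positive case
    is scored higher than a randomly chosen negative case
    (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Toy example: 1 = went on to develop pancreas cancer, 0 = did not;
# scores are a reader's (human or AI) suspicion ratings for each scan.
labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
print(roc_auc(labels, scores))  # 0.75
```

A perfect reader scores every future cancer above every control (AUC 1.0); a reader guessing at random hovers around 0.5, which is why the AI curve sitting well above the radiologists' curves is the key finding here.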
We can identify the patients at risk through the EHR, so machine learning can tell us, hey, this is a patient who should get a scan. But it's not enough to just get the scan, because we know that radiologists can't always see the condition before it is more advanced. So we can also use AI to augment the radiologist's read, to help make sure that we identify these patients at an early stage. Now, completely shifting gears to esophageal motility. There are similar issues here: delayed use of appropriate diagnostic tests, again inefficient care, and then inability to interpret those diagnostic tests, again ineffective care. We don't know when to use the tests, and we don't know how to read them when we get them. It's a very black box; I've been at Northwestern for 14 years and I'm still not sure what motility is. So I think this is an area that's ripe for advancement. We know that there's delayed diagnosis in esophageal motility disorders. For something like achalasia, patients have symptoms, they have the office visit, they get a normal endoscopy, then they have persistent symptoms and eventually get another endoscopy. Oftentimes, as you look in the achalasia literature, there are years between the real onset of symptoms and the diagnosis, and that has implications for how well patients respond to treatment. The idea is that AI can potentially arrest that delayed diagnosis by really asking: was that normal endoscopy actually normal? All of us in this room and online know that anything we humans can see, computers can see, and, as we've seen more and more, even things that humans can't see, computers can see. So the question is: are early esophageal motility disorders actually endoscopically visible, and we're just missing them?
And we've been able to show that there really are subtle features of esophageal motility disorders, features that we all kind of recognize, and most of these can actually be assessed from still frames. We've been working through those endoscopic features of motility disorders to show that, yes, these features exist, and even human readers can take a very early endoscopy of someone who's going to have achalasia and predict that they have achalasia compared to a normal patient. If we as humans can identify these patients with a motility disorder early on, obviously AI will be able to as well, which is a funded project we're working on right now: to have the ability to identify patients who might be at risk of a delayed diagnosis. Your endoscopy is done, AI looks at the whole video, and it then determines whether there's a diagnosis, such as a motility disorder, that you might have missed, because those are patients who could then be flagged for advanced motility testing to try to reduce that risk of a delayed diagnosis. But what's really interesting, and what's already basically here, is automated diagnostic testing. Once you've decided to get these motility tests, can we actually make it easier to interpret them, and get the interpretation correct? Because of the difficulty interpreting some of these advanced tests, a lot of GIs in some places don't order them; they just keep doing endoscopies over and over again. And when they do read them, they sometimes read them incorrectly. The current state of the art, as you know, through the Chicago classification, is basically a very complicated decision tree: you look at the tracing, you try to figure out things like the IRP, you figure out the peristalsis, all these things.
You bring it all together in this algorithm that seems to get more and more complex every year, which is to keep the esophagologists in business, and you hope to make a diagnosis. But there are probably ways to improve upon this with deep learning. John Pandolfino's group at Northwestern has really worked hard on using AI to help automatically interpret high-resolution manometry testing. I'll just show you this one concept: AI was able to accurately classify swallows during HRM studies with a pretty high F1 score, and on a study level it was also able to classify whether this was a patient with a hypercontractile esophagus, or absent peristalsis, and so on. So this is already here, right? AI can basically tell us on a manometry study what this patient likely has, and that's compared against experts. You can imagine how good it's going to be for people who don't have advanced training; this is going to be a very helpful addition for people reading manometry studies. I'm going to skip over that. The work has also moved on to the concept of EndoFLIP, which you're all aware of. We can look at AI to help us during EndoFLIP as well. It's another study that is difficult for some people to interpret, but it's getting more and more interest out there. Same concept: looking at some of the same features that endoscopists look at, AI can be trained on how to interpret the study using a typical CNN set of algorithms, and it can basically tell you whether this is an abnormal or a normal FLIP study. I've condensed years of research down to "it works" because of time, but understand that this is a helpful concept for, again, tests that we don't use enough because we don't know how to interpret them. And these are the F1 scores for EndoFLIP, which are again very good. This is another thing AI can do to help us deliver higher-value and more effective care.
And then finally, the small bowel. I'm only going to spend a couple of slides on this because I want to leave time for our panel discussion. This is the same issue. I used to read capsule endoscopy; fortunately, I do not have to read it anymore, but these are tough studies to read, because this is the classic study where, if you're not paying attention, you miss something important. So that's ineffective care. You're spending loads of time reading frames that are irrelevant, looking for the needle in the haystack, and that is just not a good, efficient use of most endoscopists' time. This is ripe for AI to help us, and it's already there. This is a nice table that summarizes a lot of groups' work saying, hey, we can do this: we can find a red spot, we can find blood in the small bowel. You'd expect it to be able to do all the stuff you're seeing already today; this is obvious, we should be able to do this. So you're seeing lots of work out there that says, yes, we can find blood or a polyp or a lesion in the small bowel. The question is, how do we make sure this is out there and everyone's using it? And what does it mean to use AI? How much of a change in efficient care is it? This is an example of what it looks like when AI identifies a lesion. You can see in figure C that it identifies it, and in terms of interpretability, it's finding the right thing. It's telling you, hey, look at this, this is an erosion, and it knows why it's an erosion. So this is very, very accurate AI in small bowel endoscopy. But more importantly, it saves a ton of time compared to reading a capsule conventionally. I used to fall asleep frequently reading capsules; that was how I got my midday naps. Look at the time spent on the right side of the figure. The whole video is obviously on the order of hours, and if you read the capsule study without AI, it still takes a long time.
It's taking an hour-plus for some of these readers to read these capsules. But if you look at the bottom right, right here, this is when you read a capsule with AI; I can't show you that directly, but it's the bottom-most right of the figure. AI is highlighting the frames of interest, so you can read the capsule and move on to the intervention of choice. And so I hope I stayed close to on time for a very wide variety of topics. In summary, there's a definite clinical need to improve the quality of care in pancreatic, esophageal, and small bowel diseases, and early work suggests that AI can be a disruptive technology in the management of these common conditions. Thank you.
Video Summary
In this video, Raj Keswani, an interventional endoscopist at Northwestern, discusses the potential use of AI solutions in pancreas, esophageal motility, and small bowel disorders. He highlights the need for improving the quality of care and the challenges faced in these areas. For pancreas diseases, he focuses on pancreas cyst management, which is currently inefficient and expensive. He also discusses the need for early identification of pancreas adenocarcinoma to improve patient outcomes. In terms of esophageal motility disorders, delayed diagnosis and difficulty interpreting diagnostic tests are major issues. AI can potentially assist in identifying these disorders earlier and interpreting diagnostic test results more accurately. In small bowel disorders, capsule endoscopy is a challenging and time-consuming procedure. AI can help in identifying abnormalities and reduce the time required for reading capsule videos. Overall, AI shows promise in improving patient care and outcomes in these areas.
Asset Subtitle
Rajesh Keswani, MD
Keywords
AI solutions
pancreas
esophageal motility
small bowel disorders
patient care