Gastroenterology and Artificial Intelligence: 3rd ...
Panel Discussion and Q&A - Session 1
Video Transcription
So we have Tyler and Rahul with us, and also Yuichi Mori and Yutaka Saito. Prateek, will you give an introduction to professors Mori and Saito?

Sure, absolutely. Thanks, Doug. Yuichi and Yutaka, welcome. Both of our Japanese colleagues, one still based in Japan and the other currently in Norway. Yuichi's interest right now is in how AI can be implemented in colonoscopy practice and how it may impact future interval colorectal cancer rates, and he was also the leader for the SUN database, so welcome, Yuichi. We all know Yutaka from his expertise in ESD for colorectal lesions, and it was a natural fit for him to get involved in AI studies, so welcome to both of you on the panel, along with Rahul and Tyler.

So, Doug, while we are doing this, maybe I can just get started. I know there are certain questions coming in through the chat function, which we'd like to avoid, so participants, please try to use the Q&A function. Rahul, I'd like to start with you while Doug browses through the questions and the other panelists and participants put in theirs. A couple of questions. The first one is: what do you see as the future vision for AI used on the edge? What kind of software and hardware would we be looking at, and how does that look?

That's a great question. Being a cloud company, we don't really have that expertise natively, so we are building that muscle. But one thing we announced last year, and are still finishing for general availability, is almost like a small box, you could call it a hub, that sits on site, especially for enterprise applications. It takes a bunch of GPUs, can connect to different video streams, and then publishes results back. So what you could do is still build your models in the cloud.
And then you say, I have this device, and you deploy to it. And we are not the only ones; I think Microsoft has announced a similar device. So that's one. I don't know if, for deep learning models, you'll go as far as x86, like a general PC on your desk; it may have a GPU, but it doesn't really work as well. So I think it will be specialized devices. Whether they reach the home level will likely not happen, but for enterprise it will. Another way I think this will start emerging is through the internet of things and the different ways to connect sensors. So far that has happened for your doorbell camera and the switches and lights in your house, but a lot of work is being done in the industry to use those protocols for heavier tasks: pre-process the video, send notifications, and have all of that run on the IoT infrastructure.

I have a general question for the panelists about detection during colonoscopy. We have this whole range of lesions that we see, some of which are obvious to virtually everybody; you could show someone a picture of an obvious polyp, and they'll recognize the next one. What's really important to us is to highlight extremely subtle disease. I want to know: are we making sure, as these programs come out, that they can do that, and what sort of a problem could that be? For example, we talk about overtraining. Does the training set we use potentially affect the ability to detect subtlety, because that is actually what high-level detection is about? Or is detecting subtlety just a matter of adjusting the sensitivity and the specificity, that is, how often we get a ping from the program? Is this an issue, and if so, how would it be solved?
My initial response is that this is one of the big challenges with proprietary data sets versus open data sets. The reality is that for many of the approved systems out there, we don't know a ton about the polyp data set used to train them. This is why folks like Dr. Mori are working on more open data sets, where we can have an open conversation about exactly what polyps are in there, what they look like, and what we're trying to train for as gastroenterologists. So, Yuichi, is that something you can speak to, in terms of the sessile versus pedunculated and large versus small polyps that you chose to put in the SUN database?

Yeah, sure. Tyler, thank you very much for giving me a chance to talk about this. It's a very important issue, because everybody is focused on achieving a high ability to find these kinds of subtle-appearing polyps, such as depressed lesions or non-granular LSTs. But honestly speaking, I think the images used for machine learning in most of the AI tools available on the market are basically from protruded lesions or ordinary flat lesions, not necessarily depressed lesions or non-granular LSTs. At the same time, this kind of AI tool can still pick out part of a depressed area or part of an LST as a kind of protruded area or a reddish area, which may help us identify these very challenging lesions. So we still don't have the optimal AI on the market, but I think current tools are able to find these challenging lesions. Of course, provision of a more challenging data set would be the optimal solution, but we'll see. I'd say we already have really nice support even for this kind of subtle lesion detection.

Okay, thanks, Tyler and Yuichi, for that. Yutaka, a question perhaps well suited for you relates to AI and its application to therapeutic procedures: removing large polyps, ESD, boundaries of resection, need for closure, is there a perforation happening?
Any thoughts on how AI could help us as therapeutic endoscopists?

Thank you, Prateek. Yes, maybe in the near future AI technology will help with this kind of therapeutic procedure, to reduce complications such as perforation or delayed bleeding. Actually, compared to polyp detection or characterization, it is still somewhat challenging to apply AI to therapeutic procedures. But in the near future, I believe AI will help with such procedures. Thank you.

Doug, while you're looking at the questions, Dave Fleischer, our past ASGE president, has a question that is somewhat generic, and perhaps either you or I can answer it. The first part is: is there any body or organization that oversees coordination? Dave's question is right on, because we're talking about cross-collaboration here between the GI societies, the tech companies, and industry. And the second part: can Amazon or similar companies team up with the ASGE on some research? Dave, to your point, very good questions. Yes, that's our plan: through the summits and through the task forces, to work with the tech giants such as Amazon, Microsoft, and Google, and to figure out how we can partner together, whether it's creating our own database, using their cloud services, or using their edge technology, and how we can do it. So absolutely yes, David, on that. Doug, do you want to add anything from the officer and presidential side?

No, nothing additional. I'm going to move on to a question about training. There's a concern expressed by a question: is there any understanding right now of how we should incorporate CADe into training? Is it going to facilitate training, or should we have our fellows go through a period where they don't rely on CADe?
Do we have any sense of what the optimal use of it is in training right now?

I think this will be a very interesting area to explore. I've heard that concern expressed by a number of particularly senior gastroenterologists, sort of the way my dad said, I can't believe people aren't driving stick-shift cars anymore. Is that going to be a lost skill? What about all these young drivers using lane-assist warnings and backup cameras: are they going to be able to drive older cars safely? The reality is, we've now done at least several hundred CADe-assisted procedures with our fellows, and I think it's eye-opening and a learning experience for them when they notice something on the screen that they might not have picked up with their own eye. They think about it, they engage with it, and then we have a discussion: is that truly a polyp? Is it not a polyp? Why did you miss it? Why did I see it? Why did the computer see it? So I think this is a net positive, and I understand why there's hesitation over these newer technologies, but I think it's going to raise the level for all of us in terms of detecting these subtle lesions. I don't think it's going to be a crutch going forward. When Mike Wallace was doing his training work on detection, part of that was actually learning how to characterize lesions, and I think just looking carefully at a lot of lesions increases recognition. It'll be an interesting area to watch.

Okay, Rahul, a couple of quick questions, and maybe quick answers, since we're coming toward the end of this session. The first relates to privacy and legal issues. A participant says the technology you've shown is great, the segmentation you demonstrated and so on, but raises a concern about uploading patient records to the cloud. What are the privacy issues there? Who's liable for the data if something goes rogue?
So any brief thoughts from your side on that, Rahul?

Yeah, absolutely. Security is number one for an organization like Amazon Web Services, right? It should also be taken as a shared responsibility: Amazon can provide security controls, but then they have to be implemented by the user. All the AI/ML services I talked about run securely. They are certified through our application security process to make sure that data does not leak outside of the service when you use it, either for training or for inference. And there's no need to actually store the data. The services will store the data by default, but in most cases you can opt out and say, I'll use the service, and once I get the results, the data is deleted. There are many other approaches as well. Anonymization and trust services are another way: you could anonymize the data for upload, and then you hold a token that lets you recover that identity later.

There's one question here about expansion beyond polyps. Can anyone give us a sense of the timetable we're looking at for the introduction and commercialization of detection programs for Barrett's and other entities?

I think detection for Barrett's dysplasia is already on the market in the UK and in Europe. So that's the way things are going. I'm also very interested in another research area, called computer-aided quality assurance: for example, finding a blind spot during the endoscopy, or something like that. That would be, I think, the new wave in the field of colonoscopy in collaboration with AI. Maybe in a couple of years we'll have a whole lineup of products in this field.

Very good.
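The anonymize-then-recover approach described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual service: it assumes a local in-memory vault for the token mapping, whereas a production system would use a managed de-identification service and durable, access-controlled storage.

```python
import secrets

class Pseudonymizer:
    """Replace patient identifiers with opaque tokens before upload,
    keeping the token-to-identity mapping on premises so the
    identity can be recovered from results returned by the cloud."""

    def __init__(self):
        self._vault = {}  # token -> original identifier (never uploaded)

    def anonymize(self, record):
        token = secrets.token_hex(8)          # unguessable random token
        self._vault[token] = record["patient_id"]
        cleaned = dict(record)                # shallow copy; original untouched
        cleaned["patient_id"] = token
        return cleaned

    def reidentify(self, record):
        restored = dict(record)
        restored["patient_id"] = self._vault[record["patient_id"]]
        return restored

# Example: only the tokenized record would leave the hospital network.
p = Pseudonymizer()
upload = p.anonymize({"patient_id": "MRN-12345", "finding": "6 mm sessile polyp"})
restored = p.reidentify(upload)
```

After `anonymize`, the uploaded record carries only the random token and the clinical payload; `reidentify` maps the token back to the original identifier using the locally held vault.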
Video Summary
The video transcript discusses the future vision of using AI in colonoscopy practice. It mentions the development of specialized devices for deploying AI models in enterprise settings. The transcript also touches on the challenges of detecting subtle disease during colonoscopy and the importance of open data sets for training AI systems. There is mention of potential applications of AI in therapeutic procedures, such as removing large polyps and identifying boundaries of resection. Furthermore, the transcript highlights concerns about privacy and liability when uploading patient records to the cloud and mentions the availability of secure, certified AI/ML services. Finally, the video discusses the commercialization of detection programs and the potential for computer-aided quality assurance in colonoscopy.
Keywords
AI in colonoscopy
detecting subtle diseases
open data sets
therapeutic procedures
privacy and liability