Panel Discussion: Innovative Applications of AI in Other Medical and Technology Fields. What Can We Learn?
Video Transcription
Now, that brings us to the panel discussion for our first session, so if I could have Tom, Ulas, and Sravanthi on their webcams; you've heard from all of them in this session. Tom, thanks again very much for leading us off with your wonderful presentation. Eric, are you on? Eric is going to join us as a panelist, and of course, welcome, Eric. Eric Topol, as all of you know, is the founder and director of the Scripps Research Translational Institute and also vice president of Scripps Research. And Eric, we've always been impressed with your books, the latest one being Deep Medicine, on how AI can make healthcare human again, so welcome to all of you.

Tom, I'm going to start with you as we gather questions from the audience. You briefly mentioned why AI may not be good for all and may only be good for some, and you pointed out some of the challenges in how the algorithms are built and the biases in that. But besides that, if we have ImageNet-type libraries with which we can ensure that at least some of those biases in building the algorithms are removed, what other challenges do you see in applying AI to healthcare in general?

Well, first of all, thank you for having me as part of your session today. Second, I think the speakers that came after my starting point raised a lot of great points. Obviously, the issues we are most challenged by are things like bias. The New England Journal of Medicine published in June the single best article I've seen in a while pointing out all the ways in which the disparities and inequities of the real world are crossing over into the digital world. We have lots of ways to mitigate that, but it happens for the same reasons. You know, I think it starts with awareness. That is followed, as organizations and data scientists put algorithms out into the wild, by making sure they are not only aware but have done things like stress tests for bias. I think the other thing, and the last speaker spoke to it well, is the whole issue of transparency: number one, we have to have the ability to understand how an algorithm works, what its reasoning is. Number two, to build on that, so much of what we're seeing are continuous learning algorithms. Think about that for a second. You have something that starts because it's been programmed and pointed in the right direction by a human, but as time goes on it's continuously learning, so the logic, the way in which it reaches decisions, will change. Our ability to stop the tape at any point and look at what's happening is critically important for all the reasons our previous speakers just noted.
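To make the "stress test for bias" Tom describes concrete, here is a minimal, hedged sketch of the idea: compare an already-trained classifier's sensitivity and false-positive rate across patient subgroups before deployment. The column names, the 0.5 threshold, and the toy data are assumptions made purely for this example and are not from the session.

```python
# Illustrative only: a minimal subgroup "stress test" for an already-trained
# classifier. Columns (y_true, y_score, subgroup) and the 0.5 threshold are
# assumptions for this sketch, not anything shown by the panelists.
import pandas as pd

def subgroup_report(df: pd.DataFrame, threshold: float = 0.5) -> pd.DataFrame:
    """Per-subgroup sensitivity and false-positive rate for model scores."""
    df = df.assign(y_pred=(df["y_score"] >= threshold).astype(int))
    rows = []
    for group, g in df.groupby("subgroup"):
        tp = ((g.y_pred == 1) & (g.y_true == 1)).sum()
        fn = ((g.y_pred == 0) & (g.y_true == 1)).sum()
        fp = ((g.y_pred == 1) & (g.y_true == 0)).sum()
        tn = ((g.y_pred == 0) & (g.y_true == 0)).sum()
        rows.append({
            "subgroup": group,
            "n": len(g),
            "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
            "false_positive_rate": fp / (fp + tn) if (fp + tn) else float("nan"),
        })
    return pd.DataFrame(rows)

# Toy example: flag any subgroup whose sensitivity lags the overall model.
scores = pd.DataFrame({
    "y_true":   [1, 0, 1, 1, 0, 1, 0, 0, 1, 0],
    "y_score":  [0.9, 0.2, 0.4, 0.8, 0.1, 0.35, 0.6, 0.3, 0.7, 0.2],
    "subgroup": ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
})
print(subgroup_report(scores))
```

For the continuous learning algorithms Tom mentions, the same report would need to be re-run on each model revision, which is one way of "stopping the tape" to see whether the decision logic has drifted for any group.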
Tom, thanks. If I can, I want to bring Eric into this discussion. Eric, you've been a real leader for all of us in medicine in how we approach artificial intelligence. For those who haven't read your book, Deep Medicine, can you tell us a little bit about how you foresee AI making medicine more human again? Many of us have suffered through the digitization of medicine, where we spend much of our time clicking and doing mundane tasks. Help us see how AI is going to bring back the good old days of medicine, when we just talked to patients and listened to patients.

All right. Well, thanks, Michael. Great to join you, and I really enjoyed this morning's session. I think the issue is that the here and now we're talking about today is what is imminent in the years ahead. In fact, GI has led all the fields in terms of randomized trials to get better accuracy, to get at some of the cardinal aspects of medicine that we need to improve. But the bigger, far-reaching part is to restore the human relationship between patients and clinicians. I think the story there is that it's attainable. It will take an active effort, and it is more pressing than ever because there is a global crisis of burnout, perhaps even worsened by the pandemic. The point is that by decompressing the clinician's load, so they're no longer data clerks, and also by outsourcing, if you will, many of the functions and giving patients the autonomy they want, which doctors have just not let go of, a lot of the things I mentioned, and that Tom alluded to, become possible. Processing a lot of data is not the strength of humans, but it certainly could be done to make life easier for clinicians. We already are seeing algorithms that are empowering patients. If we just foster that rather than trying to control things, we'll eventually get to this point. Our biggest problem, though, Michael, is that the administrators who run medicine in the United States are much more interested in the financials and in seeing more patients, doing more scoping, more slides, more scans. We have to put a stop to that, and that's going to take the medical community standing up. The ability to reach this attainable goal of enhancing the human bond is going to depend, interestingly, on activism in medicine.

Thanks, Eric. Ulas, there's a question for you about your thoughts on using transfer learning, especially as it relates to computer vision images. What is the situation, or what are some use cases for which you would utilize transfer learning?

Yeah, transfer learning is one great way of transferring knowledge from one domain to another. I think it is also pretty close to true intelligence, because we can infer from one task how to learn another task; that is a representation of what goes on in high-level human cognition. The studies actually show mixed results. There are many papers showing that transfer learning is good enough to run algorithms in many medical AI applications, and there are studies showing the opposite, that getting the data, cleaning it, and, after careful curation, training from scratch may be better than transfer learning. But what we see, and Tom also mentioned this, is that once you clean the data, curate it well, and reduce some of the biases, algorithms show similar performance either way, training from scratch or transfer learning. We see great examples from the computer vision field showing that transfer learning is really effective when the tasks at least share some similarities. What I see, especially for applications in medical AI where data is really a problem, as it is in many radiology applications, is that transfer learning is really the way to go. Once a new algorithm comes along that is more intelligent and transfers knowledge better than the current standard, I think this is going to be a very strong way of solving problems that were not solved before.
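To make Ulas's description of transfer learning concrete, here is a minimal sketch of the usual recipe, assuming PyTorch and torchvision: start from an ImageNet-pretrained backbone, freeze it, and retrain only the classification head on a small image dataset. The dataset path, class count, and training settings are hypothetical placeholders, not anything used by the panelists.

```python
# Minimal transfer-learning sketch (illustrative, not from the presentation):
# reuse an ImageNet-pretrained ResNet and retrain only its final layer on a
# small, hypothetical two-class endoscopy dataset laid out for ImageFolder.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_CLASSES = 2                      # assumption: e.g. polyp vs. no polyp
DATA_DIR = "data/endoscopy_frames"   # hypothetical folder-per-class dataset

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for param in model.parameters():          # freeze the pretrained backbone
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new, trainable head
model = model.to(device)

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
loader = DataLoader(datasets.ImageFolder(DATA_DIR, transform=preprocess),
                    batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                    # short fine-tuning run
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

As Ulas notes, training from scratch on a well-curated dataset can perform comparably; the frozen-backbone approach mainly helps when labeled data are scarce, as is common in medical imaging.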
Sravanthi, there's a question that I think is probably best addressed to you, about privacy of health information and these large medical images. You talked a little bit about your recent manuscript, which involved a simple web search of medical images and extracting those. But every one of us has massive stores of medical images, particularly radiology departments, where images are stored in a standardized way, while in gastroenterology we're taking still images and, increasingly, video. How do privacy issues come into play here, and how can we use what is a huge library of medical images in a way that respects patient privacy but also advances the field?

Thank you for that question. I'm not sure if I'm the right expert for that, but going back to the paper you mentioned, it is based on open-access literature, so it is data that is already published and available to everybody, and that was the dataset we built. Coming back to the question of how we build large libraries of data, whether it's images or videos or something similar to ImageNet, I think there are multiple layers of collaboration that are needed. Then you need a data cloud, or some sort of centralized place where you can store these images and, more importantly, the large amounts of data that come from the videos, on a HIPAA-compliant platform. Of course, all the metadata associated with patient identifiers has to be scrubbed off, which has been an issue for some radiologists because they have to take extra steps to get it off the image and make sure the image is completely de-identified. So a lot of technology is going into removing this kind of patient-identifier metadata from images. And I would actually like to hear from Tom Lawry about how the big tech giants are working on this kind of process, to keep the cloud HIPAA-compliant and also to help medical folks like us transfer images or videos.

Well, thanks for tossing that question my way. The short answer is that when you look at what Microsoft is doing with the Intelligent Cloud, and really any of the other big players, it's incumbent upon all of us to make sure it is as secure and as trustworthy as anything else we could possibly be doing with the data and the ways it has historically been stored and used. And if you use that as the standard, we're doing very well, which is not to say anyone is 100% secure and perfect, but then again, no systems are. You know, when we talk about things like HIPAA here in the United States or GDPR in Europe, those are great fundamental standards and starting points. But when it comes to AI, and this is typical of technological advances compared with regulators and policymakers catching up, there is a huge gap right now around a lot of questions, whether legal, regulatory, or ethical, and the regulators themselves admit they're working to keep up, whether that's the great work of the FDA or others. Witness the fact that the American Bar Association has its SciTech journal, whose fall issue is dedicated entirely to AI and health, and to how things can be HIPAA-compliant, legal, and in keeping with all regulations and still, by anyone's values, be highly unethical in how AI is put to use in healthcare. So it's things like that where we together, and especially the clinician's voice, have to be involved in coming up with the standards and the guardrails by which we use the technology, including the cloud.
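As a rough illustration of the metadata "scrubbing" step Sravanthi describes, here is a hedged sketch using the pydicom library. The tag list is a small, hypothetical subset; a production pipeline would follow the DICOM PS3.15 confidentiality profiles and also check for identifiers burned into the pixel data or into endoscopy video overlays.

```python
# Illustrative de-identification sketch (not the panelists' pipeline): blank a
# few direct identifiers and strip private tags from DICOM files with pydicom.
from pathlib import Path
import pydicom

IDENTIFYING_TAGS = [          # assumption: minimal example list, not exhaustive
    "PatientName", "PatientID", "PatientBirthDate",
    "ReferringPhysicianName", "InstitutionName", "AccessionNumber",
]

def deidentify(src: Path, dst: Path) -> None:
    """Blank selected identifiers and remove private tags from one file."""
    ds = pydicom.dcmread(src)
    for tag in IDENTIFYING_TAGS:
        if tag in ds:
            ds.data_element(tag).value = ""   # blank rather than delete
    ds.remove_private_tags()                  # vendor-specific private elements
    ds.save_as(dst)

if __name__ == "__main__":
    out_dir = Path("deidentified")            # hypothetical output folder
    out_dir.mkdir(exist_ok=True)
    for path in Path("raw_dicoms").glob("*.dcm"):   # hypothetical input folder
        deidentify(path, out_dir / path.name)
```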
Eric, if I can just have you expand a little bit more, share your expertise, and almost guide us. You really said it right: we are getting into this AI age, but the pressure is that whatever time we can save, say 10 minutes on a procedure, gets used not to talk to the patient but to do an additional procedure. So what are your suggestions, and how do you look at this world where you have to balance reality with maybe a little bit of fantasy about what we may be expecting, or maybe what is right? How do you balance that, and how do you see societies or big organizations leading efforts to make that change happen? I know it's maybe a philosophical question as well, but how have you made that impact? And maybe, Tom, if you want to pitch in, that's absolutely fine, but Eric, I'll start with you, please.

Well, thanks, Prateek. I think this is the central story of why we have the problem we have in medicine; it's a back-to-the-future story. And it's going to take a lot of active effort, because the default mode is, as you said, very little time with patients. This gift of time, which is the greatest gift AI could bring to medicine, is attainable. And, you know, having been the patient undergoing a colonoscopy on multiple occasions, I had very little time to speak to the physician who did the procedure. If we could change that across the board in every discipline of medicine, that time would be given priority. Right now we're reduced in primary care to seven-minute appointments and 12 minutes on average for a return visit, and this is ridiculous. And, of course, that's not even connecting; that's not eye contact for most of that time, and a very limited physical exam. All the things we learned, especially old dogs like me who learned how to practice medicine in the 70s, have been largely abandoned. So we have to work to make that happen. We have the tools now to do that; that's what AI has brought us. The short term is really clear, and you're working on that, but we have to look at the longer-term potential, because it may be the last chance we have to bring medicine back. And in giving that time, you know, we learned in the pandemic that we can use a lot of telemedicine, and with patient autonomy we can decompress the load of what an appointment, a visit, an encounter is, and make those encounters where people are truly together really matter. So I hope we'll be able to do that. That's my dream for the future, if you will.

Tom, how do we bridge that gap? How do we get physicians and medical societies working with Microsoft and the tech giants to try to make this happen, so that what we want is at the forefront, rather than the other way around, where you come up with a technology and that is what drives medicine?

Right. Well, first of all, I'd say it starts with interactions like the one we're having now, and the leadership role your society is taking in bringing people together to have this conversation and educating your constituents about the possibilities and the issues. To me, the second part of that is how you are mobilizing, either as a physician in your practice or wherever you have privileges. I get to serve as a strategic advisor, and I'm privileged to work with some fairly prestigious organizations that say they want to go full throttle into artificial intelligence across their clinical enterprises.
What's always interesting to me, when I personally visit organizations, is that the level of engagement with clinicians is vastly different across organizations. There are places that get it, where clinicians are involved from the beginning, and there are other organizations that hand it off to the stereotypical technology people, the data scientists, and many times the clinicians feel like a kid with their nose pressed up against a glass window, looking in. So I would challenge everyone: wherever you practice, do you understand what's happening in your organization today in that move toward AI, toward being an intelligent health organization, and what's the level of involvement that clinicians not only have been invited into but should be driving?

Well, lastly, I think we had a question, sorry. There are a number of questions about how we gather the necessary information, particularly in the field of endoscopy and imaging, in a way that lets us push this field forward. This question is for Sravanthi, because I know you've been involved with this. You talked about ImageNet. How can the ASGE, or other groups like the ASGE, play a central role in gathering, annotating, and labeling the images that are going to drive this? We've had this project called EndoNet; maybe you can share a little bit about that project.

Sure. I think one of the discussion points we've had as part of the ASGE AI Task Force, which you all are involved in as well, is trying to build a centralized image network, or rather a network of endoscopy videos, which is basically the first step you need to develop or validate any algorithms. Now, as discussed before, there needs to be a lot of collaboration between different institutions in terms of data use agreements so that they can share the de-identified videos. You also need a structure such as a cloud platform, or some kind of centralized platform, that can store all these images and offer some auto-annotation feature, because it's a large volume of data that will need expert annotation in the future. And then having an edge device that would probably work at the processor level, or something like that, would allow this to be implemented in the clinical workflow when you're in the endoscopy suite. But to get to that collaboration, you definitely need support from academic institutions, community institutions, and private practitioners, who should be willing to share that information for a not-for-profit cause, which is basically developing the future of AI in endoscopy.

Mike, there are two questions here, one of them from Carolina. Her first question is, what are societies like the ASGE doing about education? I'll just take that, Mike. As we move forward, of course, we have the second AI summit here today, and over the next year, from the ASGE AI Task Force, you will see several other webinars, publications, and outreach. We've done a member survey to understand the needs of the ASGE membership in terms of education. So over the next six to 12 months you will see a lot of education around artificial intelligence as it pertains to gastroenterology and endoscopy, and the ASGE is taking the lead on it.
The second question, which I'll open up, and maybe Mike and Sravanthi, you can take it, is specific to the payer issue. And maybe, Eric, you can comment on non-GI examples of payment for using AI tools. So we end up using an AI tool. It costs money to have that plug-and-play device in your unit. It's helping you detect more polyps, more precancerous lesions. Who's going to pay for that, either for the device or for the physician's time? Is there a model? Any comments? Eric, maybe we'll start with you, if there are any such examples elsewhere, and then maybe Mike or Sravanthi, you can comment.

Well, there haven't been that many disciplines where this has occurred. GI is one for sure, and radiology, but beyond that there are not many examples to draw from. The hope, of course, is that the cost-effectiveness here would be beyond question, but this is still the earliest phase. I think because you're so far ahead in GI, and endoscopy particularly, you don't realize that most of the world is way behind where you are. So I think this is something we'll have to grapple with over time. Mike?

I think my comment here is that what's going to drive this more than anything is demand from patients to have the best care they can get. Unfortunately, the reimbursement marketplace is extremely challenging in many areas. I doubt we're going to get paid more for doing a better job, to be honest with you, but I think patients are going to drive us to do the best we can. Eric posted some interesting tweets; around the time of your publicly tweeted colonoscopy, you were asking whether your doctor was using AI systems to find more polyps. I think more and more patients are going to be asking the same, and if the doctor says, "I don't know what you're talking about," they're going to go get their colonoscopy from the person who is using it.

I'll just mention one use case: CMS recently approved an AI algorithm to detect stroke on CT scans. It's approved as a novel device, which is really encouraging, because no AI algorithm had reimbursement in the past. So I think that's a huge deal in the medical AI field.

Hey, Prateek, do you mind if I jump in on that one real quick? Yeah, please. If you've already heard of this, I apologize; if not, I want you to know you heard it first at the ASGE. There's a new acronym from CMS called NTAP, the New Technology Add-on Payment. They just recently approved the first algorithm for reimbursement, and I believe that's a sign of things to come. This specifically relates to, as Dr. Topol pointed out, imaging and radiology, but essentially it's using an algorithm to look for abnormalities in an image. The way it's put in the congressional record is very specific: the algorithm pre-specifies clinical conditions and then notifies the specialist to have a closer look. Beginning in 2021, CMS has actually agreed to start reimbursing when this algorithm is used. So I think it's an interesting model, particularly when I think about the specialty you are all in today.

Thanks, Tom, really helpful. And David Armstrong from Canada has a couple of comments and questions.
One of them relates to this: are you aware of any fields in which patients are actually being brought into these programs to give their input? Any thoughts there? Are you aware of any fields where that's being done? Either Eric or Tom, any thoughts? Mike, have you heard of that?

Well, I know one of our major efforts here at my institution is to really have the patient at the center of this interaction with AI. We've talked a lot about how physicians interact with, for example, something to improve our adenoma detection, but a lot of this is going to happen on the back-office end. How do you make appointment scheduling user-friendly? How do you get reliable data and best-practice information back to patients? So there's a huge amount of work being done, I think, to improve the patient experience with healthcare in what we would typically consider back-office functions. But as both Eric and Tom pointed out in their talks, the patient experience with the healthcare environment is quite negative right now, and there's a huge opportunity to improve that. I think we need to continue to focus not just on how we as physicians do a better job, but on how we make the experience better for our patients.

And then there are a couple of questions about the endoscopy image library: how can physicians contribute to it, and how can they help with annotation? All I'll say is that, along with Mike and Sravanthi, I help run the AI Task Force, and we're talking about it; there's more to come. For those of you who are interested, go to the ASGE website and provide your comments, and we'll reach out to you. We'd like to get the larger GI and endoscopy community involved, because that will be a goal.

So, everyone, there are a bunch of questions still coming in. We are at the end of our session; actually, we're about three minutes over time. With such an esteemed panel, I think we could go on and on, but unfortunately it's time to end. We'll try to answer some of those questions in our future panels if we have time.
Video Summary
In this video panel discussion, Tom, Ulas, Sravanthi, and Eric discuss the challenges and opportunities of integrating AI into healthcare. They touch on topics such as bias in algorithms, transparency, data privacy, and the potential for AI to improve the doctor-patient relationship. Tom emphasizes the importance of clinician involvement and awareness in AI initiatives. Eric discusses his book, "Deep Medicine," and how AI can restore the human relationship in medicine and reduce burnout. Ulas explains the concept of transfer learning and its potential applications in medical AI. Sravanthi discusses the need for collaboration and a centralized platform for storing and annotating medical images. The panel also addresses questions about education, payment models for AI tools, patient involvement, and building an endoscopy image library. The video concludes with a call for collaboration and engagement in advancing AI in healthcare. The panelists are Tom, Ulas, Sravanthi, and Eric, and the discussion is moderated by Prateek.
Asset Subtitle
Eric Topol, MD
Tom Lawry
Sravanthi Parasa, MD
Ulas Bagci, PhD
Keywords
AI in healthcare
bias in algorithms
transparency
doctor-patient relationship
transfer learning
medical images