2024 Gastroenterology Reimbursement and Coding Upd ...
How Might AI Affect Documentation, Coding, and Compliance
Video Transcription
All right, thank you. I'm going to cover what is really a new topic, because I guess AI is somewhat new to all of us, and its impact on medicine is growing, controversial, and will be evolving rapidly. I'm going to cover a part of what could be a multi-hour talk, highlighting a number of slides and kind of skipping others. The question is, what can we trust AI to do appropriately well, and in a way that hopefully will help us? A lot of this becomes scary stuff to us. We see articles, particularly in trade journals, like "AI is coming for your job, and anyone who says otherwise is just wrong and hiding their head in the sand." I actually asked ChatGPT to make me an infographic of how AI will affect healthcare, and it answered that a text-based AI can't generate visual content like infographics, but it gave me some general advice on how to proceed, which I did not take; and this was a couple of days ago. On the other hand, when I did a bit of a literature search through Google, I could see that there are companies that will help me make infographics and PowerPoint presentations using AI. Who knows what's going to be in them, but next year when I give this talk, we'll see. So, we're all faced in one way or another with burnout related to our EHRs. This is just epidemic among physicians, nurses, and all kinds of practitioners because of the burden of documentation we face and the problems we have trying to deal with EHRs while we're trying to actually talk to patients. Not long ago, a direct-observation time-and-motion study found that physicians in ambulatory settings (it was primary care, but I think it applies more broadly) spend half their workday on the EHR, with two hours of EHR and desk work for every hour of direct clinical face time with patients. This is pretty abysmal.
No wonder people are discouraged, burned out, retiring early, and everything else. But AI is coming, and the question is, is it going to be smart enough, and in what way will it impact us? Just very briefly, what we're talking about is what's referred to as generative AI, or large language models. The idea is that these programs train on tons and tons of data, maybe trillions of words of what's on the internet; the entire content of the internet sort of went into ChatGPT. More medically focused ones train on sets of medical records or sets of radiologic images. A lot more limited, but they're basically trained to predict what's going to be the next word in language, or the next image or the next object, based on what they've learned. If an AI program is trained to look at chest x-rays, it's trained to pick out abnormalities such as nodules, masses, and infiltrates, and it's trained on a large database of such images. They're guided by algorithms. They're tweaked by humans. To some degree they're subject to review by humans, but right now they're also subject to a lot of weird things that happen. They're prone to what's called hallucinations. Example: you ask one of these AI platforms to write a medical paper for you and come up with 20 references. Well, it knows what a reference looks like. It may produce a number of legitimate references, but it may also make up a bunch of fictional citations to journals that don't exist or articles that don't exist. Rather rapidly, though, these are becoming more and more accurate, more and more appropriate and helpful. Recent examples in medicine: on one of the medical licensing exams, an AI platform performed as well as or better than physicians on board-style questions. It was able to interpret the questions and get the right answers at least as well as physicians could.
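The "predict the next word" idea above can be sketched in a few lines. This is only a conceptual toy, a bigram counter over whitespace-split words; real large language models use neural networks over subword tokens and vastly more data, and the sample "corpus" here is invented for illustration.

```python
# Toy next-word predictor: "train" by counting which word follows which,
# then predict the most frequent continuation seen in training.
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count how often each word follows each other word."""
    words = corpus.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(model: dict, word: str):
    """Return the most frequent continuation, or None if the word is unseen."""
    counts = model.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

corpus = ("the patient reports abdominal pain . "
          "the patient denies fever . the patient reports nausea .")
model = train_bigrams(corpus)
print(predict_next(model, "patient"))  # → "reports" (seen twice vs. "denies" once)
```

The same statistical principle, scaled up enormously and applied to clinical text or images, is what lets these systems draft notes or flag findings; it is also why they hallucinate, since a plausible-looking continuation is not necessarily a true one.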
Based on reviewing and learning from medical records, they were able to formulate notes comparable to, or indistinguishable from, physician notes. We have a number of AI platforms in colonoscopy where the platforms are able to identify polyps about as well as, or maybe a little better than, trained endoscopists can. We're still trying to figure out: are these really going to be a help, or will all those little green squares just be distractions and keep us from paying attention properly? There are examples of bots that are now answering medical questions fairly appropriately, about as well as a well-trained human could. There are lots and lots of questions. Well, is any of this AI stuff subject to actual payment? Right now, there are AI CPT codes that are mostly add-ons. If a radiologist uses an AI program to help read mammograms or x-rays, there are evolving CPT codes, but there's virtually nothing that's actually paid for separately. The exception right now is a retinal imaging code, 92229, where the AI is autonomous, something independent: the patient can go get a retinal image, an AI program will review it and send a report to the physician who ordered it, and that actually is paid for. There's nothing in GI like that, and it's an open question how much of that nature we'll have in the near future. When you're looking at all of this, you have to think about who is behind it and who's going to profit from it. Who's doing the training, and on what materials? Who's designing the algorithms and testing them, and against what standards of validity? Who owns the data? If some of these things are training on medical records, somebody is feeding a lot of potentially protected health information, PHI, into a system. What happens to all that data? Is it really de-identified? Can patients actually be identified from data that goes into these databases? Who's going to use it, and for what purpose?
Who's going to profit from how these things are installed and implemented? Then ultimately, are patients going to be happier? Will practitioners be happier, with less burnout? Will staff be happier, with less burnout? Jobs may be different, but not necessarily lost. Let's talk just briefly about how an AI-assisted visit might go. A patient might be able to call up an AI bot any time of day or night and say, okay, I'd like to get an appointment with Dr. Littenberg. The bot may look at my schedule and all the other schedules of my multi-site clinic and be able to say, well, you're going to have to wait three months, with the first appointment February 3rd at 2 p.m., but you could get an appointment with his physician assistant in three days at 2 p.m. Or, if you'd like, there's an opening with Dr. Jones in the practice in Burbank; it would take you about 20 minutes to get to his office at the time of day you'd be going. How quickly can your front office staff answer questions like that right now? And then the bot could proceed to take down the entire information base for a new patient or a follow-up patient, update all the forms, scan all the documents, et cetera. When a patient arrives, they may well just interact with a bot of some sort that checks them in from the data that's already in the system, tells them what room to go into, and gets an electronic scale reading. The patient can apply their own blood pressure cuff and the data goes into the system. What's the provider encounter potentially like? Well, you may have an automated scribe somewhere in a computer listening in on the entire encounter and basically creating a note that summarizes everything nicely, not to mention updating all the medication lists and cluing people in about which preventive services ought to be ordered. And based on analyzing the information, it could decide what sort of orders make sense.
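The scheduling scenario above is, at its core, a search for the earliest open slot across every provider in the group. A minimal sketch of that lookup, with invented provider names and slot times:

```python
# Find the soonest open appointment slot across a multi-site group,
# as a scheduling bot might. Schedules here are hypothetical.
from datetime import datetime

def earliest_slot(schedules: dict) -> tuple:
    """Return (provider, time) of the soonest open slot across all schedules."""
    return min(((provider, min(slots)) for provider, slots in schedules.items() if slots),
               key=lambda pair: pair[1])

schedules = {
    "Dr. Littenberg": [datetime(2024, 2, 3, 14, 0)],   # three months out
    "PA Smith":       [datetime(2023, 11, 9, 14, 0)],  # three days out
    "Dr. Jones":      [datetime(2023, 11, 20, 9, 30)], # Burbank office
}
print(earliest_slot(schedules))  # PA Smith's slot is soonest
```

The hard part in practice is not this search but everything around it: drive-time estimates, payer rules about which clinician the patient may see, and keeping the slot data current across sites.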
If a patient is complaining of a certain kind of abdominal pain, it may prompt you to do a CT scan and just click on the order for it, and even the blood work necessary before doing the CAT scan. Checkout and scheduling could also be largely automated, helped by appropriate algorithms in the background, so that a lot of the work of checkout and scheduling as it's done right now could be handled for you. I've already referred to documentation. And then there's the coding, the billing, the revenue cycle management. In theory, you could have bots, you could have programs reviewing notes, reviewing procedure notes, following algorithms for how the appropriate coding works, all of the conventions, the modifiers, everything else, and being able to look at the bills that are being generated and review them. Not to mention all these other things: clinical decision support, like I said, auto-population of lab orders, preventive services, reaching out for related data. If my e-scribe understands that the patient had an endoscopy at a certain hospital, it could be reaching out electronically across the internet, getting into the medical records, and retrieving the data I want. Think about how much the authorization and pre-cert process might be automated. But also think about how much, on the insurance payer side, they may want your entire medical record so they can review it to make sure that what you're requesting is really medically necessary. So it could be beneficial, or it could be really destructive and build roadblocks to your doing what you need to do. The coding and billing certainly could be automated in large part, and the impact that would have on your jobs might be threatening, or it might be beneficial and help you do your jobs better.
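The automated claim review described above is, in its simplest form, a set of rules run over each claim before it goes out. Here is a minimal sketch of one such rule; the claim structure and the single rule are illustrative only, not actual payer or CPT policy, though the rule does reflect the familiar convention that a same-day E/M visit generally needs modifier 25 to be separately payable alongside a procedure.

```python
# Hypothetical rule-based claim "scrubber": return flags for a claim,
# an empty list meaning no issues found by these (illustrative) rules.
def review_claim(claim: dict) -> list:
    """Check each billed line against simple coding-convention rules."""
    flags = []
    codes = {line["cpt"]: line for line in claim["lines"]}
    # Crude stand-in for "a procedure was billed": surgical-range CPT codes
    # (e.g., 45378, diagnostic colonoscopy) start with "4".
    has_procedure = any(cpt.startswith("4") for cpt in codes)
    for cpt, line in codes.items():
        # Office-visit E/M codes start with 992; same-day procedure usually
        # requires modifier 25 on the E/M for it to be separately payable.
        if cpt.startswith("992") and has_procedure and "25" not in line.get("modifiers", []):
            flags.append(f"E/M {cpt} with same-day procedure: check modifier 25")
    return flags

claim = {"lines": [{"cpt": "99213", "modifiers": []},
                   {"cpt": "45378", "modifiers": []}]}
print(review_claim(claim))
```

A production system would carry thousands of such edits (NCCI pairs, frequency limits, diagnosis linkage), and the open question from the talk is who maintains them and who answers for their mistakes.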
You might be able to just focus on the outlier claims, the outlier bills, and do the things that Kristen was talking about: analyzing the data on the way people bill in the practices to find the outliers, comparing them to benchmarks, and providing reports so that you can focus your compliance work where it really needs to be, instead of digging through a bunch of records and making your own independent judgment on every single record. So all of this could potentially be done and automated in a creative and helpful way. But let's say these things exist. Are they going to be add-ons to existing practice management systems? Are they going to be external services? How do these pieces fit together? Does all your data need to go somewhere in the cloud where this work gets done? And what's happening to your data? Who owns it? What's going to be the cost, the accuracy? Who's going to oversee some of this, item by item or by reports? And then, when it comes down to compliance, who's going to be responsible for errors, abuse, fraud? Lots of unanswered questions here. Keep in mind this picture. This person may look familiar to you. Sam Bankman-Fried was just found guilty of seven counts of fraud in his particular industry. My guess is that in future years there will be a comparable individual who winds up just as badly as Sam Bankman-Fried, but in the AI area, potentially in medicine, certainly in some field. So it's really an open question: who's going to win? And will they play nice with our EHRs and with our practices? It's likely that the big tech companies are going to be among the players. They certainly are pursuing this very avidly right now, even though none of them, I'd say, are incorporated in any meaningful way yet with AI in our practices. And there are always questions about equity and ethics, not to mention transparency.
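The benchmark-based outlier screening mentioned above can be sketched very simply: compare each provider's billing pattern to the peer distribution and flag large deviations. The providers, shares, and z-score cutoff below are made up for illustration; real compliance analytics would use payer benchmarks and far richer features.

```python
# Flag providers whose share of (say) level-5 visits sits far above peers,
# so a human reviewer can focus on the outliers first.
from statistics import mean, stdev

def flag_outliers(level5_share: dict, z_cutoff: float = 1.5) -> list:
    """Return providers whose share exceeds the peer mean by > z_cutoff SDs."""
    values = list(level5_share.values())
    mu, sigma = mean(values), stdev(values)
    return [name for name, share in level5_share.items()
            if sigma > 0 and (share - mu) / sigma > z_cutoff]

shares = {"Dr. A": 0.12, "Dr. B": 0.10, "Dr. C": 0.11, "Dr. D": 0.13, "Dr. E": 0.45}
print(flag_outliers(shares))  # Dr. E stands out from the peer group
```

Note that a flag is a prompt for review, not a finding; an outlier may be perfectly justified by case mix, which is exactly why human judgment stays in the loop.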
Some of the things I'm talking about: if patients are not internet-enabled, if they don't have smart devices, if they're not very literate, or if they have language issues even when they're quite literate, how are people like this going to be accommodated in all of this further development? We know there are already equity issues with EHRs and a lot of the current ways we practice, and that's just going to be aggravated by the rapid progress in AI. Right now, an interesting concept is this notion that comes out of Gartner, a consulting group, about what they call the hype cycle. You can look at a new technology and follow it through a kind of hype cycle: a lot of hype builds up and expectations run high, there's a peak of inflated expectations, and then what's called the trough of disillusionment sets in. Right now, we're on the upside in AI. Cryptocurrency had its peak and is probably now heading toward the trough. And at some point in the future, there'll be some appropriate enlightenment, some appropriate utilization, some appropriate controls over how this develops, and at some point it will actually help our productivity and maybe make us happier. It's clear EHRs have not made us happier; maybe they've made us more productive, but it's arguable. So these are the people we need to care about. You may recognize some of these faces, others maybe not quite, but that's okay; they're still going to largely control your life. There are headlines we see nowadays, like "Google adds guardrails to keep AI in check." They're aware of the potential problems. A lot of people are aware, and they're trying to decide how to deal with this and what kind of regulatory environment to put in place.
Just now, on October 30th, there was a significant executive order that came out of President Biden and the White House, setting up task forces to start looking at things like AI use, safety, equity, privacy, et cetera. So it's on a lot of people's radar, but there are lots and lots of unanswered questions, and the fact is industry is highly likely to outpace our ability to adapt. We got way behind in even looking at how to regulate the internet and how to regulate social media, so think about how it's likely to go with AI. This was a HIMSS survey from 2017, which I don't think has changed much: the majority of people think AI has the potential to transform healthcare delivery, yet only 10% of providers feel their organization is ready to deal with it, and only 4% of providers feel they could meet AI goals with current staffing levels and in-house expertise. I don't think that's changed much if we were to get a current update. So there's my 15-minute version of how AI is going to affect us. I hope some of these speculative views have been helpful to you. More questions than answers. Happy to have your comments at my email address. Feel free. Thank you very much.
Video Summary
The speaker discusses the growing impact of artificial intelligence (AI) on medicine and its potential benefits and challenges. AI programs, specifically generative AI or large language models, are trained on vast amounts of data and can predict language and images based on what they have learned. In medicine, AI platforms have shown promise in areas such as medical licensing exams, formulating physician notes, and identifying abnormalities in medical imaging. However, there are still limitations and concerns, such as the potential for hallucinations in AI platforms and the need for oversight and regulation. The speaker also explores the potential future role of AI in various healthcare processes, including patient appointments, check-in procedures, clinical decision making, coding, billing, and compliance. The speaker emphasizes the need for consideration of ethics, equity, transparency, and the potential impact on healthcare professionals and patients.
Asset Subtitle
Glenn D. Littenberg, MD, MACP, FASGE
Keywords
artificial intelligence
medicine
AI impact
generative AI
medical imaging abnormalities