How Might AI Affect Documentation, Coding and Compliance?
Video Transcription
So, I'm going to speculate here for a little while and talk about how AI may affect our documentation, coding, and compliance. This is rapidly evolving, as I'll demonstrate with a couple of slides that I tried to create last year and couldn't, versus what I could create this year using the same AI app. We're being asked to trust AI as it evolves, many uses of it are proliferating and becoming available to us, and some of us, which I'll try to get a sense of here in the audience, have adopted one or another form of it. But I want to at least raise a few concerns that you ought to have as you go forward looking at implementing AI, living with it, maybe getting some benefit out of it, and maybe someday even being reimbursed in some respects for it. We'll see.

So I wanted to ask first, to get a sense of the audience: how many of you are now using, or at least testing out, any of these types of AI, scribes for creating notes, AI for coding purposes, or AI for any other office function? Pretty broad, but let's get an idea here. All right, OK, that surprises me a little bit. I thought there would probably be a little more use or testing of some of these things at this point. If we ask the same question next year, I suspect the answer will be quite different.

For those of you who haven't even toyed with the scribe functions, there are quite a few companies, and it actually looks like they'll be beneficial for many of us in streamlining our documentation. These are interesting. They just sit in your computer listening to the total interaction between you and your patient, and after you're done, they create a note. They seem to effectively get rid of all the chitchat about, oh, how's your grandchild, and oh, I got a flat tire on the way over here. All of that disappears, and what comes out is not only a medical note but, in some cases, the CPT code the software thinks is appropriate. So we'll see.

For many of us, though, it's still kind of scary. It's questionable: is AI going to take over some or all of our jobs? And if you don't think about it at all and don't look into it at all, you can be leaving behind the potential advantages while also setting yourself up for bad things to happen in your life and in your practice.

So last year, I asked ChatGPT to make me an infographic of how AI will actually affect health care. And all it could answer was, oh, I'm a text-based AI, I can't create visual content like infographics directly, but I can give you some ideas. And I said, well, no, thanks. I came back this year and asked the same question, and here's what I got out of ChatGPT: a very nice infographic. Then I asked it to create a little visual of how the world of AI, applied to health care, might look, and it gave me a nice outline of what AI theoretically could do: enhanced accuracy in analyzing patient data and clinical notes for better coding, reduced human error, automated documentation like I was just talking about, efficient billing, data analytics, and training and support. Certainly in data analytics, a lot of AI is already being used. And I think, increasingly, we'll see things like AI being used to process your prior authorizations, trying to find more efficient ways to deny what you're asking for. But hopefully we'll have AI on our side, and we can automate submission of prior authorizations to get around the slower payers and get our requests approved.
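[Editor's illustration] For readers who want a concrete picture, here is a minimal sketch of how an AI-scribe pipeline like the one described above might be wired together. The transcribe() and complete() functions are hypothetical placeholders, not any vendor's actual API; a real product would add consent handling, PHI safeguards, and clinician review before anything is billed.

```python
# Minimal sketch of an AI-scribe pipeline: visit audio -> transcript ->
# draft SOAP note plus a suggested CPT code. transcribe() and complete()
# are hypothetical placeholders for a speech-to-text service and an LLM API.

NOTE_PROMPT = """You are a medical scribe. From the visit transcript below,
write a SOAP note, omitting small talk. Then suggest ONE E/M CPT code
(99202-99215) with a one-line justification for clinician review.

Transcript:
{transcript}
"""

def transcribe(audio_path: str) -> str:
    """Placeholder: swap in your vendor's transcription API."""
    raise NotImplementedError

def complete(prompt: str) -> str:
    """Placeholder: swap in your vendor's LLM completion API."""
    raise NotImplementedError

def draft_note(audio_path: str) -> str:
    """Produce a draft note; a clinician must review it before signing."""
    transcript = transcribe(audio_path)
    return complete(NOTE_PROMPT.format(transcript=transcript))
```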
Training and support also could potentially get a lot of help from AI. For those of us still doing things the old-fashioned way, this is an AI-generated image of me looking at CPT books, trying to figure some of this stuff out. The big question for providers is whether the use of AI will help or aggravate what is basically a lot of burnout and frustration with our electronic records. Not too many years ago, a direct-observation time-motion study, in other words, researchers watching in real time how providers actually worked, found that it took about two hours of EHR and desk work for every one hour of direct patient face time in primary care internal medicine practices. And yes, I would share the frustration of this AI-generated image of doctors getting burnt out by EHRs; that's the question I asked, what that would look like.

So is AI really smart enough? Can we trust it? Where is all this coming from, and where might it go? For these purposes, when I use the term AI, I'm really talking about what you'll see called generative AI or large language models. They're basically algorithms trained on tons and tons of data, much of the entire internet to some degree, and when they can get access to large medical databases with medical records, they get trained on those too. What they're trying to do is figure out the next word, image, or object that's likely to be the answer to a question posed to them. All of these algorithms are created and tweaked by humans, but they also feed on themselves: the software refines itself, takes in more current information, and makes its output better. There are still lots of what are called hallucinations, a lot of crazy mixed-up output when you ask questions, but it's getting better and better; over a matter of months, successive versions improve substantially.

Some examples from just the past year: AI performed similarly to or better than physicians on board exam questions. Notes generated by AI seem comparable to, and often indistinguishable from, physician notes, and in some cases better, and it looks like the better AI scribe apps will streamline documentation. The apps seem to be as good as, and perhaps better than, we are at identifying polyps during a colonoscopy; many of our practices are starting to use AI in conjunction with colonoscopy to help spot things we hope will be polyps worth taking out. And bots of various sorts powered by AI are able to answer medical questions not only appropriately but, in some of the research that's been done, with even better patient satisfaction. So it certainly has a lot of potential promise and is evolving quickly.

There are also CPT codes being devised as AI applications are brought to the AMA CPT panel. CPT has a classification scheme that distinguishes whether there is direct, clinically relevant data detection, for example, an app that simply detects, like your Apple Watch, whether you are in atrial fibrillation or not, from applications that act autonomously, like analyzing retinal images and deciding whether an individual has diabetic retinopathy or some other retinal disease, to replace or augment an ophthalmologist's or optometrist's retinal examination. There's almost no payment for any of these as yet. The FDA has actually approved over a thousand healthcare-oriented AI software applications.
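[Editor's illustration] To make the "predict the next word" idea concrete, here is a toy bigram model. Real large language models use neural networks with billions of parameters trained on vastly more text; this counter only shows the shape of the task, not the mechanism. The tiny corpus is invented for illustration.

```python
# Toy illustration of next-word prediction: count which word most often
# follows each word in a corpus, then use that count to "predict".

from collections import Counter, defaultdict

def train_bigrams(corpus: list[str]) -> dict[str, Counter]:
    follows: dict[str, Counter] = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            follows[prev][nxt] += 1
    return follows

def next_word(follows: dict[str, Counter], word: str) -> str | None:
    counts = follows.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

corpus = [
    "patient denies abdominal pain",
    "patient reports abdominal pain after meals",
    "abdominal pain improved with treatment",
]
model = train_bigrams(corpus)
print(next_word(model, "abdominal"))  # -> "pain"
```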
Most of these FDA-cleared applications are marketed directly to patients; those are things like what's in your Apple Watch. But more and more of them are being looked at and implemented within practices. A lot of them have to do with analyzing images: CT scans, EKGs, colon polyps, small intestine diseases with capsule endoscopy, where AI is evolving to make the reading more efficient. But whenever you're looking at AI, you need to think about what the training is based on. Where is this coming from? Who's designing it? How is it being tested for validity, and against what standard? It's often not easy to figure that out or to get any verification that these tools are legitimate and have appropriate backing.

And then, who owns the data? If it's gathering all this information, you really have to think carefully, especially vis-a-vis HIPAA. Are companies listening in on your patient information and pulling it out of your records? Who then owns the data? Who's going to profit once this is installed and implemented? And if you're investing in it, will it be worthwhile to you in some way? Will it make your patients happier, make you happier, reduce burnout? Will staff be happier? The jobs may be different, but hopefully not lost.

So if you think about how an AI-assisted visit could go, it's not at all far-fetched to imagine an AI bot in the very near future, maybe now, answering your phone when a patient wants an appointment. It could gather all the necessary data and look at not just my schedule, but my PA's schedule, my other doctors' schedules, and my group's offices in nearby cities, because our EHR is one system across multiple service sites. It could offer the patient their best appointment, even with a different provider if they wish to change. It could even tell them: if you take this 2:15 appointment with Dr. Brown over in Burbank, I'd estimate it will take you 40 minutes to drive from your home; the alternative appointment, at that time of day, could take you an hour. Would you like one of these alternatives? When the patient arrives, it could guide them into an exam room to step on the scale, record the data, and ask and record a few questions pertinent to the diagnoses already in the chart. Of course, an AI scribe can then be listening in and help create the note; checkout and scheduling could potentially be automated, and documentation of many kinds could be too. Coding could evolve fairly readily out of these enhanced documentation capabilities, leading to more automated billing and revenue cycle management. But of course, somebody has to look over all of this and make sure it's valid, and make sure no fraud or abuse is being created in the process.

Clinical decision-making could also benefit. There could be a lot more auto-population of orders for labs and preventive services, not just reminders thrown in our faces, but software actually doing a lot of the work. We could ask for related patient data in a more automated way, searching to pull in the most recent pertinent lab and imaging studies, instead of sending our staff on long journeys looking for things that then come back to us to review and decide what to do with. And I already mentioned pre-certification and prior authorizations.
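[Editor's illustration] The appointment-matching bot described above reduces, at its core, to ranking open slots across providers and sites. Here is a small sketch of that idea; the slot data, provider names, and drive-time figures are invented, and a real system would pull these from the practice schedule and a mapping service.

```python
# Sketch of appointment matching: rank open slots across providers/sites
# by soonest start time, breaking ties by shorter estimated drive time.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Slot:
    provider: str
    site: str
    start: datetime
    drive_minutes: int  # estimated travel from the patient's home

def rank_slots(slots: list[Slot]) -> list[Slot]:
    return sorted(slots, key=lambda s: (s.start, s.drive_minutes))

slots = [
    Slot("Dr. Brown", "Burbank", datetime(2025, 3, 4, 14, 15), 40),
    Slot("PA Lee", "Pasadena", datetime(2025, 3, 4, 14, 15), 60),
    Slot("Dr. Patel", "Glendale", datetime(2025, 3, 6, 9, 0), 15),
]
for s in rank_slots(slots)[:2]:
    print(f"{s.start:%a %I:%M %p} with {s.provider} "
          f"({s.site}, ~{s.drive_minutes} min drive)")
```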
So there's a lot in the area of coding and billing that can certainly be subject to AI, and probably already is in certain experimental, research, and beta-test settings. I'm hoping to hear more about these as they go on. There will probably be add-ons to existing practice management systems; they may be totally external, but if they don't integrate in some way with your own systems, that's going to be clunky. The algorithms need to be able to follow appropriate rules about modifiers and sequencing multiple codes. It's complicated, but is it really something an algorithm can't learn to figure out, following all the nuances, if it's built into proper software? I think it could well be done at least as accurately as human beings can do it. And billers may be able to process a lot more services, visits, and procedures using AI to assist them. But you've got to look at the cost, the accuracy, and the oversight. And then there's the compliance issue: who's going to be responsible for errors, fraud, and abuse?

I just did a quick Google search, and Google's AI threw up a whole bunch of names of companies for AI scribe software; these are just a handful that I have no connection with and can't tell you anything about. If I did the same search for billing software with AI, it would probably give me a whole other set. But as a cautionary note, here's a nice picture of Sam Bankman-Fried, who was found guilty of seven counts of wire fraud, money laundering, and conspiracy. There are going to be bad players in this area too, and I'm sure they're out there looking at how to take advantage of us. And then there are the big information companies: who's going to win in this sphere, and will they play nice with us and with our EHRs? Epic is already deploying AI scribe software, and that means an awful lot of healthcare entities will be utilizing it soon, if not now.

A whole other area in all of this is equity, ethics, and transparency. How is this going to be made fair across our patient populations and the different kinds of practices in the less advantaged parts of our country? And is anybody out there trying to help us? Not quite so clear.

Is this really going to become that big, or is there a lot of hype? I like this conceptual chart from consultants at the Gartner consulting firm. They devised what they call a hype cycle for AI, and the notion is that as things are brand new and look shiny and promising, expectations build up rapidly. Then, as things get implemented and don't turn out quite as nice as you thought, or crash, or steal money from you, or you wind up in jail for fraud committed, in effect, by AI, there's what they call a trough of disillusionment. I think we're probably somewhere in that area now. We were thinking AI would rapidly come along and be just tremendously good; we're finding out it's really not simple, and it's not going to be easy to get to some really good achievements out of it, but it is evolving fast. And here are the people who are behind it, investing billions and billions of dollars in all this, and who obviously are now, and will remain, big players in all of it.
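[Editor's illustration] Returning to the modifier and sequencing rules mentioned earlier: this is exactly the kind of deterministic logic an algorithm can encode. Below is a toy version that orders same-session procedure codes by relative value and flags lower-valued ones for a multiple-procedure modifier. The relative values are hypothetical and the modifier policy is simplified for illustration; it is not an actual payer rule set.

```python
# Toy claim-sequencing rule: list the highest-valued procedure first and
# append modifier 51 (multiple procedures) to the rest. Values are made up;
# real payer rules (e.g., endoscopy families, NCCI edits) are far richer.

def sequence_claim(codes: dict[str, float]) -> list[tuple[str, str]]:
    """Return (code, modifier) pairs, highest-valued code first."""
    ordered = sorted(codes.items(), key=lambda kv: kv[1], reverse=True)
    claim = []
    for i, (code, _value) in enumerate(ordered):
        modifier = "" if i == 0 else "51"
        claim.append((code, modifier))
    return claim

# Two same-session colonoscopy procedures with hypothetical relative values.
print(sequence_claim({"45380": 4.5, "45385": 5.3}))
# -> [('45385', ''), ('45380', '51')]
```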
We like to think that Google and all of these other folks have added guardrails to keep AI in check and stop it from creating harm in all kinds of areas of society. This other photograph here is probably more what I feel is the actual state of the guardrails as they exist now. We'll see. The feds have gotten involved: an executive order was issued by the White House not long ago to look at AI use, safety, equity, privacy, maintenance, availability of documentation, et cetera. I don't think this task force has gotten very far, and there's no telling what will happen to oversight of any of this in the upcoming administration.

A survey not too many years ago, and I think it's probably still about the same, found that a lot of people in the information industry believe AI has a great deal of potential to do good and transform how we live and practice in health care, but very few are prepared for it or have implemented it. Since then, as the numbers in your own survey today show, more of us are getting involved, but I don't think our ability to move ahead and implement all of this has changed a whole lot. The content of a talk like this, though, is going to change a lot year by year.

So if you have questions, please forward them to ChatGPT-4 first, but feel free to contact us. And just as a reference, you might note that Edward Sun and Glenn Littenberg are co-authors of a paper, I don't know what the final title will be, but it has to do with AI in gastroenterology, covering regulation and reimbursement. It will be in the GI Clinics of North America sometime next year. Considering we wrote the paper earlier this year, I have no idea how much of it will be out of date, but nonetheless, there we are. Okay, I'll stop here. I'm happy to take questions later during our Q&A, and for now I will turn it back over to Kristen. Thank you.
Video Summary
The video discussed the growing impact of AI in healthcare, particularly on documentation, coding, and compliance. It highlighted the rapid evolution of AI technologies and their increasing adoption in medical practices, from AI scribes that streamline documentation to applications assisting with coding and billing. The speaker raised concerns about potential job displacement and the importance of understanding the source and training of AI systems, including data ownership and privacy issues. AI's potential to improve efficiency in tasks like prior authorizations and compliance monitoring was noted, though challenges remain around oversight, accuracy, and fraud prevention. The presentation emphasized the need for robust regulation and ethical practices to ensure equitable AI use across diverse healthcare settings. It concluded with the notion that AI's role in healthcare will evolve significantly, but that current promises should be approached cautiously, given potential issues and the market hype around AI technologies.
Asset Subtitle
Glenn D. Littenberg, MD, MACP, FASGE
Keywords
AI in healthcare
documentation automation
data privacy
ethical AI
job displacement