2021 GI Outlook (GO) Conference | November 2021
Hello Siri - Artificial intelligence in GI Practices
Video Transcription
Moving on to the next one. It's another very interesting talk, again speaking to all the gastroenterologists and those of us at the conference today on a topic that's really new to us, and it seems to be evolving faster than we are comfortable with. Our next speaker is Mr. Badri Narsimhan. I think, thank God I'm an Indian, I can actually say that name very easily. He's a successful serial entrepreneur who founded and sold his first healthcare IT business, AlertMD, which many of us may have heard about. He graduated from Georgia Tech, has two graduate degrees, including an MBA from Babson, and I'm just really intrigued by this topic and the speaker. I'm sure the audience will greatly benefit from listening to him. He's unfortunately not with us live; this is a pre-recorded talk. So, looking forward to listening to what he has to say. Thanks all.

Welcome to SGE 2021, the session on demystifying the application of artificial intelligence in daily practice operations. My name is Badri Narsimhan, and I'm here to tell you a little bit more about what the subject is and how you can use AI in your daily operations. First of all, my conflict of interest declaration: I do not have any professional or financial relationship with any tools I discuss, and I'm not compensated by the solutions. My hands are not in the proverbial cookie jar.

Having said that, we'll address the topic today in four different sections. First, what is AI? That's a word that's thrown around a lot. Next, how can you practically use it in a GI setting? Then, how can you tell whether it worked? Can you go back and check it, so to speak? And finally, where can you progress from here?

Now, as we all know, sophistication in technology has led to a lot of data around us and no clear way to figure out what the next specific action ought to be. If you look at artificial intelligence, the way it is defined is: how can I build a system that changes the way we act by using one of three methods? One is it literally pretends to be a human; we call that strong AI. Another is that you don't need to understand how human reasoning works; you just get a couple of things to work together. And the third is to use how people would think as a model, without necessarily trying to pretend to be a human. If you're taking a baby step, the first step towards AI, and frankly for a lot of applications in the gastrointestinal setting, the third method is in and of itself a good enough first step that you can take.

So what we are going to do is look at your gut feeling. Gut feeling in GI, pun intended, but we certainly will not rely on that pun. Let's say that you called your rev cycle leader and said, hey, can you tell me three reasons why a claim gets rejected? They're going to have three good generic reasons. But if you pick a specific claim for a specific patient and ask, can you tell me whether or not this particular claim is going to get approved or rejected? They're going to have a guess, and that's based on their experience. They may not be able to write down exactly why. That is what I mean: when people use their instinct, we can quantify that instinct with real data and we can give guidance. And that is what you're going to see today. So let's figure out what in GI would be a good area to look at. And I always say you want to look at repeating activities.
In other words, if it is an online review, how can you understand what kind of patients are more likely to leave favorable reviews versus unfavorable ones? If you're focusing on your fill rate, what kind of patients are more likely to show up for their appointment versus not? These may be the good outcomes. The bad outcomes could be a rejection of a claim, a cancellation of an appointment, and so on. Once you identify one of these repetitive activities, you want to think about the potential causes. Was there a snowstorm or not? Within reason, you can't control the weather, so I would only include things that you have a way to control. For example, you may say, I want to understand how age impacts whether or not one of these things on the left happens or doesn't happen. Is there a correlation with the provider? Is there something to do with one location versus another? And so on. I would take out things that you can't control anyway, because at the end of the day we need to figure out the actionable items that come out of this.

Now, once you have figured out one of the good or the bad outcomes, and we're going to walk through a practical example just to set the stage for what we are going to do, we're going to look for data sources. Data sources may be reports from your practice management system or your EHR. You may then want to download them into Excel, or put them into whatever your tool of choice is, where you can work with the data in a row and column format. One thing to remember is you really don't want to go for data that is five or six years old. That's a different world; we live in a newer world. Also, if you download an entire report with a lot of text, at least in the first phase I wouldn't venture into interpreting unstructured data. Let's just stick with numeric data that is structured. There are fancier tools that we could use, R and SAS and things like that. For now, we're going to stick with Excel and take the baby steps.

So here is the example I picked: what is the likelihood that a claim is going to get approved and not rejected? I would sit there and say, gee, I don't know which one of these is the main reason. I don't even know which one of these is or is not a reason, or which one contributes to what degree. But I have reasonable experience to say that who saw the patient and who documented the procedure note probably has something to do with it. What kind of procedure, the age, the gender, the day? I can put in the date of the procedure, but do you really think patients who get their procedure on February 3rd are less likely to get their claim approved or rejected? Probably not. So you might as well not use that in your data set. And what is the difference between when the procedure happened and when the claim itself was filed? Most people on the rev cycle side will say that's important. But what we are trying to do is quantify it and say how important, and how does that compare with the provider? If there are two claims with the same lag time, does it matter which procedure the particular claim was for? That's the kind of stuff we are trying to quantify. And then, was there a pre-auth or not? Who was the referring provider? These are just examples that we're drawing upon. Assume that we had a report that looked like that.
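For anyone who would rather do this prep step outside of Excel, here is a minimal sketch in Python with pandas. The file name and column names (claims.csv, procedure_date, claim_submitted_date, preauth_number, claim_approved, and so on) are hypothetical placeholders for whatever your practice management system or EHR actually exports; they are not from the talk or from any specific product.

import pandas as pd

# Hypothetical claims export; file and column names are placeholders.
df = pd.read_csv("claims.csv", parse_dates=["procedure_date", "claim_submitted_date"])

# Derive the factors discussed above instead of keeping raw dates and IDs.
df["lag_days"] = (df["claim_submitted_date"] - df["procedure_date"]).dt.days
df["has_preauth"] = df["preauth_number"].notna().astype(int)  # 1 if a pre-auth exists at all

# Keep only structured factors you could plausibly act on; drop free text and raw identifiers.
features = df[["provider", "procedure_type", "age", "gender", "lag_days", "has_preauth"]]
outcome = df["claim_approved"]  # 1 = approved, 0 = rejected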
We would create an Excel spreadsheet, and I'm actually going to open up a spreadsheet so you can see what we are talking about. In that spreadsheet, you would have these as individual columns. Some of these columns don't do anything that is meaningful. In other words, as we talked about, knowing the exact date of the procedure is only useful to compute the lag time between the procedure date and the date the claim was submitted, and not much else, so I wouldn't really use the exact dates. Having a pre-auth number is not useful; all we want to know is whether there was a pre-authorization. So you do need to use a little bit of your common sense and your experience in prepping the data.

So let's assume that you prepped your data. You computed the lag time and noted whether there was a pre-auth. There is something missing there, and that is the thing you are looking to predict: did the claim get approved? So in this sample scenario, we are adding the result of did the claim get approved, and as you can see, it is there on the far right-hand side. Now, the question we are asking, based on everything that is here, is: mirror, mirror on the wall, or AI, or predictive modeling, whatever you want to call it, which factor or factors are most responsible for a claim to get approved? What is the next one down the line, and the next one? And let me, as the practice administrator or the rev cycle leader, figure out what I can impact and what I can't. Maybe there's some training for a provider. Maybe there is a different way of documenting for a certain procedure type. Those are the actions. First, we want to use the tools we have to quantify what the main driver is and what else we can do about it.

So one of the things we're going to do is look for a tool. As I said, I do not have any relationship with any of these tools. We're just going to stick with something very simple: Excel. Within Excel, there are many different tools that you can use. There is one that is free, called Stats Plus, that you can install as an add-in. So I'm actually going to create a sample analysis, and first cover a couple of very elementary things. The dependent variable is the thing we are trying to understand, meaning did the claim get approved, yes or no. The independent variables are all the individual factors, such as the name of the provider, the kind of procedure, what the lag time is, and so on. So we're going to throw a whole bunch of independent variables into the proverbial mirror and ask the question: can you tell me which one mattered, and if so, how much? This is what is called a multiple linear regression. It is a mathematical concept and a building block, in other words, chapter 101 for artificial intelligence. We are making the machine tell us which one matters and how much.

So let us actually see this work. I'm going to pull up a sample analysis and walk you through it. Here you go. Here is my sample Excel spreadsheet, and in that spreadsheet I have a variety of columns, including: was there a pre-auth, and what was the result? Here is the toolkit. If you have Stats Plus already installed as an add-in, you would go here and say, I want a multiple linear regression. My independent variables are provider, procedure, age, gender, and day of procedure. As I said earlier, we're going to skip the exact dates, but we're going to pick up lag time and whether there was a pre-authorization, and we'll throw the referring provider in there. We want the dependent variable, the thing that we are interested in, to be column K in this hypothetical example, which is did the claim get approved? At this point, it is as simple as go. We'll ignore the error that comes up; it essentially says that you need more data to give a better model, but it gives you a model anyway. It may not be perfect, and we only had 12 rows in there. In your real-life example, you would have 200, 500, 10,000, whatever the right number is.
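As a rough stand-in for what the Stats Plus add-in does inside Excel at that "go" step, here is what the same multiple linear regression might look like in Python with statsmodels, reusing the hypothetical features and outcome variables from the prep sketch above. This is a sketch under those assumptions, not the actual tool or output shown in the talk.

import pandas as pd
import statsmodels.api as sm

# One-hot encode the categorical factors so they can enter a linear regression.
X = pd.get_dummies(
    features,
    columns=["provider", "procedure_type", "gender"],
    drop_first=True,
    dtype=float,
)
X = sm.add_constant(X)      # the intercept plays the role of the "average claim" baseline
y = outcome.astype(float)   # 1 = approved, 0 = rejected

# Multiple linear regression: which factors matter, and by how much?
model = sm.OLS(y, X).fit()
print(model.summary())      # one coefficient per factor, plus the intercept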
What this is telling you at the end of the day is an equation. It is saying the average claim has an 87 percent chance of getting approved; I'm looking at this area right here. However, that number goes up or down based on certain factors. So if somebody were to stick a claim in front of you and say, hey, what are the odds this specific claim is going to get approved? Without knowing anything else, I would say it's approximately 86 percent, if your data was what I showed you. However, we're seeing that as patient age increases, the odds of a claim getting approved increase, because this coefficient is positive. But as lag time, meaning the number of days between the procedure and the submission, increases, the odds decrease by a much bigger amount. And clearly we are using simple examples. It also says that if you have a pre-authorization, then your odds of an approval are greater by 3 percent, and so on and so forth. But the point we are trying to make in this scenario is that you can literally make your gut feeling better by adding a couple of things.

So if I were to now decide what to do for this particular practice, assuming my data on a much larger sample said the same, I would focus all my efforts on lag time, because that is the single biggest driver. I can't do anything about bringing older patients into the practice; my patient mix is what it is. I do want to focus on pre-authorizations where they are required, because pre-authorizations clearly increase the odds, but you don't need pre-authorization in all cases. For this particular practice, lag time is the king, queen and minister, in the sense that you want to focus on lag time. Your practice may give a very different answer. It then gives you the tools to focus as needed.

So I'm going to pick up again from where we are. We ran the analysis, and the analysis said, hey, you know what, the average chance is 88 percent. If you wait, you're going to lose. If you get pre-auths, you're going to win. And this simple exercise, with 12 rows in Excel, literally put your first step into AI. There are a thousand other things you can do, but at the end of the day the goal is to get this first step done, and you're welcome to contact me if you get past this first step and want additional help on where to go next. There are ways for you to create a method to score a claim even before it's submitted and say, how likely is that claim to be approved? Then you can go look at all the claims that have, for example, a less than 70 percent chance of approval and intervene to increase the odds. So what we have reviewed today is how you can create a simple tool to help you.
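Here is a small sketch of that claim-scoring idea, continuing the hypothetical Python example from the earlier sketches: score each claim with the fitted equation and pull out the ones below the 70 percent threshold so someone can intervene before submission. The threshold comes from the example in the talk; everything else is assumed.

# Rough approval probability for each claim, straight from the fitted equation.
# In practice you would prep not-yet-submitted claims the same way and score those.
predicted = model.predict(X)

low_odds = predicted < 0.70   # less than a 70 percent chance of approval
needs_review = df.loc[low_odds, ["provider", "procedure_type", "lag_days", "has_preauth"]]
print(needs_review)           # the work queue to intervene on before submission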
Then the question is, how do you know it worked? The model is all lovely, but the best way to check it is to take the model and give it at least 24 months of data, stopping three months before today. Let's say you start from month 24 before today and give the model data up until three months before today. Then go look at the last three months, pick a random claim, apply the equation, the 88 percent baseline plus the pre-auth term, minus the lag time term, and so on, and see if the answer is right (a sketch of this back-test follows at the end of the transcript). All models will have a margin of error, but this gives you additional insight into whether the model really works, and it's a very healthy exercise for you to do anyway.

Where can you go next? In this simple scenario, lag time and pre-auths are the only things to control. What you really want to do is say, assuming lag time is what I want to control, how can I submit my claims five days earlier? Make your business case to your provider leadership, make your business case to practice management and whoever else is involved, and take the next step towards improving the practice based on the learnings. But at least you are now off and running on your first exercise. The next level of AI has many additional things you can do: you can improve the model, you can make it self-learning, you can do machine learning. All those are fancy, wonderful things, but taking the first step towards understanding your drivers is what I suggest you do at the end of this presentation.

So my message today is that AI is not just an autonomous car driving itself; there are simple and powerful applications of AI that you can use today to improve your practice. I've included a couple of additional resources, and I'm sure this presentation itself is going to be made available to you. Thanks a lot for your time. Here's my email address; you are welcome to contact me. Have a nice day. Thank you.
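As a complement to the manual spot-check described above, here is a sketch of the same back-test in Python, continuing the hypothetical variables from the earlier sketches: refit on everything except the most recent three months, then see how often the model's call agrees with what actually happened. The three-month window comes from the talk; the 0.5 cutoff and everything else are assumptions for illustration.

import pandas as pd
import statsmodels.api as sm

# Hold out roughly the last three months of the hypothetical claims data.
cutoff = df["claim_submitted_date"].max() - pd.DateOffset(months=3)
train = df["claim_submitted_date"] <= cutoff

# Refit on the older claims only, then score the held-out recent ones.
backtest_model = sm.OLS(y[train], X[train]).fit()
predicted = backtest_model.predict(X[~train])

# Call anything at or above 0.5 "predicted approved" and compare with what actually happened.
agreement = ((predicted >= 0.5).astype(float) == y[~train]).mean()
print(f"Agreement on the held-out three months: {agreement:.0%}")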
Video Summary
In this video, Badri Narsimhan, a successful serial entrepreneur, discusses the application of artificial intelligence (AI) in daily practice operations for gastroenterologists. He explains that AI can change the way we act by pretending to be human, by using a combination of factors, or by modeling human thinking. He suggests that AI can be used to quantify and improve gut feelings in the field of gastroenterology, such as predicting claim rejections or appointment cancellations. He provides an example using Excel to demonstrate how to analyze data and identify the main drivers for claim approval or rejection. Lag time between procedure and claim submission was identified as the most important factor. He suggests taking actionable steps based on these findings to improve practice operations. He concludes by emphasizing the importance of taking the first step and exploring additional resources for further learning.
Asset Subtitle
Badri Narasimhan
Keywords
Badri Narsimhan
artificial intelligence
gastroenterologists
practice operations
AI application