AI in Big Data and Population Health
Video Transcription
It's my pleasure now to introduce our next speaker, who is going to give his presentation live. Dr. Tanuj Gupta is a medical doctor with an MBA. He is Vice President of Cerner Intelligence at Cerner, and a healthcare executive with more than 15 years of experience across a variety of fields: applied artificial intelligence and machine learning, population health, healthcare IT, and the pharmaceutical industry. Dr. Gupta, we're looking forward to your talk on AI in big data and population health.

Thank you all for having me here. I'm excited to share a bit of what we're doing. I confess I don't have a GI slant to this talk; it's adapted from something I give to general audiences, but I'm hoping you will see how it applies to the GI space as well. I want to talk to you about three technologies: machine learning, natural language processing, and voice recognition. With machine learning, I want to give you a sense of how it could practically be used in the practice of medicine. It's great that I'm following Mark, because I have some thoughts on the medicolegal aspects. Somebody in the second session asked how AI can help with documentation improvement; I will show you a specific natural language processing solution for documentation improvement that is in production and being used by our clients today. And third, somebody in the first session mentioned not becoming data entry clerks and asked how we reduce that data entry burden; I'm going to show you a voice solution, in alpha testing today, that is tackling that problem. I apologize if you hear any background noise; that may be my kids.

So, machine learning. Imagine this: a patient goes to their doctor in the future, and the doctor says, "I'm going to order a CBC, a BMP, and three algorithms, and I'm going to use those five data points to diagnose you." That is effectively saying that these new machine-learned diagnostic algorithms are the new lab tests. If that's the case, how do I get comfortable with these new, non-invasive, data-based tests? There has to be some sort of calibration frequency. Consider the Theranos example: that company claimed to have a lab machine that could run 500 tests off a pinprick of blood, and it was inaccurate. If I'm treating a cancer patient and a simple white blood cell count comes back normal when it is actually low, and on that basis I give a second round of chemotherapy and drop the count to zero, I have introduced patient safety risk, morbidity, and mortality. So lab tests have to be calibrated. With machine-learned algorithms, the underlying data changes far more frequently than our body chemistry does, so these diagnostic algorithms have to be watched and recalibrated more often. To me, wherever patient safety risk is attached, there will need to be some regulation of how frequently you calibrate these algorithms, whether or not the algorithms themselves are regulated. But they will become new diagnostic tools for us.
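To make that calibration point concrete, here is a minimal sketch of what periodic monitoring of a deployed diagnostic model could look like. The binning scheme, tolerance, and data are illustrative assumptions, not a regulatory standard or any vendor's actual process.

```python
# Minimal sketch: periodically compare a deployed model's predicted risks
# against observed outcomes and flag drift. All numbers are invented.
import numpy as np

def calibration_gap(predicted_probs, outcomes, n_bins=5):
    """Mean absolute gap between predicted risk and observed event rate, per bin."""
    probs = np.asarray(predicted_probs, dtype=float)
    obs = np.asarray(outcomes, dtype=float)
    bins = np.minimum((probs * n_bins).astype(int), n_bins - 1)
    gaps = [abs(probs[bins == b].mean() - obs[bins == b].mean())
            for b in range(n_bins) if (bins == b).any()]
    return float(np.mean(gaps))

# Hypothetical predictions from the last review window vs. what happened.
recent_preds = [0.10, 0.15, 0.80, 0.70, 0.20, 0.90, 0.05, 0.60]
recent_truth = [0, 0, 1, 0, 1, 1, 0, 1]

gap = calibration_gap(recent_preds, recent_truth)
TOLERANCE = 0.15  # illustrative threshold, not a clinical standard
if gap > TOLERANCE:
    print(f"Calibration drift detected (gap {gap:.2f}): review and retrain.")
else:
    print(f"Model within tolerance (gap {gap:.2f}).")
```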
On the treatment side, these algorithms are like the new drugs or devices. We use chemistry, in the form of pills, to change patient outcomes. We use hardware, in the form of implantable devices, to change outcomes. Soon we are going to use software, injected into the workflow, to change patient outcomes. The practical example I think of here: I'm a diabetic patient, and the doctor prescribes me a treatment pathway plus an algorithm to watch my data as it comes into the chart. I'm on monotherapy with metformin, and based on the data, the algorithm recommends dual therapy, adding a sulfonylurea. I, as the clinician, concur and prescribe it. That's a realistic scenario, and again there is patient safety risk. What if the algorithm is wrong? Who is accountable? My shorthand is: who carries the malpractice risk for a diagnostic or treatment decision? If the algorithm provides enough basis to understand why it is making its recommendation, the risk is on the licensed professional. If the algorithm is a black box, the risk is on the manufacturer of the algorithm, just as it is today with drugs and devices. Another simple example is heart attacks and troponins. I, as a clinician, am responsible if I get one abnormal troponin level and ignore it, because we know the standard is to get three serial troponin levels and then assess. But if a black-box algorithm says this person has an MI with high likelihood and does not tell me why, the liability belongs to that device. We need to be able to trust devices like that as clinicians, and that is why we need FDA certification for them. Class I, Class III, I think those are the terms Mark used; both are possible in the future, but I hope you can see there is a real future for machine learning in diagnosis and treatment.

How do you know when to choose a machine-learned algorithm versus another type of algorithm, by which I mean a rules-based algorithm? We do prediction today with decision support rules. What does that look like under the hood for a data scientist? For a rules-based algorithm, take a well peer-reviewed sepsis algorithm as an example: under the hood you see either a flow chart of if-then-else statements based on vitals, demographics, and lab values, or a numerator and a denominator, with inclusion and exclusion criteria for each, that spit out a score. Based on that score, we map patients to low risk, rising risk, or high risk. We know how to do this in healthcare today, and it is often just fine; if the test is sensitive enough, we don't need to replace it with machine learning. All machine learning does is give us a new tool for prediction. With the rules-based approach, we spend years developing national guidelines we accept and then turn them into an algorithm. With machine learning, take the example of detecting the risk of readmission within 30 days of a discharge: forget the national guidelines. We say, based on the big data set you have, show me all readmission events and all variables that correlate, and run some sort of regression. I know I'm oversimplifying machine learning, but this is how I understand it. Assume linear regression is what I choose: I see the variables, I see the coefficients, and I fit a best-fit line, as sketched below.
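As a rough sketch of that readmission example: the speaker mentions linear regression as a simplification; for a yes/no outcome like 30-day readmission, logistic regression is the more typical choice, so that is what this toy version uses. The features, data, and risk bands are invented for illustration.

```python
# Toy 30-day readmission model in the spirit of the example above.
# Features and data are fabricated; a real model trains on a large
# historical dataset and is revalidated as the data drifts.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per discharge: age, prior admits (12 mo),
# number of medications, length of stay (days).
X = np.array([
    [72, 3, 12, 8],
    [55, 0, 4, 2],
    [81, 5, 15, 10],
    [40, 1, 6, 3],
    [67, 2, 9, 5],
    [35, 0, 3, 1],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = readmitted within 30 days

model = LogisticRegression().fit(X, y)

# The learned coefficients are the analogue of the "best-fit line."
print(dict(zip(["age", "prior_admits", "num_meds", "los"], model.coef_[0])))

# Score a new discharge and map the probability to a risk band,
# mirroring the low / rising / high buckets used for rules-based scores.
p = model.predict_proba([[70, 2, 10, 6]])[0, 1]
band = "high" if p > 0.7 else "rising" if p > 0.3 else "low"
print(f"30-day readmission risk: {p:.2f} ({band})")
```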
The "learning" in machine learning is that as the data changes, that best-fit line changes, and I can extrapolate different predictions. So it's another tool; it doesn't necessarily replace what we already have.

How do I know when to use a machine-learned algorithm for diagnosis versus a non-machine-learned method, like a blood-based test or an assessment? To me, it comes back to the fundamentals. Here's an example from the Department of Defense, which asked us to help predict the risk of suicidality in veterans. The gold standard is the Columbia Suicide Severity Rating Scale, which has a known sensitivity and specificity. It doesn't make sense to create a machine-learned algorithm that is less sensitive if that test is perfectly fine. What could make sense is creating a more sensitive test, or one at the same sensitivity with better data: if the Columbia scale requires the patient to answer 10 or 15 questions, there is a data veracity problem, because they may answer incorrectly. If machine learning picks the signal up out of the chart instead, it reduces that problem, and you can make the trade-off: a little less sensitivity for better data veracity, or vice versa. So it comes down to fundamentals. If we think of these algorithms as lab tests and as devices, the same fundamentals we use for any other diagnostic or treatment tool apply to machine learning. Hopefully that gives you a practical sense of how this can work in the future.

Now let's talk about natural language processing, and look at an example of software in operation today that eases the burden of clinical documentation. There are three use cases in this software. It uses natural language processing and some rules-based programming to scan the EHR, both the structured and the unstructured data. About 60% of data in healthcare is unstructured, and just as the EHR attempts to structure data so we can act on it, NLP, to me, is as important as the electronic health record, because it structures more of that data for us to act on.

Use case one is a potential missing diagnosis. In this case, the patient had a sodium level that fell below normal. The natural language processing system detected signs and symptoms in the documentation on the chart, serial sodium levels were being checked, and a saline infusion was given as if it were a treatment, but there was no diagnosis of hyponatremia on the chart. So the software flags it as a potential diagnosis. It reduces the cognitive burden on the care team: instead of combing through the chart, the clinician can click hyperlinks to see exactly what is there, with all of the information presented for an independent verification. And it does this concurrently, while the patient is still admitted, not post-discharge, when you would often get queries from documentation improvement specialists; it is effectively automating part of that role. There are smarts behind it, too: if the patient were already diagnosed with SIADH, it would not flag hyponatremia, because it knows that is another explanation for it.
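Here is a toy illustration of that first use case. A production system like the one described would run a full clinical NLP pipeline over the chart; this sketch just pairs one structured lab value with keyword evidence pulled from note text, and the threshold and terms are illustrative assumptions, not the vendor's actual rules.

```python
# Toy version of the "potential missing diagnosis" check described above.
import re

NOTE = """Pt reports nausea and headache. Repeat sodium this AM.
Continue normal saline infusion. Urine studies pending."""

SODIUM_LOW = 135  # mmol/L, conventional lower limit of normal
EVIDENCE_TERMS = ["nausea", "headache", "saline infusion", "repeat sodium"]

def flag_missing_hyponatremia(sodium_mmol_l, note_text, problem_list):
    """Flag a possible undocumented hyponatremia diagnosis."""
    if sodium_mmol_l >= SODIUM_LOW:
        return None  # lab is normal; nothing to flag
    if "hyponatremia" in problem_list or "SIADH" in problem_list:
        return None  # already documented, or explained by another diagnosis
    evidence = [t for t in EVIDENCE_TERMS
                if re.search(t, note_text, re.IGNORECASE)]
    if evidence:
        return {"suggested_dx": "hyponatremia",
                "sodium": sodium_mmol_l,
                "supporting_evidence": evidence}
    return None

print(flag_missing_hyponatremia(128, NOTE, problem_list=["hypertension"]))
```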
The second use case is a diagnosis that is not specific enough. The clinician documented heart failure as the diagnosis, but the software detected an ejection fraction below 40%, indicating systolic dysfunction, and a proBNP value greater than 3,000, suggesting an acute process. So it suggests acute systolic heart failure rather than just heart failure. The final use case is almost the reverse: a diagnosis on the chart that is unsupported by the evidence in the chart. That is a signal to the clinician to either support the diagnosis or remove it as a potential audit risk. All three together reduce the risk of a denied claim when these are submitted. So that's a practical example of documentation improvement with AI.

The last example is speech recognition in healthcare. Ultimately we would like to eliminate data entry for the care team; at least in the interim, can we chip away at that problem? The idea, which you may have seen floating around, is this: when the patient consents to being recorded, a virtual scribe in the room listens to the conversation and, using NLP, extracts the relevant medical concepts from the voice. I'm going to show you a video of this in action. On the left-hand side, watch what is missing from the interaction between the doctor and the patient; on the right-hand side, you will see in real time what the NLP is picking up. It's not perfect; this is an alpha product we're playing with today. But hopefully you'll see the utility and the possibilities. Let's watch.

Patient: Hey, Dr. Nadeau.
Doctor: What brings you in today?
Patient: Well, you know, I've been feeling a little down lately. My asthma has been acting up and my albuterol is not really working. I don't know, I just feel kind of depressed.
Doctor: Well, let's see what we can do for you. I saw you were already taking Wellbutrin 150 milligrams daily. What happened with that medication?
Patient: I was getting really dizzy and really bad dry mouth, so I stopped taking it.
Doctor: How is your sleep?
Patient: Not good. I used to get eight hours and now I'm lucky to get two.
Doctor: Why don't we make some changes to your medications? Do you have any allergies to medicines I should know about?
Patient: I'm allergic to shellfish, but I'm pretty careful about it.
Doctor: Okay, let's switch your medications. I want you to stop your Wellbutrin, start Prozac 20 milligrams in the morning, and at bedtime for sleep take Seroquel 25 milligrams. Remember that it is a sedative, so be careful with it. I'll see you back in about a month and we'll go over your progress. If things aren't better, we can always incorporate therapy at that time. How does that sound?
Patient: That sounds great. Thanks, Dr. Nadeau.

So hopefully you saw what was missing: the keyboard and the mouse. The doctor and patient were completely paying attention to each other. In a 15-to-20-minute visit, I think the data show that eight or nine minutes are spent on the computer. Instead, that visit can be focused on the patient, and when you get back to the chart there are charting recommendations, and potentially, one day, the full dictation or the full note created. I will stop there. Hopefully this stimulated some good thinking.
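To close with something concrete, here is a minimal sketch of the concept-extraction step of such a virtual scribe, assuming the audio has already been transcribed to text. Real products use trained clinical language models; this toy version just pattern-matches a few concept types from the demo conversation above, and the patterns and categories are illustrative.

```python
# Toy concept extraction over an (already transcribed) visit conversation.
import re

TRANSCRIPT = (
    "I saw you were already taking Wellbutrin 150 milligrams daily. "
    "I'm allergic to shellfish. "
    "Stop your Wellbutrin, start Prozac 20 milligrams in the morning, "
    "and at bedtime take Seroquel 25 milligrams."
)

MED_PATTERN = re.compile(
    r"(Wellbutrin|Prozac|Seroquel)\s+(\d+)\s+milligrams", re.IGNORECASE)
ALLERGY_PATTERN = re.compile(r"allergic to (\w+)", re.IGNORECASE)

concepts = {
    "medications": [{"drug": m.group(1), "dose_mg": int(m.group(2))}
                    for m in MED_PATTERN.finditer(TRANSCRIPT)],
    "allergies": ALLERGY_PATTERN.findall(TRANSCRIPT),
}
print(concepts)
```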
Video Summary
In this video, Dr. Tanuj Gupta, a medical doctor with an MBA, discusses practical applications of artificial intelligence (AI) and big data in healthcare. He explains how machine learning can be used for diagnosis and treatment, and emphasizes the need for calibration and regulation to ensure patient safety. He also demonstrates the use of natural language processing (NLP) to improve clinical documentation, highlighting the software's ability to identify potential missing diagnoses, suggest more specific diagnoses, and flag unsupported diagnoses. Finally, he presents a voice recognition solution that aims to reduce the data entry burden on clinicians during patient visits. Overall, Dr. Gupta highlights the potential of AI and technology to enhance healthcare practices.
Asset Subtitle
Tanuj K. Gupta, MD, MBA
Keywords
artificial intelligence
healthcare
machine learning
natural language processing
patient safety