Using AI to Mitigate Health Disparities in Gastroenterology
Video Transcription
Dr. May is an associate professor of medicine at the David Geffen School of Medicine at UCLA, and we are very excited to have her kick the session off speaking about artificial intelligence and endoscopy. Thank you very much for that introduction. It's wonderful to be here and to kick off the weekend, and I want to say thank you to our planners for inviting me to participate on what I think is a very important topic. The title of this talk is Using Artificial Intelligence to Mitigate Health Disparities in Gastroenterology. Here are my disclosures. I would like to provide a quick overview of what we'll be talking about in the next 20 minutes or so, a little bit less than that. I'm going to start with a quick primer on health equity. You had an excellent talk this morning by Rachel Issaka and others on this topic, but since we have new people in the audience today, I just want to make sure that we have that information. I'll then talk specifically about health equity considerations within the world of artificial intelligence, with examples of where we may be introducing bias through our AI modeling, and I'll end with some recommendations for how we can optimize health equity using artificial intelligence.

This is the primer that I like to use at the beginning of many of my talks to set the stage in explaining the difference between equality and equity. This is an analogy that comes from the Robert Wood Johnson Foundation. In the top panel, we have a visual demonstration of equality. In this analogy, all of these characters are using a bicycle, and they all need to get to the same place. This is an example of equality: the assumption that everyone benefits from the same supports. As you can see, everyone is given the same bicycle regardless of their height, their body habitus, or their physical ability to get to that end point. That's quite different from the concept of equity. When we have equity, everyone gets the supports that they need to attain the same end point or end goal. So here we have the same characters, but they're given supports that are appropriate for them to be successful, and in this analogy, that is winning or finishing the race.

When we have health equity, everyone has a fair and just opportunity to be as healthy as possible, or as healthy as they want to be. Health disparities occur when there are health differences that are closely linked with economic, social, or environmental disadvantage. There are many types of health disparities. In the medical literature, we very often talk about race and ethnicity as reasons for health disparities, but it's important that we understand and acknowledge that disparities can happen on many levels: sexual orientation, sex and gender, income and socioeconomic status, physical ability, which is an area that needs a lot more study, and also urban versus rural setting of care.

We also have what I call a historical battle between equity and technology. What do I mean by that? Unfortunately, when we introduce technology into healthcare systems and into how we deliver healthcare, we often create or worsen disparities.
So when we think about this world of artificial intelligence, and we just got a wonderful overview of what that is within GI, when we look at models of machine learning and deep learning, there's a lot of excitement about how we are using those more and more in our science. There was a Nature Outlook not too long ago on transforming medicine with the use of artificial intelligence. For those of us who work in health equity, however, we always have a modicum of concern, because as healthcare improves for some people through these technologies, it might not improve for all. My goal here is to show you why, in some cases, we have to be very careful about bias that might be introduced with AI.

This graphic, I think, is a very helpful one. We're looking at the research arc from the development of a research question, to the analysis, to the dissemination of the research findings. When you look at that natural research arc, there is potential for bias at every single step. The first outlined here is problem selection: we have biases in the questions that we choose to study in science. Bias is also often introduced at the level of data collection; if we aren't collecting data in a way that represents all populations, we're going to have biased results. We can also introduce bias in AI and medicine through how we define our outcomes and how we develop the algorithms that we use in AI. Specifically, the large populations that we use by default to create or even test our models are often not diverse, and that can introduce bias. And finally there are the post-deployment considerations for artificial intelligence, that is, how we use these models once they have been introduced into a health system or health practice.

So I'll provide three examples, drawing from GI, of where we might see some of the bias that is introduced by technologies like artificial intelligence. The first comes from esophageal cancer. The majority of research in esophageal cancer is in esophageal adenocarcinoma, focusing specifically on technologies to improve the detection of Barrett's and other precursors to this disease. The reality, however, is that this technology growth is going to benefit mostly white populations, because these are the populations that have the highest incidence and mortality from esophageal adenocarcinoma. If we take a step back, though, and look at the worldwide impact of esophageal cancers, esophageal squamous cell carcinoma is far more prevalent and causes far more deaths than EAC. Yet we aren't directing as much attention or technology towards this subset of esophageal cancer, and you can argue that, disproportionately, black and Asian individuals aren't benefiting from the technology growth and expenditure that is going towards finding Barrett's and finding individuals with early or curable esophageal adenocarcinoma. This is an example of the first category of bias that I presented, which is selection bias: by the very research questions that we decide to study, we can create biases and disparities in health outcomes.

The next example I'll use is from inflammatory bowel disease. We very often now see research studies that introduce artificial intelligence and machine learning algorithms developed mostly in populations of white patients. These are used to predict progression of disease, severity of disease, response to treatment, and other outcomes as well.
We know that over time we are seeing an increasing incidence of IBD in non-white populations globally. We also know that when you look at management and outcomes for people who are hospitalized for IBD, those outcomes are much worse for black and Latino patients, especially when you compare them to white patients. So the problem we have here is a data collection problem, or a data collection bias: the algorithms that we are using often do not include the enriched, diverse IBD patient population that we need to create unbiased models to predict progression of disease, outcomes, or response to treatment. This is another example, a little further along the research arc, of how the way we collect data can bias our findings and introduce, or fail to mitigate, disparities.

The third example is from my area of research, which is colorectal cancer. We just heard some fantastic updates on the technology that we're using in polyp detection. We know that an endoscopist's adenoma detection rate is what we consider a marker of high quality in colonoscopy, and we know that it can save lives by minimizing the number of interval cancers. These computer-aided detection tools can actually increase ADR, we've seen examples of that in the research, and they're becoming more common in use; we had that question earlier about whether we should all go ahead and purchase these for our health systems. It's essential, though, for us to look at how well these technologies perform in different population subgroups. I thought it was really interesting that we touched on this in the very last group of lectures, with one of the questions that came up about sessile polyps. The example that I'll use here is that if the machine learning tools we're using are not very good at detecting sessile serrated polyps, and if African Americans are more likely to get sessile serrated polyps than tubular adenomas, then these models are not going to be as beneficial for African American patients as they are for white patients. This is just one example of how what you feed the model really determines how effective it is in real practice, and that's an example of bias that's introduced by algorithm development. It is also, however, an example of bias introduced by post-deployment considerations, because medically underserved and vulnerable populations often face barriers to reaching the healthcare systems and endoscopy units that even have these technologies. So while you might get really, really good at detecting polyps in certain populations who have access and who are affluent, we actually might see worsening of disparities, because we aren't using those technologies in the populations who unfortunately need them the most, because they are most likely to get and die from colorectal cancer.
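To make the sessile serrated polyp point above concrete, here is a rough worked illustration with entirely hypothetical numbers; the per-lesion detection rates, the lesion-type mixes, and the group labels below are illustrative assumptions, not data from the talk or from any specific tool.

```python
# Hypothetical per-lesion-type detection sensitivity of a CADe tool
detection_rate = {"tubular_adenoma": 0.95, "sessile_serrated": 0.70}

# Hypothetical lesion-type mix for two patient groups
lesion_mix = {
    "group_a": {"tubular_adenoma": 0.85, "sessile_serrated": 0.15},
    "group_b": {"tubular_adenoma": 0.65, "sessile_serrated": 0.35},
}

for group, mix in lesion_mix.items():
    # Weight each lesion type's detection rate by how often that lesion occurs in the group
    effective = sum(share * detection_rate[lesion] for lesion, share in mix.items())
    print(f"{group}: effective detection sensitivity = {effective:.2f}")

# group_a: 0.91, group_b: 0.86 -- the identical tool is less effective for the group
# whose more common lesion type it detects less reliably.
```

The point of the sketch is only that a single overall performance number can hide systematically lower benefit for a group with a different lesion-type distribution.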
So in the last bit of my talk, I don't want to leave you with only problems. I'd like to talk a little bit about what we think can be solutions in the world of artificial intelligence and gastroenterology and hepatology. I thought this was a nice way to lay it out, in that there are many different approaches to eliminating bias in AI, and there are six that are often highlighted in the literature. These are using the appropriate research expertise when you develop your research question and study design, making sure that you include diverse study populations and diverse study settings, introducing regulatory measures into AI modeling and research, and pre-deployment and post-deployment auditing of the models. I'll go into each of these so we can understand what they mean.

The first solution I mentioned was having the appropriate research expertise. By that I mean that when we start developing AI and machine learning models, we need to have the appropriate people at the table. We need health equity experts who are involved from the point of conception to the development and deployment of those models. Those are the individuals who have the training and background to spot where biases might be introduced into the system and the ability to eliminate those biases. I'm seeing that happen more and more. Before a couple of years ago, I didn't really get many emails about helping people develop their AI, but now it's a very common email that my colleagues who focus on health equity and I receive.

The second solution I mentioned was including diverse study populations. We all need to focus on diversifying the populations that we include when we create the base models for our algorithms for machine learning and artificial intelligence. Again, as was mentioned earlier, the model is what we feed it. If we give the model access to diverse patient populations, the model is going to be more applicable to that diverse group of individuals. We need representation of marginalized populations in our training data sets for us to create our models.

The third solution is including diverse study settings. Typically, AI is developed in quaternary care health centers, where there is minimal diversity. The idea here is: can we expand the research to sites or settings where there are traditionally underrepresented or vulnerable populations? That in itself will give us a richer, more diverse pool of patients for training data sets and for model development.

The fourth solution I mentioned was regulatory measures. We haven't seen examples of this yet, but it is akin to the regulation that we're starting to see now around children and social media. This is the idea of determining a fair, clear, specific, quantifiable set of regulatory measures to look at equity and outcomes for the models as we introduce them into practice. This would require researchers who develop AI to report the descriptive data and the performance of the model, and not just the performance of the model overall, but the performance of the model by sex, by race and ethnicity, by age, and by other factors.

Pre-deployment auditing is solution number five. This is the idea of mandating an auditing process and sensitivity analyses to assess how algorithms are performing across subpopulations before we deploy these models into the real world. If we can develop a set of rules by which we can evaluate models, then potentially these models will introduce less bias into healthcare.
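As a concrete illustration of the kind of subgroup check described for regulatory reporting and for pre- or post-deployment auditing, here is a minimal sketch in Python. It assumes you already have ground-truth labels, model predictions, and a demographic label for each case; the records, group names, and sensitivity threshold are purely hypothetical and not drawn from the talk or from any specific tool.

```python
from sklearn.metrics import precision_score, recall_score

# Hypothetical audit records: (true_label, predicted_label, subgroup)
records = [
    (1, 1, "group_a"), (1, 1, "group_a"), (0, 0, "group_a"), (1, 0, "group_a"),
    (1, 1, "group_b"), (1, 0, "group_b"), (0, 1, "group_b"), (1, 0, "group_b"),
]

def audit_by_subgroup(records, min_sensitivity=0.80):
    """Report sensitivity and precision for each subgroup and flag any group
    that falls below the chosen sensitivity threshold."""
    for group in sorted({g for _, _, g in records}):
        y_true = [t for t, _, g in records if g == group]
        y_pred = [p for _, p, g in records if g == group]
        sensitivity = recall_score(y_true, y_pred, zero_division=0)
        precision = precision_score(y_true, y_pred, zero_division=0)
        flag = "  <-- below threshold" if sensitivity < min_sensitivity else ""
        print(f"{group}: sensitivity={sensitivity:.2f}, precision={precision:.2f}{flag}")

audit_by_subgroup(records)
```

The same stratified report could be run on a held-out set before deployment or on prospectively collected cases afterwards; the only requirement is that performance is always broken out by subgroup rather than reported as a single overall number.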
Post-deployment auditing is a little bit different, but similar. This is an auditing process as well, but we're assessing the algorithm's performance after we've deployed the AI or the algorithm into real practice. In the literature there is a lot more support for pre-deployment auditing, but there is recognition that sometimes that's not possible, and that's why post-deployment auditing has been brought in as well. This is a way that you could fairly quickly identify where models are introducing bias into healthcare systems, and potentially stop that before models are used so widely that it's really hard to retract their use. This image actually comes out of cardiology, where a set of researchers created an AI algorithm using a very homogeneous population in Boston and then did a post-deployment audit. They started working with community health centers that had diverse populations of patients, and they looked at how generalizable and externally valid their model was in those populations.

Of those six solutions, I'm going to highlight two as what I call low, or lowish, hanging fruit. I think the first one is incredibly low hanging fruit. It is very easy, I would say, in most research settings now to find people with expertise in health equity and health disparities. We just need our colleagues who are working in artificial intelligence to prioritize involving these experts from the beginning of developing their model. We are seeing increased involvement, as I mentioned. Sometimes it's coming a little later, with experts being contacted after the model is developed and asked, is this equitable? Is this going to cause bias? I would argue that we want to get our experts involved from the very beginning, to bring more diverse perspectives as you develop or conceptualize the models. And then, for us to continue doing this moving forward, we're going to have to focus on increasing the diversity of our workforce, particularly the workforce that spends part of its time doing research to help address health disparities with artificial intelligence.

The other solution that I'm going to highlight as a priority is using diverse study populations and settings, and I'm going to call this lowish hanging fruit because I don't think it's as easy as the first recommendation I made, although I do think it is something we can do, because there are pathways to do it. This is the idea of, again, increasing the richness and diversity of the pool of patients involved in developing the training data sets and models. So how can you do this? How can you get more diverse study patients? If you are in a health system or research setting where you have access to those patients, it's about working on enrolling those patients and helping them feel safe about participating in research. In settings where those patients don't exist, it's about expanding your work to include other healthcare and research settings where you do have access to these patients. We've seen that done very effectively with the inclusion of federally qualified health centers in model development, and with faith-based organizations; there's a lot of work that we do in churches. There's even work being done in barbershops and hair salons to specifically reach black men and women who are at high risk for certain disease states.
And also partnering, I should say, with community organizations to include the patients who are part of those organizations in the research.

So again, here are the six possible approaches that are highlighted in the research; I'm sure that there are more, and we're constantly looking for ways to eliminate bias that is introduced by AI. There's a little bit of backtracking that we need to do with the AI that already exists, and in addition, as we build new AI, we need to make sure that from the start we have an equitable approach.

I'm going to end with just a few key takeaway points. The first point I want to make is that AI does offer important advances in science and medicine. I gave a similar talk a few years ago, and I think one of the questions from the audience made it sound like I hated AI. I don't hate AI. I think AI is here to stay, and I think it phenomenally advances our science and our ability to take care of patients and move towards personalized medicine. But I think we have to be very careful in how we develop AI, and I think there are a lot of unintended consequences of AI that we have not yet recognized. Second, we are seeing an explosion in the use of AI in healthcare and research, for GI most immediately in the endoscopic tools that we're using. But I do think, as we've highlighted here today, that there are many mechanisms by which we can backtrack and get rid of some of the bias that's already in the system, and also, moving forward, make sure that we prevent bias. Lastly, I'll say that, with intention, we can use AI to reveal existing biases, to motivate change in how we do research, and to correct disparities in healthcare overall. I think that's the new frontier: not just being protective with AI and making sure that we aren't introducing bias, but taking advantage of AI to actually mitigate disparities. That's hopefully where the science is going.

I'm going to end; just give me a couple more seconds. I want to highlight my partners in this work. I come from a background of health equity research, which I've been doing for some time now, but I was invited into the space of AI by Dr. Berzin, who's sitting right here, happy to see you, with an email one day that said, do you want to think about this topic? I was very grateful to get that email. We developed an author team, brought in some young scientists who were interested in this topic, and also a partner at the Harvard Business School, and we were able to summarize a lot of these ideas in a paper that was published just about a year ago. This is the reference for that paper, if you are interested in learning more about AI in GI and hepatology. Thank you very much for your time and attention.
Video Summary
Dr. May, an associate professor of medicine at the David Geffen School of Medicine at UCLA, gave a talk on using artificial intelligence (AI) to mitigate health disparities in gastroenterology. She discussed the importance of health equity and the potential biases introduced by AI technologies in healthcare. She provided three examples from GI where bias could occur: esophageal cancer research focusing on white populations, inflammatory bowel disease research excluding diverse populations, and colorectal cancer detection tools not accounting for demographic differences. Dr. May highlighted six solutions to eliminate bias in AI, including involving health equity experts, using diverse study populations and settings, and implementing regulatory measures and auditing processes. She emphasized the need to prioritize involving health equity experts and to diversify study populations to ensure that AI improves healthcare outcomes for all.
Asset Subtitle
Folasade P. May, MD, PhD
Keywords
artificial intelligence
health disparities
gastroenterology
health equity
bias
diverse study populations