Artificial Intelligence Ethics and Bias
Video Transcription
Okay, Sravanthi, thanks for that. And again, a nice segue into Mike's talk on biases. We've all been referring to that over the course of the morning, whether it's selection bias or operator bias, as well as some of the ethics related to that. So again, Mike, if you're ready to share your screen and start your presentation, we're ready for you. Welcome.

Okay, good morning, good afternoon, good evening, wherever you are in the globe. I've been asked to speak on ethics and bias in AI, and it's critically important for us to look at this. These are my disclosures.

AI programs, as we know, can now diagnose skin cancer more accurately than a board-certified dermatologist. And even better than that, the program can do it far faster and far more efficiently, and it requires just a training data set rather than ten years of expensive medical school. So you might think it's only a matter of time before doctors are rendered obsolete, so to speak, by AI technology. But if we take a closer look, we can see there are lots of limitations and ethical complexities that we absolutely need to consider. So, thinking about these potential ethical pitfalls, we need to identify possible solutions and come forward with new policy recommendations that we can adopt on our AI journey, so that patients receive the best care possible.

There are many applications of AI in the medical space, as we've seen in the three talks already. It can be applied to almost any field of medicine: clinical decision support, diagnostic support or diagnostic decisions, patient monitoring, preventative medicine, personalized medicine, and the like. So there is a huge opportunity here, but we need to do it safely and properly.

AI is creating a novel set of ethical challenges, particularly in healthcare, and we need to mitigate against these. AI has a great capacity to threaten patient safety, privacy, and patient preferences. So we do need guidelines around privacy, data management, quality of care, and patient integrity and safety. AI needs to be used responsibly, and accountability is very important. And of course, we need to look at the potential inequities and biases in relation to patient care and how we can address them.

This is a view of some overarching themes for AI in general, not just applied to medicine. AI really should be governed by ethical guidelines and responsible practices. We need to think about using automation responsibly. What is the implication for the job market? Of course, this applies to medicine; there is a real concern that maybe we are going to become increasingly redundant in patient care. We need to respect privacy. We need to try to avoid biases and discrimination, and I'm going to go through that in some of my slides. We should look at reducing the risk of error from algorithms; they are not perfect by any means. There are safety concerns and security risks. We need to consider accountability in the algorithms, and the transparency that Sravanthi mentioned in her last talk. And of course, we need to remain involved: there needs to be a human in the loop and physician input in patient care.

One major theme to be addressed in this talk is how to balance the benefits and the risks of AI technology. You can see here that there are many opportunities for AI to plug into the system, but also many areas of concern.
So there is a benefit to swiftly integrating AI into our healthcare system, but we recognize that there are some concerns, and we need to minimize ethical risks and consider how AI is to be integrated into clinical practice. I'm going to take you through some more specific ethical issues now.

This is a much-quoted example. There's an autonomous car at the bottom carrying a family of four; it slips on the ice and is about to career into an oncoming car driven by one individual without autonomous input. And the AI decides that instead of crashing into that car and causing deliberate harm, it careers off the side of the mountain, killing everybody in the four-person autonomous car. So how does AI make decisions? What is the answer here? It shows that AI is clearly never neutral. Technology is never neutral.

The use of AI in healthcare is raising many questions already, and we need to have it rooted in principles of fairness, reliability, privacy, transparency, accountability, and the like. We need to be able to trust these new machines, algorithms, and formulas that we are already putting into our daily patient care. We've seen in Tyler's and Prateek's talks that these tools are here; they're in our practice now. I like this slide in that it gives a nice mnemonic for trustworthy AI: it should be accountable, there should be responsible use, and it needs to be transparent.

Another major theme here revolves around the use of AI in medical education. We need to prepare future physicians for integrating AI into day-to-day practice and into the education of medical students. Some people suggest that we should be reframing medical education from a focus on knowledge recall to a focus on training students to interact with and manage AI machines. Obviously, this would require diligent attention to the ethical and clinical complexities that we have in this space. Who should decide what AI does? Which values should be considered? How do we deal with these dilemmas, and how should values be prioritized? These are all very important questions, I think, in medical education.

I won't spend too much time on this, given that Sravanthi gave a very nice overview of the concerns around black-box algorithms, but we do need to consider the legal and health policy conflicts that arise with the use of AI in healthcare. There are issues with medical malpractice and product liability that arise with these so-called black-box algorithms, because if a user cannot provide a logical explanation of how an algorithm arrived at a given output, that can be a concern. So we do need to move towards transparency and explainability. And this, thankfully, is being addressed in medical AI with the so-called glass-box approach rather than the black-box approach. Hopefully we move along this continuum and stay very much towards the left-hand side of this diagram rather than the right; the short sketch below gives a sense of what a glass-box model looks like in practice.
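As a minimal sketch of the glass-box idea, consider an interpretable model whose risk score decomposes into named feature contributions that a clinician can inspect. Everything here, including the feature names and the data, is a synthetic placeholder for illustration rather than anything from the talk:

```python
# A minimal "glass box" sketch: an interpretable model whose output can be
# traced back to named clinical features. Feature names and data are
# synthetic placeholders, purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["age", "bmi", "systolic_bp", "hba1c"]  # hypothetical inputs

X = rng.normal(size=(500, len(features)))
# Synthetic outcome driven mostly by the last two features.
y = (0.2 * X[:, 2] + 1.5 * X[:, 3] + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# The "explanation": each prediction decomposes into per-feature weights,
# so a reviewer can see *why* a given risk score is high or low.
for name, coef in zip(features, model.coef_[0]):
    print(f"{name:12s} weight = {coef:+.2f}")
```

A deep black-box model might score higher on a benchmark, but it offers no comparably direct account of its output, which is exactly the malpractice and liability concern raised above.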
There's lots of opportunity to use AI for good, in society in general but of course in healthcare in particular. But there is undoubtedly concern about the negative impacts of AI, and even, one might argue, malicious use of AI. Hopefully that doesn't really happen in our space of healthcare, but there's definitely that potential, so we need to be very aware of it.

What about bias? Bias can come from many different sources, from the training data to the types of algorithm that we use. There are contextual biases, which occur when well-intentioned experts are vulnerable to making erroneous decisions because of extraneous influences: unconscious biases, such as social stereotypes about certain groups of people, that individuals form outside of their own conscious awareness. Another definition of bias is a disproportionate weight in favor of or against one thing, person, or group compared with another, usually in a way that's considered to be unfair. And this is a very important thing to remember: if the data is skewed, even by accident, the computer will amplify the injustice. Any technology in business or in medicine that uses automation will hopefully improve and magnify efficiency. But of course, if there is inefficiency or incorrectness in the setup, then automation will magnify that too. So this is very important for us to recognize: bias can be amplified.

Here's a very obvious example. This is from two or three years ago, with some facial recognition software that was getting better and better, and this article highlights that facial recognition is very good if you're a white male. Here's another example, from really quite recently: Google Translate. If you translate from English into Hungarian, for example, and you type in "he is a nurse, she is a doctor," the Hungarian translation, retranslated back into English, mixes the sexes around: it assumes that the nurse is female and the doctor is male. So we do need very inclusive use of AI, thinking about everybody, all the patients we are treating and caring for.

This is another really quite alarming study that came out a couple of years ago, where a widely used healthcare algorithm was affecting millions, with a significant bias against the African-American population in the States. The study was published in Science, and you will see here a graphical representation showing that the US healthcare system uses commercial algorithms to guide health decisions. The authors found racial bias in this algorithm, such that among patients assigned the same level of risk by the algorithm, Black patients were sicker than white patients. The authors estimated that this racial bias reduced the number of African-American patients identified for extra care by more than 50%. The bias occurred because the algorithm was developed using health costs as a proxy for health needs. Less money was spent on African-American patients who had the same level of need, and hence the algorithm falsely concluded that those patients were healthier than equally sick white patients. It took a reformulation of the algorithm, so that it no longer used costs as a proxy for needs, to eliminate this racial bias in predicting who needs extra care. The short sketch below illustrates this proxy problem.
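Here is a hedged, fully synthetic illustration of the proxy mechanism just described: two groups with identical health needs, where one group historically incurs lower costs for the same need. Ranking patients by cost, the proxy, then under-selects that group for extra care. The group labels, distributions, and the 40% spending gap are invented for the sketch, not taken from the study:

```python
# Synthetic demonstration of proxy-label bias: cost stands in for need,
# but one group spends less per unit of need, so a cost-based ranking
# systematically under-selects it. All numbers are invented.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
group = rng.choice(["A", "B"], size=n)          # demographic label
need = rng.gamma(shape=2.0, scale=1.0, size=n)  # true health need, same for both groups

# Costs track need, but group B generates ~40% less spending per unit need.
cost = need * np.where(group == "A", 1.0, 0.6) + rng.normal(0, 0.1, n)

# "Algorithm": flag the top 10% by cost for an extra-care program.
threshold = np.quantile(cost, 0.90)
flagged = cost >= threshold

for g in ["A", "B"]:
    sel = flagged & (group == g)
    print(f"group {g}: flagged {sel.sum():4d}, "
          f"mean true need of flagged = {need[sel].mean():.2f}")
# Despite identical need distributions, group B is flagged far less often,
# and only its very sickest members clear the cost threshold.
```

Note that nothing in the code looks at the group label when scoring; the bias enters entirely through the choice of a skewed proxy as the training target, which is why relabeling to a direct measure of need fixes it.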
So there are biases in data, and you need to consider this strongly; it is very important in medical algorithm development. You might think you have an appropriate amount of data for every group you can think of, but some of those groups are often represented less well than others. There are biases in labels; Tyler went through this in some of his labeling discussion, but of course your algorithm's performance will rely on the integrity and representativeness of your labels, and there are undoubtedly biases here. And there are biases in interpretation. Confirmation bias is, I think, a 2021 phenomenon now across the globe, where people search for information that confirms their pre-existing beliefs rather than doing a proper analysis of the information available. There is also overgeneralization: coming to a conclusion based on information that is too general and not specific enough. This dog says, "Because all cats have four legs and I have four legs, therefore I am a cat."

There is human bias all along the chain here, from training data collection through to model training and to the outputs that we see. And it forms a feedback loop, such that we get this bias network effect, or so-called bias laundering. Human data perpetuates human bias, and as machine learning learns from these data, the result, of course, is this biased network. So we need to be very aware of that.

So, just to sum up, we do have issues when it comes to data. You need to understand your data; there are skews and correlations, and a small worked check of this idea follows at the end of this transcript. We need to abandon single training and testing data sets, and thankfully I think we are doing that these days; we need to get them from multiple different sources if at all possible. AI can unintentionally lead to unjust outcomes, so we need to move from majority representation to diverse representation for medical AI.

There's no doubt that AI will have widespread ramifications that revolutionize the practice of medicine, transforming the patient experience and our daily routines. But hopefully you've seen that there's a lot of work to do in order to lay down the proper ethical foundation for using AI safely and effectively in healthcare. Ultimately, patients will still be treated by us physicians, no matter how much AI changes the delivery of that care. And there will always be a need, I hope, for the human element in the practice of medicine. These are some key references that relate to AI in medicine, and some of them relate specifically to our field of gastroenterology, so I turn your attention to those. And with that, I thank you for your attention.
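To make the "understand your data" point concrete, here is the minimal, hypothetical check referenced above: count each subgroup and score the model per subgroup before trusting an aggregate metric. The group mix, accuracies, and the "model" are all synthetic stand-ins:

```python
# Stratified evaluation sketch: a healthy overall accuracy can hide much
# weaker performance on under-represented groups. Data are synthetic.
import numpy as np

rng = np.random.default_rng(7)
groups = rng.choice(["A", "B", "C"], size=2_000, p=[0.7, 0.2, 0.1])  # skewed mix
y_true = rng.integers(0, 2, size=2_000)
# Pretend model: accurate on the majority group, much weaker on the rest.
correct = rng.random(2_000) < np.where(groups == "A", 0.92, 0.70)
y_pred = np.where(correct, y_true, 1 - y_true)

print(f"overall accuracy: {(y_pred == y_true).mean():.2f}")
for g in ["A", "B", "C"]:
    mask = groups == g
    print(f"group {g}: n = {mask.sum():4d}, "
          f"accuracy = {(y_pred[mask] == y_true[mask]).mean():.2f}")
# The overall number looks respectable while groups B and C perform far
# worse -- exactly the skew that diverse, multi-source data sets address.
```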
Video Summary
In this video, Mike discusses the ethics and biases associated with AI in healthcare. He highlights the capabilities of AI in areas such as diagnosing skin cancer, but also emphasizes the need to consider limitations and ethical complexities. Mike discusses the overarching themes of responsible AI use, transparency, privacy, and accountability. He addresses the potential risks and negative impacts of AI, including biases and discrimination. Mike also explores the role of AI in medical education and the importance of training students to interact with and manage AI machines. He concludes by emphasizing the need for a proper ethical foundation for the use of AI in healthcare.
Asset Subtitle
Michael F. Byrne, MD
Keywords
AI in healthcare
ethics
biases
diagnosing skin cancer
limitations