Healthcare AI Ethical Considerations for Personalized Care
Video Transcription
Session number four is entitled Ethics and Regulatory Affairs in AI, and the moderators are Sarvanti Parasa and Cesare. So Sarvanti, you want to take it away, please? All right. Thank you all. Can you hear me? Okay. Yes? Okay. So our first speaker will be talking about Healthcare AI Ethical Considerations for Personalized Care: Dr. Michael Abramoff. Dr. Michael Abramoff is a fellowship-trained retina specialist, a computer scientist, and an entrepreneur. He's also a professor of ophthalmology and visual sciences at the University of Iowa and has a joint appointment with the College of Engineering. As you can see, he is also a Fellow of the IEEE, which means he's an engineer. So I'll just get started. He has recorded his presentation, and he'll join us live for the panel discussion, so I'm just going to play his talk. Cesare, any pointers, anything, before I get started with his talk? Very curious, the same as Sarvanti, and thanks for this invitation and this outstanding summit. Hi there. My name is Michael Abramoff, and it's somewhat humbling for an ophthalmologist, a retina specialist like I am, to present at a GI conference. But I think the general principles that we have to work through apply across medicine, including for you. So like I said, very humbling. Glad to be here. Like I said, my name is Michael Abramoff. I'm a retina specialist at the University of Iowa. I am also founder and now executive chairman of Digital Diagnostics, a company that created the first FDA-approved autonomous AI. I chair the foundational principles of AI work group that was created by FDA. I'm also chair of the Healthcare AI Coalition, a lobbying group out of Washington, D.C., on healthcare AI policy. I do have some disclosures to make, including grants from NIH; my conflicts of interest include Digital Diagnostics and a couple of pharma startups. Before I start, I want to establish this concept of autonomous versus assistive AI, because it will be relevant as I go through my talk. Assistive AI is probably something you're very familiar with. It means that the medical decision is made by the clinician and the AI aids the clinician. The liability remains with the clinician. It also means that the patient is already with a doctor, already in the care workflow. The real-world value is really outcome improvement and efficiency improvement for patients and physicians. Autonomous AI is a little bit different, because here the medical decision is made completely by the computer. That also means that the liability is with the AI creator, no longer with the user, for example a physician using it. It means that the diagnosis or other care decision can be made at the point of care and can be immediate, wherever the patient is; maybe the patient is not yet with a doctor. The real-world value is therefore not only potential outcome improvement, but also better population health, including addressing health disparities. And there's a need for that, because there are giant problems in healthcare that can be solved by autonomous AI. I show a few of them. The most important one, I think, is health disparities, especially the lack of access to effective care for minorities and for rural groups. On the bottom, I'm trying to show this with just a simple map. On the bottom left is the availability of people like me, ophthalmologists and retina specialists, across the US; more blue is more ophthalmologists per county. In the middle, you see the eye care needs.
Everyone with diabetes essentially needs an eye exam. More people per county means more red. We can see that while more people with diabetes are in the Southeast, that is not where the ophthalmologists are. So even from a geographic perspective there's disparity, but the same is true for inner cities and rural areas, and in general for different races and ethnicities. In addition, on the right, the graph shows productivity, in green, of almost all sectors of the economy. That is not the case for outpatient health care, in red, where we are in fact seeing fewer patients per hour than maybe 20 years ago. That's a problem because it drives up costs and lowers efficiency. And finally, there's the health care demand and workforce gap, where fewer physicians are being trained, educated, and graduated than are needed for an ever-aging population. So these are major problems that can be solved by autonomous AI. And on top of that, very recently, the pandemic of course made this even more urgent. Health disparities have been gaining a lot of attention recently, and that's why a lot of my talk will refer to health disparities: you can have good quality health care, but if it's unevenly distributed and not available to everyone who needs it, that's a health inequity, and we're solving that. I mentioned that autonomous AI, or AI in general, can be a solution for these problems, so it's maybe useful to understand where we came from. In the 60s, there was already AI; it's called rule-based AI or expert systems. An example is MYCIN, and there were actually PhD theses at Stanford; people got their grad work done on simple rule-based systems, for example for prescription of antibiotics. You see an example on the right. That never went into clinical practice. In the 80s, and I was part of that, we tried to do machine learning: perceptrons, backpropagation was invented, the fifth generation project. But still, this did not lead to clinical adoption, in my view mostly because the inputs were very noisy. They were either typed in by doctors or they were digitized slides. It was just very noisy data, which didn't allow us to get great performance, great accuracy. I think that changed with the availability of low-cost digital sensors, CMOS. Deep learning certainly helped, but I think those algorithms already existed in the 80s, not as specialized, but definitely the availability of digital, objective data at low cost really made a difference and allowed us to proceed to where we are now, where these systems are being clinically used. Healthcare AI is not easy. There are a few reasons, compared to other sectors of the economy, why that is the case. High-quality data is scarce, especially for normals. That may surprise you, but while we have relatively a lot of data on patients who have a certain disease that maybe we want to diagnose, getting normals, with all the ethical constraints we have, for example radiation, is more challenging. There may not be enough data for normals, and to train AI, machine learning AI, you really need normals as well as disease data. Many diseases are also rare, making cases scarce. For example, melanoma in the eye occurs in only one in a million patients, so there will be only a few cases every year that we can use as training data, while typically we use tens of thousands of cases to get good working algorithms.
Then also, high-quality truth is scarce, either because it's chronic disease, and if we want to use clinical outcome, and we'll come back to that, that may take years to develop, or because we instead need highly qualified and expensive clinicians, experts, and that may just be a problem as well. Finally, there may be a challenging environment, because typically, if we want to improve health equity and improve access, the environment will not be ideal for getting high-quality data. Most AIs require high-quality inputs, for example images, and those may not always be available where lower-skilled operators and lower-cost hardware are used to create the input for the AI, as well as the training data. In the interest of time, I will not go into too much detail about diabetes and the diabetic eye exam, but diabetes is a major source of health disparities and a very common disease, as you're probably aware. It is also the most important source of blindness and visual loss, and that is unevenly distributed: minorities have worse outcomes for diabetes, more visual loss, more blindness, yet fewer diabetic eye exams, which we know are preventative for visual loss and blindness. So there is a great role for AI here: a major source of health disparities with almost this trifecta of higher risk of diabetes, worse visual outcomes, and even fewer diabetic eye exams than the general population. That led to this first-ever autonomous AI approved by FDA. It's an autonomous AI for the diabetic eye exam; it replaces what a retina specialist like me does. It's an autonomous diagnosis for diabetic retinopathy and diabetic macular edema, these causes of blindness. It's more accurate than me, it can do a point-of-care diagnosis in minutes, wherever the patient is, it just needs an outlet, and there's no human oversight. Again, it's autonomous, meaning there's no human oversight of the diagnosis, and it predicts the visual outcome of the patient if not treated. This is important, in my view: that you can relate the output of the AI to the outcome for the patient. It's meant for primary care and retail, not for ophthalmology clinics. It's not to aid a specialist like me in making a more efficient or better diagnosis; it is to improve population health and improve health disparities by being where the patient is, not where the doctor, the specialist, is. And it's, of course, integrated with the EHR for ordering, results, et cetera, totally in the workflow. So now I'm taking a step back, going back to the algorithms, and let's say you have an algorithm you think is worthwhile to bring into clinical practice. The major lesson I want you to remember is that healthcare AI shouldn't be created in a silo. As you saw, I'm from Iowa, so obviously I like a picture with a silo in it, and what I'm trying to show here is: hey, you have this deep learning algorithm, and you maybe want to bring it to patients. Well, what we don't want is rose-colored glasses, unicorns and rainbows; that is not going to work. We cannot stay in that silo, because there are so many concerns about healthcare AI that exist in the population that we need to address. And of course, when you say that a computer is making the diagnosis and there's no human oversight, like we did, these concerns really come to the fore.
I mean, AI is not new in healthcare, like I showed, and assistive AI has definitely been around and FDA-approved for more than a decade, but if you say that a computer is working without supervision, people really start to be concerned, and you can show that; there is literature about it. Will this AI benefit me as a patient? Will it exacerbate rather than improve health disparities? What happens to my data? Is there racial or ethnic bias? Who's liable for errors, and who pays for all of this? And this was brought to the fore by a recent paper in Science by my good friend Ziad Obermeyer about an AI that was deployed and being used clinically to assign care to different groups of patients, and it was harming Black patients because there was a built-in bias. We can talk about what it is, but it did happen. There are other AIs that have harmed patients, so not only are these concerns valid, they have actually been borne out in the literature. And so all stakeholders in healthcare are now paying great attention to all of this, and these include AI creators like me, patients and patient organizations, the U.S. Congress, for example the Government Accountability Office, which is paying a lot of attention now, HHS, regulators including federal agencies under HHS, physician and provider organizations like the American Medical Association and, in this case, the American Academy of Ophthalmology, and, I'm happy to be here at the ASGE meeting, payers, both HHS as well as private payers; they all care greatly about all of this. And so stakeholder acceptance and stakeholder support are crucial as we move AI, and especially autonomous AI, into the clinic. My fear is of a backlash, a techlash, and this fear is not overblown in my view. Historic example number one is really gene therapy, where decades ago we were pretty ready to go into clinical trials with gene therapy, but then there were some poorly overseen, unethical gene therapy trials; young people died in these clinical trials, and there was essentially an effective moratorium, where funding was gone, research institutions were closed, and the field died down for 20 years. Only in 2017 was there the first FDA approval of a gene therapy, in this case for retinal disease. So it set us back at least one decade, and probably two decades. Theranos is another example of unethical behavior in healthcare. More recent is Cerebral, where there were a lot of ethical problems. So again, if we don't do this carefully, if we don't do this the right way, there will, I expect, be a backlash, and we lose all these advantages that I hopefully brought across. I'm a living example. In 2010, when I was busy doing science and preparing for meetings with FDA about a computer making a diagnosis, there was a big editorial about me in the most widely read ophthalmology journal, Ophthalmology Times, by my good friend, now good friend, Peter McDonnell, chair of ophthalmology at Hopkins, and the title was The Retinator: Revenge of the Machines. It was about how there would be job loss, or the fear of job loss, and the fear for quality of care because of this AI that was starting to become visible. So there was this pushback, even from colleagues: is this going to cost me my job?
And so transparency, openness, explaining everything you do, publishing, are crucial on this path, and I was able to turn this around: the American Medical Association had this on their website, on the front page, in '19, this ophthalmologist doing healthcare AI the right way. I'm not showing this to show you that I'm doing everything right. The point is, we can turn around this concern that is out there. A, hopefully the point got across that this is a valid concern, and there are other valid concerns we'll get into; and B, that it can be addressed, should be addressed, and we can turn it around. That is the point I'm making here. But that starts ultimately with ethics, and I, with my collaborators, spent a lot of time and effort on ethical frameworks for AI, more than a decade, and that has led, as was shown, to both regulatory frameworks, where we work closely together with FDA, patient organizations, and others, as well as, on the right, a payment framework for AI, which has led, in my view, to CMS reimbursement, CPT codes, et cetera. But ultimately, in the center, as you can see, it's about improving health outcomes and improving health disparities, so as I go through all of this path, we should not lose sight of the fact that ultimately it's about the patient. There was a choice, a long time ago, when we needed to decide whether we were going to follow this long path, and I didn't even know it would be this long: should we disrupt healthcare and be like the Uber of healthcare, or should we work within the system and get stakeholder acceptance? That led to what you see on the right, a press release by FDA; they were very proud of this first-ever autonomous AI approval, so I think it was the right choice, but there was a choice.
I mentioned bioethics. There's an entire issue of the American Journal of Bioethics about this framework, with 16 papers commenting on our framework, which starts from very elementary, millennia-old principles in bioethics. I think that's crucial, because I mentioned some of the concerns that exist in society and that we even as providers, as physicians, have, and it's really hard to make sure that you consider all concerns if you don't have an ethical framework. Only through these millennia-old ethical frameworks, these ethical principles, can we make sure there are no holes that we only realize 10 years later: oh, we should have thought of that. I won't go into much detail, but essentially, ethical frameworks are built from very basic principles that you probably remember from medical school, like patient benefit and non-maleficence, justice, or what they call equity, autonomy of the patient, and responsibility. These have to be in balance, and the balance is one aspect; I'm not telling you where that balance should be for, let's say, an AI system, but even a physician or a healthcare system should be in balance on these principles. The ultimate goal of all these frameworks is that you are able to measure this: how much is a certain AI system in accordance with a specific ethical principle? And you can measure that in various ways. What I'm trying to show here is that there's a tension between the AI system and the different bioethical principles it needs to answer to, and it's impossible to meet each of these principles 100%. We will always need to find a balance between patient benefit, autonomy of the patient, and health equity, treating everyone equally; we cannot solve for all of them at once, and that's called a Pareto optimum. It's all in the paper, I won't go into details, but hopefully I'm establishing that if we can measure these things, it doesn't answer where the balance should be, but it allows us to at least explain where the balance is. So this ethical framework was really helpful, for example, in working with FDA. You see a paper from this foundational principles of algorithmic interpretation, or AI, group, together with leadership of FDA, as well as bioethicists, clinicians, patient organizations, even FTC is part of that, towards regulatory considerations: very carefully, how we look at AI from a regulatory safety, efficacy, and equity perspective. Another thing it led to was a reimbursement framework for AI payments, published a few weeks ago in Nature Digital Medicine; I'm tremendously proud of what this was about. I don't want to go into too much detail, but essentially it tries to assign a value to each AI. There are different ways you can do this: from a cost-effectiveness perspective; from substitution, meaning what it would cost if a physician did it; by maximizing access, which is the equity aspect that we used, and you see the equations there; or from a marginal cost, essentially what it costs to do one more AI exam, in this case a diagnosis. So there are different ways of coming to the value, and we chose the access-maximizing, or equity, value.
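To make the Pareto optimum mentioned above a little more concrete, here is a minimal sketch, not the published framework, of scoring hypothetical AI configurations against bioethical principles and keeping only those that are not dominated on every principle. The configuration names and scores are invented for illustration.

```python
# Minimal sketch (not the published framework): score hypothetical AI system
# configurations against bioethical principles and keep the Pareto-optimal ones.
# All configuration names and scores are invented for illustration only.

from typing import Dict, List

# Hypothetical scores in [0, 1] for how well each configuration satisfies each principle.
candidates: Dict[str, Dict[str, float]] = {
    "config_A": {"patient_benefit": 0.90, "autonomy": 0.60, "equity": 0.70},
    "config_B": {"patient_benefit": 0.80, "autonomy": 0.85, "equity": 0.75},
    "config_C": {"patient_benefit": 0.70, "autonomy": 0.80, "equity": 0.60},
}

def dominates(a: Dict[str, float], b: Dict[str, float]) -> bool:
    """True if `a` is at least as good as `b` on every principle and strictly better on one."""
    return all(a[p] >= b[p] for p in a) and any(a[p] > b[p] for p in a)

def pareto_front(systems: Dict[str, Dict[str, float]]) -> List[str]:
    """Configurations not dominated by any other: the achievable balances among principles."""
    return [
        name for name, scores in systems.items()
        if not any(dominates(other, scores)
                   for other_name, other in systems.items() if other_name != name)
    ]

print(pareto_front(candidates))  # ['config_A', 'config_B']: config_C is dominated by config_B
```

As the talk notes, measuring where each configuration sits does not say which balance is right; it only makes the trade-off explicit so it can be discussed.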
From there, you can work on getting stakeholder acceptance, and it took a lot of work. Here's me in the US Senate Finance Committee meeting room; the Senate Finance Committee sets, for example, how much money goes to Medicare. And so it was important that Congress understood these various ways of looking at AI and how to pay for them, and the choices that were being made as we continue to work with CMS. So, a few details that may be relevant when you think about developing AI: what will I run into from a regulatory perspective? I cannot tell you what the FDA is going to do; I'm not speaking on behalf of FDA. But I can tell you that there are important considerations, for example, when you try to tie the output of the AI to patient outcome, meaning, in our case, that with a positive or negative diagnostic output we were able to essentially predict what would happen to the patient if left untreated. I'm not sure how that would apply in GI, but we can talk during the panel discussion about how that applies. Again, for payers and for patients, their clinical outcome is typically more important, more relevant, than whether physicians and clinicians agree or disagree on what level of disease the patient has; that may be useful, but ultimately it's about patient outcome. And there are ways of doing that. One way of explaining it is that physicians vary: I and my colleagues vary in about 30% of cases, and I vary with myself in about 20% of cases when I see the same patient maybe two weeks later. But there are also systematic differences: depending on where physicians are trained, where specialists are trained, they will look at disease in systematically different ways. On the bottom here, and I realize I lost my caption, is a hemorrhage in the retina, the orange and the red. 80% of the ophthalmologists I surveyed agree that this looks like a hemorrhage, about 50% agree that this looks like a hemorrhage, and only 20% think that this looks like a hemorrhage, even though from an outcome perspective this is also a hemorrhage. So from that, you can get to a metric for reference standards, or truth: maybe clinical outcome, or a proxy for outcome, is a better standard than a group of physicians like a reading center, let alone an individual physician (a small numerical illustration of this follows at the end of this passage). Those are the kinds of considerations we have worked out in that foundational considerations for AI group and its publications. Another aspect is how you design the AI. In the interest of time, I will go to this slide, because we are developing this together with FDA: across the entire life cycle of an AI you can create and introduce bias, but you can also mitigate it, and so we're working on steps and considerations for how to analyze and mitigate bias across the different phases of an AI, from concept, design, and development, including training and validation, to access and monitoring. Also very important was: how do we evaluate AIs? One important principle is to evaluate the AI as a service, as part of a larger system, where the input, for example images, the equipment used to create the images, the skill of the operator creating the input images, and how the output is discussed with the patient all matter. This is a system. Rather than seeing the AI in isolation as an algorithm, we worked out that it's more important to see it as a system, and this will be important for reimbursement as well.
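As a rough illustration of the reader-variability point above, here is a small sketch with entirely invented reads: it computes pairwise agreement between hypothetical graders and each grader's agreement with an outcome-based reference, the kind of comparison that motivates using outcome rather than any one reader as the reference standard.

```python
# Minimal sketch with entirely invented reads: pairwise agreement between graders,
# and each grader's agreement with an outcome-based reference standard.
# Real reference standards are far more involved; this only illustrates the metric.

from itertools import combinations
from typing import List

# Hypothetical binary reads ("hemorrhage present?") on ten lesions by three graders,
# plus a hypothetical outcome-based reference (e.g. progression on follow-up).
grader_reads: List[List[int]] = [
    [1, 1, 0, 1, 0, 1, 1, 0, 1, 0],  # grader 1
    [1, 0, 0, 1, 0, 1, 1, 1, 1, 0],  # grader 2
    [0, 1, 0, 1, 1, 1, 0, 0, 1, 0],  # grader 3
]
outcome_reference: List[int] = [1, 1, 0, 1, 0, 1, 1, 1, 1, 0]

def percent_agreement(a: List[int], b: List[int]) -> float:
    """Fraction of cases on which two sets of reads agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Graders disagree with each other on a sizeable fraction of cases...
for (i, g1), (j, g2) in combinations(enumerate(grader_reads, start=1), 2):
    print(f"grader {i} vs grader {j}: {percent_agreement(g1, g2):.0%} agreement")

# ...which is one argument for anchoring "truth" to outcome rather than to any one reader.
for i, g in enumerate(grader_reads, start=1):
    print(f"grader {i} vs outcome reference: {percent_agreement(g, outcome_reference):.0%} agreement")
```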
And so what you see here is a paper I've been discussing for years with FDA; it was in the New England Journal of Medicine in 2007. There was an AI, an assistive AI, that was approved by FDA based on a 510(k) process, where it was compared to radiologists and it performed really well; the accuracy was really high. But that was not how it was being used in clinical practice. In clinical practice, it assisted a radiologist in looking at a mammogram to decide whether or not the patient was suspect for breast cancer and needed a biopsy. Again, that was not how it was validated; that was not how the 510(k) process worked. And so Fenton et al. decided to look at whether the AI, which everyone was expecting would improve outcomes for these women, actually improved outcomes. They compared radiologists without the AI to radiologists with the AI. And like I said, everyone expected the AI to improve the radiologists' performance. This was not borne out by the data. In more than 200,000 women, outcomes for women diagnosed the radiologist-assisted way were worse than with the radiologist alone. You can see that here in this ROC curve, where without CAD the area under the curve is actually higher than with CAD, computer-aided detection (a small numerical sketch of that kind of comparison follows at the end of this passage). So, A, you can validate an AI, but that doesn't mean it will actually be used the same way, so it's very important to consider it as a system within the healthcare process you're trying to improve; and B, you need to be very careful how you validate and evaluate it. Another consideration is to maximize data traceability. Let me just go back a few decades to the Henrietta Lacks affair, which you may or may not be familiar with. This was cell material taken from a patient where, at the time, to make a diagnosis you just took out the cells; there was no consent for research or anything. This is 1951. But this cell line was immortal, eternal, still alive, and these cells continue to be used for research, but also to create drugs and drug panels, and actually billions of dollars have been made from this cell line. And the Lacks family, her estate, she later passed away from the disease, was not aware of this, filed lawsuits, and was ultimately awarded large sums of money because the cell line was considered part of her estate. There are actually books and, I think, a television series about Henrietta Lacks. So you can see how the thinking about the use of patient material, in this case actual cell material, changes over time, and we need to be prepared for that. We need to consider that. And so if you ask about patient data, on the top of this slide, if you ask an audience, everyone always thinks they own the patient's data. Patients think they own their data. Physicians think they own the patient's data. Payers, healthcare systems, and even EHR companies all think that they own the patient's data. Well, obviously this cannot stand. HIPAA is a little bit complicated on this issue; there's no clear ownership there. But as AI becomes more widely used and more beneficial, and therefore probably also more beneficial from a financial perspective, the lawsuits are just waiting for us. So we need to be prepared for that. And one really important thing is to understand where the data even comes from and what happens to it: traceability. At least then you can follow where the data went and what happened to it, and we're able to resolve these complicated issues. I'm not proposing a solution. I'm just saying this will be an important subject in the future as AI becomes more widespread.
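To show the kind of ROC comparison described above, here is a minimal sketch with made-up scores (not the Fenton et al. data) that estimates the area under the ROC curve for unassisted versus CAD-assisted reads of the same hypothetical cases.

```python
# Minimal sketch with made-up scores (NOT the Fenton et al. data): area under the ROC
# curve for unassisted vs CAD-assisted reads of the same hypothetical cases.
# AUC is estimated as the probability that a random positive outranks a random negative.

from typing import List

def auc(labels: List[int], scores: List[float]) -> float:
    """Mann-Whitney estimate of ROC AUC; ties count as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical biopsy-confirmed labels and reader suspicion scores for ten cases.
labels:      List[int]   = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
without_cad: List[float] = [0.90, 0.80, 0.75, 0.25, 0.35, 0.30, 0.20, 0.15, 0.10, 0.05]
with_cad:    List[float] = [0.85, 0.60, 0.70, 0.30, 0.65, 0.55, 0.20, 0.15, 0.10, 0.05]

print(f"AUC without CAD: {auc(labels, without_cad):.2f}")  # 0.92 with these invented numbers
print(f"AUC with CAD:    {auc(labels, with_cad):.2f}")     # 0.88 with these invented numbers
```

With these invented numbers the assisted reads score lower, mirroring the talk's point that a validation done in isolation does not guarantee better performance as the tool is actually used.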
Liability, I already mentioned, was a huge issue. I think it was very important to take the stance that the liability, at least for autonomous AI, should be with the AI creator. The performance of the AI should be guaranteed by the AI creator, in this case Digital Diagnostics. It should not be with the user, typically, for example, a primary care provider, who is using the AI precisely because they're not comfortable doing this diagnosis in the first place. And in fact, I'm proud to say that the American Medical Association adopted this thought in its AI policy, which now says that autonomous AI creators need to assume liability for performance. So I have explained that there were all these considerations that can be solved, but I would be remiss if I did not explain that this had results. For example, in 2019, for the first time ever, the standard of medical care for diabetes, in this case from the patient organization, was updated to support the use of autonomous AI. NCQA updated the MIPS and HEDIS care language to support the use of autonomous AI to close the care gap, which before was only possible with a human ophthalmologist or optometrist. Then finally came the CPT Editorial Panel, thanks to the strong support from the American Medical Association, I mentioned that, as well as the American Academy of Ophthalmology. Like I said, initially my colleagues were very skeptical of AI, and now they're probably the strongest supporters, because it is so beneficial for patients and for population health, essentially bringing to eye care providers, to ophthalmologists, those patients who need our expertise, who need our high-quality care, and making that care more accessible. So a Category I CPT code for autonomous AI was created. CMS then responded by proposing a reimbursement, at first through the MACs and later a national reimbursement as of this year, where we went in with a proposal of $55 per exam, and that was ultimately about where we ended up. And then, thanks to the support, again, of the American Medical Association, the CPT Editorial Panel, the RUC, the Digital Medicine Payment Advisory Group (DMPAG), where I was and continue to be a member of the work group on AI, the National Hispanic Medical Association, and the National Medical Association, all working together as stakeholders to help explain the value of autonomous AI in solving these health disparities, we got national reimbursement, and private payers followed very quickly. And so, wrapping up, there is now, in my view, this new industry, where all the way on the left, remember that deep learning algorithm, that's where it starts, but it's not enough to do the science. We need a bioethical framework, which now exists; FDA, very important, to design a clinical trial and set the thresholds for performance; solve for liability, which has been done; get support from patient organizations; update the standards; work on quality measures such as MIPS and HEDIS; work on reimbursement and CPT coding. There is now a reimbursement framework for AI; probably more will be needed. But ultimately, it's about patient and population outcomes. And there's a lot of evidence; I will not go into that because it's very specialized to eye care, to ophthalmology, to retina. But we now have evidence from randomized clinical trials that health disparities are being removed because of autonomous AI, and that clinical outcomes are being improved because of autonomous AI. So yes, there is this path.
I think that's the main message: there is a path. But it's all tightly tied together; the regulatory and the reimbursement sides cannot be seen separately. They really are part and parcel of this more general stakeholder acceptance for doing AI the right way, which we all need to be involved in. Hopefully this came across as being general enough to be of interest to you, with me as an ophthalmologist and retina specialist speaking here. And I look forward to the panel discussion. Thank you very much.
Video Summary
The video transcript features Dr. Michael Abramoff discussing the topic of healthcare AI and ethical considerations for personalized care. Dr. Abramoff, a retina specialist and computer scientist, highlights the potential of autonomous AI to improve healthcare outcomes and address health disparities. He explains the difference between assistive AI and autonomous AI, emphasizing the need for accountability and liability in autonomous AI systems. Dr. Abramoff also discusses the challenges of developing healthcare AI, such as the scarcity of high-quality data for training, and the importance of measuring AI systems against ethical principles. He presents a regulatory and reimbursement framework for AI and emphasizes the need for stakeholder acceptance, including patient organizations, regulatory bodies like the FDA, Congress, and payers. Dr. Abramoff cites examples of the potential backlash and ethical considerations associated with healthcare AI, and the importance of transparency and openness in addressing these concerns. He also discusses the importance of data traceability and ownership in the context of healthcare AI. Dr. Abramoff concludes by highlighting the positive impact of healthcare AI on improving patient outcomes and addressing health disparities, and emphasizes the need for collaboration and ethical frameworks in the development and implementation of AI in healthcare.
Asset Subtitle
Michael Abramoff, MD
Keywords
healthcare AI
ethical considerations
personalized care
autonomous AI
accountability and liability
regulatory and reimbursement framework
data traceability and ownership