Gastroenterology and Artificial Intelligence: 4th ...
Ethical Considerations Recorded Presentation
Video Transcription
Hi there, my name is Michael Abramoff, and it's somewhat humbling for an ophthalmologist, a retina specialist like I am, to present at a GI conference. But I think the general principles that we have to work through apply across medicine, including for you. So, like I said, very humbling, glad to be here. I'm a retina specialist at the University of Iowa. I am also founder and now executive chairman of Digital Diagnostics, a company that created the first FDA-approved autonomous AI. I chair the foundational principles of AI workgroup that was created by FDA. I'm also chair of the Healthcare AI Coalition, which is a lobbying group out of Washington, DC, on healthcare AI policy. I have some disclosures to make, including grants from NIH. My conflicts of interest include Digital Diagnostics and a couple of pharma startups. Before I start, I want to establish this concept of autonomous versus assistive AI, because it will be relevant as I go through my talk. Assistive AI is probably something you're very familiar with: a medical decision is made by the clinician, and the AI aids the clinician. The liability remains with the clinician. It also means that the patient is already with a doctor, already in the care workflow, and the reward value relates mostly to outcome improvement and efficiency improvements for patients and physicians. Autonomous AI is a little bit different, because here the medical decision is made completely by the computer. That also means that the liability is with the AI creator, no longer with the user, for example a physician using it. It means that the diagnosis or other care points can be made at the point of care, can be immediate, wherever the patient is, and maybe the patient is not yet with a doctor. The reward value is therefore not only potential outcome improvement but also better population health, including addressing health disparities. And there's a need for that, because there are giant problems in healthcare that can be solved by autonomous AI. I show a few of them here. The most important one, I think, is health disparities, especially the lack of access to effective care for minorities and for rural groups. I'm trying to show this with a simple map. On the bottom left is the availability of people like me, ophthalmologists and retina specialists, across the US; more blue means more ophthalmologists per county. In the middle, you see the eye care needs: everyone with diabetes essentially needs an eye exam, and more people per county means more red. And you can see that while more people with diabetes are in the southeast, that is not where the ophthalmologists are. Even from a geographic perspective, there's disparity. But the same is true for inner cities, for rural areas, and in general for different races and ethnicities. In addition, on the right, the graph shows productivity, in green, of almost all sectors of the economy. That is not the case for outpatient healthcare, in red, where we are in fact seeing fewer patients per hour than maybe 20 years ago. That's a problem because it drives up costs and lowers efficiency. And finally, there's the healthcare demand-workforce gap, where fewer physicians are being trained, educated, and graduated than are needed for an ever-aging population. So, major problems that can be solved by autonomous AI. And on top of that, very recently, of course, the pandemic made this even more urgent.
Health disparities have been gaining a lot of attention recently, and that's why a lot of my talk will refer to health disparities: you can have good-quality healthcare, but if it's unevenly distributed and not available to everyone who needs it, that's a health inequity, and that is what we are solving. I mentioned that autonomous AI, or AI in general, can be a solution for these problems, so it's maybe useful to understand where we came from. In the 60s, there was already AI; it's called rule-based AI, or expert systems. An example is MYCIN, and there were actually PhD theses at Stanford; people got their grad work done on simple rule-based systems, for example for the prescription of antibiotics. You see an example on the right. That never went into clinical practice. In the 80s, I was part of that: we tried to do machine learning, perceptrons, backpropagation was invented, there was the Fifth Generation project, but still this did not lead to clinical adoption. In my view, mostly because the inputs were very noisy: they were either typed in by doctors or they were digitized slides. It was just very noisy data, which didn't allow us to get great performance, great accuracy. I think that changed with the availability of low-cost digital sensors, CMOS. Deep learning certainly helped, but those algorithms already existed in the 80s, not as specialized; it was really the availability of objective digital data at low cost that made the difference and allowed us to get to where we are now, where these systems are being clinically used. Healthcare AI is not easy. There are a few reasons, compared to other sectors of the economy, why that is the case. High-quality data is scarce, especially for normals. That may surprise you, but while we have relatively a lot of data on patients who have a certain disease we may want to diagnose, getting normals, with all the ethical constraints we have, for example around radiation, is more challenging. There may not be enough data for normals, and to train machine learning AI you really need normals as well as disease data. Many diseases are also rare, making cases scarce. For example, melanoma in the eye occurs in only about one in a million patients; there will be only a few cases every year that we can use as training data, while we typically need tens of thousands of cases to get good working algorithms. Also, high-quality truth is scarce: with chronic disease, if we want to use clinical outcome as the reference standard, and we'll come back to that, the outcome may take years to develop. Instead, we may need highly qualified and expensive clinicians, experts, and that may be a problem as well. Finally, it may be a challenging environment, because typically, if we want to improve health equity and improve access, the environment will not be ideal for getting high-quality data. Most AIs require high-quality inputs, for example images, and those may not always be available where lower-skilled operators and lower-cost hardware are used to create the input for the AI, as well as the training data. In the interest of time, I will not go into too much detail about diabetes and the diabetic eye exam, but diabetes is a major source of health disparities. It's a very common disease, as you're probably aware.
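As an editorial aside, stepping back to the rule-based "expert systems" of the 1960s mentioned above: the short Python sketch below shows the general if-then shape of such a system. The rules, thresholds, and recommendations are invented for illustration only; they are not MYCIN's actual knowledge base and are not clinical guidance.

```python
# Minimal sketch of a 1960s-style rule-based "expert system".
# The rules and thresholds below are invented for illustration;
# they are NOT MYCIN's actual rules and NOT clinical guidance.

def recommend(findings: dict) -> str:
    """Apply hand-written if-then rules to a dictionary of findings."""
    if findings.get("gram_stain") == "negative" and findings.get("site") == "blood":
        return "suggest coverage for gram-negative bacteremia (illustrative rule)"
    if findings.get("fever") and findings.get("wbc", 0) > 11_000:
        return "suggest empiric antibiotics pending cultures (illustrative rule)"
    return "no rule fired; defer to clinician"

# Example usage with a hypothetical patient record.
print(recommend({"fever": True, "wbc": 14_000}))
```

The limitation discussed in the talk is visible even in this toy: every rule has to be written and maintained by hand, which is part of why these systems never reached clinical practice.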
Diabetes is also the most important cause of blindness and visual loss, and that burden is unevenly distributed: minorities have worse outcomes for diabetes, more visual loss and more blindness, yet receive fewer diabetic eye exams, which we know are preventive of visual loss and blindness. So there is a great role for AI here: diabetes is a major source of health disparities, almost a trifecta of higher risk of diabetes, worse visual outcomes, and even fewer diabetic eye exams than in the general population. That led to this first-ever autonomous AI approved by FDA. It's an autonomous AI for the diabetic eye exam. It replaces what a retina specialist like me does: an autonomous diagnosis of diabetic retinopathy and diabetic macular edema, these causes of blindness. It's more accurate than me. It can do a point-of-care diagnosis in minutes, wherever the patient is; it just needs an outlet. There's no human oversight; again, it's autonomous, meaning there's no human oversight of the diagnosis. And it predicts the visual outcome of the patient if not treated. This is important, in my view: that you can relate the output of the AI to the outcome for the patient. It's meant for primary care and retail, not for ophthalmology clinics. It's not there to aid a specialist like me in making a more efficient or better diagnosis. It is there to improve population health and address health disparities by being where the patient is, not where the doctor, the specialist, is. And it's, of course, integrated with the EHR for ordering, results, et cetera, fully in the workflow. So now I'm taking a step back, going back to the algorithms. Let's say you have an algorithm you think is worthwhile to bring into clinical practice. The major lesson I want you to remember is that healthcare AI shouldn't be created in a silo. As you saw, I'm from Iowa, so obviously I like a visual with a silo in it. What I'm trying to show here is: you have this deep learning algorithm, and you maybe want to bring it to patients. Well, what we don't want is rose-colored glasses, unicorns and rainbows. That is not going to work. We cannot stay in that silo, because there are so many concerns about healthcare AI in the population that we need to address. And of course, when you say that a computer is making the diagnosis and there's no human oversight, as we did, these concerns really come to the fore. AI is not new in healthcare, as I showed, and assistive AI has definitely been around and FDA approved for more than a decade. But if you say that the computer is working without supervision, people really start to be concerned, and there is literature about that. Will this AI benefit me as a patient? Will it exacerbate rather than improve health disparities? What happens to my data? Is there racial or ethnic bias? Who's liable for errors? And who pays for all of this? This was brought to the fore by a recent paper in Science by my great friend Ziad Obermeyer, about an AI that was deployed and being used clinically to assign care to different groups of patients, and it was harming Black patients because there was a built-in bias. We can talk about what that bias was, but it did happen. There are other AIs that have harmed patients. So not only are these concerns valid, they have actually been borne out in the literature. And so all stakeholders in healthcare are now paying great attention to this.
And that includes AI creators like me, patients and patient organizations, the US Congress (the Government Accountability Office, for example, is paying a lot of attention now), regulators, including federal agencies under HHS, and physician and provider organizations like the American Medical Association and, in my case, the American Academy of Ophthalmology. And I'm happy to be here at the ASGE meeting. And payers, both HHS and private payers, all care greatly about this. So stakeholder acceptance and stakeholder support are crucial as we move AI, and especially autonomous AI, to the clinic. And I fear a backlash against AI; that fear is not overblown, in my view. Historic example number one is really gene therapy, where, decades ago, we were pretty much ready to go into clinical trials with gene therapy, but then there were some poorly overseen, unethical gene therapy trials, and young people died in these clinical trials. The result was essentially an effective moratorium: funding was gone, research programs were closed, and the field died down for 20 years. Only in 2017 was there the first FDA approval of a gene therapy, in this case for a retinal disease. So it set us back at least one decade and probably two. Theranos is another example of unethical behavior in healthcare, and more recently Cerebral, where there were a lot of ethical problems as well. So again, if we don't do this carefully, if we don't do this the right way, I expect there will be a backlash, and we will lose all the advantages that I hopefully have brought across. I'm a living example. In 2010, when I was busy doing the science and preparing for meetings with FDA about a computer making a diagnosis, there was a big editorial about me in the most widely read ophthalmology journal, Ophthalmology Times, by my now good friend Peter McDonnell, chair of ophthalmology at Hopkins. The title was The Retinator: Revenge of the Machines. It was about how there would be job loss, or the fear of job loss, and fear about quality of care because of this AI. And that was starting to become visible: there was this pushback, even from colleagues, is this going to cost me my job? And so transparency, openness, explaining everything you do, and publishing have been crucial on this path. And we were able to turn it around: the American Medical Association had this on the front page of its website in 2019, this ophthalmologist doing healthcare the right way. I'm not showing this to show you that I'm doing everything right. The point is, we can turn around this concern that is there. A, hopefully the point got across that this is a valid concern, and there are other valid concerns we'll get into; and B, it can be addressed, should be addressed, and we can turn it around. That is the point I'm making here. But that starts, ultimately, with ethics. My collaborators and I have spent a lot of time and effort, more than a decade, on ethical frameworks for AI. And that has led, as I will show, to both regulatory frameworks, where we worked closely with FDA, patient organizations, and others, as well as, on the right, a payment framework for AI, which led, in my view, to CMS reimbursement, CPT codes, et cetera. But ultimately, in the center, as you can see, it's about improving health outcomes and reducing health disparities. So, as I go through all of this path, we should not lose sight of the fact that ultimately it's about the patient.
There was a choice, a long time ago, when we needed to decide whether we were going to follow this long path, and I didn't even know it would be this long: should we disrupt healthcare and be like the Uber of healthcare, or should we work within the system and get stakeholder acceptance? And that led to what you see on the right, a press release by FDA; they were very proud of this first-ever autonomous AI approval. So I think it was the right choice, but there was a choice. I mentioned bioethics: there's an entire issue of the American Journal of Bioethics about this framework, with 16 papers commenting on it, starting from very elementary, millennia-old principles in bioethics. And I think that's crucial, because I mentioned some of the concerns that exist in society, and that even we as providers, as physicians, have. It's really hard to make sure that you consider all concerns if you don't have an ethical framework. Only through these millennia-old ethical frameworks, these ethical principles, can we make sure there are no holes that we only realize ten years later: oh, we should have thought of that. I won't go into much detail, but essentially, ethical frameworks are built from very basic bioethical principles that you probably remember from medical school, like patient benefit or non-maleficence, justice, or what we call equity, autonomy of the patient, and responsibility. These have to be in balance. And the balance is one aspect; I'm not telling you where the balance should be. But an AI system, and even a physician or a healthcare system, should be in balance across these principles. The ultimate goal of all these frameworks is that you're able to measure this, how much a certain AI system is in accordance with a specific ethical principle, and you can measure that in various ways. What I'm trying to show here is that there's a tension between the AI system and the different bioethical principles it needs to answer to, and it's impossible to meet each of these principles at 100%. We will always need to find a balance between patient benefit, autonomy of the patient, and health equity, treating everyone equally; we cannot fully solve for all of them at once. That's called a Pareto optimum. It's all in the paper; I won't go into the details. But hopefully I'm establishing that if we can measure these things, it doesn't answer where the balance should be, but it allows us to at least explain where the balance is. This ethical framework was really helpful, for example, in working with FDA. You see a paper from this foundational principles of AI workgroup, together with leadership of FDA, as well as bioethicists, clinicians, and patient organizations, even FTC is part of it, working toward regulatory considerations: very carefully, how we look at AI from a regulatory safety, efficacy, and equity perspective. Another thing it led to was a reimbursement framework for AI payments, published a few weeks ago in Nature Digital Medicine; I'm tremendously proud of it. I don't want to go into too much detail, but essentially it tries to assign a value to healthcare AI. And there are different ways you can do this: from a cost-effectiveness perspective; from substitution, meaning what it would cost if a physician did it; or maximizing value to access, which is the equity aspect that we used. And you see the equations there, with a marginal cost, essentially what it costs to do one more AI exam, in this case a diagnosis.
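As an editorial aside, the toy Python sketch below illustrates, with entirely hypothetical numbers, why the choice of valuation approach matters. These are not the equations or figures from the published reimbursement framework; the fees, costs, and volumes are invented placeholders, only meant to show that substitution, cost-effectiveness, and marginal-cost framings can yield very different values for the same AI exam.

```python
# Toy illustration of three ways one might value an autonomous AI exam.
# All dollar amounts and volumes are hypothetical placeholders, not published figures.

specialist_exam_fee = 120.00      # hypothetical fee if a specialist performed the exam
qaly_gain_per_exam = 0.002        # hypothetical quality-adjusted life-years gained per exam
willingness_to_pay = 50_000.00    # commonly cited $/QALY threshold (assumption)
fixed_costs_per_year = 200_000.0  # hypothetical fixed cost of running the AI service
exams_per_year = 20_000           # hypothetical annual volume
variable_cost_per_exam = 10.00    # hypothetical per-exam cost (consumables, operator time)

substitution_value = specialist_exam_fee
cost_effectiveness_value = qaly_gain_per_exam * willingness_to_pay
marginal_cost = variable_cost_per_exam                              # cost of doing one more exam
average_cost = fixed_costs_per_year / exams_per_year + variable_cost_per_exam

print(f"substitution-based value:       ${substitution_value:.2f}")
print(f"cost-effectiveness-based value: ${cost_effectiveness_value:.2f}")
print(f"marginal cost per exam:         ${marginal_cost:.2f}")
print(f"average cost per exam:          ${average_cost:.2f}")
# An access-maximizing (equity) framing keeps the price closer to the cost side
# rather than the full substitution value, so a fixed budget reaches more patients:
# the trade-off discussed in the talk.
```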
So there are different ways of coming to the value, and we chose the access-maximizing, or equity, value. From there, you can work on getting stakeholder acceptance, and that took a lot of work. Here's me in the US Senate Finance Committee meeting room. The Senate Finance Committee sets, for example, how much money goes to Medicare. So it was important that Congress understood these various ways of looking at AI and how to pay for it, and the choices that were being made as we continued to work with CMS. Now, a few details that may be relevant when you think about developing AI: what will I run into from a regulatory perspective? I cannot tell you what the FDA is going to do; I'm not speaking on behalf of FDA. But I can tell you that there are important considerations, for example, whether you can tie the output of the AI to patient outcome, meaning, in our case, that with a positive or negative diagnostic output we were able to essentially predict what would happen to the patient if left untreated. I'm not sure how that would apply in GI, but we can talk during the panel discussion about how it applies. For payers and for patients, the clinical outcome is typically more important, more relevant, than whether physicians and clinicians agree or disagree on what level of disease is present. That may be useful, but ultimately it's about patient outcome, and there are ways of doing that. One way of explaining this is that physicians not only vary randomly, my colleagues and I vary in about 30% of cases, and I vary with myself in about 20% of cases when I see the same patient maybe two weeks later, but also systematically: depending on where physicians and specialists are trained, they will have systematic differences in how they look at disease. On the bottom here, and I realize I lost my caption, is a hemorrhage in the retina, the orange and the red. 80% of the ophthalmologists I surveyed agree that this looks like a hemorrhage, about 50% agree that this one looks like a hemorrhage, and only 20% think that this one looks like a hemorrhage, even though from an outcome perspective it is also a hemorrhage. So you can get to a metric for reference standards, or truth: maybe outcome, clinical outcome, or a proxy for outcome, is a better standard than a group of physicians like a reading center, let alone an individual physician. Those are the types of considerations we have tried to work out, and have worked out, in that foundational considerations for AI group and its publications. Another consideration is how you design the AI. In the interest of time, I will go to this slide, because we are developing this together with FDA: across the entire lifecycle of an AI you can introduce bias, but you can also mitigate it. So we are working on steps and considerations for how to analyze and mitigate bias across the different phases of an AI, from concept, design, and development, including training and validation, to access and monitoring. Also very important was: how do we evaluate AIs? One important principle is to evaluate the AI as a service, as part of a larger system: the input, for example images, the equipment used to create those images, the skills of the operator creating the input images, and how the output is discussed with the patient. This is a system.
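Before continuing with the system-level evaluation point, a brief editorial aside on the bias-analysis step just mentioned: one simple, concrete check during validation is to stratify performance by subgroup rather than report a single aggregate number. The Python sketch below uses synthetic records and made-up subgroup names; it illustrates the idea, not the workgroup's actual methodology.

```python
# Stratify sensitivity and specificity by subgroup to surface possible bias,
# instead of reporting one aggregate accuracy. The records below are synthetic.
from collections import defaultdict

# (subgroup, ground_truth_has_disease, ai_called_positive) -- synthetic examples
records = [
    ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0), ("group_a", 0, 0),
    ("group_b", 1, 1), ("group_b", 1, 1), ("group_b", 0, 1), ("group_b", 0, 0),
]

counts = defaultdict(lambda: {"tp": 0, "fn": 0, "tn": 0, "fp": 0})
for group, truth, pred in records:
    if truth and pred:
        counts[group]["tp"] += 1
    elif truth and not pred:
        counts[group]["fn"] += 1
    elif not truth and pred:
        counts[group]["fp"] += 1
    else:
        counts[group]["tn"] += 1

for group, c in sorted(counts.items()):
    sens = c["tp"] / (c["tp"] + c["fn"]) if (c["tp"] + c["fn"]) else float("nan")
    spec = c["tn"] / (c["tn"] + c["fp"]) if (c["tn"] + c["fp"]) else float("nan")
    print(f"{group}: sensitivity={sens:.2f} specificity={spec:.2f}")
```

A large gap between subgroups in a check like this is the kind of signal that would prompt the mitigation steps described above, earlier in the lifecycle.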
So rather than seeing the AI in isolation, as an algorithm, we worked out that it's more important to see it as a system. And this will be important for reimbursement as well. What you see here is a paper I have been discussing for years with FDA; it was in the New England Journal of Medicine in 2007. There was an assistive AI that was cleared by FDA through the 510(k) process, where it was compared to radiologists and performed really well; the accuracy was really high. But that was not how it was being used in clinical practice. In clinical practice, it assisted a radiologist looking at a mammogram in deciding whether a patient was suspect for breast cancer and needed a biopsy. Again, that was not how it was validated; that was not how the 510(k) process worked. So Fenton et al. decided to look at whether the AI, which everyone was expecting would improve outcomes for these women, actually improved outcomes. They compared radiologists without AI to radiologists with AI. And, like I said, everyone expected the AI to improve the radiologists. This was not borne out by the data. In more than 200,000 women, outcomes for women whose mammograms were read by computer-assisted radiologists were worse than for those read by radiologists alone. You can see that here in this ROC curve, where without CAD, computer-aided detection, the area under the curve is actually higher than with CAD. So, A, you can validate an AI, but it doesn't mean it will actually be used the same way, so it's very important to consider it as a system within the healthcare process you're trying to improve. And B, you need to be very careful how you validate and evaluate it. Another consideration is maximizing data traceability. Let me go back a few decades, to the Henrietta Lacks affair, which you may or may not be familiar with. This was cell material taken from a patient where, at the time, to make a diagnosis you just took out the cells; there was no consent for research or anything. This is 1951. But this cell line was immortal, eternal, still alive, and these cells continue to be used for research, but also to create drugs and drug panels, and actually billions of dollars have been made from this cell line. The Lacks family, her estate, she later passed away from the disease, was not aware of this, filed lawsuits, and was ultimately awarded large sums of money because the cell line was considered part of her estate. There are actually books and, I think, a television series about Henrietta Lacks. So you can see how thinking about the use of patient material, in this case actual cell material, changes over time, and we need to be prepared for that; we need to consider that. And so with patient data, on the top of this slide: if you ask an audience, everyone always thinks they own the patient's data. Patients think they own their data. Physicians think they own the patient's data. Payers, healthcare systems, and even EHR companies all think that they own the patient's data. Well, obviously this cannot stand. HIPAA is a little bit complicated on this issue; there's no clear ownership there. But as AI becomes more widely used and more beneficial, and therefore probably also more beneficial from a financial perspective, the lawsuits are just waiting for us. So we need to be prepared for that, and one really important thing is to understand where the data even comes from and what happens to it: data traceability.
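As an editorial aside on data traceability: one simple pattern, sketched below in Python, is to attach a provenance record, including a content hash, the source, and the consent basis, to every image or label that enters a training or validation set. The field names and consent categories are illustrative assumptions, not a description of how Digital Diagnostics or any particular product implements this.

```python
# Sketch of a per-item provenance record for training/validation data.
# Field names and consent categories are illustrative assumptions only.
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(image_bytes: bytes, source_site: str, consent_basis: str, intended_use: str) -> dict:
    """Build an auditable record of where a data item came from and how it may be used."""
    return {
        "sha256": hashlib.sha256(image_bytes).hexdigest(),  # content fingerprint
        "source_site": source_site,                         # where the data was acquired
        "consent_basis": consent_basis,                     # e.g. "research_consent_v2" (hypothetical label)
        "intended_use": intended_use,                       # e.g. "training" or "validation"
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

# Example usage with placeholder bytes standing in for an image file.
record = provenance_record(b"placeholder image bytes", "clinic_42", "research_consent_v2", "training")
print(json.dumps(record, indent=2))
```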
With traceability, at least you can follow where the data went and what happened to it, and we are better able to resolve these complicated issues. I'm not proposing a solution; I'm just saying this will be an important subject in the future as AI becomes more widespread. Liability, I already mentioned, was a huge issue. I think it was very important to take the stance that the liability, at least for autonomous AI, should be with the AI creator. The performance of the AI should be guaranteed by the AI creator, in this case Digital Diagnostics. It should not be with the user, who typically, for example a primary care provider, is using the AI precisely because they're not comfortable doing this diagnosis in the first place. And in fact, I'm proud to say that the American Medical Association, in its AI policy, adopted this thought; it now says that autonomous AI creators need to assume liability for performance. So I have explained all these considerations that can be solved, but I would be remiss if I did not explain that this had results. For example, in 2019, for the first time ever, the standards of medical care for diabetes, in this case from the patient organization, were updated to support the use of autonomous AI. NCQA updated the MIPS and HEDIS care-gap language to support the use of autonomous AI to close the care gap, which before could only be closed by a human ophthalmologist or optometrist. Then, finally, there was the CPT Editorial Panel, thanks to strong support from the American Medical Association, which I mentioned, as well as the American Academy of Ophthalmology. Like I said, initially my colleagues were very skeptical of AI, and now they're probably its strongest supporters, because it is so beneficial for patients and for population health, and it essentially brings to eye care providers, to ophthalmologists, those patients who need our expertise, who need our high-quality care, and makes that care more accessible. So a CPT code for autonomous AI, a Category I code, was created in 2019. Then CMS responded by proposing reimbursement, at first through the MACs, and later national reimbursement as of this year; we went in with a proposal of $55 per exam, and that was ultimately roughly where we ended up. And then, thanks to the support, again, of the American Medical Association, the CPT Editorial Panel, the RUC, the DMPAG, the Digital Medicine Payment Advisory Group, whose AI workgroup I was and continue to be a member of, the National Hispanic Medical Association, and the National Medical Association, all working together as stakeholders, we were able to help explain the value of autonomous AI in addressing these health disparities. So there is national reimbursement, and private payers followed very quickly. Wrapping up: there is now, in my view, this new industry. We're all the way on the left, remember that deep learning algorithm; that's where it starts, but it's not enough to do the science. We need a bioethics framework, which now exists; FDA regulation, where it is very important to design a clinical trial and set the thresholds for performance; to solve for liability, which has been done; to get support from patient organizations; to update the standards of care; to work on quality measures such as MIPS and HEDIS; and to work on reimbursement and CPT coding. There is now a reimbursement framework for AI, and probably more will be needed, but ultimately it's about patient and population outcomes.
There's a lot of evidence for that; I will not go into it, because it's very specialized to eye care, to ophthalmology, to retina. But we now have evidence from randomized clinical trials that health disparities are being reduced because of autonomous AI and that clinical outcomes are being improved because of autonomous AI. So yes, there is this path; I think that's the main message, there is a path. But it's all tightly tied together: the regulatory and the reimbursement sides cannot be seen separately. They really are part and parcel of the more general stakeholder acceptance for doing AI the right way that we all need to be involved in. Hopefully this came across as general enough to be of interest to you, coming from me as an ophthalmologist and retina specialist. And I look forward to the panel discussion. Thank you very much.
Video Summary
In this video, Dr. Michael Abramoff, a retina specialist and executive chairman of Digital Diagnostics, discusses the use of autonomous artificial intelligence (AI) in healthcare. He begins by explaining the difference between autonomous AI and assistive AI, where autonomous AI makes medical decisions without human oversight, while assistive AI aids clinicians in decision-making. Dr. Abramoff then discusses the potential of autonomous AI to address healthcare challenges, including health disparities, lack of access to care, and workforce shortages. He presents data showing the distribution of ophthalmologists and eye care needs in the US, highlighting disparities. Dr. Abramoff also discusses the barriers to healthcare AI, including limited high-quality data, rare diseases, and challenges in data collection. He then explains how autonomous AI can be used to improve diabetic eye exams and address health disparities in this area. Dr. Abramoff emphasizes the importance of ethics and transparency in developing and implementing AI in healthcare. He discusses regulatory frameworks, reimbursement considerations, and the need for stakeholder acceptance. He also highlights the importance of evaluating AI as part of a larger healthcare system and addressing concerns such as bias and data traceability. Dr. Abramoff shares examples of how healthcare AI has improved patient outcomes and discusses the potential risks and backlash if AI is not developed and implemented carefully. He concludes by emphasizing the importance of patient-centered outcomes and the need for collaboration among stakeholders to ensure the responsible and beneficial use of AI in healthcare.
Asset Subtitle
Michael Abramoff, MD
Olivia Niederhauser, MD
Keywords
autonomous AI
healthcare
artificial intelligence
health disparities
diabetic eye exams
ethics