AI in Healthcare: Current Advances in AI and How it Translates to Medical Applications
Video Transcription
Our last lecture before we go into a larger panel discussion will be by Anima Anandkumar. Anima will be discussing AI in healthcare: current advances and how it translates to medical applications. Anima, who has been working quite closely with us in the task force, is a professor at Caltech and director of ML research at NVIDIA, and she is passionate about designing algorithms and applying them to interdisciplinary applications. So welcome, Anima. We're looking forward to your presentation.

Today, I'm happy to talk to you about the ongoing AI revolution in healthcare. When we think about smart hospitals of the future, there are so many areas where AI can revolutionize hospitals as we know them today: ongoing patient monitoring, contactless controls, body temperature screening, enabling safe social distancing in this age of pandemics, fall prediction and prevention for patients, and ultimately surgery analytics. In particular, today I'll tell you about an ongoing project on analyzing surgical practices and ultimately providing feedback that is very useful to surgeons. This is a project in collaboration with Dr. Andrew Hung at USC Keck Medicine. It is an interdisciplinary collaboration where we bring together AI methodologies and state-of-the-art robotic surgical practices to assess and ultimately assist surgeons in performing these complex procedures. The platform we employ is the popular da Vinci system, which is indeed the premier surgical platform. Here we have the vision system, the surgical cart, and the surgeon's console. The EndoWrist controls are state-of-the-art in providing human-hand-like natural movements, letting the surgeon precisely control actions during surgery. What also makes robotic surgery ideal for AI is the ease of data gathering.
Unlike traditional surgeries, with robotic surgery we already have automated recording of surgery videos, and there is also rich kinematic information available, especially when the procedure is done in simulation, as we'll see in some of the data we have collected so far. To date, the skill assessment of surgeons has been highly subjective. The question we asked was: can we make this objective? Can we have AI-based tools that assess the videos and relate them to patient outcomes? And can we give both intraoperative and postoperative guidance and assessment through the use of AI? This is indeed an ambitious goal, and we are just at the first steps of this journey.

It starts with classifying suture gestures: can we break down this complex video of surgery into several steps? The procedure is methodical; even though there may be variations from one patient to another or one surgeon to another, there is still an overall set of gestures needed to perform the surgery. In this case it is vesicourethral anastomosis, which Dr. Andrew Hung is an expert in. He has helped break it down into this set of gestures, also accounting for right-handed versus left-handed variants. Once we have this, we can collect data and divide it up into these different actions: needle positioning, needle targeting, needle driving (entry and exit), and needle repositioning. This is still quite simplistic compared to the overall surgery, but it's a great starting point for assessing what AI can do here. The data is collected on a sponge suturing exercise performed by a range of surgeons with different levels of experience, in the Mimic simulator, a state-of-the-art, realistic simulation platform for robotic surgery. The gestures were labeled by four different medical students and cross-validated.
So this gives us the data to train the AI models. Once we have it, we can ask how to bring state-of-the-art neural network architectures to bear on this data. One thing to keep in mind is that these are not just static images; these are videos. So we need to look not only at the current frames but also compute the optical flow, that is, how the movement changes over time, and fuse those features together. Another important aspect is the notion of attention. An attention layer lets the model focus on important parts of the scene, such as the needle. You don't want to treat all the pixels the same; you want to focus on where the action is, and that's around the needle. What we've done is use a convolutional LSTM to keep track of a hidden state, and that hidden state implicitly drives the attention map, allowing the model to focus on the regions of interest and ultimately get good accuracies.

In the end, what we want to do is predict whether the skill shown was ideal or not, whether this is an acceptable level of skill. Those labels come from Dr. Hung's lab. We train in a supervised way on that information, and at test time we can assess the skills on new videos. Another beneficial aspect available to us in this data set is auxiliary supervision through the instrument kinematics. Because this is done in simulation, we can record all the kinematics very accurately, which gives us additional information on top of the raw videos; that's another benefit of robotic surgery. With these models, we can get an AUC as high as 95% when we use the video, the kinematics, and the attention together. The attention indeed gives a big boost.
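To make the attention idea concrete, here is a minimal NumPy sketch of how a recurrent hidden state can drive a spatial attention map over a frame's features. This is an illustration only, not the actual model: the function name, shapes, and random features are all hypothetical stand-ins for the CNN/ConvLSTM features described in the talk.

```python
import numpy as np

def spatial_attention_pool(features, hidden):
    """Pool a spatial feature map with an attention map derived from a
    recurrent hidden state (hypothetical shapes, for illustration).

    features: (H, W, C) frame features, e.g. from a CNN backbone
    hidden:   (C,) ConvLSTM-style hidden state summarizing past frames
    """
    # Score each spatial location by its alignment with the hidden state.
    scores = features @ hidden                      # (H, W)
    # Softmax over all locations -> an attention map that sums to 1.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Attention-weighted average: high-scoring regions (e.g. around the
    # needle) dominate the pooled feature vector.
    pooled = (features * weights[..., None]).sum(axis=(0, 1))  # (C,)
    return weights, pooled

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 8, 16))   # placeholder 8x8 feature map
h = rng.normal(size=16)               # placeholder hidden state
attn, pooled = spatial_attention_pool(feats, h)
print(attn.shape, pooled.shape)       # → (8, 8) (16,)
```

In the real model the attention map is learned end-to-end rather than computed from a fixed dot product, but the mechanism, reweighting spatial features by relevance before classification, is the same.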
As you can see, the attention helps the model focus on the relevant parts of the image and hence increases the accuracy greatly, and the kinematic data further enhances that. Having this rich information helps us accurately assess skills from videos and from kinematics. Our next goal is to ask: this was done on simulated data, so can we transfer it to real surgery videos, and how do we do that effectively? You can also see the richness of the attention map across these different stages: the attention focuses on where the needle action is, which is what matters; the other parts of the image are not so relevant for assessing skill. When the model can dynamically shift that focus, as we see happening here, we get much more accurate skill assessment.

In summary, what we are doing in this ongoing project is using AI and machine learning for automated assessment of surgical skills in robotic surgery. Having rich video information and auxiliary supervision such as kinematics makes this highly accurate, and combining it with state-of-the-art neural network models such as attention-based models helps us obtain really accurate results. As next steps, we ultimately want to transfer this to real videos, and we want to provide corrective feedback to surgeons. Asking how to provide feedback without increasing cognitive load, and how to personalize that feedback, is another aspect we are currently working on. So there is a rich set of problems here.

In the next few minutes, I want to quickly give you an overview of the computational platform we are building at NVIDIA for enabling AI in healthcare applications. There are so many facets to this, such as enabling large-scale drug discovery: computationally, how do we solve this at scale?
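The 95% AUC figure mentioned above is a rank-based metric: the probability that a randomly chosen acceptable-skill clip is scored higher than a randomly chosen unacceptable one. As a small illustration (with made-up labels and scores, not the study's data), AUC can be computed directly from that definition:

```python
def roc_auc(labels, scores):
    """Rank-based ROC AUC: probability that a random positive example
    outscores a random negative one (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical skill labels (1 = acceptable) and model scores.
y = [1, 1, 1, 0, 0]
p = [0.9, 0.8, 0.4, 0.5, 0.2]
print(roc_auc(y, p))  # → 0.8333... (5 of 6 positive/negative pairs ranked correctly)
```

An AUC of 0.95 therefore means that in 95% of such pairings the model ranks the acceptable-skill video above the unacceptable one.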
Medical imaging AI, as we've seen, can be very promising here, so we provide good models and assistance for AI-based medical imaging. Genomics, again, is undergoing a big-data revolution, so how do we process it at scale? And the smart hospitals I introduced this talk with can be enabled with NVIDIA Clara Guardian. As a quick overview, the NVIDIA Clara imaging platform gives you a range of accurate pre-trained models across many different modalities, assistance for labeling data very quickly, training models at scale, and deploying them on a range of platforms. To quickly show you: with AI-assisted annotation, just a few clicks let AI fill in the rest. The DeepGrow model intelligently assesses how to grow the region from a few clicks or annotations, thereby greatly decreasing the burden on annotators, and this is now available in 3D as well.

Another aspect of healthcare is data privacy. Data sharing is hugely problematic because of HIPAA and other regulations. What we have enabled is federated learning, which can preserve privacy while still learning across different data sets. We have now deployed this at many hospitals, especially around COVID data sets, to enable state-of-the-art models. And for smart hospitals, there is so much assistance we can provide in terms of pose estimation models for assessing patients, speech recognition, and heart rate monitoring. All of this will help us have good patient monitoring while keeping contactless controls. Drug discovery, as I said, is a huge aspect of healthcare, and we can enable large-scale computation with AI in every part of it.
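The core idea behind the federated learning mentioned above is that hospitals share model weights, never patient data, and a server aggregates them. A minimal sketch of one FedAvg-style aggregation round, with hypothetical weight vectors and site sizes, looks like this:

```python
import numpy as np

def federated_average(site_weights, site_sizes):
    """One federated-averaging aggregation round (illustrative sketch).

    Each hospital trains locally and shares only its model weights;
    the server averages them, weighted by each site's number of
    training examples. No raw patient data ever leaves a site.
    """
    total = sum(site_sizes)
    return sum(w * (n / total) for w, n in zip(site_weights, site_sizes))

# Hypothetical weight vectors from three hospitals after local training.
w_a = np.array([1.0, 2.0])
w_b = np.array([3.0, 0.0])
w_c = np.array([2.0, 2.0])
global_w = federated_average([w_a, w_b, w_c], site_sizes=[100, 300, 100])
print(global_w)  # → [2.4 0.8], pulled toward the largest site's weights
```

Production systems like NVIDIA Clara's federated learning add secure aggregation and differential-privacy mechanisms on top of this basic loop; the sketch only shows the weighted averaging step.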
As one aspect of this, in collaboration with my collaborators at Caltech and Entos, we have used domain-rich features such as molecular orbitals, together with graph neural networks, to build state-of-the-art AI models for quantum chemistry. Using molecular orbital features, we can train on just small molecules and directly transfer to large ones. With this, we can directly enable drug-discovery analytics such as geometry optimization, and do so much faster than traditional methods. To conclude, there are rich opportunities for AI in healthcare, and AI and computation are now at center stage. Collaboration between domain experts and AI experts together will lead to a rich future.
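For readers unfamiliar with graph neural networks, the basic operation they apply to a molecule is message passing: each atom aggregates features from its bonded neighbors, then transforms them. The following NumPy sketch shows one such step on a toy 3-atom graph; the random features stand in for the molecular-orbital-derived features used in the actual quantum chemistry work, which this does not reproduce.

```python
import numpy as np

def message_passing_step(node_feats, adjacency, weight):
    """One graph-neural-network message-passing step (illustrative).

    node_feats: (N, F) per-atom features (random placeholders here; in
                orbital-based models these come from quantum quantities)
    adjacency:  (N, N) bond connectivity, including self-loops
    weight:     (F, F) learned linear transform (random here)
    """
    # Average each atom's neighborhood (degree-normalized aggregation).
    deg = adjacency.sum(axis=1, keepdims=True)
    messages = (adjacency / deg) @ node_feats
    # Linear transform followed by a ReLU nonlinearity.
    return np.maximum(messages @ weight, 0.0)

# Toy "molecule": atoms 0-1 and 1-2 bonded, self-loops on the diagonal.
adj = np.array([[1.0, 1.0, 0.0],
                [1.0, 1.0, 1.0],
                [0.0, 1.0, 1.0]])
rng = np.random.default_rng(1)
x = rng.normal(size=(3, 4))
w = rng.normal(size=(4, 4))
h = message_passing_step(x, adj, w)
print(h.shape)  # → (3, 4): updated per-atom features
```

Stacking several such steps lets information propagate across the whole molecular graph, which is what allows models trained on small molecules to transfer to larger ones.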
Video Summary
In this video, Anima Anandkumar, a professor at Caltech and director of ML research at NVIDIA, discusses the ongoing AI revolution in healthcare. She highlights various areas where AI can revolutionize hospitals, such as patient monitoring, contactless controls, body temperature screening, safe social distancing, fall prediction and prevention, and surgery analytics. Anandkumar also talks about a project in collaboration with Dr. Andrew Hung at USC Keck Medicine, which aims to analyze surgical practices and provide feedback to surgeons using AI methodologies and robotic surgical practices. The project focuses on classifying suture gestures and uses state-of-the-art neural network architectures, including attention-based models, to assess surgical skills from videos and kinematics data. The goal is to transfer these findings to real surgery videos and provide corrective feedback to surgeons. Anandkumar also briefly discusses NVIDIA's computational platform, Clara, which enables AI in healthcare applications, including drug discovery and medical imaging. The platform offers pre-trained models, assistance for data labeling, and federated learning to preserve privacy. Anandkumar concludes by emphasizing the vast opportunities for AI in healthcare and the importance of collaboration between AI and domain experts.
Asset Subtitle
Anima Anandkumar, PhD
Keywords
AI revolution in healthcare
patient monitoring
surgery analytics
neural network architectures
NVIDIA's computational platform