The World of ML and AI: A Primer of Next Generation Technology
Video Transcription
What we'll do is get right into our next keynote talk. Again, it's my pleasure to introduce Anima Anandkumar. Anima is a professor at Caltech and director of ML research at NVIDIA. In her previous life, she was a principal scientist at Amazon Web Services, and she has multiple awards to her name, including faculty fellowships from Microsoft, Google, Facebook, and Adobe. She's part of the World Economic Forum's expert network, and her research focus has been on unsupervised AI, optimization, and tensor methods. So with that, Anima, I'm going to turn it over to you.

Hi, everyone. Thank you, and thank you all for making this on a weekend, on a Saturday. Today I want to give you an overview of the various efforts we are making to push AI into healthcare applications, both in my role at NVIDIA and at Caltech, as a professor of computing and mathematical sciences.

These are exciting times with the deep learning revolution. We've seen what I call the trinity of AI, the data, the algorithms, and the GPU compute, come together to create large-scale models that consume all these data sets and make a very important impact in multiple domains. Healthcare is one domain that is primed for such an impact. If you think about recent trends, medical imaging has been at the forefront of this deep learning revolution, in particular the use of AI for imaging research. Even if you don't want to completely replace radiologists, who have deep domain expertise, can we speed up the process? Can we augment them? Can we help them do a much more thorough job of getting the right insights from imaging? That's one of the exciting applications of AI.

In addition, we also see the trend of more intelligence moving to the edge. Edge compute has been an important focus for NVIDIA: how can we build more powerful computing into small devices, whether autonomous cars, robots, drones, medical scanners, or other sensors and devices? You then want to connect them together, and that's where we have the Internet of Things. You can have an Internet of Medical Things, where various medical devices are all connected, and in real time you have monitoring and all kinds of important insights about the patient. So these are important trends we see today: the use of AI and of smart, connected devices on the edge.

NVIDIA Clara has made a lot of contributions to enabling healthcare and democratizing the use of AI, so that AI models can be employed easily even by non-experts, who don't necessarily have to write code in order to use the AI algorithms. It started with medical imaging, but we now also have genomic analysis, and Clara Discovery for drug discovery was recently announced: we take the popular algorithms for drug discovery, speed them up on GPUs, and make them available. Finally, there is comprehensive use of AI in all parts of the hospital, what's known as Clara Guardian for smart hospitals. Ultimately it's this confluence, where AI gets employed in all aspects of our lives, and in healthcare that means everything from hospital processes to medical imaging, drug discovery, and genomics. These are some exciting areas where AI is making a lot of impact, and some of these are recent announcements from GTC, which happened early this month. GTC is the GPU Technology Conference.
That's the premier event for GPU-related products, announcements, and innovations. Clara Discovery was announced there, and it's available in NGC, meaning it's available as a container for people to deploy these models and applications in their pipelines. We also announced many new supercomputers: the one in Cambridge was announced a few weeks ago, and more recently we'll also have one in Italy. Having these latest supercomputers is so important to our ability to do the latest research and to speed up hard problems such as drug discovery, so we can do this in record time, like what we are seeing now for COVID, where everything from treatments to vaccines is being developed in record time.

That is also where we launched federated learning for COVID, and that happened in a short amount of time. It has been a very impressive effort to get many hospitals to come together to help speed up patient care for COVID: chest X-rays and all kinds of other important metrics such as blood pressure and respiratory rate, all the EHR data, coming together, with models that are ready to consume all these kinds of data. The global model is well trained, and as you can see, it has a high AUC. But hospitals have all kinds of privacy requirements and don't want to share their data with the global model. Through federated learning, they can preserve privacy and still make use of these powerful AI models. This Clara federated learning is now available for hospitals to use.

More broadly, AI for science is seeing exciting developments. Jensen Huang, our CEO, featured many of these exciting use cases for AI on one of the slides in his keynote. As you can see, fighting the coronavirus, of course, features very prominently given its importance, along with drug discovery, drone deliveries, and walking robots. So we are going beyond the most popular uses of AI in computer vision or language to harder problems, whether it's human-like chatbots, adaptive robots, or doing drug discovery faster. We are pushing the boundaries of what AI can do every year. The drug discovery project that uses deep learning to speed up quantum calculations I'll show towards the end of the talk; it shows how combining domain knowledge with AI models is so critical, because if AI models are completely blind to the domain, they won't be as effective. These are aspects I'll show as we go through the talk.
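To make the federated learning approach described above a bit more concrete, here is a minimal sketch of federated averaging in plain PyTorch. This is not Clara Federated Learning or any NVIDIA API; the toy classifier, the random stand-in "hospital" datasets, the number of rounds, and the weighting-by-dataset-size aggregation are all assumptions made for illustration. The point is simply that sites exchange model weights, never patient data.

```python
# Minimal federated-averaging sketch (illustrative only, not Clara's implementation).
# Each "hospital" trains locally on its own private data; only the resulting model
# weights are averaged on the server, so raw patient data never leaves the site.
import copy
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def make_model():
    # Hypothetical stand-in for a chest X-ray classifier.
    return nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128), nn.ReLU(), nn.Linear(128, 2))

def local_update(global_state, loader, local_epochs=1, lr=1e-3):
    """One hospital's local training round, starting from the current global weights."""
    model = make_model()
    model.load_state_dict(global_state)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(local_epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict(), len(loader.dataset)

def federated_average(states_and_sizes):
    """Average the site updates, weighted by each site's dataset size."""
    total = sum(n for _, n in states_and_sizes)
    avg = copy.deepcopy(states_and_sizes[0][0])
    for key in avg:
        avg[key] = sum(state[key].float() * (n / total) for state, n in states_and_sizes)
    return avg

# Toy private datasets for three hospitals (random tensors as placeholders).
loaders = [
    DataLoader(TensorDataset(torch.randn(32, 64 * 64), torch.randint(0, 2, (32,))), batch_size=8)
    for _ in range(3)
]

global_state = make_model().state_dict()
for round_idx in range(5):                        # communication rounds
    updates = [local_update(global_state, dl) for dl in loaders]
    global_state = federated_average(updates)     # server aggregates weights only
```

In a real deployment the aggregation would also add protections such as secure aggregation or differential privacy; this sketch only shows the basic weight-averaging loop.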
To give an idea of what NVIDIA Clara is able to do on the medical imaging platform, we have a whole host of pre-trained models for various modalities: MRI, CT, X-ray. Having pre-trained models means you don't have to worry about training models from scratch; you may have only a small amount of data, but you can make use of pre-trained models and then do transfer learning. But to do any form of learning, the models and algorithms we have today expect supervised data, that is, annotations. And annotations are usually very expensive for medical images, because the data is sometimes 3D, you have to label the boundaries very carefully, and it also requires domain expertise to figure out what the labels should be. Given this, we ask whether AI can assist the annotation.

Of course, the AI model may not be able to do the annotation accurately, for all kinds of reasons, and ultimately you want to use this data to get a better model. That's why we would like AI to assist: the human can look at those annotations and quickly decide whether corrections need to be made. Then, once we get to training, transfer learning is a very popular approach, because you can make use of pre-trained models. You can also do federated learning if you do not want to share your data with the cloud, so you can preserve privacy and still get a better model. And AutoML lets you automatically tune the various hyperparameters and get to good models.

There are then various ways to deploy the model. One is reconstructing the images, which is usually very expensive, so doing it on GPUs really speeds it up, and using deep learning for reconstruction gives you further speedups. Or you can use the model for other inferencing tasks: maybe you want to segment an image and write the result to disk, or combine multiple different insights together. As you can see, this is a natural pipeline, starting from pre-trained models and AI-assisted annotation, to training the models, to deploying them, with AI used in all parts of the pipeline.

There is a variety of AI-assisted annotation models available, and both TensorFlow and PyTorch are supported. For the segmentation models, there are various papers showing how much these can speed things up, and in many cases the speedups are very significant, which shows this has widespread applicability. There is, again, a variety of pre-trained models for many different organs, whether it's brain, liver, spleen, chest, and so on, and both 3D and 2D models are available.

To dive a bit more into the details of what the Clara platform for medical imaging supports: there is a whole family of models, as I described, for various organs. There is a set of different AI-assisted annotation capabilities; maybe you want to fully segment the image, or you just want to put polygons or boxes around the object of interest, such as a tumor. These are things you can do with this tool. The training then involves transfer learning, federated learning, or learning from scratch, and we've also enabled a lot of efficient GPU speedups that make it possible to use these models seamlessly, with lots of different pipelines put together. And Clara AGX means you can now deploy these models on edge devices, such as scanners and other medical devices.

Indeed, GPU optimization can really save you time, both in training and in deployment, and enabling all of this in the Clara platform means you can make use of the latest GPUs in the best possible way. You can also put multiple organ models together, collect the overall insights, and of course visualize and render them together. Here's an example with lung, spleen, liver, and colon: each organ first has an individual model, then you reconstruct the tumors, and then you have the overall visualization. Ultimately, we have to think of the human body in a holistic way, and if we can glean insights from various different models and combine them, that can be much more effective.
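As a rough illustration of the transfer-learning step in the pipeline described above, here is a sketch of fine-tuning a pre-trained segmentation network in plain PyTorch. It is not the Clara Train API: the tiny network, the soft Dice loss, the checkpoint path in the comment, and the choice to freeze the encoder and fine-tune only the decoder are all assumptions made for the example.

```python
# Transfer-learning sketch for a small segmentation model (illustrative only;
# plain PyTorch, not the Clara Train API).
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """A toy encoder-decoder standing in for a pre-trained organ segmenter."""
    def __init__(self, out_channels=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, out_channels, 1),
        )
    def forward(self, x):
        return self.decoder(self.encoder(x))

def soft_dice_loss(logits, target, eps=1e-6):
    """Soft Dice loss on the foreground channel (common for medical segmentation)."""
    probs = torch.softmax(logits, dim=1)[:, 1]
    inter = (probs * target).sum()
    return 1 - (2 * inter + eps) / (probs.sum() + target.sum() + eps)

model = TinySegNet()
# In a real pipeline you would load pre-trained weights here, e.g.:
# model.load_state_dict(torch.load("pretrained_spleen_model.pt"))  # hypothetical path

# Transfer learning: freeze the encoder, fine-tune only the decoder head
# on the small, newly annotated dataset.
for p in model.encoder.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(model.decoder.parameters(), lr=1e-4)

# Toy fine-tuning data: one batch of 64x64 images with binary masks.
images = torch.randn(4, 1, 64, 64)
masks = torch.randint(0, 2, (4, 64, 64)).float()

model.train()
for step in range(10):
    optimizer.zero_grad()
    loss = soft_dice_loss(model(images), masks)
    loss.backward()
    optimizer.step()
```

Freezing the encoder is just one common choice when the new dataset is small; with more annotated data you would typically unfreeze everything and fine-tune end to end at a lower learning rate.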
Finally, some pointers on how to get started with Clara: it's available on the website, and I encourage you to go check it out. There are lots of great tutorials, and it's available for download today.

For the next part, I quickly want to talk about a project I've been involved in, where we've used computer vision to recognize robotic surgery gestures, so that ultimately we can see how effective a surgery was and predict the outcome based on that. The idea is that there are both videos of live surgery and simulations in which the doctors are trained. For the current project, we've only used the live surgery, but now we are also exploring how to use the simulated data, because simulations can greatly make up for the lack of real data in such applications. If we have a physically valid simulation, then even though it doesn't look real and has a very different appearance from live surgery, the kinematics and other properties will, we hope, transfer well, especially for something like needle driving, where you're only interacting with a surface in a small area. So there is a lot of potential in combining real and simulated data.

It's important that we recognize these gestures and rate them, to ultimately look at how effective a surgeon was and what the outcomes might be based on these different gestures. For instance, it's been shown that if a lot of time is spent on the random gesture, the one that can't be classified into any of the known gestures, that tends to end up with more needle driving effort and more attempts, which ultimately could result in tissue trauma, which is bad. So what we did in the paper, which is now published in the Journal of Surgery, is look at the video and ask whether we can do five-way classification of gestures, from forehand under to backhand under. Can these different gestures be automatically detected by deep learning methods? We used a two-stream deep learning model, with LSTM models to capture temporal features. This is a first attempt at using action recognition models from deep learning to automatically recognize these gestures.
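To illustrate the kind of architecture mentioned for the gesture work, here is a minimal sketch of a two-stream model with an LSTM over time, in PyTorch. This is not the published surgical-gesture model: the layer sizes, the use of simple frame differences as the motion stream (rather than optical flow), and the input resolution are all assumptions for the example; only the five gesture classes come from the talk.

```python
# Sketch of a two-stream video classifier with an LSTM for temporal features
# (illustrative only, not the published surgical-gesture model).
import torch
import torch.nn as nn

class StreamEncoder(nn.Module):
    """Small CNN that turns one frame (or motion image) into a feature vector."""
    def __init__(self, in_channels, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
    def forward(self, x):
        return self.net(x)

class TwoStreamLSTM(nn.Module):
    """Appearance stream + motion stream, fused per frame, then an LSTM over time."""
    def __init__(self, num_classes=5, feat_dim=64):
        super().__init__()
        self.rgb_stream = StreamEncoder(3, feat_dim)     # raw frames
        self.motion_stream = StreamEncoder(3, feat_dim)  # frame differences as a cheap motion proxy
        self.lstm = nn.LSTM(2 * feat_dim, 128, batch_first=True)
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, frames):
        # frames: (batch, time, channels, height, width)
        b, t, c, h, w = frames.shape
        motion = frames[:, 1:] - frames[:, :-1]           # simple temporal differences
        frames = frames[:, 1:]                            # align the two streams
        rgb_feat = self.rgb_stream(frames.reshape(-1, c, h, w)).view(b, t - 1, -1)
        mot_feat = self.motion_stream(motion.reshape(-1, c, h, w)).view(b, t - 1, -1)
        seq = torch.cat([rgb_feat, mot_feat], dim=-1)
        _, (h_n, _) = self.lstm(seq)
        return self.classifier(h_n[-1])                   # logits over the 5 gestures

model = TwoStreamLSTM()
clip = torch.randn(2, 8, 3, 64, 64)                       # 2 clips, 8 frames each
logits = model(clip)                                       # shape: (2, 5)
```

Training would pair such clips with gesture labels and use a standard cross-entropy loss; the published work would differ in the backbone and in how motion is represented.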
In another project, to quickly give you an overview of how we can use deep learning to speed up drug discovery, and ultimately our understanding of molecules, through the computation of quantum energies: what we did was combine molecular orbital features, which are well known in chemistry to be easily transferable across molecules, with deep learning models such as graph neural networks. This combination of traditional molecular orbital features with graph neural networks means we get good transferability, even to larger molecules. Ultimately, we want to use these methods to compute energies of molecules with thousands of atoms. If you use the traditional methods, such as density functional theory (DFT), they're extremely expensive, so you cannot scale them up to such large molecules in a reasonable amount of time, and if you want to do drug discovery, you have to keep searching over so many different molecules. This is where OrbNet can speed things up significantly without losing fidelity: you get a thousand-fold speedup in simulation without paying a cost in accuracy, and you can do this in a data-efficient way. Ultimately, the idea is that we can train on small molecules, transfer to much larger molecules, and predict all kinds of properties, such as solubility.

So what I showed today was an overview of the Clara platform for medical imaging, but also for genomics, drug discovery, and all these important applications in healthcare. And then I gave you an overview of some of the latest research, from using deep learning to understand surgery videos, to using deep learning to speed up traditional methods for computing quantum energies, ultimately in the service of drug discovery. Thank you.
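As a rough illustration of the OrbNet-style idea described in the talk, combining transferable quantum-chemistry features with a graph neural network, here is a minimal message-passing sketch in plain PyTorch. It is not OrbNet: the feature dimensions, the random "orbital-like" node and edge inputs, and the per-atom energy readout are placeholders, and the real method builds its features from semi-empirical quantum-chemistry calculations, which this sketch does not perform.

```python
# Minimal message-passing sketch in the spirit of combining precomputed
# chemistry features with a graph neural network (illustrative only;
# the input features here are random placeholders, not orbital matrices).
import torch
import torch.nn as nn

class SimpleMPNN(nn.Module):
    """A few rounds of message passing over atoms, then a summed energy readout."""
    def __init__(self, node_dim=16, edge_dim=8, hidden=64, steps=3):
        super().__init__()
        self.embed = nn.Linear(node_dim, hidden)
        self.message = nn.Sequential(nn.Linear(2 * hidden + edge_dim, hidden), nn.ReLU())
        self.update = nn.GRUCell(hidden, hidden)
        self.readout = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.steps = steps

    def forward(self, node_feats, edge_index, edge_feats):
        # node_feats: (n_atoms, node_dim), e.g. orbital-derived descriptors per atom
        # edge_index: (2, n_edges) pairs of atom indices
        # edge_feats: (n_edges, edge_dim), e.g. orbital-overlap-style pair features
        h = self.embed(node_feats)
        src, dst = edge_index
        for _ in range(self.steps):
            msg = self.message(torch.cat([h[src], h[dst], edge_feats], dim=-1))
            agg = torch.zeros_like(h).index_add_(0, dst, msg)   # sum messages per atom
            h = self.update(agg, h)
        per_atom_energy = self.readout(h)                       # (n_atoms, 1)
        return per_atom_energy.sum()                            # extensive total "energy"

# Toy molecule: 5 atoms, 8 directed edges, random placeholder features.
node_feats = torch.randn(5, 16)
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3, 3, 4],
                           [1, 0, 2, 1, 3, 2, 4, 3]])
edge_feats = torch.randn(8, 8)

model = SimpleMPNN()
energy = model(node_feats, edge_index, edge_feats)   # scalar prediction
```

Summing per-atom contributions makes the prediction extensive in molecule size, which is one reason such models can be trained on small molecules and then applied to larger ones, as described above.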
Video Summary
In this video, Anima Anandkumar, a Professor at Caltech and Director of ML Research at NVIDIA, discusses the efforts being made to push AI into healthcare applications. She highlights the use of deep learning in medical imaging research, aiming to augment and speed up the work of radiologists. Anandkumar also discusses the trend of moving more intelligence to the edge, such as using AI in autonomous cars, robots, and medical scanners, and connecting these devices through the Internet of Medical Things. She mentions NVIDIA Clara, a platform that enables the use of AI in healthcare and has applications in medical imaging, genomic analysis, drug discovery, and comprehensive use of AI in hospitals. Anandkumar emphasizes the importance of GPU optimization in healthcare, as it can save time in both training and deployment of AI models. She also mentions the use of federated learning for preserving privacy while still utilizing powerful AI models in healthcare settings. In addition to healthcare, Anandkumar briefly discusses the application of AI in other fields, such as fighting coronavirus, drug discovery, robotics, and quantum calculations. She concludes by encouraging viewers to explore NVIDIA Clara and discusses the use of deep learning models for recognizing surgical gestures and speeding up drug discovery through the computation of quantum energies.
Asset Subtitle
Anima Anandkumar, PhD
Keywords
AI in healthcare
deep learning
medical imaging research
edge intelligence
NVIDIA Clara
GPU optimization