Video Tip: Artificial Intelligence for Endoscopy
Video Transcription
This ASGE video tip is sponsored by Braintree, maker of the newly approved SUFLAVE and SUTAB. AI is complex because there is a huge number of terms that are very hard to disentangle, and most of us didn't learn any of them in medical school. The way to distinguish the terms, in general, is to think about what the AI tool does, where there are a variety of important medical applications ranging from computer vision to drug discovery to robotics, and then how the AI does it. This is where we start getting into nested terms: machine learning is a subset of AI, and deep learning is a subset of machine learning. We're going to talk a little bit about what these terms actually mean and how they fit together.

The way we're going to learn that is with the dog-or-food computer vision challenge. This is a classic challenge: as a human, if you look at these images of a chihuahua, a blueberry muffin, a goldendoodle, and fried chicken, you don't struggle much to tell them apart. In fact, I would wager that every one of you can get each one exactly right 100% of the time. But you can also recognize that this is a very challenging problem if you're trying to train a computer to recognize these images. So let's talk about what we can learn about machine learning and deep learning from the dog-or-food challenge.

Historically, this challenge was approached with traditional programming: a programmer would sit in a dark room and write a hundred lines of code, essentially if-then commands saying that if a certain number of pixels had a particular shape or color, the image was more likely a dog, and if it had some other characteristics, it was more likely food. You'd write two hundred lines of code, try it, find that you were 60% accurate, get angry, and write a hundred more lines. That is essentially how traditional programming approached computer vision for years. It was even tried for colon polyps, with coders just trying to describe what polyps look like.

The big leap with machine learning is the concept that the human can work with a premade algorithm, one that is already built to recognize certain aspects of the world. That algorithm can then be exposed to labeled data, which can be incredibly voluminous: 100,000 pictures, 200,000, a million pictures, each paired with a desired output or label. When an algorithm is exposed to data, the result is what's called a model. You'll hear the term model a lot; it basically means a trained algorithm. Here it gives us a model that can predict dog or not dog.

The even bigger leap, though, happens with deep learning. What I didn't tell you about traditional machine learning is that the human still has to provide some guidance about the critical features of the image. This is called feature extraction. In traditional machine learning, we would sit down with a programmer and say, the snout, the eyes, and the ears are important features, so let's make sure the algorithm focuses on those. Deep learning does that feature extraction step on its own. It identifies the features that matter to its own algorithm, and it may recognize features that we don't think are important at all, or patterns that we can't recognize.
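To make the distinction concrete, here is a minimal sketch of the two approaches described above. It uses synthetic stand-in arrays rather than real dog and food photos, and scikit-learn's logistic regression as the "premade algorithm"; none of these choices reflect a specific tool mentioned in the video.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in "images": 8x8 grayscale grids flattened to 64 pixel values,
# with purely synthetic labels (1 = dog, 0 = food) for illustration.
images = rng.random((200, 64))
labels = rng.integers(0, 2, size=200)

# Traditional programming: a human writes explicit if-then rules about pixels.
def rule_based_classifier(pixels):
    # e.g. "if the average brightness is high, guess dog" -- hand-tuned and brittle
    return 1 if pixels.mean() > 0.5 else 0

# Machine learning: a premade algorithm is exposed to labeled data;
# the trained result is what the video calls the "model".
algorithm = LogisticRegression(max_iter=1000)
model = algorithm.fit(images, labels)      # algorithm + labeled data -> model

prediction = model.predict(images[:1])     # apply the model to a new image
```

A deep learning approach would go one step further: instead of a human deciding which pixel statistics or features matter, a neural network would learn its own features directly from the raw pixels.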
The big excitement with deep learning, then, is that it can potentially recognize patterns that no human could ever possibly recognize. For instance, if we gave it pictures of 100 million chihuahuas along with all of their medical and geographic history, a deep learning system might be able to recognize hypothyroid chihuahuas from New Jersey. We may not necessarily want it to do that, but it is going to be able to recognize patterns that are beyond human comprehension.

So let's roll into some common AI myths. In fact, one of the most important myths is that AI is free of bias; AI actually has incredible amounts of embedded bias that we need to take seriously. Myth number one: AI is more robust than human intelligence. This is actually both true and false. AI tools are incredibly powerful at recognizing the kinds of patterns I mentioned that we can't recognize as humans. A few good examples: there are AI tools that can recognize cirrhosis or anemia based on an EKG. These are not patterns that humans can recognize. Perhaps more troublesome, there are AI tools that can predict race and ethnicity based on chest X-rays or hand X-rays, and you can imagine how complicated that becomes if models that make treatment recommendations have this sort of information embedded in them.

The flip side, though, is that AI predictions may be completely broken by subtle perturbations. A classic example is an AI system that was trained at a university hospital; when it was applied at a community hospital down the street with a slightly different imaging system, GE versus Philips or whatever, the system couldn't detect a single lung nodule. A human radiologist trained at, say, Stanford who walks down the street to a local hospital will be able to sit down in the dark room and, regardless of the system, still do a pretty good job of recognizing lung nodules. There is an aspect of human intelligence that is incredibly robust: we can move between different environments very easily. It is a huge problem for computers that they may not be able to do that.

Additionally, AI completely lacks common sense. This is the cause of many of the errors AI ultimately makes, because it can end up focusing on the wrong thing. A classic example is a system that was trained to recognize melanoma. It did so with very high accuracy, but the researchers subsequently recognized that in many cases it was keying on the fact that melanomas in the training images were usually marked with ink next to the lesion. The system wasn't trying to cheat; it just has no common sense, and so it ended up using the ink marking as one of its major cues for whether a melanoma was present.
Video Summary
The video discusses the complexity of AI and the terms surrounding machine learning and deep learning. It explains how traditional programming and machine learning differ, highlighting the role of feature extraction in machine learning; deep learning automates this step and can potentially identify patterns beyond human comprehension. Common AI myths are then debunked, emphasizing the bias embedded in AI and its limitations compared with human intelligence. AI's lack of common sense is illustrated through examples of the errors it produces. The video stresses the need to understand these intricacies for unbiased and effective implementation of AI in various fields.
Keywords
machine learning
deep learning
feature extraction
AI myths
human common sense