Gastroenterology and Artificial Intelligence: 2nd ...
The Algorithms An Alphabet Soup_Bagci
PDF Summary
The document discusses the drawbacks of current AI algorithms and argues for Explainable AI (XAI) as a basis for trust in AI systems. It highlights failures caused by non-robust algorithms and stresses the need for models that are explainable rather than fragile. It introduces interpretable deep learning (DL), meaning the ability to interpret DL models in a domain-specific way. Current XAI approaches are surveyed, including perturbing input pixels to observe the effect on predictions and the use of attention mechanisms. These approaches have drawbacks, however: they are post hoc and offer neither reasoning nor visual attributes.

The document then proposes inherently explainable algorithms that are robust rather than fragile. As an example, it presents a human-in-the-loop XAI system that combines human expertise with AI to achieve better results: multi-parametric image data are integrated, and eye tracking is used to identify areas at risk of missed diagnosis. The document suggests that eye tracking can also support fast segmentation and ground-truth labeling.

Overall, the document emphasizes the need for transparency, trust, and understanding in AI algorithms. It concludes that XAI can increase transparency and build trust by providing interpretations of how DL models work, and it notes the importance of fair and causal models in AI.
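The pixel-perturbation approach mentioned above can be illustrated with a minimal occlusion-sensitivity sketch: slide a masking patch over the input and record how much the model's score drops when each region is hidden. The `toy_predict` model, patch size, and baseline value below are hypothetical stand-ins, not anything specified in the document.

```python
def occlusion_sensitivity(image, predict, patch=4, baseline=0.0):
    # image: 2-D list of pixel intensities; predict: callable image -> score.
    # Occlude one patch at a time and record the score drop; larger drops
    # mark regions the prediction depends on (a post-hoc XAI heatmap).
    h, w = len(image), len(image[0])
    base_score = predict(image)
    heatmap = []
    for i in range(0, h - patch + 1, patch):
        row = []
        for j in range(0, w - patch + 1, patch):
            occluded = [r[:] for r in image]  # copy, then mask one patch
            for y in range(i, i + patch):
                for x in range(j, j + patch):
                    occluded[y][x] = baseline
            row.append(base_score - predict(occluded))
        heatmap.append(row)
    return heatmap

# Hypothetical stand-in "model": scores an image by the mean intensity of
# its top-left 8x8 quadrant, so only occluding that quadrant lowers it.
def toy_predict(img):
    vals = [img[y][x] for y in range(8) for x in range(8)]
    return sum(vals) / len(vals)

img = [[1.0] * 16 for _ in range(16)]
heatmap = occlusion_sensitivity(img, toy_predict, patch=8)
```

Here the heatmap cell covering the top-left quadrant shows the full score drop and the other cells show none, which is exactly the kind of attribution map the perturbation approach produces; its post-hoc nature (probing a trained model from the outside) is the drawback the document raises.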
Keywords
AI algorithms
Explainable AI
trust in AI systems
interpretable deep learning
XAI approaches
altering pixels
attention mechanisms
inherently explainable algorithms
human-in-the-loop XAI
transparency in AI