AI Research and New Product Pipeline Networking Session
Video Transcription
So, we're going to keep this relatively informal. It's really a networking session, talking about AI research options and ideas in AI. We'd like to give you all an opportunity to give us a few words about your company, and then we have a few questions, and we'd love audience participation, both from our virtual audience as well as in person. If you have questions on how you'd like to engage with industry, or questions on regulation for Nicholas, any of those. So why don't we just get started? Oh, we do have a, perfect. Let's start in the back row. I think so. Go for it.

Yeah. My name is Phil Hoffman. I'm with CVX Diagnostic. I'm the CEO there. I'm probably the outsider in this group because our technology is not an optical AI product. We're thrilled to be here because we've been talking about AI for about 10 years, and I can tell you that for the first five years we had to stop talking about it, because we got doors slammed on us; people were scared and didn't know what it was. So the more this committee and this group can bring AI to light, the happier we are. We're basically a pathology AI product, so we work in GI. And thanks to the ASGE, we're in their guidelines for the screening and surveillance of Barrett's esophagus and dysplasia. Basically what we're doing is an advanced biopsy: intraoperatively, during the procedure, the physician takes an advanced biopsy with specialized brushes and sends us that sample. We put it through a rigorous process where we image everything, using two proprietary technologies. One is called EDF, where we literally do a CT scan of the very thick tissue samples that we get. The key to our AI is that we have a huge library, and we have taught a neural net to identify, across the entire images, all the cells at risk. So what we then do is present our pathologists with a tile view of the images that are at most risk. At the end of the day, we're enabling our pathologists and enhancing their ability to do a great job. What we're doing is trying to make sure that they are identifying and not missing a thing. So they'll get a tile view from most at risk to least at risk, and across the board. Our inter-observer scores are huge. And we've shown, in multiple peer-reviewed journal pieces, over 200% added yield for additional Barrett's found in screening patients, and over 275% added yield for dysplasia. That's on an N of over 13,000 patients done in the community. So we're working in AI. We try and walk this down a pathway with our GI friends. It's not something that they have to do. We really focus on ease of use and how easy it is to integrate into a practice. There's no capital outlay whatsoever; we provide the tools for them at no charge. It only takes about three to four minutes during a procedure. And we like to enhance the opportunity of finding pre-cancerous cells, so a patient could go on to get RFA. We're trying to avoid the situation where it goes on to cancer and becomes a resection. That's what we're doing in AI. We've been doing it for over 10 years. And every so often we upgrade our neural network. We have a library of over 250,000, and we look in the positive neighborhood to teach this AI and make our neural net smarter every other year. Thanks, Phil.
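As a rough illustration of the tile-triage workflow described above, where a neural net scores every region of an imaged sample and the highest-risk tiles are surfaced to the pathologist first, here is a minimal sketch. It is not CVX's actual pipeline; the tile size, the scoring stand-in, and the `top_k` cutoff are assumptions for illustration only.

```python
import numpy as np

TILE = 256  # assumed tile edge length in pixels

def tile_image(image: np.ndarray):
    """Split a large scanned sample into fixed-size tiles, keeping each tile's coordinates."""
    h, w = image.shape[:2]
    for y in range(0, h - TILE + 1, TILE):
        for x in range(0, w - TILE + 1, TILE):
            yield (y, x), image[y:y + TILE, x:x + TILE]

def risk_score(tile: np.ndarray) -> float:
    """Stand-in for a trained neural net returning the probability the tile contains at-risk cells."""
    return float(tile.mean()) / 255.0  # dummy heuristic; a real system would call the model here

def triage(image: np.ndarray, top_k: int = 20):
    """Rank tiles from most to least at risk, so the pathologist reviews the riskiest regions first."""
    scored = [(risk_score(tile), coord) for coord, tile in tile_image(image)]
    scored.sort(reverse=True)      # most at-risk first
    return scored[:top_k]          # the "tile view" presented for review
```

The point of the ranking, as the speaker describes it, is not to replace the pathologist but to order their attention, so at-risk cells are not buried among thousands of unremarkable tiles.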
Hi, I'm Austin Chang. I'm an advanced endoscopist, and I'm also chief medical officer for Medtronic GI. We make the GI Genius system, the first CADe system on the market, which has been shown several times today. So we are very excited to be in this space, and also very excited to have a great partnership with ASGE and to support this program today, as well as some other initiatives like the health equity assistance program, which includes not only ASGE but also Amazon Web Services, to provide units across the country to certain underserved areas. So very happy about that. And then I'll also pass it on to Dustin in a little bit, my colleague here with Medtronic. I don't know if we want to let Dustin go first.

I'll just introduce myself: Dustin Atkinson. I manage the AI program with Dr. Chang. I won't repeat the stats about Medtronic, but we're happy to be here. It's good to see the collective group of industry and society all coming together to advance AI. It feels like there's a lot of momentum in this space, and these events are really amazing for keeping that momentum going. So I look forward to the discussion.

I'm Andrew Barbarino from Fujifilm. I'm a product manager for our core GI portfolio here in the U.S., and my job is kind of equal parts upstream and downstream, working with our R&D team; we have a team both here in the United States as well as in Japan, where most of our imaging technology is developed. And I really want to start by saying I've really enjoyed this discussion, the dialogue, and all the information today. Since I started with Fuji, I've been really passionate about this topic and have been thrilled to see the investment in developing a strong AI pipeline. But it's challenging, for a lot of the reasons we've talked about today. There are unanswered questions. And for us, of course, as a global business, there are different needs and different levels of impact that we can have in different parts of the world. So it's really valuable to me to understand what the needs and expectations are in terms of data. And while today was extremely helpful, I have so many questions coming out of it as well. So I look forward to more discussion, and thank you again for having me.

Hello, everyone. This is Sean from AI Medical Service. First of all, thank you for the opportunity to be here. There have been very interesting discussions so far, and I've met a lot of interesting people, so thank you for that. AI Medical Service is a Japanese medical startup company. We're known for a lot of the research that we've published regarding various AI algorithms built for H. pylori status, esophageal squamous cell carcinoma, gastric cancer detection, et cetera. In making a product and releasing it into the market, we decided on gastric cancer because of the great clinical need that exists in that space. Particularly in the United States, most gastric cancers are diagnosed at stage three or four. By offering a product which can help identify and detect those subtle lesions, perhaps we can move the trend towards stage one and two and have a big impact on survival and disease prevalence. So I'm happy to be here today. Currently, we are pursuing regulatory approval for that gastric cancer product in Japan with the PMDA, and we are also engaging in a regulatory strategy and approval process with the FDA, in addition to lots of collaborative research with institutions in the United States. So I'm hoping that I can expand that network and meet some of you wonderful folks today.
Thank you. Thank you. Oh, I could use one. Hello, everyone. Frank Feliciano from Olympus Corporation of the Americas. I've been with Olympus for 25 years, and most of my career at Olympus has been focused on upstream marketing. And the biggest thing about Olympus, from my perspective, has been the introduction of NBI, which has been a little bit of a disappointment for me personally, because it wasn't embraced the way I would have liked to see it embraced. Having said that, I really believe that AI is going to change that, because the PIVI is what we were really hoping to achieve with NBI, and I think it can be achieved with AI, as we've seen earlier. You know, we have a long history of success in the GI space. Our focus, as you well know, has historically been on specialized imaging quality and handling, specifically insertion. Recently, however, we have determined that digital solutions are needed to continue the evolution of innovation quickly and cost-effectively. This has not been a core strength of Olympus, but it is one that we are definitely dedicated to achieving. About three years ago, we embarked on a digital transformation across our entire medical and surgical portfolio of products. This is a journey, not an endpoint for us, and we really believe that it will dramatically change what we can bring to market. We have employed a broad spectrum of engineering, data science, and regulatory expertise in an effort to achieve this ambitious goal. So we're coming from a classic capital products company and refocusing our energies towards more of a digital solutions organization. Our vision, to this end, is to develop a platform that can holistically sit at the intersection of our hardware, hospital IT, and the cloud, to be able to bring this value to patients, providers, and insurers within the GI space. We acknowledge that a combination of organic and inorganic solutions is necessary to address the many pain points that we have not been able to address in the past, but we look forward with great promise to addressing them. As many of you are probably asking, well, where is Olympus in AI? We have solutions that are launched in Australia and in Europe. So it's not that we're not present; it's just that we're not yet in the U.S. There are a lot of good reasons, and this is not the forum to really discuss those. But we do have a presence, and we are anxiously looking forward to that presence here in the U.S. in the near term. Thank you.

Good afternoon. My name is John Temple. I'm the VP of sales and marketing at Endosoft. And I want to thank the ASGE and all of the speakers today for a tremendously engaging, thought-provoking day, one that really helps all of us in industry and all of our partners think about where we are, where we need to get to, and ultimately how we're going to do that as partners. Similar to a couple of the companies up here, and probably different than some of the companies here, Endosoft is a software company. We've been in the GI space for about 27 years, developing endoscopy report writers; everything from the beginning to the end of a patient experience has to be reported, and that's been our focus and our specialty. We do it for many specialties outside of GI, but GI is where we're best known. Being a software company has allowed us to be nimble. It has allowed us to look at where healthcare is going and the challenges associated with healthcare.
The scarcest resource in healthcare is time. It is the clinician's time, compressed between insurance companies, their own hospital systems, and patients' demands on their time. So we as a software company do have a polyp detection device. We have a polyp sizing module as well that goes hand in hand with our polyp detector. And we have a natural language processing module that takes everything we're seeing in all of these beautiful images that come from Olympus and Fuji and Pentax and even Boston Scientific and Ambu; everything we see that can be captured is recorded and inputted into your endoscopy report automatically. From our perspective, AI is not just going to help us find polyps and size polyps and classify polyps and determine what to do with them; it's going to help give you a time-neutral report, so that when you walk out of the procedure room, your report is in essence done. It is reviewed and signed off on. That time neutrality is going to be a tremendous savings for healthcare systems, providers, and clinicians, who will now be able to spend more time with their actual patients: the same patients with more time, or more patients in the same amount of time. The scarcity of time in our healthcare environment has really never been addressed. The incredible technology that comes to market has challenges: cost, training, reimbursement, and whether it is going to take more time. Those are the four killers of technology. And we think AI, in many forms, with all of the companies up here and literally hundreds of others out there in healthcare, is going to transform healthcare in such a positive way that, again, we're going to be focusing on the patients and their outcomes and be able to increase the throughput of the patients that are able to be served by all of you. So Endosoft is very pleased and very thankful for the opportunity to be here with the ASGE. We have partnered with them and continue to partner on the great work that they are doing in trying to educate and bring to light the challenges and opportunities that we can all work on together. So on behalf of all my colleagues at Endosoft, thank you all for allowing us to be here and to be a part of this very exciting initiative. Thank you.

Thanks, all. Nicholas, do you want to say anything? You've been introduced a couple of times. Okay. Sounds good. Well, I'm going to open up first. I have a bunch of questions, but I'd like to start with the audience. Does anyone have a question for anyone on the panel? Raj?

Just a quick question. I see everyone has the software piece, but the vendor hardware makes it more difficult to use different vendors' software, for example. So if I like the CADe system from one vendor and the CADx, or other software, from different vendors, they're essentially not interoperable. Is there any sort of analog, perhaps an analogy elsewhere, for a vendor-neutral hardware solution that then allows you to get software solutions from different companies based on what you like to use? Or does anyone from the panel have any opinions on potentially having software solutions that are not tied to a hardware box?

So I'll just repeat that very concisely for the online audience. There are different hardware systems, and then there's software that's been developed in different areas.
And so, Nicholas, can you maybe give us an analogy from another field where you can kind of mix and match different algorithms? And then, from the panelists, do you have ideas on how you see the field moving forward?

So in radiology, there is some intercompatibility, some intermixing of AI systems. The imaging hardware itself has DICOM standards, so the images themselves are stored in a standardized way. So in reality, the AI systems can be applied to a wide range of hardware systems. There is intercompatibility to some extent in endoscopy, so some processors and scopes can be interchanged, but it's not fully interchangeable. And it gets down to the question, again, of whether the companies that produce the software are tying it to their hardware, and that's a choice for the companies to make. But again, as much as clinicians and healthcare systems can encourage them to make these more intercompatible, that's probably a better long-term future for getting a lot more innovation going in this area. And I think it ties back to, as we talked about earlier, how much information you have about the product. So if it was trained on 80% of one kind of hardware and only 20% of another kind, and you use that other kind that's in the minority, maybe you have to audit it or trial it and then see how it works for you.

Yeah, and I'd just add, typically when these are going on the market, there is a range of scope, processor, or hardware acquisition systems that they're compatible with. So especially if you get a new scope or a new system that comes on, it may not be labeled for or compatible with that AI until somebody comes back and provides at least some additional data showing that it works.

And I know we've touched on both parts, but just to make sure, I'm talking more about the PC that runs the AI, not the processor or the scope.

Yeah, that part of it I don't believe is going to be a problem. Most of these are standardized systems. There is sometimes some specialized hardware to make them run fast enough for integration into the endoscopy procedure, but that's pretty standard. Certainly the AI itself is going to be running on GPUs and other systems produced by Nvidia or other companies. Those platforms are a little bit different in GI; in radiology, a lot of people are thinking about doing this cloud-based, so again, the solutions are even more abstracted from the hardware. With the time constraints in GI, it's not quite so easy to do that, but I don't think that's going to be a major problem.

So maybe I'll ask Frank; Frank is with a significant hardware producer. Do you envision a state where you might have a smaller company's AI for a specific disease process, so that as a provider I would be able to have an Olympus system but then add something onto it, and your processor would run that other AI too? This may be years down the road, but how do you envision that?

Yeah, no, that's a great question, and it's certainly one that we considered very early on. In fact, the current AI platform that Olympus has launched in Europe and in Australia is actually capable of running multiple algorithms. And we have thought very sincerely about taking algorithms developed by other parties that could run on our platform. It's a matter of providing the appropriate specifications, just as in the case of the iPhone, where you have apps that are loaded on there.
Now, the biggest difference with the iPhone is that you don't have a regulatory body that you have to deal with, and that really complicates the matter, but yes, it is being given careful consideration. I think that the cloud is definitely a way to free yourself, to some degree, from the hardware requirement. The other complication in GI is that it has to be real time. Detection has to be real time. CADx does not have to be real time. You don't need to know at the moment you detect a polyp whether it's an adenoma or a non-adenoma. You want to know by the end of the procedure, which gives you ample time, from a latency point of view, to go to the cloud, do some processing, and come back. So there are some gives and takes that make what you're asking a very, very realizable situation.
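The real-time versus deferred split Frank describes, detection that must keep up with live video versus characterization that only needs an answer by the end of the procedure, can be sketched roughly as below. This is an illustrative outline only, not any vendor's implementation; the frame source, the stub models, and the 30 fps budget are assumptions.

```python
import queue
import threading

FRAME_BUDGET_S = 0.033  # ~30 fps: CADe must return within this budget to keep pace with live video (assumed)

def detect_polyps(frame):
    """Stand-in for an on-box CADe model running on local GPU hardware, within FRAME_BUDGET_S."""
    return []  # the real model would return bounding boxes for suspected polyps

def characterize(frame, box):
    """Stand-in for a CADx call that may take seconds, e.g. a cloud service; only needed by end of procedure."""
    return {"box": box, "prediction": "adenoma", "confidence": 0.9}

pending = queue.Queue()  # detections waiting for offline characterization
report = []              # CADx results collected for the end-of-procedure report

def cadx_worker():
    while True:
        item = pending.get()
        if item is None:                 # sentinel: procedure is over
            break
        report.append(characterize(*item))

worker = threading.Thread(target=cadx_worker)
worker.start()

frames = []  # in practice, a live stream of frames from the endoscopy processor or capture card
for frame in frames:
    for box in detect_polyps(frame):     # real-time: overlay shown to the endoscopist immediately
        pending.put((frame, box))        # deferred: cloud latency never blocks the video loop

pending.put(None)   # end of procedure
worker.join()       # characterization results are ready before the report is finalized
```

The design choice mirrors the latency argument above: the per-frame path stays on local hardware, while anything with a relaxed deadline can be pushed to slower, more abstracted infrastructure such as the cloud.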
Sure. Anyone else have a different thought?

Yeah, I would just add, it's something we're exploring internally, certainly. And in terms of developing new solutions, somebody had asked about different imaging modalities; I think it was one of the attendees. That could create complexity in terms of integration: if you're working with a different imaging modality when developing an algorithm, those things would vary. I'm also curious, and I guess this is turning the question back around a little bit, but you brought up a good analogy with the iPhone: does data then become a concern? Because if we're talking about implementation across platforms (and I use the iPhone analogy because this was a real discussion), Apple had access to data because it was provided to a company that had an application on Apple's hardware. So I'm just wondering if that's a consideration. And with that, of course, I'll add that this is something Fujifilm has also considered. We have other imaging modalities that I don't work so closely with, but radiology, cardiology, and we have an ultrasound division. So certainly we could try to integrate those things and work with other vendors as well. But I'm curious about data security, if there are concerns there.

So on the topic of hardware, obviously we need hardware to run the algorithm, right? But as we move into the realm of diagnostic algorithms, there has to be a component of traceability. I mentioned the need for photo documentation; if we are providing a diagnosis, we need to be able to trace back that diagnosis. And this is open for anybody on the panel. Do you envision a future where we require endoscopy suites to have automated video recording as part of this process, where that video has to be stored and maybe analyzed with cloud-based offline analysis? What is it that industry would like to see from the GI suite? Would you like to see automated video acquisition and cloud-based storage, not only for AI development, but also for implementation of AI? Any thoughts?

Yeah, I think this is something we deal with already, even without AI. We have expectations that come from healthcare systems, typically about security or what data we are storing. In terms of implementation, I guess I think about it more for development: can we improve algorithms with access to more data storage? There was a question earlier about long-term outcomes. Can we look at historical video and run an algorithm over historical video to then understand the long-term outcomes from, say, day one of launch, versus having to wait 10 years? Is that a possibility? I don't know that there's any expectation, although I do know, of course, as all of the faculty know, that to develop these algorithms we require access to images, whether it's historical saved content or, of course, real-time clinical studies. Those things will help us build the algorithms. As far as implementation, I'm not aware of expectations in terms of video storage, but that thought has crossed my mind a couple of times. One of the other topics, from Dr. Parasa, was explainability. If you're a clinician, you make a decision with AI-assisted software, and then you need to go back and evaluate that decision and explain why it was that this polyp was characterized as neoplasia. I think that expectation wouldn't come from us unless, of course, there was some regulatory reason that those expectations were placed on the vendor. But I don't think that's been the case so far.

So I have a question just for, oh, go ahead. I was just going to add that, with some gastric cancer screening programs, there is a double check in place in which photo documentation from the initial exam is passed on to a secondary physician, who will double-check some of those still images. I think there's a chance not only to expand that beyond still images to video, but also to implement AI, much the same way you would use it in a capsule setting, to reduce the double-checking time or to increase the accuracy or throughput of that process. I think that's pretty exciting as well.

So I have a question both for industry and also for Nicholas. We now have initial prototypes of devices that have gone out and been approved, but they're going to get better with time. So what is the plan from an industry perspective for an iterative learning process? How will that take place? And will the FDA reevaluate these products at certain intervals? Because you would presume that no matter what they were trained on, you'd want them to get better with time. So what are your plans for ongoing learning? Just as we keep learning as physicians, how does your AI plan to keep learning?

Can I take that, or at least take a shot at it? I think it's fair to say that no single company is probably going to have the best algorithm for every space within GI and healthcare. I think it's going to be a collaborative effort. And since AI is software, software, we think, is what needs to evolve rapidly, and we see it evolving rapidly in front of our eyes. Not being tied to hardware, I think, is going to speed that process dramatically. Endosoft specifically is a hardware-neutral provider. We provide clinical decision support software. It can go on any computer, in any room, in any hospital. You don't have to be an Endosoft customer. You don't have to be an Olympus, Pentax, or Fuji customer. It's completely neutral.
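A minimal sketch of what this kind of hardware-neutral arrangement could look like in code: a thin interface that separates the video source from the detection algorithm, so either side can be swapped. The names and types here are assumptions for illustration; no vendor exposes this exact API, and in practice labeling and validation would still constrain which pairings are allowed.

```python
from typing import Iterator, List, Protocol, Tuple
import numpy as np

Frame = np.ndarray                   # one endoscopy video frame (H x W x 3)
Box = Tuple[int, int, int, int]      # x, y, width, height of a finding

class FrameSource(Protocol):
    """Anything that can yield frames: a processor feed, a capture card, or a recorded file."""
    def frames(self) -> Iterator[Frame]: ...

class Detector(Protocol):
    """Any vendor's CADe algorithm, as long as it accepts a frame and returns findings."""
    def detect(self, frame: Frame) -> List[Box]: ...

def run_procedure(source: FrameSource, detector: Detector) -> List[Box]:
    """The runtime never needs to know which vendor supplied either side of the interface."""
    findings: List[Box] = []
    for frame in source.frames():
        findings.extend(detector.detect(frame))
    return findings
```

Under this kind of separation, a site could in principle pair a capture feed from one processor with a detection algorithm from another company, which is essentially the plug-and-play question raised earlier, subject to the regulatory and compatibility caveats discussed above.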
If we're going to see AI develop and iterate as fast as possible, I think all of us up here and out there are going to have to have the ability to evolve that software quickly, and not have to put it in another box and have that entire new box certified by the FDA every time. We don't necessarily have all the answers on how to go through that process, because some of these things are going to have to go through the FDA. But the finite definition of putting software in a box, I think, is going to be very limiting in how fast things can iterate. We all update our iPhones, if I can keep with that analogy. Apple doesn't send us a new iPhone every time we get new software; they update the software on our existing piece of hardware. And I think if we can all start thinking about whatever hardware we've got out there, as long as it meets minimum spec, and if we can add software algorithms for different modules from all the talented research that's going on, we can probably build and get to the endpoint of where we want to be significantly quicker than if we are limited by the current hardware-bound process.

Great. Nicholas, your thoughts on that?

Yeah. So I think it's an important question. Right now, FDA is looking into how to deal with this idea of having more flexibility around modifications. Currently, modifications that impact performance come in as new 510(k)s or new submissions. But we have a white paper out that talks about trying to figure out ways to be more flexible on what types of information we might see pre-market, in order to allow more flexibility for making modifications with updated data. And we hope to have guidance out on that relatively soon, at least a draft guidance that would lay out some pathways by which we might be able to do that. So I think it's a really important question. It's a really difficult question because, with small changes, what we want to make sure is that the AIs continue to at least stay the same or get better. And of course, you always run the risk that the AIs, even though you're modifying them, are actually going to perform worse because of other things with your data or other factors that go on. And so how to be confident in the performance, I think, is a challenge. Having access to more continuous monitoring of performance would really be a benefit, as far as how we, or anyone, could do that and understand that we're comfortable with this performance or this change. The other type of implementation people talk about is that Mayo might have a population and an AI specific to Mayo, and Michigan has a different population and a different AI. So how do we optimize for different subpopulations or different clinics, and again, how do we make that functional and practical? How would a company deal with 50 or 100 different versions of software? There are a lot of practical issues, but again, it's very logical, right? The populations are different, so let's adapt these a little differently for each population. It's another useful advantage of AI, but practically implementing it, especially from a regulatory perspective, is really challenging.

Thank you. Shyam?

I have a request for the industry panelists that have the more proprietary products. How receptive would your company be, so to speak, with respect to a scope-agnostic or hardware-agnostic AI? Just as an example, to pick on a couple of companies that aren't here, Boston Scientific and Ambu, right?
They both have their disposable duodenoscopes. If they come out with AIs for each, it becomes relatively challenging to understand their training sets, their validation sets, and their test sets, which ultimately can have an impact, good or bad, with respect to the interventions being performed. So is the software development that's being suggested something that you would see, from an industry standpoint, as something you'd want to develop in a way that the FDA could regulate, or even that societies could have some influence over, so that it is hardware agnostic?

That's a tough one, Shyam. It really is a tough one. I can tell you that our vision, as I said early on, is really to provide a holistic platform. We're very fortunate to have a top-of-the-line GI platform, and we have an extraordinarily strong market share. And I feel that going into the software arena is a way to maintain that, to grow that, and to provide better solutions to you all. We do have to think about a way to be more accepting of all that's out there and figure out if there's a way that we could make this plug and play. I remember very early on, four or five years ago, when this was beginning to look like it would become today's discussion, which it has, the idea was plug and play: yeah, we're just going to do this, drop it in. And that's a great ambition to have, but it's really difficult to do. And I do see that having too many solutions is going to make it really hard for the customer, for the doctor, for the patient. This will probably work its way out, as it always does, and the right technology will obviously emerge from this; it's just a question of which one it is. And I do agree with John about the importance of a collaborative initiative here, but with that, I think we might be too early for that.

Okay. Well, I think our time is up. I know we have a lot of young investigators attending, both in person and virtually. We encourage you to send chat questions if you'd like particular contact information, and if you're in person, please go up and meet all of our industry partners and see how you might be able to collaborate with them on research and other ideas you might have. Thank you all.
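One recurring point in the discussion above, from the iterative-learning question and Nicholas's answer, is that a modified algorithm should at least stay the same or get better, and that continuous performance monitoring would make everyone more comfortable with updates. A minimal sketch of that gating idea follows, with an assumed metric and margin; real submissions and monitoring programs would be far more involved.

```python
from typing import Callable, List, Tuple

# One audit case: (input data for the model, ground-truth "contains a lesion" label).
AuditCase = Tuple[object, bool]

def sensitivity(predict: Callable[[object], bool], cases: List[AuditCase]) -> float:
    """Fraction of lesion-positive cases the model flags; a fixed audit set keeps versions comparable."""
    positives = [x for x, has_lesion in cases if has_lesion]
    return sum(predict(x) for x in positives) / len(positives)

def approve_update(deployed: Callable[[object], bool],
                   candidate: Callable[[object], bool],
                   cases: List[AuditCase],
                   margin: float = 0.01) -> bool:
    """Gate a software update: the candidate must do at least as well as the deployed version
    on the same audit set (within a small assumed margin) before it replaces it."""
    return sensitivity(candidate, cases) >= sensitivity(deployed, cases) - margin
```

The same comparison, run continuously on post-market data rather than once before release, is roughly what "continuous monitoring of performance" would buy: evidence that an update, or a site-specific variant, has not quietly made the system worse.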
Video Summary
The video is a recording of a networking session focused on AI research options and ideas in the field. The session includes representatives from various companies involved in AI development for medical purposes, specifically in gastroenterology. The companies present their products, discuss their work in AI, and share their goals and strategies for integrating AI into medical practice. Some key points from the session include:

- CVX Diagnostic: CVX Diagnostic has developed a pathology AI product for gastroenterology. Specialized brushes collect tissue samples during a procedure, and the AI system analyzes the imaged samples and identifies cells at risk for further examination. The goal is to enhance pathologists' ability to identify pre-cancerous cells and improve patient outcomes.

- Medtronic GI: Medtronic GI makes the GI Genius system, the first CADe system on the market. They have partnered with ASGE and other organizations to support initiatives in AI and health equity, including the health equity assistance program, which, together with Amazon Web Services, provides units to underserved areas across the country.

- Fujifilm: Fujifilm is investing in a strong AI pipeline for gastroenterology, with imaging technology developed both in the United States and in Japan. As a global business, they are focused on understanding the different needs, expectations, and data requirements in different parts of the world.

- AI Medical Service: AI Medical Service is a Japanese medical startup known for its research on various AI algorithms, including those for gastric cancer detection. They are pursuing regulatory approval for their gastric cancer product in Japan with the PMDA, pursuing a regulatory strategy with the FDA, and engaging in collaborative research with institutions in the United States.

- Olympus Corporation of the Americas: Olympus is undergoing a digital transformation and developing digital solutions for the GI space. They are focused on building a platform that sits at the intersection of their hardware, hospital IT, and the cloud to provide value to patients, providers, and insurers, and they acknowledge that both organic and inorganic solutions are needed to address pain points in the industry. They have AI solutions launched in Europe and Australia, but not yet in the U.S.

- Endosoft: Endosoft is a software company specializing in endoscopy report writers. They provide hardware-neutral clinical decision support software that can run on any computer, and they aim to help clinicians save time and increase patient throughput by providing time-neutral reports.

The video ends with a discussion about hardware agnosticism, regulatory considerations for ongoing AI learning, and the need for collaboration and standardization in the AI industry for gastroenterology.
Asset Subtitle
Moderators: Cadman Leggett, MD; Nayantara Coelho-Prabhu, MD, FASGE
Keywords
AI research
gastroenterology
medical purposes
pathology AI product
CAD system
gastric cancer detection
digital solutions