CHICAGO - Artificial intelligence (AI) is once again a hot topic at the RSNA meeting, with nearly 80 vendors showing AI solutions in the Machine Learning Pavilion at McCormick Place. A roughly equal number of vendors are displaying their AI wares elsewhere on the RSNA exhibit floor.
The Machine Learning Pavilion alone doubled in size from RSNA 2017, and the number of vendors showing AI products in their booths has at least tripled from last year. Some meet the criteria of being what many perceive as "true" AI -- software that provides diagnostic assistance based on the use of various algorithms -- while others claim to be AI-aided, AI-empowered, or other variants with an AI prefix.
Many vendors are claiming some connection to AI without worrying about details like 510(k) clearance from the U.S. Food and Drug Administration (FDA), or the fact that their algorithm has been trained on a dataset so small it invites ridicule -- or at least skepticism.
Outside of FDA oversight, AI is essentially unregulated, just as PACS was in its early days. No one has a singular, clear-cut definition of what constitutes an AI algorithm and what doesn't. Indeed, what is and isn't AI is open to interpretation.
This directly relates to whether FDA 510(k) clearance is required in the U.S. The level of regulatory scrutiny of such systems depends on their intended use and associated risks. One of the areas where AI can provide tangible improvement in healthcare is pattern recognition -- either based on a predefined algorithm or, more powerfully, based on machine learning. This capability means that in a medical setting, AI could be applied to interpreting images and, thus, serve as a diagnostic tool.
In late 2017, the FDA adopted guidance titled "Software as a Medical Device (SaMD): Clinical Evaluation," which states that software intended to "treat or diagnose" is considered to represent a higher risk (and consequently is subject to more stringent regulatory oversight) than applications that "drive" or "inform" clinical management.
This distinction has led many vendors to claim their software is not intended for diagnostic use, allowing them to circumvent or at least minimize their need to go through the FDA review process. Even with that clarification made, fewer than one in five vendors showing AI software for clinical applications at RSNA 2018 have 510(k) clearance for their products, and nearly half of those that do received it in the past 60 days.
Likewise, few AI developers cited any sites using their software clinically. Because FDA clearance is not required outside the U.S. and Canada, the number of non-U.S. companies showing AI was eye-opening. Look for Asia, Europe, Australia, and other global regions to embrace AI significantly more quickly than the U.S. market.
One of the more interesting things I've noticed at RSNA is vendors' perception that AI will be embraced solely because of its potential to improve patient care. Few will debate that this is a huge plus, but the reality is improved patient care alone won't justify AI to the chief financial officer and others controlling healthcare's purse strings, except perhaps in select facilities like teaching institutions. Neither will AI's much-hyped benefit of improving radiologist efficiency, unless there is a concurrent increase in the volume of studies done (and associated technical fees) or the radiologists are directly employed by the facility rather than contracted.
The biggest return on investment for AI seems to be its potential to improve exam efficiency, as with MRI, or to reduce radiation dose with CT. Noise-reduction algorithms have been developed that can reportedly reduce MR exam times by as much as 90%. Even a 25% reduction would be huge, so the potential for increased scanner throughput and revenue needs to be considered.
With CT, any reduction in radiation dose not only has obvious patient safety benefits but also extends the life of x-ray tubes. That translates into direct cost savings that can be significant. There are numerous other benefits of AI, but tangible dollar savings need to be shown before most facilities will embrace it.
The number of AI applications required to cover all clinical areas is vast. Most vendors have focused on a narrow area of interest -- diagnosing multiple sclerosis or dementia, for example -- without addressing other neurological conditions. This could lead to a situation in which clinical users need several neuro AI applications loaded just to cover everything related to the brain.
When you follow this through and include all of the other major areas being developed -- mammography, lung, liver, chest, and so on -- you can easily end up with dozens of AI apps on a server. These apps also need to launch automatically when a study from the corresponding anatomical region is opened.
Interfacing these apps to the PACS typically -- but not always -- requires the PACS vendor to validate the software to ensure that it doesn't have a negative impact on the PACS. Little of this is being done now, although a few PACS vendors are evaluating internally developed or third-party AI solutions within their PACS offerings. This is how AI needs to be used: as an application that is seamlessly interfaced with the PACS, not a solution that must be manually selected and launched each time.
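To make the idea of automatic launching concrete, here is a minimal, hypothetical sketch of the kind of routing logic involved: a lookup table maps the anatomical region of an opened study to the AI apps that should start. The app names, Study class, and mapping are invented for illustration and do not represent any vendor's actual PACS integration or API.

# Illustrative sketch only: a hypothetical routing table mapping the anatomical
# region of an opened study to the AI apps that should launch automatically.
# App names, the Study class, and the mapping are invented for illustration.

from dataclasses import dataclass
from typing import Dict, List

# Hypothetical mapping of anatomical regions to installed AI applications.
AI_APP_ROUTES: Dict[str, List[str]] = {
    "BRAIN": ["neuro-ms-app", "neuro-dementia-app"],
    "CHEST": ["lung-nodule-app"],
    "BREAST": ["mammo-cad-app"],
}

@dataclass
class Study:
    accession_number: str
    body_part_examined: str  # e.g., the DICOM Body Part Examined value

def apps_for_study(study: Study) -> List[str]:
    """Return the AI apps that should launch when this study is opened."""
    return AI_APP_ROUTES.get(study.body_part_examined.upper(), [])

# Example: opening a brain study would trigger the two neuro apps above.
if __name__ == "__main__":
    study = Study(accession_number="A12345", body_part_examined="Brain")
    for app in apps_for_study(study):
        print(f"Launching {app} for study {study.accession_number}")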
Basic machine-learning tasks may be able to run on a newer PACS server alongside the multiple apps already installed on most systems (mammography, orthopedic, etc.). However, deep-learning processing -- the applications that help treat and diagnose patients -- requires a separate high-end server or must be run in the cloud, with the cloud being the best and most cost-effective option. And because more than half of the PACS installed today are candidates for upgrades, adding any application -- let alone a processing-intensive one such as AI -- to an aging system would have a significantly negative effect on PACS performance, even with load balancing implemented.
The FUD factor with AI -- fear, uncertainty, and doubt -- seems to have subsided some, with radiologists now looking more closely at how AI can become an ally rather than the enemy. Nearly every AI presentation on Sunday at RSNA 2018 was packed, and I expect this will remain the case throughout the show as people evaluate the potential good AI can do instead of any potential harm it could bring to the way radiology is practiced today.
My one takeaway is that AI will augment our human abilities as opposed to displacing them. That said, we need to understand that humans are the last analog items in an increasingly digital world.
While the evolving AI market holds great promise, most experts agree that we are five years or more from widespread AI use, although there are limited applications now where AI is playing a role in day-to-day radiological interpretation. Still, nearly all agree that the bottom line is we can either merge with AI or be left behind.
Michael J. Cannavo is known industry-wide as the PACSman. After several decades as an independent PACS consultant, he worked as both a strategic accounts manager and solutions architect with two major PACS vendors. He has now made it back safely from the dark side and is sharing his observations.
His healthcare consulting services for end users include PACS optimization services, system upgrade and proposal reviews, contract reviews, and other areas. The PACSman is also working with imaging and IT vendors developing market-focused messaging as well as sales training programs. He can be reached at [email protected] or by phone at 407-359-0191.
The comments and observations expressed are those of the author and do not necessarily reflect the opinions of AuntMinnie.com.