As radiology artificial intelligence (AI) algorithms move toward clinical use, there are three possible models for how AI will best fit into the specialty's workflow, according to a talk on Thursday at the U.S. National Institute of Biomedical Imaging and Bioengineering (NIBIB) Artificial Intelligence in Medical Imaging workshop.
As more AI algorithms are developed, radiology needs to decide on the best model for giving radiologists access to AI image analysis results and for using AI to triage high-priority cases for their review, said Dr. Paras Lakhani of Thomas Jefferson University.
He discussed AI radiology workflow examples at the Bethesda, MD, meeting, which was co-sponsored by the National Cancer Institute, the National Institute on Aging, the National Institute of Dental and Craniofacial Research, the American College of Radiology, the RSNA, and the Academy for Radiology and Biomedical Imaging Research.
There are three possible models for AI use in radiology, according to Lakhani.
1. AI on demand
In an AI "on-demand" model, radiologists would request the analysis of an image from the AI system when they want it.
"This could be for a particular image; it could be for a series of images for a patient like a CT scan, or even part of an image such as a liver lesion on a CT," he said.
After the radiologist requests AI results in the PACS software, the PACS would alert the AI server to the request. The AI results would then be sent back to the PACS or directly to the radiologist, he said. They could also be sent to the RIS or electronic health record (EHR), supporting either a PACS-driven or a RIS/EHR-driven workflow at an institution.
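To make the on-demand flow concrete, the sketch below shows one way a PACS-triggered request to an AI server might look. The endpoint URL, payload fields, and routing parameter are illustrative assumptions; Lakhani did not describe a specific interface.

```python
# Hypothetical sketch of an "on-demand" AI request flow. The endpoint name,
# payload fields, and routing targets are illustrative assumptions, not a
# real vendor API.
import requests

AI_SERVER = "https://ai-server.example.org/analyze"  # assumed inference endpoint


def request_ai_analysis(study_uid: str, region: str | None = None,
                        destination: str = "PACS") -> dict:
    """Ask the AI server to analyze a study (or part of one) and indicate where
    the results should be routed: back to the PACS, or on to the RIS/EHR."""
    payload = {
        "study_uid": study_uid,            # e.g., the CT exam being read
        "region": region,                  # optional, e.g., a liver lesion on one series
        "route_results_to": destination,   # "PACS", "RIS", or "EHR"
    }
    response = requests.post(AI_SERVER, json=payload, timeout=30)
    response.raise_for_status()
    return response.json()                 # preliminary AI findings for the radiologist
```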
This AI "on-demand" workflow option offers a number of advantages. Radiologists would be in control of soliciting the relevant AI interpretations; this could reduce false-positive AI interpretations because only select cases would receive AI analysis. On the downside, this approach requires a manual step. And there's also the potential for missed findings, as not all cases would be sent to the AI server, he said.
2. Automatic AI analysis
Another option is to automatically send all studies from a certain modality or a certain exam type immediately for AI analysis, Lakhani said. To help radiologists prioritize their reading order, the algorithm could then highlight exams on their worklist that could potentially have a critical result, or even just flag cases that may be abnormal.
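The worklist-prioritization piece of this model could look something like the following sketch, in which flagged exams are sorted to the top of the reading list. The data fields and the score threshold are assumptions for illustration only; no particular algorithm or operating point was specified in the talk.

```python
# Illustrative sketch of automatic routing plus worklist prioritization: every
# exam of a chosen modality/exam type is sent for AI analysis, and exams the
# model flags as potentially critical rise to the top of the worklist.
from dataclasses import dataclass


@dataclass
class WorklistItem:
    accession: str
    exam_type: str            # e.g., "CT head without contrast"
    ai_critical_score: float  # model's estimated probability of a critical finding
    flagged: bool = False


CRITICAL_THRESHOLD = 0.8      # assumed operating point, chosen for illustration


def prioritize(worklist: list[WorklistItem]) -> list[WorklistItem]:
    """Flag potentially critical exams and sort them to the front of the list."""
    for item in worklist:
        item.flagged = item.ai_critical_score >= CRITICAL_THRESHOLD
    return sorted(worklist, key=lambda i: i.ai_critical_score, reverse=True)
```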
If the AI system receives images directly from the PACS or imaging modality, there are two workflow options: either a radiologist reviews the preliminary AI findings before they are incorporated into the final report and sent to the EHR or RIS, or the AI results are routed directly to the EHR or RIS and made immediately available to clinicians, Lakhani said.
The first option would benefit from having the radiologist ensure the accuracy of the final report, and a discrepancy management system wouldn't be needed, he said. However, this workflow model would yield a smaller reduction in turnaround time because the AI results would not be sent directly to the EHR or RIS.
"It would be a modest decrease in turnaround time, but it's still faster than what we have today, where we're viewing studies first in, first out," he said.
3. Discrepancy management
Choosing instead to route AI results directly to the EHR or RIS would require some type of discrepancy management system to deal with cases where the radiologist disagrees with the preliminary AI findings, according to Lakhani.
"[Today] we have overnight trainees -- residents, fellows -- who do report preliminary findings, and the next day, an attending staff radiologist sometimes has discrepancies and we have systems in place to manage that," he said. "So if you have this sort of [AI] system where you are very confident in and are sending [AI] reports directly to the EHR or RIS, you would [still] need a discrepancy management system to sort of reconcile those."
This option would provide the fastest turnaround time of preliminary results, which would be very important for critical findings such as intracranial hemorrhage or tension pneumothorax, he said. However, it would require a discrepancy management system for false-positive and false-negative preliminary AI results.
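A discrepancy management system of the kind Lakhani describes might, at its simplest, record how the final radiologist read compares with the preliminary AI result already sent to the EHR or RIS. The sketch below is a hypothetical illustration of that reconciliation step; the classes and classification rules are assumptions, not part of the talk.

```python
# Minimal sketch of a discrepancy record comparing a preliminary AI result with
# the final radiologist read. The classes and notification path are hypothetical,
# loosely modeled on resident/attending discrepancy workflows.
from dataclasses import dataclass
from enum import Enum


class Discrepancy(Enum):
    NONE = "agree"
    FALSE_POSITIVE = "AI reported a finding the radiologist did not confirm"
    FALSE_NEGATIVE = "AI missed a finding the radiologist reported"


@dataclass
class Reconciliation:
    accession: str
    ai_finding: str | None
    radiologist_finding: str | None
    discrepancy: Discrepancy


def reconcile(accession: str, ai_finding: str | None,
              radiologist_finding: str | None) -> Reconciliation:
    """Classify the discrepancy so the ordering clinician can be notified if needed."""
    if ai_finding == radiologist_finding:
        kind = Discrepancy.NONE
    elif ai_finding and not radiologist_finding:
        kind = Discrepancy.FALSE_POSITIVE
    else:
        # For simplicity, a missed or mischaracterized finding is treated as a miss.
        kind = Discrepancy.FALSE_NEGATIVE
    return Reconciliation(accession, ai_finding, radiologist_finding, kind)
```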
"You would need a pretty accurate system -- if you're going to go this route -- that's highly sensitive and highly specific," Lakhani said. "Clinicians could act on those inaccurate preliminary results, and there are potentially negative downstream effects."
Furthermore, this workflow option may increase calls to the radiology reading room, and discrepancies could carry medicolegal ramifications, according to Lakhani.
A combination of these workflows may wind up being used, he noted.
"Either way, radiologists will have an important role to ensure accuracy, safety, and quality," he said.