When you think of artificial intelligence, what comes to mind? It could be the iconic android C-3PO, IBM’s Watson computer, or even customer service chatbots. But what about artificial intelligence in healthcare? We’ve seen this technology take hold in areas like precision surgery, drug research and development, and disease detection.
Now, healthcare (and other) technology leaders are stepping away from the term artificial intelligence and toward a new one – augmented intelligence.
What is augmented intelligence?
Augmented intelligence refers to the same cutting-edge technologies healthcare has already adopted under the artificial intelligence label.
The new label is intended to distance the reality of assistive technology and machine learning from the perception of artificial intelligence many people might already have. It also emphasizes the assistive role of these devices and algorithms in diagnosing and treating patients; augmented intelligence, as it’s conceived of here, isn’t intended to replace clinicians, but to make care delivery more efficient.
Artificial intelligence was expected to become a tool that would eventually automate mundane human tasks. However, many of these technologies fell short of expectations – potentially putting patients at risk and leaving providers and industry leaders disillusioned. This setback has led many healthcare leaders to further distance new “augmented intelligence” software from existing perceptions of “artificial intelligence.”
Even the American Medical Association (AMA) is making the shift, stating that augmented intelligence more accurately represents the role these new technologies play. In terms of policy, the AMA has already set some ground rules and aspirations concerning augmented intelligence implementations. The AMA seeks to:
- Leverage engagement with the healthcare community to improve patient outcomes and physician satisfaction, and to set priorities for AI in the industry
- Integrate physician experiences into the development and implementation of AI
- Encourage the development of high-quality AI that is designed with clinicians and other end-users in mind and safeguards patient data privacy
- Promote education for care providers, medical students, and administrators on the applications and limitations of AI
- Study the legal implications of healthcare AI, including liability, and advocate for the appropriate oversight to ensure safety and equitable use of these technologies
How is augmented intelligence being used in healthcare?
Radiologists are already using machine learning and assistive technologies as diagnostic tools. Smartphone apps can scan photos of infants’ eyes for “white eye” (leukocoria), an abnormal white reflection from the retina that is often a sign of cancer or cataracts. Facial recognition technology is also being used to screen infants: Boston-based company FDNA developed an AI that cross-references photos of infant faces to flag potential genetic conditions.
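The core idea behind a white-eye screen is simple enough to sketch. The toy heuristic below is illustrative only, not the logic of any actual app: it flags a photo when the pupil region looks bright and nearly colorless, the hallmark of leukocoria. The function name, thresholds, and the assumption that the pupil has already been cropped out are all hypothetical.

```python
import numpy as np

def white_eye_score(pupil_crop: np.ndarray) -> float:
    """Fraction of pupil pixels that look white (bright and nearly gray).

    pupil_crop: H x W x 3 RGB array, values 0-255, assumed to be
    already cropped to the pupil region. Thresholds are illustrative.
    """
    rgb = pupil_crop.astype(float) / 255.0
    brightness = rgb.mean(axis=-1)                      # per-pixel intensity
    saturation = rgb.max(axis=-1) - rgb.min(axis=-1)    # crude colorfulness
    whitish = (brightness > 0.6) & (saturation < 0.15)  # bright, almost gray
    return float(whitish.mean())

# A healthy pupil photographs dark; a mostly-white crop scores high.
suspect = np.full((32, 32, 3), 230, dtype=np.uint8)
if white_eye_score(suspect) > 0.5:
    print("Possible white-eye reflection - refer for clinical review")
```

A production system would rely on a trained model rather than fixed thresholds, but the screening posture is the same: score each photo and refer borderline cases to a clinician.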
Blood-based screening appears to be another major care segment seeing improvements from AI. California-based Freenome leverages AI in blood screenings to detect cancer in its early stages; with the help of this technology, care providers hope to speed up diagnoses and develop more effective, targeted treatments. Beth Israel Deaconess Medical Center is also using AI in blood screenings: AI-enhanced microscopes scan blood samples for bacteria like E. coli faster than manual analysis would allow, detecting evidence of bacteria with as much as 95 percent accuracy.
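Both applications reduce to the same inference pattern: a trained image classifier assigns each sample – a blood-smear field of view, in the Beth Israel case – a probability of containing the target. Here is a minimal PyTorch sketch of that step, using an untrained stand-in model and random pixels in place of real microscopy data:

```python
import torch
import torch.nn as nn

# Untrained stand-in for a bacteria classifier; a real system would be
# trained on labeled microscope images before deployment.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 2),  # class 0 = clear, class 1 = bacteria present
)
model.eval()

field_of_view = torch.rand(1, 3, 64, 64)  # stand-in for one microscope image
with torch.no_grad():
    probs = torch.softmax(model(field_of_view), dim=1)
print(f"P(bacteria present) = {probs[0, 1]:.2f}")
```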
Machine learning and predictive analytics can also be used to improve clinical decision support systems. These systems combine medical knowledge, patient data, and an inference engine to present clinicians with actionable information for improving care delivery (a simplified example of one such check follows the table below). According to Definitive Healthcare, more than 6,000 hospitals report having installed a clinical decision support platform.
Top 10 hospitals using clinical decision support by net patient revenue
| Rank | Hospital name | Definitive ID | Net patient revenue |
| ---- | ------------- | ------------- | ------------------- |
| 1 | NewYork-Presbyterian/Weill Cornell Medical Center | 541974 | $5,951,047,108 |
| 2 | Cleveland Clinic Main Campus | 3120 | $5,164,424,360 |
| 3 | Kaiser Permanente - Fontana Medical Center | 526 | $4,404,479,570 |
| 4 | Stanford Hospital | 588 | $4,132,132,686 |
| 5 | NYU Langone Tisch Hospital | 2843 | $4,101,296,000 |
| 6 | AdventHealth Orlando | 873 | $3,769,768,374 |
| 7 | Kaiser Permanente - Los Angeles Medical Center | 366 | $3,653,264,495 |
| 8 | UCSF Helen Diller Medical Center at Parnassus Heights | 560 | $3,620,962,130 |
| 9 | University of Texas MD Anderson Cancer Center | 4017 | $3,480,505,919 |
| 10 | Vanderbilt University Medical Center | 3742 | $3,442,776,569 |
Fig 1 Data taken from Definitive Healthcare’s Hospitals & IDNs platform and Technology Insights (LOGIC). Platform accessed Jan 14, 2020; most recent data available.
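As promised above, here is a minimal sketch of what a single decision-support check can look like. The rule, the thresholds, and the data model are all hypothetical, invented for illustration; they are not drawn from any vendor’s platform and are not clinical guidance.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    age: int
    creatinine_mg_dl: float  # serum creatinine, a kidney-function marker
    active_meds: set[str]

def metformin_alert(patient: Patient) -> str | None:
    """Hypothetical rule: flag metformin when kidney function looks reduced.

    The 1.5 mg/dL threshold is illustrative only, not clinical guidance.
    """
    if "metformin" in patient.active_meds and patient.creatinine_mg_dl > 1.5:
        return "Review metformin order: elevated creatinine suggests reduced kidney function."
    return None

# The surfaced alert is advisory; the clinician still makes the call.
alert = metformin_alert(Patient(age=67, creatinine_mg_dl=1.8, active_meds={"metformin"}))
if alert:
    print(alert)
```

Real platforms combine many such rules (and, increasingly, learned models), but the assistive posture is the same: surface information and leave the decision with the clinician.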
Challenges in augmented intelligence
Of course, this technology is still new, and regulations have not yet caught up to it. Though implementation remains slow across the industry, the potential for widespread use compels us to consider what risks this technology may pose. Integrating private patient records with any type of technology—particularly one as advanced as AI—poses certain cybersecurity risks.
Another of these risks is the potential for malicious image tampering. Though admittedly an unlikely scenario, researchers have expressed concern that hackers could subtly alter digital radiology scans and “trick” an AI into flagging a patient for lung cancer. Of course, clinicians would review these flagged patients accordingly – offering specialist referrals and follow-up tests, as needed – but researchers still urge developers to account for potential threats like these.
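The research this paragraph alludes to relies on adversarial examples: tiny, carefully chosen pixel changes that flip a model’s output. Below is a minimal sketch of one standard technique, the fast gradient sign method, run on an untrained toy model; against a trained classifier, a perturbation this small can genuinely change the prediction while remaining nearly invisible to the eye.

```python
import torch
import torch.nn as nn

# Untrained toy stand-in for a radiology classifier.
model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2))
model.eval()

scan = torch.rand(1, 1, 64, 64, requires_grad=True)  # stand-in for one CT slice
true_label = torch.tensor([0])  # class 0 = "no nodule", class 1 = "nodule"

# Fast gradient sign method: nudge every pixel in the direction that
# most increases the model's loss on the correct label.
loss = nn.functional.cross_entropy(model(scan), true_label)
loss.backward()
epsilon = 0.02  # small enough to be hard to spot by eye
tampered = (scan + epsilon * scan.grad.sign()).clamp(0, 1).detach()

print("original prediction:", model(scan).argmax(dim=1).item())
print("tampered prediction:", model(tampered).argmax(dim=1).item())
```

Defenses such as adversarial training and input-integrity checks exist, which is exactly why researchers want developers building with threats like this in mind.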
Additionally, some experts are concerned that AI will exacerbate existing biases in healthcare delivery and patient treatments. Software and algorithms are developed by human beings – human beings with unconscious preferences, imperfect data, and gaps in knowledge about some patient populations. This could mean social determinants of health are overlooked when diagnostic tools are programmed, or that ingrained biases inadvertently widen existing health disparities.