Radiology Rounds - Volume 16 Issue 5 - May 2018

Current Developments in Artificial Intelligence and Diagnostic Imaging

  • Artificial intelligence (AI) is poised to transform health care and radiology.
  • Researchers and clinicians at Mass General have developed AI-based techniques for a range of applications, including vertebral column labeling, spinal stenosis grading, and diagnosis of retinopathy of prematurity.
  • Investigators have also shown they can use AI to achieve higher-quality diagnostic images with CT, PET, and MRI.


Moving Toward Broader Adoption of Artificial Intelligence for Diagnostic Imaging
From Detecting Spinal Stenosis to Preventing Childhood Blindness: Applications of Artificial Intelligence in Radiology
Producing Higher-quality Diagnostic Images with Artificial Intelligence
Further Information
References


Speculation has grown in recent years about the ways that artificial intelligence (AI)—a machine’s ability to recognize patterns and apply those patterns in meaningful ways—could transform the field of radiology. At Massachusetts General Hospital, AI is already being applied for clinical care. Researchers and clinicians at the hospital have been developing and implementing AI-based approaches for an array of applications, setting the stage for adoption of these approaches across the clinical spectrum.

Moving Toward Broader Adoption of Artificial Intelligence for Diagnostic Imaging

AI holds tremendous promise for radiology, but for AI-based techniques to make the most pronounced impacts in clinical care, the algorithms need to be readily accessible to radiologists, preferably through their existing workflows. In March, Partners HealthCare and Nuance Communications announced a partnership that will help to make this a reality by optimizing a process for rapid development, validation, and use of AI at the point of care. Development on the Partners side of the collaboration will take place at the Massachusetts General Hospital and Brigham and Women’s Hospital Center for Clinical Data Science (CCDS).

Since the launch of the center in 2016, the CCDS team has made tremendous strides in developing AI techniques for a range of health care applications, from the design of smarter imaging technologies to the automation of the reporting of imaging findings. However, broad adoption of the tools requires a scalable means of implementing them. Through the new collaboration, researchers will be able to take advantage of an open platform that will enable them and other technology partners to incorporate AI algorithms into a cloud-based medical imaging network, which is already used by 70% of radiologists in the US. Integrating with the radiologists’ existing workflows will greatly expand the reach of any new tools and increase the likelihood of adoption.

From Detecting Spinal Stenosis to Preventing Childhood Blindness: Applications of Artificial Intelligence in Radiology

Mass General researchers have introduced a number of new AI tools. Among these is an algorithm developed for automatically labeling the vertebral column and grading spinal stenosis. Lumbar spinal stenosis is a major cause of low back pain, with a global prevalence as high as 42%, according to the World Health Organization. MRI is the most frequently used tool for diagnosing spinal stenosis and assessing degenerative disease, but MRI examinations have lengthy acquisition times, are costly, and their interpretation is subject to high inter-reader variability. By assisting radiologists, automated spine labeling and stenosis grading may expedite interpretation, improve report consistency, and decrease that variability. The CCDS and MGH collaborators developed a U-Net model for spine localization and labeling and a ResNeXt-50 neural network for characterization of spinal stenosis; the system's performance surpassed the best results reported in the literature.
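The article does not detail how the grading stage was built, but the general approach can be illustrated with a brief sketch. The example below is a minimal sketch assuming a PyTorch implementation: a ResNeXt-50 backbone is adapted to classify disc-level MR images into stenosis grades. The four-grade scheme, input pipeline, and training loop are illustrative assumptions, not details of the published work.

```python
# Minimal sketch: adapting a ResNeXt-50 backbone to grade spinal stenosis
# from disc-level MR images. The number of grades (here 4: none/mild/
# moderate/severe) and the training setup are assumptions for illustration.
import torch
import torch.nn as nn
from torchvision import models

NUM_GRADES = 4  # assumed grading scheme

model = models.resnext50_32x4d(weights=None)             # backbone named in the article
model.fc = nn.Linear(model.fc.in_features, NUM_GRADES)   # replace the classifier head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def training_step(images, grades):
    """One supervised step on a batch of (image, radiologist grade) pairs."""
    optimizer.zero_grad()
    logits = model(images)            # images: (N, 3, H, W) tensor
    loss = criterion(logits, grades)  # grades: (N,) integer labels
    loss.backward()
    optimizer.step()
    return loss.item()
```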

A second algorithm performs a fully automated analysis of body composition from CT images of cancer patients using convolutional neural networks. The amounts of muscle and fat in a person's body, known as body composition, are correlated with cancer outcomes and with cardiovascular risk. Currently, measuring body composition requires time-consuming manual segmentation of CT images by an expert reader. The CCDS and colleagues at MGH and the Dana-Farber Cancer Institute developed a multi-stage machine learning algorithm that automatically measures muscle, visceral fat, and subcutaneous fat, with results comparable to those of expert human readers.
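As a concrete illustration of the measurement step only, the sketch below converts a per-pixel tissue segmentation of an axial CT slice (however it was produced) into cross-sectional areas. The label scheme and function name are assumptions for illustration and are not drawn from the published pipeline.

```python
# Minimal sketch of the measurement step: given a per-pixel tissue
# segmentation of one axial CT slice, convert label counts into
# cross-sectional areas in cm^2. Label values are assumed for illustration.
import numpy as np

MUSCLE, VISCERAL_FAT, SUBCUTANEOUS_FAT = 1, 2, 3  # assumed label scheme

def tissue_areas_cm2(mask, pixel_spacing_mm):
    """Return cross-sectional area per tissue class for one CT slice.

    mask: 2D integer array of per-pixel tissue labels.
    pixel_spacing_mm: (row, column) spacing in mm from the DICOM header.
    """
    pixel_area_cm2 = (pixel_spacing_mm[0] * pixel_spacing_mm[1]) / 100.0
    return {
        "muscle": float(np.sum(mask == MUSCLE)) * pixel_area_cm2,
        "visceral_fat": float(np.sum(mask == VISCERAL_FAT)) * pixel_area_cm2,
        "subcutaneous_fat": float(np.sum(mask == SUBCUTANEOUS_FAT)) * pixel_area_cm2,
    }
```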

Even more recently, MGH researchers, in collaboration with investigators at Oregon Health & Science University and Northeastern University, reported an algorithm that can automatically and more accurately diagnose retinopathy of prematurity (ROP), the leading cause of childhood blindness worldwide. Ophthalmologists typically diagnose ROP, which stems from abnormal blood vessel growth near the retina, through visual inspection of an infant’s dilated eye. There is a shortage of ophthalmologists skilled in detecting the condition, though, and even among those who are skilled, the assessments are largely subjective.

The new algorithm, which was trained to differentiate between healthy and diseased vessels in images of infants' eyes, addresses these limitations by providing an objective means of diagnosing ROP. In a paper published in JAMA Ophthalmology this month, the research team demonstrated that the algorithm diagnosed the condition with 91% accuracy, a substantial improvement over ophthalmologists with ROP expertise, who accurately diagnosed the condition 82% of the time. The authors are now developing the algorithm further, for example by exploring whether it can diagnose ROP from images of parts of the retina other than the vessels.
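For readers who want to see how such a head-to-head comparison is scored, the short sketch below computes diagnostic accuracy against a reference-standard diagnosis. The variable names and binary label scheme are illustrative assumptions, not the study's actual analysis code.

```python
# Minimal sketch of the comparison reported above: diagnostic accuracy of the
# model and of expert graders against a reference-standard diagnosis.
import numpy as np

def accuracy(predicted, reference):
    """Fraction of eyes whose predicted diagnosis matches the reference standard."""
    return float(np.mean(np.asarray(predicted) == np.asarray(reference)))

# reference_dx, model_dx, and expert_dx would each be arrays of per-eye labels
# (e.g., 0 = no disease, 1 = disease); with the figures above,
# accuracy(model_dx, reference_dx) would be about 0.91 versus about 0.82
# for accuracy(expert_dx, reference_dx).
```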

Producing Higher-quality Diagnostic Images with Artificial Intelligence

Another AI-based technique is helping to improve the quality of diagnostic images for several imaging modalities. Radiologists need to work with the highest-quality images possible—greater detail enables them to make more-accurate diagnoses—but acquiring enough data to produce such images may not always be feasible. It can involve increased radiation doses for computed tomography (CT) and positron emission tomography (PET) or uncomfortably long scan times for magnetic resonance imaging (MRI). Now, in a paper published in the journal Nature in March, a team of investigators at Mass General has described a technique that helps to overcome these obstacles.

Figure 1. A new artificial-intelligence-based approach to image reconstruction—called AUTOMAP—yields higher quality images from less data, reducing radiation doses for CT and PET and shortening scan times for MRI. Shown here are MR images reconstructed from the same data with conventional approaches (left) and AUTOMAP (right). Images courtesy of Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital.

Dubbed AUTOMAP (automated transform by manifold approximation), the technique uses artificial intelligence to learn the most appropriate image-reconstruction transform for a given acquisition, generating high-quality images from less acquired data. The researchers noted that AUTOMAP could also play a role in accelerating other artificial intelligence applications toward clinical use, including diagnostic imaging applications such as those described above, as these applications will benefit from the higher-quality images the technique provides.
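The Nature paper frames this as learning the transform between the sensor (raw data) domain and the image domain. The sketch below illustrates that general idea for MRI in PyTorch: a network maps flattened k-space samples directly to image pixels. The layer widths, image size, and class name are illustrative assumptions rather than the published AUTOMAP architecture.

```python
# Minimal sketch of the general idea: learn a mapping from raw sensor-domain
# data (flattened complex k-space samples, split into real and imaginary
# channels) directly to image-domain pixels, instead of applying a fixed
# analytic reconstruction. Layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

IMG = 64  # assumed image matrix size

class DomainTransformNet(nn.Module):
    def __init__(self):
        super().__init__()
        n = IMG * IMG
        self.fc = nn.Sequential(              # dense layers approximate the
            nn.Linear(2 * n, n), nn.Tanh(),   # sensor-to-image domain transform
            nn.Linear(n, n), nn.Tanh(),
        )
        self.conv = nn.Sequential(            # convolutional layers refine the image
            nn.Conv2d(1, 64, 5, padding=2), nn.ReLU(),
            nn.Conv2d(64, 64, 5, padding=2), nn.ReLU(),
            nn.Conv2d(64, 1, 5, padding=2),
        )

    def forward(self, kspace):                # kspace: (N, 2 * IMG * IMG) real/imag
        x = self.fc(kspace).view(-1, 1, IMG, IMG)
        return self.conv(x)                   # reconstructed image: (N, 1, IMG, IMG)

# Training would minimize, e.g., nn.MSELoss() between the network's output and
# reference images reconstructed from fully sampled data.
```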

Further Information

We would like to thank Mark Michalski, MD, and Katherine Andriole, PhD, of the Center for Clinical Data Science at Massachusetts General Hospital and Brigham and Women’s Hospital for their advice and assistance in preparing this article. Jayashree Kalpathy-Cramer, PhD, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, and Center for Clinical Data Science, Massachusetts General Hospital and Brigham and Women’s Hospital, is co-senior author of the JAMA Ophthalmology paper. Matt Rosen, PhD, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, is senior author of the Nature paper.

References

Brown JM, Campbell JP, Beers A, et al. (2018). Automated diagnosis of plus disease in retinopathy of prematurity using deep convolutional neural networks. JAMA Ophthalmology.

Lee H, Tajmir S, Lee J, et al. (2017). Fully automated deep learning system for bone age assessment. J Digit Imaging 30(4): 427-441.

Zhu B, Liu JZ, Cauley SF, et al. (2018). Image reconstruction by domain-transform manifold learning. Nature 555(7697): 487-492.



©2018 MGH Department of Radiology

Gary Boas, Author
Raul N. Uppot, M.D., Editor