Abder-Rahman Ali, PhD, a research fellow in the Department of Radiology at Massachusetts General Hospital, is the first author of a new study, Self-Supervised Learning Meets Liver Ultrasound Imaging, presented at the Self-Supervised Learning: Theory and Practice workshop at the 37th Conference on Neural Information Processing Systems (NeurIPS 2023). Anthony Samir, MD, is the senior author.

Summary:

This study introduces two self-supervised learning methods, SimCLR+LR and SimCLR+ENet, for enhancing liver ultrasound imaging. These methods leverage a large set of unlabeled abdominal ultrasound images to improve liver view classification and the segmentation of the liver and regions of poor probe contact, respectively. Both methods outperform traditional supervised learning and state-of-the-art alternatives. They could optimize the liver ultrasound workflow and the reliability of shear wave elastography (SWE) in a real-time clinical setting, while reducing the time and cost associated with labeling liver ultrasound images.

What Question Were You Investigating with this Study?

We investigated how self-supervised learning can enhance liver view classification and the segmentation of the liver and regions of poor probe contact using a large set of unlabeled abdominal ultrasound images, and assessed the effectiveness of such methods in a clinical setting.

What Methods or Approach Did You Use?

SimCLR+LR for Liver View Classification:

SimCLR+LR: In this method, the SimCLR framework is first trained on a large set of unlabeled liver ultrasound images. This self-supervised phase teaches the model to distinguish different views of the liver without any labels. The learned representations are then used with a smaller labeled dataset to fit a logistic regression (LR) classifier for the downstream task (i.e., liver view classification).
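As a minimal sketch of the "linear probe" idea described above, the snippet below fits a logistic-regression classifier on frozen embeddings. The embeddings here are synthetic stand-ins (in the study they would come from the pretrained SimCLR encoder), so all names, dimensions, and data in this sketch are illustrative assumptions, not the study's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 128-d "embeddings" for two classes ("with liver" vs. "without
# liver"), separated along a random direction to mimic the output of a
# well-trained, frozen encoder.
direction = rng.normal(size=128)
X = rng.normal(size=(200, 128)) + np.outer(np.repeat([1.0, -1.0], 100), direction)
y = np.repeat([1, 0], 100)

def train_logistic_regression(X, y, lr=0.1, epochs=200):
    """Plain gradient-descent logistic regression: the 'LR' probe."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid probabilities
        grad_w = X.T @ (p - y) / len(y)          # gradient of log-loss
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

w, b = train_logistic_regression(X, y)
pred = (X @ w + b > 0).astype(int)
accuracy = np.mean(pred == y)
```

Because the heavy lifting is done by the pretrained encoder, even a simple linear classifier like this can separate the classes with very few labeled examples.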

SimCLR+ENet for Liver Segmentation and Poor Probe Contact Detection:

Similar to the first approach, SimCLR was used for initial feature learning. ENet, combined with CascadePSP for refinement, was then employed for liver segmentation and detecting poor probe contact. The method stands out for its use of physics-inspired augmentations and a two-stage training process that effectively addresses the challenges in liver ultrasound imaging with limited labeled data.
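The physics-inspired augmentations mentioned above could, for instance, mimic depth-dependent acoustic attenuation and the multiplicative speckle noise characteristic of ultrasound. The sketch below is a hypothetical illustration of that idea; the transform names and parameters are assumptions, not the exact augmentations used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def depth_attenuation(image, strength=0.5):
    """Darken the image with depth, mimicking acoustic attenuation."""
    depth = np.linspace(0.0, 1.0, image.shape[0])[:, None]  # 0 at top, 1 at bottom
    return image * (1.0 - strength * depth)

def speckle_noise(image, sigma=0.1):
    """Multiplicative speckle, the characteristic noise of ultrasound."""
    noise = rng.normal(loc=1.0, scale=sigma, size=image.shape)
    return np.clip(image * noise, 0.0, 1.0)

# Apply both transforms to a toy image with intensities in [0, 1].
img = rng.uniform(size=(64, 64))
augmented = speckle_noise(depth_attenuation(img))
```

Augmentations grounded in the image-formation physics produce views that are plausible ultrasound variations of the same anatomy, which is what contrastive methods like SimCLR need to learn useful invariances.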

What Were the Results?

SimCLR+LR for Liver View Classification: This method achieved an accuracy of 95.1% for classifying liver views ("with liver") and 93.9% for non-liver views ("without liver").

Overall, the accuracy of SimCLR+LR was 94.5%, significantly outperforming other models like ResNet-18 (70.2%) and MLP-Mixer (86%). Notably, SimCLR+LR, even when trained with just one labeled image per class, performed comparably to MLP-Mixer trained on a full dataset.

SimCLR+ENet for Liver Segmentation and Poor Probe Contact Detection:

For liver segmentation, SimCLR+ENet achieved an average Dice coefficient of 90.58% and an average Hausdorff distance of 21.71, lower than U-Net's, indicating superior performance.
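For readers unfamiliar with the metric, the Dice coefficient measures the overlap between a predicted binary mask and a ground-truth mask, ranging from 0 (no overlap) to 1 (perfect match). A minimal, self-contained computation with illustrative masks (not study data):

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-8):
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Two 16-pixel squares that overlap in a 3x3 (9-pixel) region.
pred = np.zeros((8, 8), dtype=int); pred[2:6, 2:6] = 1
truth = np.zeros((8, 8), dtype=int); truth[3:7, 3:7] = 1
score = dice_coefficient(pred, truth)   # 2*9 / (16+16) = 0.5625
```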

For poor probe contact detection, the model showed an overall accuracy of 65.4%, with a sensitivity of 49.1% and a specificity of 82.1%, indicating moderate success in detecting poor probe contact and better performance in identifying its absence. A limitation is that the model did not account for the zooming factor during training, which could be addressed in future work.
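Sensitivity and specificity summarize a detector's behavior on positive cases (poor contact present) and negative cases (poor contact absent), respectively. The sketch below reproduces the reported rates from illustrative confusion counts chosen for that purpose; they are not the study's actual counts.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity and specificity from a binary confusion matrix."""
    sensitivity = tp / (tp + fn)   # true-positive rate: poor contact correctly flagged
    specificity = tn / (tn + fp)   # true-negative rate: good contact correctly passed
    return sensitivity, specificity

# Illustrative counts (55 positive frames, 56 negative) chosen so the
# resulting rates match the reported 49.1% and 82.1%.
sens, spec = sensitivity_specificity(tp=27, fn=28, tn=46, fp=10)
```

The gap between the two rates mirrors the study's finding: the model is more reliable at confirming good probe contact than at flagging poor contact.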

What are the Clinical Implications?

1) Efficiency and Cost Reduction: The introduction of the self-supervised learning methods, SimCLR+LR and SimCLR+ENet, for liver view classification and liver segmentation/poor probe contact detection in abdominal ultrasound images significantly reduces the time and cost associated with data labeling.

2) Improved Diagnostic Performance: The proposed methods have demonstrated superior performance compared to state-of-the-art (SOTA) methods. This implies more accurate and reliable diagnostic results in ultrasound imaging, particularly for liver-related conditions.

What are the Next Steps?

Extending the work to multi-class organ classification and improving the quality of training data using data-centric AI approaches. This advancement aims to broaden the application of the developed self-supervised learning methods beyond liver imaging to include various other organs, enhancing the scope and accuracy of medical diagnostics using ultrasound imaging.

Paper Cited:

Self-Supervised Learning Meets Liver Ultrasound Imaging. Abder-Rahman Ali, Anthony E. Samir; Neural Information Processing Systems (NeurIPS) Workshop on Self-Supervised Learning: Theory and Practice, 2023.

About the Massachusetts General Hospital

Massachusetts General Hospital, founded in 1811, is the original and largest teaching hospital of Harvard Medical School. The Mass General Research Institute conducts the largest hospital-based research program in the nation, with annual research operations of more than $1 billion, comprising more than 9,500 researchers working across more than 30 institutes, centers and departments. MGH is a founding member of the Mass General Brigham healthcare system.