Applications of deep learning in radiology
I-Classification
- Classification with deep learning usually operates on target lesions depicted in medical images, assigning each lesion to one of two or more classes.
Fig. 12: Differences between classification using conventional algorithms and deep learning algorithms. Note that Σ indicates combination of data inputs.
References: McBee, M. P., et al. Deep Learning in Radiology. Academic Radiology, 25(11), 1472–1480.
- For example, deep learning is frequently used for the classification of lung nodules on computed tomography (CT) images as benign or malignant (Fig. 13a).
- For efficient classification using a CNN, it is necessary to prepare a large amount of training data with corresponding labels.
- For lung nodule classification, CT images of lung nodules and their labels (i.e., benign or cancerous) are used as training data. Figure 13b, c shows two examples of training data for classification between benign lung nodules and primary lung cancers, where each datum includes an axial image and its label.
- Figure 14 shows training data where each datum includes three images (axial, coronal, and sagittal views of a lung nodule) and their label.
- After training the CNN, target lesions in medical images can be specified in the deployment phase by medical doctors or by a computer-aided detection (CADe) system.[4]
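The two-class pipeline described above can be illustrated with a toy forward pass: convolution, ReLU, global average pooling, and a softmax over the two labels (benign vs. malignant). This is a minimal sketch, not the authors' model; the patch size, kernels, and weights below are random placeholders standing in for learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(image, kernel):
    """Naive 'valid' 2-D convolution (cross-correlation) of one channel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify_nodule(patch, kernels, weights, bias):
    """Conv -> ReLU -> global average pooling -> dense -> softmax."""
    feats = np.array([np.maximum(conv2d_valid(patch, k), 0).mean() for k in kernels])
    return softmax(feats @ weights + bias)  # [P(benign), P(malignant)]

# Toy 32x32 CT patch and randomly initialized parameters
patch = rng.standard_normal((32, 32))
kernels = rng.standard_normal((4, 3, 3))
weights = rng.standard_normal((4, 2))
bias = np.zeros(2)

probs = classify_nodule(patch, kernels, weights, bias)
print(probs)  # two class probabilities summing to 1
```

In a real system the kernels and dense weights are learned from the labeled training data described above, and the class with the highest probability is reported for each lesion.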
Fig. 13: A schematic illustration of a classification system with CNN and representative examples of its training data. a Classification system with CNN in the deployment phase. b, c Training data used in training phase
References: Yamashita, R., Nishio, M., Gian, R. K., & Togashi, K. (2018, June 22). Convolutional neural networks: an overview and application in radiology. Retrieved from https://link.springer.com/article/10.1007/s13244-018-0639-9.
Fig. 14: A schematic illustration of a classification system with CNN using three orthogonal images (axial, coronal, and sagittal) per lesion, and representative examples of its training data. a Classification system with CNN in the deployment phase. b, c Training data used in training phase
References: Yamashita, R., Nishio, M., Gian, R. K., & Togashi, K. (2018, June 22). Convolutional neural networks: an overview and application in radiology. Retrieved from https://link.springer.com/article/10.1007/s13244-018-0639-9.
Fig. 15: Chest CT image with a pulmonary nodule as input into a CNN for analysis using a deep learning technique.
References: Koenigkam Santos, M., Ferreira Júnior, J. R., Wada, D. T., … Azevedo-Marques, P. M. de. (n.d.). Artificial intelligence, machine learning, computer-aided diagnosis, and radiomics: advances in imaging towards precision medicine. Retrieved from http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0100-39842019000600011&tlng=en
Another example is an explainable deep learning algorithm developed for accurate, highly sensitive detection and classification of intracranial hemorrhage (ICH) on unenhanced head CT scans:
Fig. 16: Brain hemorrhage detection and classification by AI
References: https://bioengineeringcommunity.nature.com/users/203140-michael-h-lev-md-faha-facr/posts/42310-explainable-radiologist-mimicking-deep-learning-for-detection-of-acute-intracranial-haemorrhage-from-small-ct-datasets
II-Segmentation
- Segmentation can be defined as the identification of pixels or voxels composing an organ or structure of interest
- An effective deep learning approach is based on a CNN that directly produces a full-resolution segmentation output (Fig 17).
Fig. 17: CNN for segmentation
References: https://www.youtube.com/watch?v=_74_292I-5s&feature=youtu.be
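The full-resolution idea can be sketched minimally: if a CNN backbone yields a feature vector at every pixel, a 1x1 convolution (a weighted sum across channels) plus a sigmoid threshold turns those features into a binary mask of the same size as the input. The feature maps and weights below are random stand-ins for a trained network.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def segment(feature_maps, weights, bias, threshold=0.5):
    """Collapse per-pixel feature channels with a 1x1 convolution,
    then threshold the sigmoid score into a full-resolution mask."""
    # feature_maps: (C, H, W); weights: (C,)
    scores = np.tensordot(weights, feature_maps, axes=1) + bias  # (H, W)
    return (sigmoid(scores) > threshold).astype(np.uint8)

rng = np.random.default_rng(1)
feats = rng.standard_normal((8, 64, 64))  # toy feature maps from a CNN backbone
w = rng.standard_normal(8)
mask = segment(feats, w, bias=0.0)
print(mask.shape)  # (64, 64): one label per pixel, same resolution as the input
```

The key property shown here is that the output has one decision per pixel, which is what distinguishes segmentation from whole-image classification.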
- Fig 18 demonstrates that it is possible to automatically identify (segment) cartilage and meniscus tissue in the knee joint and to extract measures of tissue structure, such as volume and thickness, as well as tissue biochemistry, by a method known as MR relaxometry.[6]
Fig. 18: To perform automatic segmentation of cartilage and meniscus, the authors developed a deep learning model based on the U-Net convolutional network architecture using 638 image datasets. Performance of the automatic segmentation was evaluated using the Dice coefficient of overlap with manual segmentations done by radiologists, which took more than an hour per dataset. The models averaged five seconds to generate automatic segmentations, with excellent agreement with those done by radiologists. The precision of, and agreement between, measures of cartilage thickness and biochemistry provided by the deep learning models and those obtained with manual methods were also excellent. Measures of relaxation times (biochemical information) and morphologic characterization of joint tissues (such as cartilage thickness and volume) are not available in the clinic today because of the long analysis times required.
References: Deep Learning Attacks Joint Degeneration and Osteoarthritis: Musculoskeletal Imaging Research Published in 'Radiology'. (2019, August 6). Retrieved from https://radiology.ucsf.edu/blog/deep-learning-attacks-joint-degeneration-and-osteoarthritis-musculoskeletal-imaging-research.
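The Dice coefficient used above to score automatic against manual segmentations can be computed directly from binary masks; the 10x10 masks below are toy examples, not real cartilage data.

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-8):
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

auto = np.zeros((10, 10), dtype=np.uint8)
auto[2:8, 2:8] = 1      # automatic mask (6x6 = 36 px)
manual = np.zeros((10, 10), dtype=np.uint8)
manual[3:8, 2:8] = 1    # manual mask (5x6 = 30 px); overlap = 30 px

print(dice_coefficient(auto, manual))  # 2*30 / (36+30) ≈ 0.909
```

A Dice of 1.0 means perfect overlap and 0.0 means none; values above roughly 0.8–0.9 are commonly read as excellent agreement.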
III-Detection
- The goal of detection is to identify abnormalities within medical images. Abnormalities can be rare, and they must be detected among many normal cases.
- A common strategy to train a CNN for detection in this setting is to generate a surrogate dataset based on small patches extracted from the original images. Patches are typically sampled in equal numbers from the target class and the background class, providing a simple mechanism to mitigate the class imbalance that naturally occurs in detection tasks.[5]
Fig. 19: Training with patches. Detection tasks in the medical field are commonly solved by training convolutional networks on a surrogate dataset composed of small patches extracted from the original images. Just as for classification, the CNN can be pretrained on an existing database and fine-tuned for the target application. Cv = convolution, FC = fully connected, MP = max pooling.
References: Chartrand, G., G, H., Becker, Kooi, Havaei, Anthimopoulos, … Setio AA. (2017, November 13). Deep Learning: A Primer for Radiologists. Retrieved from https://pubs.rsna.org/doi/full/10.1148/rg.2017170077.
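The balanced patch-sampling strategy can be sketched as follows. The image, lesion mask, patch size, and counts are all toy values chosen for illustration; a real pipeline would sample from annotated clinical images.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_balanced_patches(image, lesion_mask, patch_size=9, n_per_class=50):
    """Extract equal numbers of lesion-centered and background-centered
    patches, mitigating the class imbalance of detection tasks."""
    half = patch_size // 2
    padded = np.pad(image, half)  # pad so border patches keep a fixed size
    patches, labels = [], []
    lesion_yx = np.argwhere(lesion_mask)
    background_yx = np.argwhere(~lesion_mask)
    for coords, label in ((lesion_yx, 1), (background_yx, 0)):
        idx = rng.choice(len(coords), size=n_per_class, replace=True)
        for y, x in coords[idx]:
            patches.append(padded[y:y + patch_size, x:x + patch_size])
            labels.append(label)
    return np.stack(patches), np.array(labels)

image = rng.standard_normal((128, 128))
mask = np.zeros((128, 128), dtype=bool)
mask[40:60, 70:90] = True  # toy lesion region
X, y = sample_balanced_patches(image, mask)
print(X.shape, y.mean())  # (100, 9, 9) 0.5 -> perfectly balanced classes
```

Because positives and negatives appear in equal numbers, the CNN trained on these patches is not dominated by the overwhelming majority of normal background pixels.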
Fig. 20: Feature extraction example: automated detection of pneumothorax.
Further machine learning research has also been performed for the detection of critical findings such as pneumothorax (Fig 20), fractures, organ laceration, stroke, and pulmonary and thyroid nodules.
References: Choy, G., Khalilzadeh, O., Michalski, M., Do, S., Samir, A. E., Pianykh, O. S., … Dreyer, K. J. (2018). Current Applications and Future Impact of Machine Learning in Radiology. Radiology, 288(2), 318–328. doi: 10.1148/radiol.2018171820
Fig. 21: CheXNet is a 121-layer convolutional neural network that takes a chest X-ray image as input and outputs the probability of a pathology. In this example, CheXNet correctly detects pneumonia and also localizes the areas of the image most indicative of the pathology.
References: CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning. Retrieved from https://scinapse.io/papers/2770241596
- Current clinical applications of machine learning in radiology are summarized in Figs. 22 and 23.
Fig. 22: Summary of current clinical applications of machine learning in radiology
References: author
Fig. 23: Summary of current clinical applications of machine learning in radiology
References: author
Fig. 24: A schematic illustration of the system for denoising an ultra-low-dose CT (ULDCT) image of phantom and representative examples of its training data. a Denoising system with CNN in deployment phase. b Training data used in training phase. SDCT, standard-dose CT
References: Yamashita, R., Nishio, M., Gian, R. K., & Togashi, K. (2018, June 22). Convolutional neural networks: an overview and application in radiology. Retrieved from https://link.springer.com/article/10.1007/s13244-018-0639-9.
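The ULDCT-to-SDCT setup in Fig. 24 pairs a noisy low-dose input with a standard-dose target. As a hedged illustration only, a 3x3 mean filter stands in below for the trained denoising CNN, and mean squared error (MSE) against the standard-dose target measures the improvement; the images are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def mean_filter3x3(img):
    """3x3 box filter as a stand-in for a denoising CNN's output."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0

# Toy training pair: smooth standard-dose image and its noisy low-dose version
sdct = np.full((64, 64), 100.0)
uldct = sdct + rng.normal(0, 10, sdct.shape)

denoised = mean_filter3x3(uldct)
print(mse(uldct, sdct), mse(denoised, sdct))  # error drops after denoising
```

A denoising CNN is trained to minimize exactly this kind of pixel-wise error between its output on the ULDCT input and the SDCT target.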
What are the key segments in AI-based medical imaging?
Fig. 25: Key segments in AI based medical imaging
References: The Best AI-based Medical Imaging Tools - 2020 Reviews, Features, Pricing, Comparison. (2020, January 14). Retrieved from https://www.predictiveanalyticstoday.com/what-is-ai-based-medical-imaging/
AI vs radiologist performance on chest X-ray:
Fig. 26: AI vs Doctor: Lung Tumor Recognition on a Chest X-ray
References: MD, C. L. (2019, May 9). AI vs Radiologists: Performance on Chest X-Rays. Retrieved from https://www.clearvuehealth.com/b/ai-radiology-xray/
Fig. 27: AI vs Doctor: Pneumonia Recognition on a Chest X-ray
References: MD, C. L. (2019, May 9). AI vs Radiologists: Performance on Chest X-Rays. Retrieved from https://www.clearvuehealth.com/b/ai-radiology-xray/
Fig. 28: ROC Performance Evaluation for Doctors vs AI
References: MD, C. L. (2019, May 9). AI vs Radiologists: Performance on Chest X-Rays. Retrieved from https://www.clearvuehealth.com/b/ai-radiology-xray/
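The ROC comparison above reduces to the area under the curve (AUC), which equals the probability that a randomly chosen abnormal case is scored higher than a randomly chosen normal one (the Mann-Whitney U statistic). A small sketch with made-up scores:

```python
def roc_auc(scores, labels):
    """AUC via the Mann-Whitney statistic: the probability that a
    randomly chosen positive is scored above a random negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy scores for 4 abnormal (1) and 4 normal (0) chest X-rays
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.7, 0.4, 0.6, 0.3, 0.2, 0.1]
print(roc_auc(scores, labels))  # 0.9375: one negative outranks one positive
```

An AUC of 0.5 corresponds to chance performance and 1.0 to perfect separation, which is why ROC curves are the standard way to compare AI models with radiologists across all operating thresholds.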
Machine learning approaches face a variety of challenges (summarized in fig 29).
Fig. 29: Overview of the challenges of machine learning in radiology
References: Hofmann, P., Oesterle, S., Rust, P., & Urbach, N. (2018). Machine Learning Approaches along the Radiology Value Chain – Rethinking Value Propositions. 27th European Conference on Information Systems (ECIS).
Future of Machine Learning in Radiology:
Fig. 30: AI areas of impact for medical imaging practice.
References: https://onlinelibrary.wiley.com/doi/full/10.1002/jmrs.369
- Availability of a large amount of electronic medical record data allows for creation of an interdisciplinary data pool.
- Machine learning extracts knowledge from this big data and produces outputs that could be used for individual outcome prediction and clinical decision making. This could pave the way for personalized medicine (or precision medicine).[5]
Fig. 31: This plot outlines the performance levels of artificial intelligence (AI) and human intelligence starting from the early computer age and extrapolating into the future.
References: https://www.nature.com/articles/s41568-018-0016-5?draft=collection#Sec11
AI/Machine Learning: A Friend or Foe?
Fig. 32: AI, DL: friend or foe?
References: https://www.diagnosticimaging.com/article/ai-and-future-radiology?utm_source=bibblio_recommendation
- Neither side of this debate is correct.
- Machine learning is most likely to become a complement to, rather than a substitute for, radiologists.
- The reason is simple: while machine learning has already proven its ability to match or exceed the performance of radiologists on some tasks, these tasks represent only a small portion of the responsibilities of a radiologist.
- Algorithms won’t replace the human touch involved in discussing a diagnosis with a patient.
- And because algorithms are hyper-focused, they won’t be able to provide a holistic and exhaustive diagnosis.[8]
- Machine learning will provide quantitative tools that will increase the value of diagnostic imaging as a biomarker, increase image quality with decreased acquisition times, and improve workflow, communication, and patient safety.
Fig. 33: Embracing Innovation
References: author
- We predict that today's generation of radiologists will be replaced not by ML algorithms, but by a new breed of data science-savvy radiologists who have embraced and harnessed the incredible potential that machine learning has to advance our ability to care for our patients.[9]