Aims and objectives
Diagnosing patients using PET/CT images is a time-consuming task that requires frequent switching between modalities. Fusion images are produced by overlaying color-coded PET data on CT images [1]. Interpretation of these fusion images can be hampered by large variations in background pixel intensity (CT), non-uniformly perceived color choices, and limited context sensitivity. Here, we apply recent machine learning techniques to produce improved fusion images that could simplify the task of diagnosis [2].
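For illustration only, a minimal sketch of the conventional overlay step is given below: a grayscale CT background blended with a color-mapped PET signal at constant opacity. The window settings, colormap, and alpha value are illustrative assumptions, not parameters from this work or the cited literature.

```python
# Minimal sketch of conventional PET/CT fusion by constant-alpha blending.
# All parameters (CT window, colormap, alpha) are illustrative assumptions.
import numpy as np
import matplotlib.pyplot as plt

def fuse_pet_ct(ct_hu: np.ndarray, pet_suv: np.ndarray, alpha: float = 0.4) -> np.ndarray:
    """Blend a color-mapped PET slice onto a windowed grayscale CT slice."""
    # Window the CT image (assumed soft-tissue window: center 40 HU, width 400 HU).
    lo, hi = 40 - 200, 40 + 200
    ct = np.clip((ct_hu - lo) / (hi - lo), 0.0, 1.0)
    ct_rgb = np.stack([ct] * 3, axis=-1)

    # Normalize PET uptake and apply a color map.
    pet = np.clip(pet_suv / max(float(pet_suv.max()), 1e-6), 0.0, 1.0)
    pet_rgb = plt.get_cmap("inferno")(pet)[..., :3]

    # Constant-alpha overlay: the fixed color coding and uniform transparency
    # of this step are what can obscure CT detail, as noted above.
    return (1 - alpha) * ct_rgb + alpha * pet_rgb

# Example with synthetic arrays (a real pipeline would load registered DICOM slices).
ct_slice = np.random.normal(0, 100, (256, 256))
pet_slice = np.abs(np.random.normal(0, 2, (256, 256)))
plt.imshow(fuse_pet_ct(ct_slice, pet_slice))
plt.axis("off")
plt.show()
```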
Methods and materials
The creation of a fusion image involves a number of steps. The standard model is enhanced with an additional machine learning component, illustrated in figure 1. For this component we apply a proven approach from pathology and microscopy, based on convolutional neural networks and described in detail by Ronneberger et al. [2]. The deep neural network was trained with labeled CT and PET information. Data from The Cancer Imaging Archive [4] was used as the initial training material.
The output has...
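As a hedged illustration, the sketch below shows a small U-Net-style encoder/decoder of the kind described by Ronneberger et al. [2], here assumed to map a registered two-channel CT+PET input to a three-channel fusion image. The channel counts, depth, output format, and framework (PyTorch) are assumptions; the exact architecture and training configuration are not specified here.

```python
# Minimal U-Net-style sketch (assumed PyTorch implementation); channel
# counts, depth, and the CT+PET-to-RGB mapping are illustrative assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch: int = 2, out_ch: int = 3):
        # in_ch=2: registered CT + PET slices; out_ch=3: RGB fusion image (assumed).
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, out_ch, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return torch.sigmoid(self.head(d1))  # per-pixel RGB in [0, 1]

# Example forward pass on one batch of 128x128 CT+PET slices.
model = TinyUNet()
x = torch.randn(1, 2, 128, 128)
print(model(x).shape)  # torch.Size([1, 3, 128, 128])
```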
Results
Initial results of the new fusion approach, compared to standard fusion, are shown for two example images from one patient with metastasized NSCLC (figures 2 and 3). The new fusion images are more readily interpretable than standard fusion and integrate tumor tissue properties. They allow PET and CT information to be examined in a single image without sacrificing contrast or content from either modality.
Conclusion
We show the viability of incorporating machine learning approaches to improve the visualization of fused PET/CT data. In addition, this approach takes a first step toward improving the interpretability of complex neural networks. We plan to extend the training and perform a large-scale reader study to demonstrate the viability and improved diagnostic accuracy of such tools. These visualizations will also serve as the basis for fully automated staging of lung cancer patients.
References
1. Zaidi H, ed. Quantitative Analysis in Nuclear Medicine Imaging. Boston, MA: Springer US; 2006. doi:10.1007/b107410.
2. Xie M, Jean N, Burke M, Lobell D, Ermon S. Transfer Learning from Deep Features for Remote Sensing and Poverty Mapping.
3. Cheng J-Z, Ni D, Chou Y-H, et al. Computer-Aided Diagnosis with Deep Learning Architecture: Applications to Breast Lesions in US Images and Pulmonary Nodules in CT Scans. Sci Rep. 2016;6:24454. doi:10.1038/srep24454.