Keywords:
Computer Applications-General, CT, Liver, Artificial Intelligence, Abdomen, Image verification
Authors:
R. Remtulla1, S. L. Mihalcioiu2, J. W. Luo2, B. Gallix2, J. J. R. Chong2; 1Montreal, Quebec/CA, 2Montreal, QC/CA
DOI:
10.26044/ecr2019/C-3057
Methods and materials
Study Population
A retrospective case-control review was performed in which consecutive abdominal CT examinations acquired over a 10-year period at two academic tertiary care hospitals were exported, in order to maximize dataset diversity and variability.
A random sample of images was divided into 6 classes (Fig. 1).
These classes were: [A] enhanced slices including liver during the arterial phase;
[B] slices not including liver during any phase of enhancement;
[C] enhanced slices including liver during the delayed phase;
[D] non-axial slices;
[E] non-enhanced slices including liver;
and [F] enhanced slices including liver during the portal venous phase.
Image Pre-Processing & Training Labels
Full DICOM studies were exported with no pre-selection by series type or description.
All DICOM images were then downsampled to 256x256 px, with all associated meta-information removed.
During this conversion,
the manufacturer's standard window width/window level settings were maintained.
Individual slice images were anonymized as per standard protocols.
The dataset was then pre-sorted into the 6 label classes for training.
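As an illustration, the windowing and downsampling steps above might be sketched as follows. The function name, the block-averaging downsampling approach, and the 512x512 input size are assumptions for the example, not details of the authors' actual pipeline:

```python
import numpy as np

def window_and_downsample(pixels, center, width, out_size=256):
    """Apply a window level/width transform, scale to 8-bit, and
    downsample a square slice to out_size x out_size by block averaging."""
    lo, hi = center - width / 2.0, center + width / 2.0
    windowed = np.clip(pixels.astype(np.float64), lo, hi)
    scaled = (windowed - lo) / (hi - lo) * 255.0  # map window range to [0, 255]
    factor = pixels.shape[0] // out_size          # assumes side % out_size == 0
    blocks = scaled.reshape(out_size, factor, out_size, factor)
    return blocks.mean(axis=(1, 3)).astype(np.uint8)

# Example: a synthetic 512x512 "CT slice" in Hounsfield units,
# with an illustrative soft-tissue window (WL 60, WW 150)
slice_hu = np.random.randint(-1000, 1000, size=(512, 512))
img = window_and_downsample(slice_hu, center=60, width=150)
print(img.shape)  # (256, 256)
```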
Neural Network Configuration
We employed an ImageNet pre-trained deep convolutional neural network (DCNN) with the Inception-ResNet-v2 classification architecture [9].
The dataset was split into 70% training,
10% validation,
and 20% test subsets.
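A minimal sketch of such a shuffled three-way split (the filenames and seed are hypothetical, used only to make the example runnable):

```python
import random

def split_dataset(items, fractions=(0.7, 0.1, 0.2), seed=42):
    """Shuffle and partition a list of image paths into
    training / validation / test subsets by the given fractions."""
    rng = random.Random(seed)        # fixed seed for a reproducible split
    shuffled = items[:]
    rng.shuffle(shuffled)
    n_train = int(fractions[0] * len(shuffled))
    n_val = int(fractions[1] * len(shuffled))
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

train, val, test = split_dataset([f"slice_{i:04d}.png" for i in range(1000)])
print(len(train), len(val), len(test))  # 700 100 200
```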
Networks were trained using stochastic gradient descent (SGD) with an initial learning rate of 0.003 alongside weight decay and 5 x 20-epoch cosine annealing schedules for a total of 100 epochs.
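The restart schedule can be expressed as a simple function of the epoch. This is a sketch assuming the learning rate decays to zero within each 20-epoch cycle before resetting; the actual minimum learning rate is not stated in the text:

```python
import math

def cosine_annealing_lr(epoch, lr_max=0.003, lr_min=0.0, cycle_len=20):
    """Learning rate under cosine annealing with warm restarts:
    decays from lr_max toward lr_min over each cycle, then resets."""
    t = epoch % cycle_len  # position within the current cycle
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t / cycle_len))

# 5 cycles of 20 epochs = 100 training epochs
schedule = [cosine_annealing_lr(e) for e in range(100)]
print(schedule[0], schedule[20])  # lr resets to 0.003 at the start of each cycle
```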
The neural network was implemented in TensorFlow under Python 3.6,
and training was performed on a Titan X (Pascal) workstation.
Standard data augmentation consisting of affine transformations (random cropping,
rotation,
shearing,
and horizontal flipping) was performed.
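A simplified sketch of this style of augmentation on NumPy arrays, restricting rotation to quarter-turns for brevity (the shearing and arbitrary-angle rotation the authors describe are omitted, and the 224-pixel crop size is an assumption):

```python
import numpy as np

def augment(img, rng, crop=224):
    """Simplified augmentation: random horizontal flip,
    random quarter-turn rotation, and a random crop."""
    out = img
    if rng.random() < 0.5:
        out = np.fliplr(out)                      # horizontal flip
    out = np.rot90(out, k=rng.integers(0, 4))     # rotate by 0/90/180/270 degrees
    y = rng.integers(0, out.shape[0] - crop + 1)  # random crop origin
    x = rng.integers(0, out.shape[1] - crop + 1)
    return out[y:y + crop, x:x + crop]

rng = np.random.default_rng(0)
patch = augment(np.zeros((256, 256), dtype=np.uint8), rng)
print(patch.shape)  # (224, 224)
```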
All images underwent histogram normalization prior to training.
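Assuming "histogram normalization" here refers to histogram equalization of the 8-bit images, a minimal sketch:

```python
import numpy as np

def equalize_histogram(img):
    """Histogram equalization for an 8-bit image: remap grey levels
    through the normalized cumulative distribution function."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = cdf / cdf[-1]                           # normalize CDF to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)    # lookup table of new grey levels
    return lut[img]

# Example: a horizontal grey-level ramp
img = np.tile(np.arange(256, dtype=np.uint8), (256, 1))
out = equalize_histogram(img)
print(out.shape, out.dtype)  # (256, 256) uint8
```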
Neural Network Objective and Subjective Validation
The trained network was evaluated using areas under the receiver operating characteristic (ROC) curves.
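For a multi-class problem such as this one, per-class AUCs are typically computed one-vs-rest. A minimal sketch using the pairwise-ranking formulation of AUC (the toy scores and labels are illustrative only, not study data):

```python
import numpy as np

def auc_one_vs_rest(scores, labels):
    """AUC for one class treated as positive vs. all others, computed
    as the probability that a random positive outscores a random
    negative (equivalent to the area under the ROC curve)."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Count pairwise wins, with ties counted as half a win
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

scores = np.array([0.9, 0.8, 0.4, 0.3, 0.2])
labels = np.array([1, 1, 0, 1, 0])
print(auc_one_vs_rest(scores, labels))  # ~0.833
```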
Further validation of the trained network was performed using saliency and class activation maps to determine whether relevant image regions were used to make classification decisions.