All CT scans were performed on a 64-row multi-detector CT scanner (Somatom Sensation 64, Siemens; collimation 24 x 1.2 mm, slice thickness 1.5 mm).
Automatic tube current modulation (CareDose4D) was used for raw data acquisition, and filtered back projection was used for image reconstruction.
Three tube current-time products of 25, 50 and 100 mAs (milliampere-seconds) were combined with three tube voltages of 80, 100 and 120 kVp (peak kilovoltage), resulting in 9 different pairings: the 8 low-dose levels were compared against the standard-dose CT of 100 mAs/120 kVp.
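As a minimal sketch of the protocol grid described above (assuming the three voltages 80, 100 and 120 kVp), the 9 pairings and the 8 low-dose levels can be enumerated as follows; the tuple names are illustrative, not taken from the study:

```python
from itertools import product

# Three tube current-time products crossed with three tube voltages
# give 9 scan protocols; all but the 100 mAs / 120 kVp standard
# protocol count as low-dose levels.
MAS = (25, 50, 100)            # tube current-time products, mAs
KVP = (80, 100, 120)           # tube voltages, kVp (80/100/120 assumed)
STANDARD = (100, 120)          # standard-dose protocol: 100 mAs / 120 kVp

protocols = list(product(MAS, KVP))
low_dose = [p for p in protocols if p != STANDARD]
```

This simply makes the arithmetic explicit: 3 x 3 = 9 pairings, of which 8 are compared against the standard dose.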
Phantom and nodules
An anthropomorphic lung phantom (Chest Phantom N1, Kyoto Kagaku Co., Japan), modeled on an average Japanese man with a body weight of 70 kg, was used.
It is an accurate, life-size anatomical model of a male human torso with synthetic pulmonary vessels that are spatially traceable (right and left), a trachea, and an abdomen (diaphragm) block.
The phantom measures 43 cm x 40 cm x 48 cm (height).
The soft-tissue material (polyurethane, specific gravity 1.06) and the synthetic bones (epoxy resin) have X-ray absorption rates close to those of human tissues.
The arms-abducted position of the torso is suited to CT scanning.
Four artificial solid nodules with a density of +100 Hounsfield units (HU) and four artificial ground-glass nodules (GGN) with a density of -630 HU (Kyoto Kagaku Co., Japan), with diameters ranging from 5 to 12 mm, were used.
For each dose level, 100 nodules were randomly distributed among 40 phantoms.
Side and size were also randomly assigned to each nodule, so that the average nodule size equaled the mean of the four sizes (8.75 mm).
The 100 nodules consisted of 75 solid and 25 ground-glass nodules, simulating the prevalence found in screening.
To prevent recognition bias, the nodules in the phantom were rearranged for each exposure level, yielding a total of 900 nodules (675 solid and 225 GGN) across the 9 examined protocols.
Each nodule was registered on an Excel sheet indicating the exposure level, the exact location given by the slice position of the nodule center, and the side and density of the nodule (answer key).
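The answer-key bookkeeping above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the slice range is a placeholder, and the four diameters (5, 8, 10, 12 mm, whose mean is the quoted 8.75 mm) are an assumption.

```python
import random

SIZES_MM = (5, 8, 10, 12)  # assumed diameters; their mean is 8.75 mm

def make_answer_key(exposure_level, rng=None):
    """Generate one exposure level's answer key: 100 nodules,
    75 solid (+100 HU) and 25 ground glass (-630 HU), with side,
    size and slice position assigned at random."""
    rng = rng or random.Random(0)
    densities = [+100] * 75 + [-630] * 25  # screening-like solid/GGN mix
    rng.shuffle(densities)
    return [
        {
            "exposure": exposure_level,
            "slice_of_center": rng.randint(1, 300),  # placeholder slice range
            "side": rng.choice(("left", "right")),
            "size_mm": rng.choice(SIZES_MM),
            "density_hu": d,
        }
        for d in densities
    ]
```

Each record mirrors one row of the Excel answer key: exposure level, slice position of the nodule center, side, and density.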
Images were sent to a PACS workstation (Picture Archiving and Communication System R11.4.1, Sweden) and to 3 CAD workstations: CAD1 was syngo CT-CAD (Siemens), CAD2 was LMS-Lung/Track (Lesion Measurement Solution, Version 6, France), and CAD3 was a prototype Lung Nodule CAD.
Two blinded radiologists with 4 and 2 years of experience in chest CT imaging read the scans on the PACS workstation.
The readers had to register the lung nodules by indicating their location (side of the lung and exact slice position of the nodule center).
A third radiologist with 2 years of chest CT experience ran the CAD analysis.
He fed each CAD with the 1.5 mm slices and, by comparing every CAD finding with the answer key, documented the true-positive, false-negative, and false-positive nodules for each CAD.
The 2 radiologists and the 3 CADs were each analyzed individually, as standalone first readers, at each dose level.
The sensitivities of these 5 first readers were compared using the McNemar test.
The sensitivity of each low-dose level was then tested against the standard-dose level using the chi-square test (Z-test of proportions).
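Since the McNemar test is central to the comparisons above, a minimal exact (binomial) version is sketched below, assuming each reader is represented as a per-nodule hit/miss vector in answer-key order; only the discordant pairs (nodules found by exactly one reader) enter the statistic.

```python
from math import comb

def mcnemar_exact(hits_a, hits_b):
    """Exact two-sided McNemar test for two paired hit/miss vectors."""
    only_a = sum(1 for x, y in zip(hits_a, hits_b) if x and not y)
    only_b = sum(1 for x, y in zip(hits_a, hits_b) if y and not x)
    n = only_a + only_b
    if n == 0:
        return 1.0  # no discordant pairs: readers are indistinguishable
    k = min(only_a, only_b)
    # two-sided exact p-value under Binomial(n, 0.5)
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, p)
```

For example, if reader A finds 5 nodules that reader B misses and they agree on everything else, the exact two-sided p-value is 2 x (1/2)^5 = 0.0625.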
Each of the 5 readers was paired with each of the other 4 readers to test the double-reading outcome at each dose level: the number of nodules detected by at least one reader of a pairing defined the combined sensitivity.
The combined sensitivities of the ten pairings were compared with the McNemar test for each dose level.
Each low-dose level was then tested against the standard dose with the chi-square test (Z-test of proportions) to find the lowest acceptable dose level without loss of sensitivity compared to the standard CT.
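The double-reading rule above reduces to a logical OR over each pair's hit vectors; with 5 readers there are C(5,2) = 10 pairings. A sketch, again assuming per-nodule hit vectors in answer-key order (reader names here are illustrative):

```python
from itertools import combinations

def combined_sensitivity(hits_x, hits_y):
    # a nodule counts as detected if at least one reader of the pair found it
    detected = [x or y for x, y in zip(hits_x, hits_y)]
    return sum(detected) / len(detected)

def all_pairings(readers):
    # readers: dict mapping reader name -> per-nodule hit list (answer-key order)
    return {
        (a, b): combined_sensitivity(readers[a], readers[b])
        for a, b in combinations(readers, 2)
    }
```

With 2 radiologists and 3 CADs as input, all_pairings returns the ten combined sensitivities compared at each dose level.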
Inter-observer agreement was calculated for the detection of true-positive nodules, taking the classification into ground-glass and solid nodules into account, for each dose level separately.
The radiologists were compared among each other, agreement between the CADs was calculated separately, and the agreement of each radiologist with each CAD was calculated.
Mean agreements for the radiologists, for the CADs, and for the radiologists against the CADs were determined.
Inter-observer comparison was performed by calculating agreement levels using Fleiss' kappa statistics [28, 29]; kappa strength of agreement: < 0.20 poor, 0.21-0.40 fair, 0.41-0.60 moderate, 0.61-0.80 good, 0.81-1.00 very good.
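For reference, Fleiss' kappa can be computed from a subjects-by-categories count table (one row per nodule, one cell per category counting the raters who chose it). The following is a generic textbook implementation, not the authors' code, and the category layout is an assumption:

```python
def fleiss_kappa(table):
    """Fleiss' kappa for a list of rows, one per subject (nodule);
    each cell counts the raters who assigned that category
    (e.g. solid / ground glass / not detected)."""
    N = len(table)        # number of subjects
    n = sum(table[0])     # raters per subject (must be constant)
    k = len(table[0])     # number of categories
    # marginal proportion of assignments to each category
    p_j = [sum(row[j] for row in table) / (N * n) for j in range(k)]
    # per-subject observed agreement
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in table]
    P_bar = sum(P_i) / N              # mean observed agreement
    P_e = sum(p * p for p in p_j)     # chance agreement
    return (P_bar - P_e) / (1 - P_e)
```

Perfect agreement (all raters in one category per subject) yields kappa = 1, which falls in the "very good" band of the scale quoted above.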
Inter-observer agreement between 'radiologist-radiologist' and 'radiologist-CAD' pairs was tested using a comparison of correlation coefficients.