This poster is published under an open license. Please read the disclaimer for further details.
Keywords: Cancer, Observer performance, Mammography, Breast
Authors: K. Schilling, J. The, S. Griff, L. Oliver, R. Mahal, M. Saady, M. V. Velasquez; Boca Raton/US
DOI: 10.1594/ecr2015/C-1281
Methods and materials
The methods used for this study are outlined in Figure 1.
Eight experienced breast imaging radiologists from a single facility visually assigned 100 digital screening mammographic studies to one of four BI-RADS breast composition categories. Twelve studies were of women with breast implants and were excluded, leaving 88 studies in the final analysis.
For each pair of readers (i.e., 28 pairs), inter-reader agreement was assessed using Cohen's kappa coefficient (k).
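As an illustration of this step, Cohen's kappa for each reader pair can be computed from the paired category assignments; the sketch below uses hypothetical ratings for three readers (not study data) and pure-Python marginals rather than a statistics package:

```python
from itertools import combinations

def cohen_kappa(a, b, categories=(1, 2, 3, 4)):
    """Cohen's kappa for two raters' categorical assignments of the same cases."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n            # observed agreement
    p_e = sum((a.count(c) / n) * (b.count(c) / n)           # chance agreement
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical BI-RADS density categories (1-4) assigned by three readers
# to the same eight studies; the real study used 8 readers and 88 studies.
readers = {
    "R1": [1, 2, 2, 3, 4, 2, 3, 1],
    "R2": [1, 2, 3, 3, 4, 2, 2, 1],
    "R3": [2, 2, 2, 3, 4, 3, 3, 1],
}

# One kappa per reader pair; with 8 readers this yields C(8,2) = 28 values
kappas = {
    (r1, r2): cohen_kappa(readers[r1], readers[r2])
    for r1, r2 in combinations(readers, 2)
}
```

With eight readers, the same dictionary comprehension produces the 28 pairwise kappa values referred to above.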
The corresponding raw ("for processing") images were processed using fully automated quantitative software (VolparaDensity™ v1.4, Volpara Solutions, Wellington, New Zealand) to generate volumetric breast density and an associated Volpara Density Grade (VDG), equivalent to the BI-RADS density category.
After a 2-week washout period, all radiologists re-assessed breast density from the same 88 digital mammograms, with the VDG scores from VolparaDensity available during the reading session. As before, Cohen's kappa coefficient was used to assess inter-reader agreement for each pair of readers.
To assess whether inter-reader agreement improved with the use of automated density software as an aid, the Wilcoxon rank-sum test was used to compare the distributions of kappa scores without and with VolparaDensity.
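The final comparison can be sketched as follows. This is a minimal normal-approximation rank-sum test written from scratch (it uses midranks for ties but omits the tie correction to the variance), applied to shortened hypothetical lists of kappa scores, not the study's 28 actual values:

```python
from statistics import NormalDist

def rank_sum_test(x, y):
    """Wilcoxon rank-sum test, two-sided, normal approximation (no tie correction)."""
    combined = sorted((v, i) for i, v in enumerate(x + y))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):                  # assign midranks to tied values
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        midrank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[combined[k][1]] = midrank
        i = j + 1
    n1, n2 = len(x), len(y)
    w = sum(ranks[:n1])                       # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2               # mean of W under the null
    sigma = (n1 * n2 * (n1 + n2 + 1) / 12) ** 0.5
    z = (w - mu) / sigma
    p = 2 * (1 - NormalDist().cdf(abs(z)))    # two-sided p-value
    return z, p

# Hypothetical kappa scores per reader pair, without vs with the software aid
k_without = [0.45, 0.52, 0.38, 0.61, 0.49, 0.55, 0.43]
k_with = [0.68, 0.72, 0.59, 0.75, 0.66, 0.70, 0.64]
z, p = rank_sum_test(k_without, k_with)
```

In practice a library routine (e.g. `scipy.stats.ranksums`) would be used; the sketch only shows which quantities enter the test.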