Keywords:
Image verification, Trauma, Computer Applications-Detection, diagnosis, Conventional radiography, Pelvis, Artificial Intelligence
Authors:
J. Gregory1, J. W. Luo2, C. H. MO1, J. J. R. Chong2; 1Montreal/CA, 2Montreal, QC/CA
DOI:
10.26044/ecr2019/C-3471
Aims and objectives
The Advanced Trauma Life Support (ATLS) protocol is widely considered the standard of care for the management of acute trauma cases (Kortbeek et al. 2008). The protocol involves a primary survey addressing the detection and management of life-threatening issues, including pelvic radiography to identify post-traumatic injuries that can be a source of severe bleeding and, potentially, death (Thiyam et al. 2015). Rapid and accurate interpretation of pelvic radiographs is therefore essential in the evaluation of trauma patients, supporting a wide range of time-sensitive decisions from prioritizing surgeries to correct CT protocolling.
The interpretation of such pelvic radiographs is typically performed by non-radiologists such as emergency physicians and trauma and/or orthopedic surgeons, and at times by trainee residents of these specialties, which may inadvertently lead to missed or delayed detection of findings such as subtle fractures or subluxations (McLauchlan et al. 1997; Lim et al. 2006). Because radiology resources are limited, timely interpretation of plain films by radiologists may not always be possible. A practical tool that helps first-line care providers efficiently detect traumatic abnormalities on pelvic radiographs could therefore improve patient outcomes and workflow in trauma centers.
A number of groups have demonstrated the ability of convolutional neural networks (CNNs) to detect fractures and perform other specific tasks on orthopedic radiographs, at times approaching the performance of expert physicians. Olczak et al. (2017) were among the first to apply artificial intelligence to orthopedic radiographs, demonstrating that the VGG-16 network could detect fractures of the hand, wrist, and ankle with an accuracy of 83%, on par with two senior orthopedic surgeons.
Kim et al. (2017) showed that their Inception-v3 network achieved an area under the curve (AUC) of 0.954 for distal radius/ulna fractures. Lindsey et al. (2017) trained a network that detected wrist fractures with AUCs ranging from 0.97 to 0.99 depending on the input parameters, and demonstrated that its use as an aid decreased the misinterpretation rate of emergency physicians by 47%. Neural networks have also been investigated for hip fracture detection: one DenseNet implementation achieved an AUC of 0.99 (Gale et al. 2017), a VGG-16 network detected intertrochanteric fractures with an AUC of 0.98 (Urakawa et al. 2018), and most recently Adams et al. (2018) used AlexNet and GoogLeNet to show that increasing the training sample size raised AUCs from 0.91 to 0.95 and from 0.93 to 0.98, respectively.
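As a brief aside, the AUC reported in the studies above equals the probability that a randomly chosen abnormal case receives a higher model score than a randomly chosen normal case, and can be computed directly from labels and scores. The following minimal sketch illustrates this with made-up scores; the function and data are our own, not drawn from any of the cited studies.

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of positive/negative score pairs ranked correctly,
    with ties counted as half a correct ranking."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: four radiograph scores, two fractures (label 1), two normals.
print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

An AUC of 1.0 corresponds to perfect separation of abnormal from normal studies, while 0.5 corresponds to chance-level ranking.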
While these networks demonstrated excellent performance on a variety of narrow tasks (e.g., detection of a specific fracture type), deploying them in clinical practice will require validating a network that detects and identifies all types of post-traumatic abnormalities on a radiograph for a given clinical presentation, including different types of fractures as well as dislocations.
In this study, we evaluate the efficacy of a deep convolutional neural network (DCNN) for the detection of acute fracture or dislocation on a series of anteroposterior radiographs obtained for acute pelvic trauma.