Keywords:
Bones, Extremities, Conventional radiography, Digital radiography, Neural networks, Computer Applications-General, Education, Education and training
Authors:
A. Matsushima1, C. Tai-Been2, T. Okamoto1, S.-Y. Hsu2, J. Ryu-I2, N. Itayama1, T. Ishibashi1, K. Fukuda1; 1Tokyo/JP, 2Kaohsiung City/TW
DOI:
10.26044/ecr2021/C-10411
Conclusion
In this study, the best image classification results were obtained using ResNet101 with a batch size of 6 and an input size of 256 x 256 pixels. Within the range of resolutions considered in this study, the input image resolution did not affect the classification results.
Compared with the other networks, the residual network tended to achieve higher accuracy with a smaller batch size, because residual networks optimize a residual mapping rather than the original unreferenced mapping. We obtained an accuracy of 0.9293 on images taken by students during practice. Radiological technologists engaged in clinical practice were asked to judge the acceptability of the clinical images. In this visual evaluation, radiological technologists with 1, 2, 4, and 11 years of clinical experience achieved accuracies of 0.7400, 0.8733, 0.8667, and 0.9667, respectively. The CNN result was therefore comparable to that of a radiological technologist with 5-10 years of experience.
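The idea that a residual block learns the residual mapping F(x) = H(x) - x, with the identity added back through a shortcut, can be sketched with a minimal, framework-free example. The shapes and weights below are illustrative assumptions, not the actual ResNet101 configuration:

```python
import numpy as np

def relu(x):
    # Element-wise ReLU activation
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Minimal residual block: output = ReLU(F(x) + x).

    F(x) = w2 @ ReLU(w1 @ x) is the residual mapping the block
    learns; the identity shortcut adds x back unchanged.
    """
    fx = w2 @ relu(w1 @ x)   # residual mapping F(x)
    return relu(fx + x)      # identity shortcut, then activation

rng = np.random.default_rng(0)
x = rng.standard_normal(8)

# With zero-initialized weights, F(x) = 0 and the block reduces to
# ReLU(x), i.e. a near-identity mapping. Training then only has to
# learn small residual corrections to this easy starting point,
# which is the optimization advantage noted above.
w_zero = np.zeros((8, 8))
y = residual_block(x, w_zero, w_zero)
```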
Because we used the degree of overlap of the femoral condyles as the classification criterion in this study, it is necessary to verify whether this criterion was actually reflected in the CNN results.
Grad-CAM was used to visualize and validate the regions of interest in the fully connected layer of the CNN model used in this study [3]. Figure 21 shows the Grad-CAM level visualizations for the five-class classification. These images show that the target area plays a major role in the classification decision. When the articular surface is shifted horizontally or vertically, the regions of high Grad-CAM level shift toward the patella and tibia. This is consistent with the areas on which a radiological technologist focuses when considering image correction. These results show that our decision criterion is reflected in the CNN results. In addition, the patella and tibia serve as indicators for knee joint classification by the CNN; for this reason, the ROI should be set to include the patella and tibia.
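The core Grad-CAM computation referenced above can be sketched in a few lines: the gradients of the class score with respect to the last convolutional feature maps are global-average-pooled into per-channel weights, and the ReLU of the weighted sum of the maps gives the localization map. The array shapes here are illustrative assumptions; a real implementation would obtain the activations and gradients from the trained CNN:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM localization map from conv activations and gradients.

    activations: (C, H, W) feature maps of the last conv layer
    gradients:   (C, H, W) d(class score) / d(activations)
    Returns an (H, W) map normalized to [0, 1].
    """
    # Channel weights: global average pooling of the gradients
    weights = gradients.mean(axis=(1, 2))              # shape (C,)
    # Weighted combination of the feature maps, then ReLU
    cam = np.tensordot(weights, activations, axes=1)   # shape (H, W)
    cam = np.maximum(cam, 0.0)
    # Normalize for overlay on the radiograph
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Illustrative random data standing in for real network tensors
rng = np.random.default_rng(0)
acts = rng.random((64, 8, 8))
grads = rng.standard_normal((64, 8, 8))
heatmap = grad_cam(acts, grads)
```

Upsampling `heatmap` to the input size and overlaying it on the knee radiograph yields visualizations of the kind shown in Figure 21.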
In the future, we aim to build a support system that is useful in clinical applications.