Keywords:
Computer applications, Oncology, Musculoskeletal bone, MR-Diffusion/Perfusion, Neural networks, Experimental, Computer Applications-Detection, diagnosis, Segmentation, Experimental investigations, Cancer, Image registration, Image verification
Authors:
V. K. Anand1, G. KRISHNAMURTHI1, R. Balaji2; 1Chennai/IN, 2CHENNAI, TAMILNADU/IN
DOI:
10.1594/ecr2018/C-2274
Results
Data were split into a training set and a test set in a 7:3 ratio.
Classifiers were trained on the training set, and the test set was used to evaluate their performance on unseen data.
The data were shuffled and split randomly.
In this fashion, five dataset splits were created and used for training and testing the classifiers.
The mean performance over all five splits is reported in the performance tables.
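The evaluation protocol above (five random shuffles, 7:3 train/test split, mean performance over splits) can be sketched with scikit-learn. The feature matrix below is a random stand-in for illustration only, not the actual features extracted from the MR sequences:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# Hypothetical stand-in for the extracted feature matrix (n_samples x n_features)
# and binary labels; the real features come from the MR sequence images.
rng = np.random.RandomState(0)
X = rng.randn(250, 10)
y = (X[:, 0] + 0.5 * rng.randn(250) > 0).astype(int)

accuracies = []
for seed in range(5):  # five independent random shuffles and 7:3 splits
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, shuffle=True, random_state=seed
    )
    clf = GaussianNB().fit(X_tr, y_tr)  # e.g. the Naive Bayes classifier
    accuracies.append(accuracy_score(y_te, clf.predict(X_te)))

mean_acc = float(np.mean(accuracies))  # the value reported in the tables
```

The same loop would be repeated for each classifier and each dataset (T1, T2, DWI600, DWI1200, All).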
Nomenclature of datasets:
- T1 dataset: features extracted from T1 images.
- T2 dataset: features extracted from T2 images.
- DWI600 dataset: features extracted from DWI images with b-value 600 s/mm².
- DWI1200 dataset: features extracted from DWI images with b-value 1200 s/mm².
- All dataset: combination of the features extracted from all sequence images.
The performance of the classifiers (all values in %) is tabulated below.
Table 1. Performance of classifiers on the T1 dataset.

| Classifiers | Accuracy | Sensitivity | Specificity | Precision | F1-Measure |
|---|---|---|---|---|---|
| Nearest Neighbors | 69.23 | 82.83 | 56.71 | 63.94 | 71.33 |
| Linear SVM | 66.15 | 70 | 71.42 | | 60.98 |
| RBF SVM | 87.69 | 81 | 94.28 | 93.8 | 86.04 |
| Decision Tree | 72.3 | 71.66 | 73.85 | 75.35 | 69.26 |
| Random Forest | 63.07 | 62.66 | 64.92 | 62.9 | 60.42 |
| AdaBoost | 72.3 | 71 | 73.85 | 70.09 | 70.17 |
| Naive Bayes | 83.07 | 82.83 | 82.42 | 79.97 | 81.29 |
| LDA | 78.46 | 58.83 | 97.14 | 96.66 | 70.51 |
| QDA | 60 | 65.16 | 52.92 | 58.53 | 61.61 |
| MLPClassifier_Relu | 47.69 | 20 | 80 | | 11.11 |
| MLPClassifier_identity | 86.15 | 85.33 | 84.57 | 87.06 | 84.53 |
| MLPClassifier_tanh | 78.46 | 74.33 | 81.71 | 82.97 | 77.39 |
Table 2. Performance of classifiers on the T2 dataset.

| Classifiers | Accuracy | Sensitivity | Specificity | Precision | F1-Measure |
|---|---|---|---|---|---|
| Nearest Neighbors | 49.33 | 34.67 | 72.14 | 62.66 | 39.12 |
| Linear SVM | 42.66 | 20 | 80 | | 12.72 |
| RBF SVM | 42.66 | 20 | 80 | | 12.72 |
| Decision Tree | 63.99 | 70.55 | 61.78 | 71.81 | 68.48 |
| Random Forest | 57.33 | 56.26 | 59.28 | 60.58 | 57.54 |
| AdaBoost | 58.66 | 65.64 | 48.92 | 60.51 | 62.1 |
| Naive Bayes | 83.99 | 83.5 | 83.92 | 86.92 | 85.11 |
| LDA | 40 | 34.02 | 48.92 | 46.07 | 38.22 |
| QDA | 55.99 | 61 | 43.92 | 56.51 | 57.83 |
| MLPClassifier_Relu | 45.33 | 61.55 | 30 | | 47.23 |
| MLPClassifier_identity | 52 | 24.15 | 82.5 | | 25.42 |
| MLPClassifier_tanh | 50.66 | 51.42 | 58.57 | | 40.57 |
Table 3. Performance of classifiers on the DWI600 dataset.

| Classifiers | Accuracy | Sensitivity | Specificity | Precision | F1-Measure |
|---|---|---|---|---|---|
| Nearest Neighbors | 65.33 | 71.91 | 59.64 | 69.87 | 69.26 |
| Linear SVM | 42.66 | 20 | 80 | | 12.72 |
| RBF SVM | 70.66 | 62.5 | 90 | | 64.99 |
| Decision Tree | 54.66 | 63.79 | 38.21 | 57.61 | 60.25 |
| Random Forest | 68 | 60.74 | 81.07 | 79.33 | 64.93 |
| AdaBoost | 61.33 | 62.01 | 61.78 | 68.05 | 64.26 |
| Naive Bayes | 81.33 | 77.37 | 88.92 | 89.36 | 82.12 |
| LDA | 54.66 | 50.25 | 62.14 | 64.68 | 55.27 |
| QDA | 58.66 | 52.3 | 65.71 | 66.19 | 58.21 |
| MLPClassifier_Relu | 72 | 69.87 | 75.71 | 77.97 | 73.16 |
| MLPClassifier_identity | 69.33 | 77.72 | 64.64 | 76.39 | 73.74 |
| MLPClassifier_tanh | 66.66 | 71.88 | 65.71 | 80.87 | 68.9 |
Table 4. Performance of classifiers on the DWI1200 dataset.

| Classifiers | Accuracy | Sensitivity | Specificity | Precision | F1-Measure |
|---|---|---|---|---|---|
| Nearest Neighbors | 64 | 83.5 | 40.71 | 64.03 | 71.88 |
| Linear SVM | 42.66 | 20 | 80 | | 12.72 |
| RBF SVM | 42.66 | 20 | 80 | | 12.72 |
| Decision Tree | 52 | 46.55 | 65 | 63.59 | 50.17 |
| Random Forest | 50.66 | 47.98 | 58.21 | 59.46 | 50.3 |
| AdaBoost | 61.33 | 64.05 | 65 | 72.42 | 63.75 |
| Naive Bayes | 84 | 87.04 | 83.92 | 86.42 | 85.63 |
| LDA | 38.66 | 35.38 | 50.35 | 53.21 | 36.66 |
| QDA | 55.99 | 61.55 | 51.78 | 60.98 | 59.01 |
| MLPClassifier_Relu | 54.66 | 59.74 | 52.5 | | 51.21 |
| MLPClassifier_identity | 72 | 70.68 | 66.78 | 84.28 | 69.15 |
| MLPClassifier_tanh | 54.66 | 80 | 35.35 | | 58.84 |
Table 5. Performance of classifiers on the All dataset.

| Classifiers | Accuracy | Sensitivity | Specificity | Precision | F1-Measure |
|---|---|---|---|---|---|
| Nearest Neighbors | 72.3 | 44.83 | 97.14 | 96.66 | 55.95 |
| Linear SVM | 73.84 | 80 | 77.14 | | 69.57 |
| RBF SVM | 78.46 | 80 | 84.28 | | 71.42 |
| Decision Tree | 56.92 | 46.33 | 67 | 56.9 | 50.11 |
| Random Forest | 61.53 | 65.83 | 63.92 | 65.83 | 57.25 |
| AdaBoost | 63.07 | 62.16 | 65.28 | 61.94 | 59.84 |
| Naive Bayes | 70.76 | 71.16 | 71.71 | 69.88 | 69.03 |
| LDA | 41.53 | 39.66 | 42.92 | 40.47 | 37.68 |
| QDA | 55.38 | 55.5 | 50.07 | 52.88 | 52.22 |
| MLPClassifier_Relu | 92.3 | 84.66 | 97.14 | 97.14 | 88.07 |
| MLPClassifier_identity | 92.3 | 89.49 | 94.28 | 95 | 90.8 |
| MLPClassifier_tanh | 73.84 | 96.66 | 54.14 | 69.09 | 78.45 |
On each individual dataset, Naive Bayes gave the best accuracy, except on the T1 dataset, where the RBF SVM achieved 87.69% accuracy with 81% sensitivity, 94.28% specificity, 93.8% precision, and 86.04% F1-measure.
On the All dataset, the MLP classifier with the identity activation function gave the highest accuracy of 92.3%, with 89.49% sensitivity, 94.28% specificity, 95% precision, and 90.8% F1-measure.
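The five metrics reported in the tables above follow directly from a binary confusion matrix. A minimal sketch, using hypothetical counts chosen for illustration only:

```python
def binary_metrics(tp, fn, fp, tn):
    """Compute the five reported metrics (in %) from confusion-matrix counts:
    tp/fn/fp/tn = true positives, false negatives, false positives, true negatives."""
    sensitivity = tp / (tp + fn)              # recall / true-positive rate
    specificity = tn / (tn + fp)              # true-negative rate
    precision   = tp / (tp + fp)              # positive predictive value
    accuracy    = (tp + tn) / (tp + fn + fp + tn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {name: round(100 * value, 2)
            for name, value in [("accuracy", accuracy),
                                ("sensitivity", sensitivity),
                                ("specificity", specificity),
                                ("precision", precision),
                                ("f1", f1)]}

# Hypothetical example: 100 positive and 100 negative cases
metrics = binary_metrics(tp=81, fn=19, fp=6, tn=94)
```

With these counts the sketch yields 87.5% accuracy, 81% sensitivity, and 94% specificity, illustrating how a high-specificity classifier (such as the LDA column in Table 1) can pair a strong precision with a modest sensitivity.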