Table 1 shows that the classification accuracy of our Inception networks is much better than that of the competing CNN classifier [1] for the four classes, with an accuracy improvement of at least 4.2%. These results prove that our Inception models effectively improve the performance of the breast cancer classifier, because they can extract more key breast cell features than the CNN. The CNN consists of four narrow convolution layers, which are not enough to extract the unique characteristics of breast cancer cells; this is not an easy task because of the wide variety of H&E stained sections. In contrast, our Inception models can extract detailed information from breast cell types that indicates the similarity of breast cancer cells to normal breast cells. Each model was trained as a very deep network, which is crucial for capturing the natural hierarchy of objects: low-level features are captured in the first layers, and object parts are extracted at higher layers. Furthermore, the residual learning framework eased the training of these networks and enabled them to extract higher-level features, leading to improved performance in recognition tasks.
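To make this concrete, the following minimal sketch builds a four-class classifier on a pretrained Inception-ResNet-v2 backbone with Keras; the 300x300 input size, the single softmax head and the optimizer settings are illustrative assumptions rather than the exact configuration used in our experiments.

import tensorflow as tf

NUM_CLASSES = 4  # normal, benign, in situ, invasive

def build_inception_classifier(input_size=300):
    # Residual-Inception backbone pretrained on ImageNet, without its top layer.
    base = tf.keras.applications.InceptionResNetV2(
        include_top=False,
        weights="imagenet",
        input_shape=(input_size, input_size, 3),
        pooling="avg",  # global average pooling gives one feature vector per image
    )
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(base.output)
    model = tf.keras.Model(base.input, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_inception_classifier(300)
model.summary()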
Table 2
Sensitivity for four-class classification on the challenging database of H&E stained histological breast cancer images.
Columns: Method, Normal, Benign, In situ, Invasive

Table 3
Sensitivity for two-class classification on the challenging database of H&E stained histological breast cancer images.
Columns: Method, Non-carcinoma, Carcinoma
Rows: CNN, CNN+SVM, Model Fusion

We also evaluated the performance of the gradient boosting trees classifiers using deep features from the Inception models, as shown in Table 1. Inception-300x300+GBT, Inception-450x450+GBT and Inception-600x600+GBT are more accurate than Inception-300x300, Inception-450x450 and Inception-600x600, respectively. This is because the gradient boosting trees classifier considerably improves the classification accuracy on the breast cancer features extracted by the deep learning models.
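A minimal sketch of this deep-feature plus gradient boosting trees pipeline (e.g. Inception-300x300+GBT) is given below, using a Keras Inception-ResNet-v2 backbone as the feature extractor and scikit-learn's GradientBoostingClassifier; the hyperparameters and the random stand-in data are illustrative assumptions only.

import numpy as np
import tensorflow as tf
from sklearn.ensemble import GradientBoostingClassifier

def extract_deep_features(images, input_size=300):
    # Pooled Inception-ResNet-v2 activations serve as the deep features.
    backbone = tf.keras.applications.InceptionResNetV2(
        include_top=False, weights="imagenet",
        input_shape=(input_size, input_size, 3), pooling="avg")
    return backbone.predict(images, verbose=0)  # shape (N, 1536)

# Random stand-ins for preprocessed H&E patches and labels
# (0..3 = normal, benign, in situ, invasive).
rng = np.random.default_rng(0)
X_train = rng.random((8, 300, 300, 3), dtype=np.float32)
y_train = rng.integers(0, 4, size=8)
X_test = rng.random((4, 300, 300, 3), dtype=np.float32)

gbt = GradientBoostingClassifier(n_estimators=200, learning_rate=0.1, max_depth=3)
gbt.fit(extract_deep_features(X_train), y_train)
print(gbt.predict(extract_deep_features(X_test)))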
Table 2 indicates that each Inception network has its own advantages and disadvantages in detecting breast cell types. We can observe that Inception-300x300+GBT is the best classifier for verifying normal breast cells, with a sensitivity of 100%, while Inception-450x450+GBT achieves the highest accuracies in detecting benign tumors and invasive carcinomas, with sensitivities of 100% and 98.9%, respectively. For the non-carcinoma/carcinoma tissue classification task, Inception-600x600+GBT achieved a higher accuracy rate than Inception-300x300+GBT and Inception-450x450+GBT; its sensitivity in detecting carcinomas was 100% and its specificity was 97.2%. This can be explained by the fact that, although Inception-ResNet-v2 is a state-of-the-art recognition network, a single model is not able to fully capture the multi-scale context information of the different breast cancer types. Table 1 demonstrates that the fused model achieves 96.4% accuracy on the four-class problem, the best among the competing deep learning approaches, and at least 4.2% higher than any single model. This proves once again that the fused model can exploit the deep network architecture with multi-resolution input images to aggregate multi-scale contextual information, while also combining the advantages of its single models, as mentioned in our previous research [44].
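The exact fusion rule is not restated here; purely as an illustration, the sketch below combines the three single-resolution classifiers by averaging their predicted class probabilities, one simple way of aggregating multi-scale evidence into a single decision.

import numpy as np

def fuse_probabilities(prob_300, prob_450, prob_600):
    # Each argument has shape (N, num_classes), with rows summing to 1.
    stacked = np.stack([prob_300, prob_450, prob_600], axis=0)
    fused = stacked.mean(axis=0)   # average the multi-scale class probabilities
    return fused.argmax(axis=1)    # fused class decision per image

# Example with the four classes: normal, benign, in situ, invasive.
rng = np.random.default_rng(0)
dummy_probs = [rng.dirichlet(np.ones(4), size=5) for _ in range(3)]
print(fuse_probabilities(*dummy_probs))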
Regarding binary classification, the accuracies of our classifiers increased considerably compared to the four-class problem. This is because the normal and benign classes are not very different from each other, and the in situ class shares similar features with the invasive class; once these similar classes are merged into the non-carcinoma and carcinoma groups, confusions between them no longer count as errors. The results prove that the fused model was the best among the algorithms included in the binary classification experiment, achieving a total accuracy of 99.5%. Table 3 also demonstrates that the sensitivity of the fused model in detecting carcinomas is 100% and its specificity is 97.2%.
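For completeness, the sketch below shows how four-class labels collapse onto the non-carcinoma/carcinoma task and how the reported sensitivity and specificity are computed, with carcinoma as the positive class; the integer label encoding and the toy predictions are assumptions for illustration.

import numpy as np

FOUR_TO_TWO = {0: 0,  # normal   -> non-carcinoma
               1: 0,  # benign   -> non-carcinoma
               2: 1,  # in situ  -> carcinoma
               3: 1}  # invasive -> carcinoma

def to_binary(labels):
    return np.vectorize(FOUR_TO_TWO.get)(labels)

def sensitivity_specificity(y_true, y_pred):
    # Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

# Toy four-class ground truth and predictions mapped onto the binary task.
y_true4 = np.array([0, 1, 2, 3, 2, 3])
y_pred4 = np.array([0, 0, 2, 3, 3, 1])
sens, spec = sensitivity_specificity(to_binary(y_true4), to_binary(y_pred4))
print(f"sensitivity={sens:.3f}, specificity={spec:.3f}")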