TY - GEN
T1 - A Deep Features-based Radiomics Model for Breast Lesion Classification on FFDM
AU - Liang, Cuixia
AU - Bian, Zhaoying
AU - Lyu, Wenbing
AU - Zeng, Dong
AU - Ma, Jianhua
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2018/11
Y1 - 2018/11
N2 - The radiomics model can be used in breast cancer detection by calculating texture features in the lesion. However, these texture features are explicitly designed, or handcrafted, in advance, which limits their ability to characterize the lesion properly. This paper aims to build a deep features-based radiomics model to classify benign and malignant breast lesions on full-field digital mammography (FFDM). Specifically, the presented model incorporates texture features learned by a deep learning network. This study comprises 106 retrospective cases with both craniocaudal (CC) and mediolateral oblique (MLO) views. First, 23 handcrafted features (HCF) are extracted from each breast lesion, and 4096 deep features (DF) are extracted from a pre-trained deep learning model. Given that the CC and MLO views provide different breast lesion information, the features extracted from the two views are combined. After T-test selection, a suitable subset of HCF is selected. Finally, a multi-classifier model is trained on the combination of HCF and DF. The experimental results demonstrate that the presented model achieves better classification performance (AUC=0.946) than HCF only (AUC=0.902) and DF only (AUC=0.832).
AB - The radiomics model can be used in breast cancer detection by calculating texture features in the lesion. However, these texture features are explicitly designed, or handcrafted, in advance, which limits their ability to characterize the lesion properly. This paper aims to build a deep features-based radiomics model to classify benign and malignant breast lesions on full-field digital mammography (FFDM). Specifically, the presented model incorporates texture features learned by a deep learning network. This study comprises 106 retrospective cases with both craniocaudal (CC) and mediolateral oblique (MLO) views. First, 23 handcrafted features (HCF) are extracted from each breast lesion, and 4096 deep features (DF) are extracted from a pre-trained deep learning model. Given that the CC and MLO views provide different breast lesion information, the features extracted from the two views are combined. After T-test selection, a suitable subset of HCF is selected. Finally, a multi-classifier model is trained on the combination of HCF and DF. The experimental results demonstrate that the presented model achieves better classification performance (AUC=0.946) than HCF only (AUC=0.902) and DF only (AUC=0.832).
UR - https://www.scopus.com/pages/publications/85073122118
U2 - 10.1109/NSSMIC.2018.8824722
DO - 10.1109/NSSMIC.2018.8824722
M3 - Conference contribution
AN - SCOPUS:85073122118
T3 - 2018 IEEE Nuclear Science Symposium and Medical Imaging Conference, NSS/MIC 2018 - Proceedings
BT - 2018 IEEE Nuclear Science Symposium and Medical Imaging Conference, NSS/MIC 2018 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2018 IEEE Nuclear Science Symposium and Medical Imaging Conference, NSS/MIC 2018
Y2 - 10 November 2018 through 17 November 2018
ER -