J Digit Imaging. 32(4):665-671.

Beyond Human Perception: Sexual Dimorphism in Hand and Wrist Radiographs Is Discernible by a Deep Learning Model

Sehyo Yune et al. J Digit Imaging.

Abstract

Despite the well-established impact of sex and sex hormones on bone structure and density, sexual dimorphism of the hand and wrist has received limited description in the literature. We developed a deep convolutional neural network (CNN) model to predict sex from hand radiographs of children and adults aged 5 to 70 years. Of the 1531 radiographs tested, the algorithm predicted sex correctly in 95.9% (κ = 0.92) of cases, whereas two human radiologists achieved 58% (κ = 0.15) and 46% (κ = −0.07) accuracy. Class activation maps (CAM) showed that the model focused mostly on the 2nd and 3rd metacarpal base or the thumb sesamoid in women, and on the distal radioulnar joint, the distal radial physis and epiphysis, or the 3rd metacarpophalangeal joint in men. The radiologists reviewed 70 cases (35 female and 35 male) labeled with sex and accompanied by CAM heat maps, but could not identify any patterns distinguishing the two sexes. A small sample of patients (n = 44) with disorders of sexual development or transgender identity was selected for a preliminary exploration of the model's application; the model's prediction agreed with phenotypic sex in only 77.8% (κ = 0.54) of these cases. To the best of our knowledge, this is the first study to demonstrate a machine learning model performing a task that human experts could not.
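The accuracy figures above are paired with Cohen's kappa, which corrects raw agreement for agreement expected by chance (important here, since a class-imbalanced test set can inflate raw accuracy). A minimal pure-Python sketch of the statistic; the label counts below are illustrative, not the study's data:

```python
# Cohen's kappa for a binary sex classifier: chance-corrected agreement
# between predicted and true labels.

def cohens_kappa(y_true, y_pred):
    """Return Cohen's kappa for two equal-length label sequences."""
    n = len(y_true)
    # Observed agreement: fraction of exact matches.
    observed = sum(t == p for t, p in zip(y_true, y_pred)) / n
    # Expected agreement: product of the marginal label frequencies.
    labels = set(y_true) | set(y_pred)
    expected = sum(
        (y_true.count(c) / n) * (y_pred.count(c) / n) for c in labels
    )
    return (observed - expected) / (1 - expected)

# Illustrative example: a classifier that is right 9 times out of 10
# on a balanced sample.
truth = ["F"] * 5 + ["M"] * 5
preds = ["F"] * 5 + ["M"] * 4 + ["F"]
print(round(cohens_kappa(truth, preds), 2))  # → 0.8
```

A κ near 0 (as for the radiologists) means performance indistinguishable from chance, while κ = 0.92 indicates near-perfect agreement beyond chance.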

Keywords: Artificial intelligence; Bone development; Machine learning; Sexual development; Sexual dimorphism.

Figures

Fig. 1
Age and sex distribution of the study subjects included in the final dataset. The number on top of each bar indicates the number of radiographs in that age category; the number on top of the red portion of each bar indicates the percentage of females. The total numbers of each sex are shown in the top-right corner
Fig. 2
Data preprocessing pipeline. An overview of the data preprocessing engine, which normalizes radiographs to a uniform size of 512 × 512 pixels, segments the hand-and-wrist region using a segmentation CNN, and enhances image contrast using contrast-limited adaptive histogram equalization (CLAHE)
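The normalization step in the caption above can be sketched as a pad-to-square followed by resampling to 512 × 512. This is a minimal numpy illustration, not the authors' code: the function name, zero-padding choice, and nearest-neighbour resampling are assumptions, and the segmentation-CNN and CLAHE stages are not reproduced here.

```python
import numpy as np

def normalize_to_square(img: np.ndarray, size: int = 512) -> np.ndarray:
    """Zero-pad a 2-D radiograph to a square, then nearest-neighbour
    resample it to size x size so every image enters the CNN uniformly."""
    h, w = img.shape
    side = max(h, w)
    # Center the image on a black square canvas to preserve aspect ratio.
    padded = np.zeros((side, side), dtype=img.dtype)
    top, left = (side - h) // 2, (side - w) // 2
    padded[top:top + h, left:left + w] = img
    # Map each target pixel back to its nearest source pixel.
    idx = (np.arange(size) * side / size).astype(int)
    return padded[np.ix_(idx, idx)]

# Example: a synthetic 16-bit radiograph of arbitrary shape.
radiograph = np.random.randint(0, 4096, size=(2044, 1514), dtype=np.uint16)
print(normalize_to_square(radiograph).shape)  # → (512, 512)
```

In practice a library resampler (e.g. OpenCV or PIL) with bilinear interpolation would replace the nearest-neighbour indexing; the padding-then-resize structure is the point of the sketch.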
Fig. 3
Age-stratified test accuracies. Test accuracies are shown as percent accuracy, stratified by age. The black dotted line indicates the overall accuracy across all 1531 radiographs in the test dataset
Fig. 4
t-SNE visualization of the representations from the last convolutional layer of the model for bone sex classification. Here, we show how the algorithm clusters males and females. Radiographs with attention maps are linked to the corresponding points
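The class activation maps used in the figures weight the final convolutional feature maps by the classifier's per-class weights, producing a spatial map of where the network "looked" (e.g. the metacarpal bases or the distal radioulnar joint). A hedged numpy sketch of that construction, with synthetic shapes standing in for the paper's model:

```python
import numpy as np

def class_activation_map(features: np.ndarray, class_weights: np.ndarray) -> np.ndarray:
    """features: (C, H, W) activations of the last conv layer;
    class_weights: (C,) fully connected weights for one class.
    Returns an (H, W) heat map, min-max scaled to [0, 1]."""
    # Weighted sum of feature maps over the channel axis.
    cam = np.tensordot(class_weights, features, axes=([0], [0]))  # (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Synthetic example: 512 feature maps of size 16 x 16 and random weights.
rng = np.random.default_rng(0)
feats = rng.random((512, 16, 16))    # stand-in conv activations
w_female = rng.normal(size=512)      # stand-in fc weights for one class
heat = class_activation_map(feats, w_female)
print(heat.shape)  # → (16, 16)
```

The resulting low-resolution map is upsampled to the input size and overlaid on the radiograph to produce the heat maps the radiologists reviewed.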

