Purpose: Existing automated spinal alignment analysis relies on original X-ray images, which are often unavailable in teleradiology for patients with spinal deformities. We aim to provide a novel automated vertebral segmentation method enabling accurate sagittal alignment detection, with no restrictions imposed by image quality or pathology type.
Methods: A total of 428 optical images of original sagittal X-rays, captured by smartphone or as screenshots, were prospectively collected from consecutive patients attending our spine clinic. Of these, 300 were randomly selected and their vertebrae were labelled with Labelme. Sagittal alignment parameters measured by specialists served as the ground truth. A pre-trained Mask R-CNN was fine-tuned on the labelled images and used to predict the vertebral levels on the remaining 128 test cases. The sagittal alignment parameters, including thoracic kyphosis (TK), lumbar lordosis (LL) and sacral slope (SS), were automatically detected based on the segmented vertebrae. The Dice similarity coefficient (DSC) and mean intersection over union (mIoU) were calculated to evaluate the accuracy of the predicted vertebrae. The detected sagittal alignments were then quantitatively compared with the ground truth.
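The two segmentation metrics named above can be sketched for a single binary vertebral mask as follows. This is an illustrative sketch only, not the authors' evaluation code; the function name and the flat 0/1 mask representation are assumptions for the example (mIoU would average the per-mask IoU over all vertebrae or images).

```python
# Illustrative sketch (not the study's code): Dice similarity coefficient (DSC)
# and intersection over union (IoU) for a pair of binary segmentation masks.

def dice_and_iou(pred, gt):
    """pred, gt: equal-length flat sequences of 0/1 mask values."""
    inter = sum(p & g for p, g in zip(pred, gt))  # overlapping foreground pixels
    p_sum, g_sum = sum(pred), sum(gt)             # foreground pixel counts
    union = p_sum + g_sum - inter
    dsc = 2.0 * inter / (p_sum + g_sum) if (p_sum + g_sum) else 1.0
    iou = inter / union if union else 1.0
    return dsc, iou

# Toy example: predicted and ground-truth masks overlap in one pixel
dsc, iou = dice_and_iou([1, 1, 0, 0], [1, 0, 1, 0])
# intersection = 1, mask sums = 2 + 2, so DSC = 0.5 and IoU = 1/3
```

Averaging the per-mask IoU across all predicted vertebrae yields the mIoU reported in the Results.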
Results: The DSC was 84.6 ± 3.8% and the mIoU was 72.1 ± 4.8%, indicating accurate vertebral prediction. The detected sagittal alignments were all strongly correlated with the ground truth (p < 0.001). Standard errors of the estimated parameters showed small differences from the specialists' measurements (3.5° for TK and SS; 3.4° for LL).
Conclusion: This is the first study to use a fine-tuned Mask R-CNN to predict vertebral locations on optical images of X-rays accurately and automatically. We provide a novel alignment detection method with significant application in teleradiology, aiding out-of-hospital consultations. These slides can be retrieved under Electronic Supplementary Material.
Keywords: Automated analysis; Mask R-CNN; Out-of-hospital consultation; Spinal deformity; Teleradiology; Transfer learning.