The large number of available MRI sequences means patients cannot realistically undergo them all, so the range of sequences acquired during a scan is protocolled in advance on the basis of the clinical details. Adapting this protocol to unexpected findings identified early in the scan requires experience and vigilance. We investigated whether deep learning applied to the images acquired in the first few minutes of a scan could provide an automated early alert to abnormal features. Anatomy sequences from 375 CMR scans formed the training set; from these, we annotated 1500 individual slices and used them to train a convolutional neural network to perform automatic segmentation of the cardiac chambers, great vessels and any pleural effusions. A further 200 scans formed the testing set. The system then assembled a 3D model of the thorax from which it made clinical measurements to identify important abnormalities. The network segmented the anatomy slices successfully (Dice coefficient 0.910) and identified multiple features that may guide further image acquisition. Diagnostic accuracy was 90.5% for left ventricular dilatation, 85.5% for right ventricular dilatation, 85% for left ventricular hypertrophy and 94.4% for ascending aortic dilatation; the area under the ROC curve for diagnosing pleural effusions was 0.91. We present proof of concept that a neural network can segment, and derive accurate clinical measurements from, a 3D model of the thorax built from transaxial anatomy images acquired in the first few minutes of a scan. This early information could enable dynamically adaptive scanning protocols and, by focusing scanner time appropriately and prioritizing cases for supervision and early reporting, improve patient experience and efficiency.
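The segmentation endpoint above is the Dice coefficient, a standard overlap measure between a predicted and a reference binary mask, defined as 2|A∩B| / (|A|+|B|). As a minimal illustrative sketch (not the authors' implementation), it can be computed with NumPy; the toy 4×4 masks standing in for a predicted and a reference chamber segmentation are invented for demonstration:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Hypothetical toy masks, not real CMR data
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 1, 0],
                 [0, 0, 0, 0]])
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 0, 0, 0]])
print(round(dice_coefficient(pred, truth), 3))  # → 0.909
```

A Dice value of 1.0 indicates perfect overlap and 0.0 none; the study's 0.910 on held-out anatomy slices therefore reflects close agreement between the network's masks and the manual annotations.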
Keywords: Artificial intelligence; Cardiac magnetic resonance imaging; Machine learning; Neural networks.