Neonatal pneumoperitoneum is a life-threatening condition requiring prompt diagnosis, yet its subtle radiographic signs pose diagnostic challenges, especially in emergency settings. This study aimed to develop and validate a deep multi-task learning model for diagnosing neonatal pneumoperitoneum on radiographs and to assess its clinical utility across clinicians of varying experience levels.

This was a retrospective diagnostic study using internal and external datasets in a tertiary-hospital and multicenter validation setting. Internal data were collected between January 1995 and August 2018, and external data were sourced from 11 neonatal intensive care units. The internal dataset comprised 204 neonates (546 radiographs); the external dataset comprised 378 radiographs (125 pneumoperitoneum cases, 214 non-pneumoperitoneum cases). Radiographs were reviewed by two pediatric radiologists, and a reader study involved four physicians with varying experience levels. The model under evaluation was a deep multi-task learning model combining classification and segmentation tasks for pneumoperitoneum detection. The primary outcomes were diagnostic accuracy, area under the receiver operating characteristic curve (AUC), and inter-reader agreement; AI-assisted and unassisted reader performance were compared.

The AI model achieved an AUC of 0.98 (95% CI, 0.94-1.00) and an accuracy of 94% (95% CI, 85.1-99.6) in internal validation, and an AUC of 0.89 (95% CI, 0.85-0.92) with an accuracy of 84.1% (95% CI, 80.4-87.8) in external validation. AI assistance improved reader accuracy from 82.5% to 86.6% (p < .001) and improved inter-reader agreement (kappa range increased from 0.33-0.71 to 0.54-0.86).

The multi-task learning model demonstrated excellent diagnostic performance and improved clinicians' diagnostic accuracy and agreement, suggesting its potential to enhance care in neonatal intensive care settings. All code is available at https://github.com/brody9512/NEC_MTL.
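To make "combining classification and segmentation tasks" concrete, the following is a minimal PyTorch sketch of a shared-encoder multi-task network with an image-level classification head, a pixel-level segmentation decoder, and a joint loss. The layer sizes, loss weighting, and names here are illustrative assumptions only and are not taken from the NEC_MTL repository; the published implementation is at the link above.

```python
import torch
import torch.nn as nn


class MultiTaskPneumoModel(nn.Module):
    """Toy multi-task network: a shared CNN encoder feeds a
    classification head (pneumoperitoneum vs. not) and a
    segmentation decoder (free-air mask). Illustrative only."""

    def __init__(self, in_channels: int = 1):
        super().__init__()
        # Shared encoder: two downsampling conv blocks.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Classification head: global pooling + linear layer -> image-level logit.
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, 1),
        )
        # Segmentation decoder: upsample back to input resolution -> per-pixel logit.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 2, stride=2),
        )

    def forward(self, x):
        features = self.encoder(x)
        return self.classifier(features), self.decoder(features)


def multitask_loss(cls_logit, seg_logit, cls_label, seg_mask, seg_weight=1.0):
    """Joint objective: image-level BCE plus weighted pixel-level BCE."""
    cls_loss = nn.functional.binary_cross_entropy_with_logits(cls_logit, cls_label)
    seg_loss = nn.functional.binary_cross_entropy_with_logits(seg_logit, seg_mask)
    return cls_loss + seg_weight * seg_loss


if __name__ == "__main__":
    model = MultiTaskPneumoModel()
    x = torch.randn(2, 1, 256, 256)                       # batch of grayscale radiographs
    cls_label = torch.randint(0, 2, (2, 1)).float()        # image-level labels
    seg_mask = torch.randint(0, 2, (2, 1, 256, 256)).float()  # free-air masks
    cls_logit, seg_logit = model(x)
    loss = multitask_loss(cls_logit, seg_logit, cls_label, seg_mask)
    print(cls_logit.shape, seg_logit.shape, loss.item())
```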
Keywords: Artificial intelligence; Multi-task learning; Neonatal intensive care unit (NICU); Neonatal pneumoperitoneum; Radiograph.