Accurate Identification of the Trabecular Meshwork under Gonioscopic View in Real Time Using Deep Learning

Ophthalmol Glaucoma. 2021 Nov 16;S2589-4196(21)00270-2. doi: 10.1016/j.ogla.2021.11.003. Online ahead of print.

Abstract

Purpose: Accurate identification of iridocorneal structures on gonioscopy is difficult to master, and errors can lead to grave surgical complications. This study aimed to develop and train convolutional neural networks (CNNs) that accurately identify the trabecular meshwork (TM) in gonioscopic videos in real time for eventual clinical integration.

Design: Cross-sectional study.

Participants: Adult patients with open iridocorneal angles were recruited from academic glaucoma clinics in Taipei, Taiwan, and Irvine, California.

Methods: Encoder-decoder CNNs (U-Nets) were trained to predict a curve marking the TM using an expert-annotated data set of 378 gonioscopy images. The model was trained and evaluated with stratified cross-validation grouped by patient to ensure uncorrelated training and testing sets, as well as on a separate test set and on 3 intraoperative gonioscopic videos of ab interno trabeculotomy with the Trabectome (90 seconds total, 30 frames per second). We also compared the model's accuracy with that of surveyed ophthalmologists.
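The patient-grouped splitting described above (all images from one patient confined to a single fold, so training and testing sets stay uncorrelated) can be sketched in pure Python. This is an illustrative sketch only: the function name `patient_grouped_folds`, the greedy size-balancing strategy, and the example patient IDs are assumptions, not the authors' actual pipeline, and the stratification step is omitted for brevity.

```python
from collections import defaultdict

def patient_grouped_folds(patient_ids, n_folds=5):
    """Partition image indices into folds such that every image from a
    given patient lands in the same fold (no patient-level leakage).
    patient_ids: list where patient_ids[i] is the patient of image i.
    Returns a list of n_folds lists of image indices."""
    by_patient = defaultdict(list)
    for idx, pid in enumerate(patient_ids):
        by_patient[pid].append(idx)
    folds = [[] for _ in range(n_folds)]
    # Greedy balancing: assign each patient's images (largest group
    # first) to the currently smallest fold.
    for pid, idxs in sorted(by_patient.items(), key=lambda kv: -len(kv[1])):
        smallest = min(range(n_folds), key=lambda f: len(folds[f]))
        folds[smallest].extend(idxs)
    return folds

# Hypothetical patient IDs for 10 gonioscopy images
ids = ["p1", "p1", "p2", "p2", "p2", "p3", "p4", "p4", "p5", "p5"]
folds = patient_grouped_folds(ids, n_folds=3)
```

Each fold can then serve in turn as the held-out set, with the model trained on the remaining folds; no patient's images ever appear on both sides of a split.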

Main outcome measures: Successful development of real-time-capable CNNs that accurately predict and mark the TM's position in video frames of gonioscopic views. Models were evaluated against human expert annotations of static images and video data.

Results: The best CNN model produced test set predictions with a median deviation of 0.8% of the video frame's height (15.25 μm) from the human experts' annotations. This error is less than the average vertical height of the TM. The worst test frame prediction of this model had an average deviation of 4% of the frame height (76.28 μm), which is still considered a successful prediction. When challenged with unseen images, the CNN model scored more than 2 standard deviations above the mean performance of the surveyed general ophthalmologists.

Conclusions: Our CNN model identifies the TM in gonioscopy videos in real time with high accuracy, enabling intraoperative use with a video camera. The model has potential applications in surgical training, automated screening, and intraoperative guidance. The data set developed in this study is one of the first publicly available gonioscopy image banks and may encourage future investigation of this topic.

Keywords: Artificial intelligence; Convolutional neural networks; Gonioscopy; Iridocorneal angle; Minimally invasive glaucoma surgeries.