Nat Methods. 2018 Nov;15(11):917-920. doi: 10.1038/s41592-018-0111-2. Epub 2018 Sep 17.

Label-free Prediction of Three-Dimensional Fluorescence Images From Transmitted-Light Microscopy


Chawin Ounkomol et al. Nat Methods.

Abstract

Understanding cells as integrated systems is central to modern biology. Although fluorescence microscopy can resolve subcellular structure in living cells, it is expensive, slow, and can damage cells. We present a label-free method for predicting three-dimensional fluorescence images directly from transmitted-light images and demonstrate that it can be used to generate multi-structure, integrated images. The method can also predict immunofluorescence (IF) from electron micrograph (EM) inputs, extending its potential applications.
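The training objective behind this prediction reduces to a per-voxel mean squared error between predicted and ground-truth fluorescence. The following is a minimal sketch of that loss, not the authors' implementation (which minimizes it over the weights of a 3D convolutional network); the `mse_loss` function and its flattened-volume inputs are illustrative assumptions:

```python
def mse_loss(predicted, target):
    """Per-voxel mean squared error between a predicted fluorescence
    volume and its ground-truth counterpart, both flattened to 1D
    sequences of voxel intensities. In the published tool this loss is
    minimized over network weights; here it stands alone for illustration."""
    if len(predicted) != len(target):
        raise ValueError("volumes must have the same number of voxels")
    return sum((p - t) ** 2 for p, t in zip(predicted, target)) / len(predicted)
```

A perfect prediction yields a loss of zero; the loss grows quadratically with per-voxel intensity error.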

Conflict of interest statement

Competing interests: The authors declare that they have no competing financial interests.

Figures

Figure 1:
Label-free imaging tool pipeline and application using 3D transmitted light-to-fluorescence models. a) Given transmitted light and fluorescence image pairs as input, the model is trained to minimize the mean squared error (MSE) between the ground-truth fluorescence image and the model output. b) Left to right, an example of a 3D input transmitted light image, a ground-truth confocal DNA fluorescence image, and a tool prediction. c) Distributions of the image-wise Pearson correlation coefficient (r) between ground truth (target) and predicted test images derived from the indicated subcellular structure models. Each target/predicted image pair in the test set is a point in the resultant r distribution; the 25th, 50th and 75th percentile image pairs are spanned by the box for each indicated structure, with whiskers indicating the last data points within 1.5× the interquartile range of the lower and upper quartiles. The number of images (n) was 18 for the cell membrane, 10 for the DIC nuclear envelope, and 20 for all other distributions. For a complete description of the structure labels, see Methods. Black bars indicate the maximum correlation between the target image and a theoretical, noise-free image (Cmax; for details see Methods). d) Individual subcellular structure models are applied to the same input and combined to predict multiple structures. e) Localization of DNA (blue), cell membrane (red), nuclear envelope (cyan) and mitochondria (orange) as predicted for time-lapse transmitted light (bright-field) input images taken at 5-minute intervals (center z-slice shown); a mitotic event with stereotypical reorganization of subcellular structures is clearly evident. Similar results were observed for two independent time-series input image sets. All results shown here were obtained from new transmitted light images not used during model training.
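The evaluation metric in panel c, the image-wise Pearson correlation coefficient r, can be sketched as follows. This is a generic textbook implementation, not the authors' code; images are assumed flattened to 1D intensity sequences:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length
    intensity sequences, e.g. a flattened target fluorescence image
    and the corresponding model prediction."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5
```

Computed once per target/predicted image pair, these values form the per-structure distributions summarized by the box plots.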
Figure 2:
Label-free imaging tool facilitates 2D automated registration across imaging modalities. We first train a model to predict a 2D myelin basic protein immunofluorescence image (MBP-IF) from a 2D electron micrograph (EM) and then register this prediction to automate cross-modal registration. a) An example EM image with a highlighted subregion (left), the MBP-IF image corresponding to the same subregion (middle), and the label-free imaging tool prediction of the same subregion given only the EM image as input (right). b) The EM image of the subregion to be registered (top left) is passed through the trained 2D model to obtain a prediction for the subregion (bottom left), which is then registered to MBP-IF images within a larger field of view (bottom right) (see Methods for details). Only a 20 μm × 20 μm region from the 204.8 μm × 204.8 μm MBP-IF search image is shown; predicted and registered MBP-IF are overlaid (in green) together with the EM image. c) Histogram of the average distance between automated and manual registration, measured across 90 test images, in units of pixels of MBP-IF data. This distribution has a mean of 1.16 ± 0.79 px, whereas manual registrations by two independent annotators differed by 0.35 ± 0.2 px.
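At its core, the registration step in panel b finds the placement of the predicted MBP-IF patch that best matches the larger MBP-IF search image. A toy brute-force sketch of translation-only matching under that assumption (the actual pipeline, described in Methods, operates on far larger fields of view and is not reproduced here; `register_translation` and its list-of-lists images are illustrative):

```python
def register_translation(pred, search):
    """Exhaustively slide a small predicted patch over a larger search
    image (both as 2D lists of intensities) and return the (row, col)
    offset minimizing the sum of squared differences. Real cross-modal
    registration pipelines use FFT-based correlation for efficiency."""
    ph, pw = len(pred), len(pred[0])
    sh, sw = len(search), len(search[0])
    best_ssd, best_offset = float("inf"), (0, 0)
    for r in range(sh - ph + 1):
        for c in range(sw - pw + 1):
            ssd = sum(
                (pred[i][j] - search[r + i][c + j]) ** 2
                for i in range(ph)
                for j in range(pw)
            )
            if ssd < best_ssd:
                best_ssd, best_offset = ssd, (r, c)
    return best_offset
```

Because the prediction and the search image are in the same (fluorescence) modality, a simple intensity-difference criterion suffices, which is what makes the EM-to-IF prediction useful as a registration bridge.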


